Modalities: Text
Formats: json
Languages: English
Libraries: Datasets, pandas
Columns (per the Dataset Viewer):

Column             Type     Value range
id_paragraph       string   length 20-26
parag_1            string   length 101-3.02k
parag_2            string   length 173-2.77k
annot_1            dict
annot_2            dict
id_source          string   length 8-11
id_target          string   length 8-11
index_paragraph    int64    values 0-26
list_sentences_1   list     length 1-36
list_sentences_2   list     length 1-36
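Since the card lists the Datasets and pandas libraries, a minimal loading sketch follows. The repository id below is a placeholder, not this dataset's actual name; substitute the real Hugging Face id.

```python
# Minimal loading sketch. "user/paragraph-revisions" is a hypothetical
# repository id -- replace it with the dataset's actual id.
from datasets import load_dataset

ds = load_dataset("user/paragraph-revisions", split="train")

row = ds[0]
print(row["id_paragraph"])    # e.g. "7_CwM-IzWd.zcm6f5HDI.04"
print(row["parag_1"][:120])   # paragraph before revision
print(row["parag_2"][:120])   # paragraph after revision
```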
id_paragraph: 7_CwM-IzWd.zcm6f5HDI.04
parag_1: During training, the parameters of the multi-modal DNN are updated by stochastic gradient descent (SGD) to minimize the loss: L = CE( y, ˆ y 0 ) + CE( y, ˆ y 1 ) , where CE stands for cross-entropy. We refer to each of the cross-entropy losses as a modality-specific loss. We train the model until the highest accuracy o...
parag_2: During training, the parameters of the multi-modal DNN are updated by stochastic gradient descent (SGD) to minimize the loss: L = CE( y, ˆ y 0 ) + CE( y, ˆ y 1 ) , where CE stands for cross-entropy. We refer to each of the cross-entropy losses as a modality-specific loss. We train the model until ˆ y = y for all samples...
annot_1: { "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_08" }
annot_2: { "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_02" }
id_source: 7_CwM-IzWd
id_target: zcm6f5HDI
index_paragraph: 4
list_sentences_1: [ { "text": "During training, the parameters of the multi-modal DNN are updated by stochastic gradient descent (SGD) to minimize the loss: L = CE( y, ˆ y 0 )" }, { "text": "+ CE( y, ˆ y 1 ) , where CE stands for cross-entropy." }, { "text": "We refer to each of the cross-entropy losses as a modali...
list_sentences_2: [ { "text": "During training, the parameters of the multi-modal DNN are updated by stochastic gradient descent (SGD) to minimize the loss: L = CE( y, ˆ y 0 )" }, { "text": "+ CE( y, ˆ y 1 ) , where CE stands for cross-entropy." }, { "text": "We refer to each of the cross-entropy losses as a modali...
id_paragraph: hegI87bI5S.fL6Q48sfx8.09
parag_1: The task was created with reference to the previous study [25]. Fig- ure 3 shows a schematic of the task. A pink circular start area (251-pixel radius) and a green target were displayed on a gray back- ground. First, participants clicked on the start area, and the cursor was fixed at the center of the start area. Assum...
parag_2: The task was created by referring to a previous study [28]. Figure 3 shows a schematic of the task. A pink circular start area (251-pixel radius) and a green target were displayed on a gray background. The participants clicked on the start area; the cursor positioned at the center of the start area. We strictly fixed t...
annot_1: { "annotation": [ "Rewriting_medium" ], "instruction": "Rewrite the middle part of the paragraph to make it more better. Replace some words in the paragraph.", "annotator": "annotator_10" }
annot_2: { "annotation": [ "Rewriting_medium" ], "instruction": "Slightly revise for readability, you can reorganise ideas in sentences if necessary.", "annotator": "annotator_07" }
id_source: hegI87bI5S
id_target: fL6Q48sfx8
index_paragraph: 9
list_sentences_1: [ { "text": "The task was created with reference to the previous study [25]." }, { "text": "Fig- ure 3 shows a schematic of the task." }, { "text": "A pink circular start area (251-pixel radius) and a green target were displayed on a gray back- ground." }, { "text": "First, participants cl...
list_sentences_2: [ { "text": "The task was created by referring to a previous study [28]." }, { "text": "Figure 3 shows a schematic of the task." }, { "text": "A pink circular start area (251-pixel radius) and a green target were displayed on a gray background." }, { "text": "The participants clicked on th...
id_paragraph: SyGfyinsH.I2YVGmIp0.00
parag_1: A + C + D refers to our approach. In (b), we show the same ablations over the entire trajectory until t = 20 . As can be seen, using the calibrated predictor produces a large gain and using the direct bound produce a large gain on average; these gains are most noticeable in the tails. Using the accumulated confidence pr...
parag_2: A + C + D is our approach. As before, we omit results for the ablation using the VC generalization bound since n is so small that the bound does not hold for any k for the given (cid:15) and δ . In (b), we show the same ablations over the entire trajectory until t = 20 . As can be seen, using the calibrated predictor p...
annot_1: { "annotation": [ "Content_addition", "Concision" ], "instruction": "", "annotator": "annotator_09" }
annot_2: { "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
id_source: SyGfyinsH
id_target: I2YVGmIp0
index_paragraph: 0
list_sentences_1: [ { "text": "A + C + D refers to our approach." }, { "text": "" }, { "text": "In (b), we show the same ablations over the entire trajectory until t = 20 ." }, { "text": "As can be seen, using the calibrated predictor produces a large gain and using the direct bound produce a large gain on ...
list_sentences_2: [ { "text": "A + C + D is our approach." }, { "text": "As before, we omit results for the ablation using the VC generalization bound since n is so small that the bound does not hold for any k for the given (cid:15) and δ ." }, { "text": "In (b), we show the same ablations over the entire trajector...
id_paragraph: WldWha1MT.LL2ZsGpJga.03
parag_1: A well-established metric to evaluate the topological performance of a segmentation network is the Betti number error, see appendix I, which compares the topological complexity of P and G . However, it is limited as it ignores the spatial correspondence ofthe topological features within their respective images (see F...
parag_2: Betti number error The Betti number error β err (see App. K) compares the topological complexity of the binarized prediction P and the ground truth G . However, it is limited as it only compares the number of topological features in both images, while ignoring their spatial correspondence (see Fig.
annot_1: { "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_02" }
annot_2: { "annotation": [ "Rewriting_heavy" ], "instruction": "Rewrite this definition in a more direct and academic style.", "annotator": "annotator_07" }
id_source: WldWha1MT
id_target: LL2ZsGpJga
index_paragraph: 3
list_sentences_1: [ { "text": "A well-established metric to evaluate the topological performance of a segmentation network is the Betti number error, see appendix I, which compares the topological complexity of P and G ." }, { "text": "However, it is limited as it ignores the spatial correspondence ofthe topological feat...
list_sentences_2: [ { "text": "Betti number error The Betti number error β err (see App. K) compares the topological complexity of the binarized prediction P and the ground truth G ." }, { "text": "However, it is limited as it only compares the number of topological features in both images, while ignoring their spatial cor...
id_paragraph: 7_CwM-IzWd.zcm6f5HDI.03
parag_1: We implement the fusion module as a multi-modal transfer module (MMTM) (Joze et al., 2020). The first step in MMTM is to squeeze feature maps from each uni-modal branch to vector representations via global average pooling over spatial dimensions. Next we concatenate these representations and applya linear transformatio...
parag_2: We implement every fusion module by a multi-modal transfer module (MMTM) (Joze et al., 2020). Each MMTM connects two layers from the two uni-modal branches. There is first the global average pooling applied over spatial dimensions to transform feature maps into a vector. We concatenate the two vectors and apply linear t...
annot_1: { "annotation": [ "Rewriting_heavy" ], "instruction": "Rearrange the structure to make the structure clearer.", "annotator": "annotator_08" }
annot_2: { "annotation": [ "Rewriting_heavy" ], "instruction": "Rewrite this paragraph completely to make it clearer.", "annotator": "annotator_02" }
id_source: 7_CwM-IzWd
id_target: zcm6f5HDI
index_paragraph: 3
list_sentences_1: [ { "text": "We implement the fusion module as a multi-modal transfer module (MMTM) (Joze et al., 2020)." }, { "text": "The first step in MMTM is to squeeze feature maps from each uni-modal branch to vector representations via global average pooling over spatial dimensions." }, { "text": "Next we ...
list_sentences_2: [ { "text": "We implement every fusion module by a multi-modal transfer module (MMTM) (Joze et al., 2020)." }, { "text": "Each MMTM connects two layers from the two uni-modal branches. There is first the global average pooling applied over spatial dimensions to transform feature maps into a vector." }, ...
id_paragraph: uJRtLYIOIq.e9xxGlB_c.00
parag_1: Lemma 1 implies the CPD kernels in Corollary 1 can be made PD if they are added with large enoughconstants; for example, c − ∥ x − x ′ ∥ p for large enough c . Although Lemma 1 does not have an explicit construction of c , thanks to the shift-invariant property of the Softmax normalization, we can leave it as an under-...
parag_2: Lemma 1 implies the CPD kernels in Corollary 1 can be made PD if a large enough constant is added. For example, c − ∥ x − x ′ ∥ p for large enough c . Although Lemma 1 does not have an explicit construction of c , thanks to the shift-invariant property of the Softmax normalization, we can leave it as an under-determine...
annot_1: { "annotation": [ "Concision" ], "instruction": "Rewrite some formulations, giving preference to shorter ones.", "annotator": "annotator_04" }
annot_2: { "annotation": [ "Concision" ], "instruction": "Shorten this paragraph a bit while keeping all the informations.", "annotator": "annotator_07" }
id_source: uJRtLYIOIq
id_target: e9xxGlB_c
index_paragraph: 0
list_sentences_1: [ { "text": "Lemma 1 implies the CPD kernels in Corollary 1 can be made PD if they are added with large enoughconstants;" }, { "text": "for example, c − ∥ x − x ′ ∥ p for large enough c ." }, { "text": "Although Lemma 1 does not have an explicit construction of c , thanks to the shift-invariant pr...
list_sentences_2: [ { "text": "Lemma 1 implies the CPD kernels in Corollary 1 can be made PD if a large enough constant is added." }, { "text": "For example, c − ∥ x − x ′ ∥ p for large enough c ." }, { "text": "Although Lemma 1 does not have an explicit construction of c , thanks to the shift-invariant property of...
id_paragraph: xV0XmrSMtk.sYfR73R9z.02
parag_1: Discrete Variational Auto-Encoder. In a discrete variational autoencoder (DVAE) (Rolfe, 2016), the network layers before the sampling solver represent the encoder and the layers after the sampling solver the decoder. We consider the task of training a DVAE on the M NIST dataset where the encoder maps the input image to...
parag_2: Discrete Variational Auto-Encoder (DVAE). In a DVAE (Rolfe, 2016), the network layers before the sampling solver represent the encoder and the layers after the sampling solver the decoder. We consider the task of training a DVAE on the M NIST dataset where the encoder maps the input image to a discrete distribution of ...
annot_1: { "annotation": [ "Concision" ], "instruction": "Make this paragraph more concise by introducing acronyms earlier.", "annotator": "annotator_02" }
annot_2: { "annotation": [ "Concision" ], "instruction": "Introduce the acronym DVAE earlier to avoid repeating it.", "annotator": "annotator_07" }
id_source: xV0XmrSMtk
id_target: sYfR73R9z
index_paragraph: 2
list_sentences_1: [ { "text": "Discrete Variational Auto-Encoder." }, { "text": "In a discrete variational autoencoder (DVAE) (Rolfe, 2016), the network layers before the sampling solver represent the encoder and the layers after the sampling solver the decoder." }, { "text": "We consider the task of training a DVA...
list_sentences_2: [ { "text": "Discrete Variational Auto-Encoder (DVAE)." }, { "text": "In a DVAE (Rolfe, 2016), the network layers before the sampling solver represent the encoder and the layers after the sampling solver the decoder." }, { "text": "We consider the task of training a DVAE on the M NIST dataset wher...
id_paragraph: PDvmJtmgQb.gGrpxbc7UI.02
parag_1: Other Uses of Public Data in DP Learning: The use of in-distribution public data has been extensively explored both theoretically and empirically. On the theoretical side, it has been shown (Alon et al., 2019; Bassily et al., 2020a) that a combination of private and public data samples can yield asymptotically better w...
parag_2: Other Uses of Public Data in DP Learning: The use of in-distribution public data has been extensively explored both theoretically and empirically. On the theoretical side, it has been shown [3, 10] that a combination of private and public data samples can yield asymptotically better worst-case PAC learning guarantees t...
annot_1: { "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_03" }
annot_2: { "annotation": [ "Unusable" ], "instruction": "I want to use numbers for in-text citations. ", "annotator": "annotator_09" }
id_source: PDvmJtmgQb
id_target: gGrpxbc7UI
index_paragraph: 2
list_sentences_1: [ { "text": "Other Uses of Public Data in DP Learning: The use of in-distribution public data has been extensively explored both theoretically and empirically." }, { "text": "On the theoretical side, it has been shown (Alon et al., 2019; Bassily et al., 2020a) that a combination of private and public data...
list_sentences_2: [ { "text": "Other Uses of Public Data in DP Learning: The use of in-distribution public data has been extensively explored both theoretically and empirically." }, { "text": "On the theoretical side, it has been shown [3, 10] that a combination of private and public data samples can yield asymptotically b...
id_paragraph: E2pFUCGYZ1.5hMS4Fg2b_b.00
parag_1: ADO iterations in the Bayesian framework are shown in Sec. 3.3 and Appendix A.3. Finally, with theestimated posterior, the predictive uncertainty can be quantified by evaluating the identified systemwith an ensemble of parameters. To further improve the prediction capability, especially for chaoticsystems, we propose t...
parag_2: ADO iterations in the Bayesian framework are shown in Sec. 3.3 and supplemental materials. Finally,with the estimated posterior, the predictive uncertainty can be quantified by evaluating the identifiedsystem with an ensemble of parameters. To further improve the prediction capability, especially forchaotic systems, we...
annot_1: { "annotation": [ "Rewriting_light" ], "instruction": "Use \"supplemental materials\" instead of \"Appendix\"", "annotator": "annotator_09" }
annot_2: { "annotation": [ "Rewriting_light" ], "instruction": "Lightly revise for readability.", "annotator": "annotator_07" }
id_source: E2pFUCGYZ1
id_target: 5hMS4Fg2b_b
index_paragraph: 0
list_sentences_1: [ { "text": "ADO iterations in the Bayesian framework are shown in Sec." }, { "text": "3.3 and Appendix A.3." }, { "text": "Finally, with theestimated posterior, the predictive uncertainty can be quantified by evaluating the identified systemwith an ensemble of parameters." }, { "text": "T...
list_sentences_2: [ { "text": "ADO iterations in the Bayesian framework are shown in Sec." }, { "text": "3.3 and supplemental materials." }, { "text": "Finally,with the estimated posterior, the predictive uncertainty can be quantified by evaluating the identifiedsystem with an ensemble of parameters." }, { ...
id_paragraph: MXi6uEx-hp.rdZfFcGyf9.14
parag_1: AGILE clearly outperforms all the baselines demonstrating that relational knowledge of other available actions is crucial for an optimal policy. RecSim and Real RecSys : result trends are consistent with CREATE, but less pronounced for Real RecSys. Additionally, DQN is worse than CDQNbased architectures because the top...
parag_2: AGILE outperforms all the baselines, demonstrating that relational knowledge of other available actions is crucial for an optimal policy. RecSim and Real RecSys : result trends are consistent with CREATE, but less pronounced for Real RecSys. Additionally, DQN is worse than CDQN-based architectures because the top-K gre...
annot_1: { "annotation": [ "Rewriting_light" ], "instruction": "Remove unnecessary words and fix the words if they are not in the correct form", "annotator": "annotator_10" }
annot_2: { "annotation": [ "Rewriting_light" ], "instruction": "Remove terms that might be considered biased. Make the writing more clear.", "annotator": "annotator_03" }
id_source: MXi6uEx-hp
id_target: rdZfFcGyf9
index_paragraph: 14
list_sentences_1: [ { "text": "AGILE clearly outperforms all the baselines demonstrating that relational knowledge of other available actions is crucial for an optimal policy." }, { "text": "RecSim and Real RecSys : result trends are consistent with CREATE, but less pronounced for Real RecSys." }, { "text": "Additi...
list_sentences_2: [ { "text": "AGILE outperforms all the baselines, demonstrating that relational knowledge of other available actions is crucial for an optimal policy." }, { "text": "RecSim and Real RecSys : result trends are consistent with CREATE, but less pronounced for Real RecSys." }, { "text": "Additionally,...
id_paragraph: mFNezF8ubW.g-sOkbqBcm.00
parag_1: Each concept in the hierarchy corresponds to one set of hidden nodes which are connected to the hidden nodes representing its children, if any. For example, if Mammal, Bird and Reptile are the descendant concept of Chordate, there will be all to all connections from the hidden nodes representing Chordate to those acc...
parag_2: Each concept in the hierarchy corresponds to one set of hidden nodes that essentially represent the concept. These hidden nodes are connected to those representing its children, if any. For example, if Mammal, Bird and Reptile are the descendant concept of Chordate, there will be all to all connections from the hidden ...
annot_1: { "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
annot_2: null
id_source: mFNezF8ubW
id_target: g-sOkbqBcm
index_paragraph: 0
list_sentences_1: [ { "text": "Each concept in the hierarchy corresponds to one set of hidden nodes which are connected to the hidden nodes representing its children, if any." }, { "text": "For example, if Mammal, Bird and Reptile are the descendant concept of Chordate, there will be all to all connections from the hidde...
list_sentences_2: [ { "text": "Each concept in the hierarchy corresponds to one set of hidden nodes that essentially represent the concept. These hidden nodes are connected to those representing its children, if any." }, { "text": "For example, if Mammal, Bird and Reptile are the descendant concept of Chordate, there will ...
id_paragraph: CVRUl83zah.I75TtW0V7.25
parag_1: • Instead of using a relation network [] – expanding the set into the set of all pairs first – paired with FSPool, we skip the relation network entirely. This improves the complexity of the set encoder g used in iDSPN from O ( n 2 log n ) to O ( n log n ) . Using the relation network approach does improve our results sl...
parag_2: • Instead of using a relation network (Santoro et al., 2017) – expanding the set into the set of all pairs first – paired with FSPool, we skip the relation network entirely. This improves the complexity of the set encoder g used in iDSPN from O ( n 2 log n ) to O ( n log n ) . Using the relation network approach would i...
annot_1: { "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_07" }
annot_2: null
id_source: CVRUl83zah
id_target: I75TtW0V7
index_paragraph: 25
list_sentences_1: [ { "text": "• Instead of using a relation network [] – expanding the set into the set of all pairs first – paired with FSPool, we skip the relation network entirely." }, { "text": "This improves the complexity of the set encoder g used in iDSPN from O ( n 2 log n ) to O ( n log n ) ." }, { "text":...
list_sentences_2: [ { "text": "• Instead of using a relation network (Santoro et al., 2017) – expanding the set into the set of all pairs first – paired with FSPool, we skip the relation network entirely." }, { "text": "This improves the complexity of the set encoder g used in iDSPN from O ( n 2 log n ) to O ( n log n ) ." ...
id_paragraph: lLwt-9RJ2tm.XJsauLjck.03
parag_1: That said, one might still question whether it is possible to match the solution quality of a givenψ -approximate offline algorithm for the maximization objectives in the models of computation we consider. We answer this in the affirmative for at least the dissimilarity objective of [15]; ourstructural decomposition of ...
parag_2: That said, one can further question whether it is possible to match the solution quality of any given ψ -approximate offline algorithm for the maximization objectives in the models of computation we consider. We answer this in the affirmative; we can in fact achieve even stronger performance guarantees for both objective...
annot_1: { "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
annot_2: null
id_source: lLwt-9RJ2tm
id_target: XJsauLjck
index_paragraph: 3
list_sentences_1: [ { "text": "That said, one might still question whether it is possible to match the solution quality of a givenψ -approximate offline algorithm for the maximization objectives in the models of computation we consider." }, { "text": "We answer this in the affirmative for at least the dissimilarity objective...
list_sentences_2: [ { "text": "That said, one can further question whether it is possible to match the solution quality of any given ψ -approximate offline algorithm for the maximization objectives in the models of computation we consider." }, { "text": "We answer this in the affirmative; we can in fact achieve even stronger...
id_paragraph: 9ALnOEcGN_.4eEIRZ-dm.00
parag_1: We need to point out that the idea of designing a continuous space for combinatorial optimization problems has been tried by the heatmaps approaches in the literature [40, 25, 17, 36]. However, thereare several major distinctions between the existing methods and our proposed one. Previous workgenerates heatmaps based ...
parag_2: We need to point out that the idea of designing a continuous space for combinatorial optimization problems has been tried by the heatmaps approaches in the literature [40, 25, 17, 14, 36]. However, there are major distinctions between the existing methods and our DIMES. For instance, Fu et al. [17] learn to generate he...
annot_1: { "annotation": [ "Rewriting_medium", "Development" ], "instruction": "", "annotator": "annotator_07" }
annot_2: null
id_source: 9ALnOEcGN_
id_target: 4eEIRZ-dm
index_paragraph: 0
list_sentences_1: [ { "text": "We need to point out that the idea of designing a continuous space for combinatorial optimization problems has been tried by the heatmaps approaches in the literature [40, 25, 17, 36]." }, { "text": "However, thereare several major distinctions between the existing methods and our proposed o...
list_sentences_2: [ { "text": "We need to point out that the idea of designing a continuous space for combinatorial optimization problems has been tried by the heatmaps approaches in the literature [40, 25, 17, 14, 36]." }, { "text": "However, there are major distinctions between the existing methods and our DIMES. For ins...
id_paragraph: atxti8SVk.3K9AmPwALM.16
parag_1: Pascal: Scribble annotations. Table 3 shows that, without CRF post-processing, we get 74 . 1% mIoU, achieving 97 . 6% of full supervision performance; with CRF post-processing, we reach new SOTA: We get 75 . 9% mIoU, achieving 98 . 6% of full supervision performance.
parag_2: Pascal: Scribble annotations. Table 3 shows that, our method consistently delivers the best performance among methods without or with CRF post-processing. We get 74 . 2% ( 76 . 1% ) mIoU, achieving 97 . 5% ( 98 . 4% ) of full supervision performance in these two categories respectively.
annot_1: { "annotation": [ "Content_substitution", "Rewriting_light" ], "instruction": "", "annotator": "annotator_02" }
annot_2: { "annotation": [ "Content_substitution", "Rewriting_medium" ], "instruction": "", "annotator": "annotator_07" }
id_source: atxti8SVk
id_target: 3K9AmPwALM
index_paragraph: 16
list_sentences_1: [ { "text": "Pascal: Scribble annotations." }, { "text": "Table 3 shows that, without CRF post-processing, we get 74 . 1% mIoU, achieving 97 . 6% of full supervision performance; with CRF post-processing, we reach new" }, { "text": "SOTA: We get 75 ." }, { "text": "9% mIoU, achieving 98 ....
list_sentences_2: [ { "text": "Pascal: Scribble annotations." }, { "text": "Table 3 shows that, our method consistently delivers the best performance among methods without or with CRF post-processing." }, { "text": "We get 74 ." }, { "text": "2% ( 76 . 1% ) mIoU, achieving 97 ." }, { "text": "5% ( 9...
id_paragraph: ByZyHzZC-.HktKf7-AW.01
parag_1: Our work is also related to other work on the importance of noise in SGDs, which have been previously explored. The main inspiration for having a learning rate schedule is to anneal noise (Bottou, 1998). Neelakantan et al. (2015) observe empirically that adding noise can aid optimization of very deep networks. Our ana...
parag_2: Our work is also related to the importance of noise in SGD, which has been previously explored. The main inspiration behind learning rate schedule has been shown to be noise annealing (Bottou, 1998). Neelakantan et al. (2015) observe empirically that adding noise can aid optimization of very deep networks. Our analysis...
annot_1: { "annotation": [ "Content_deletion", "Rewriting_light" ], "instruction": "Remove unnecessary content in the last sentence.", "annotator": "annotator_09" }
annot_2: { "annotation": [ "Concision", "Rewriting_light" ], "instruction": "Make the last sentence shorter, only keep the main idea. Slightly concise this paragraph and improve the english.", "annotator": "annotator_07" }
id_source: ByZyHzZC-
id_target: HktKf7-AW
index_paragraph: 1
list_sentences_1: [ { "text": "Our work is also related to other work on the importance of noise in SGDs, which have been previously explored." }, { "text": "The main inspiration for having a learning rate schedule is to anneal noise (Bottou, 1998)." }, { "text": "Neelakantan et al." }, { "text": "(2015) o...
list_sentences_2: [ { "text": "Our work is also related to the importance of noise in SGD, which has been previously explored." }, { "text": "The main inspiration behind learning rate schedule has been shown to be noise annealing (Bottou, 1998)." }, { "text": "Neelakantan et al." }, { "text": "(2015) observ...
id_paragraph: u9NaukzyJ-.hh0KECXQLv.11
parag_1: Design A supportstwo sorts of medication entries: drug or phys- ical activity. Each drug entry in the calendar is labelled with the name of the drug and suffixed with bracketed drug dosage. The suffix -WF indicates that the drug should be administered with food. Physical activity entries have a full-color fill, a dashe...
parag_2: Design A supports medication (or drug) entries and physical activ- ities. Each drug entry in the calendar is labelled with the name of the drug and suffixed with bracketed drug dosage. The suffix -WF indicates that the drug should be administered with food. Physical activity entries have a full-color fill, a dashed bor...
annot_1: { "annotation": [ "Rewriting_medium" ], "instruction": "Make this paragraph a bit more fluid.", "annotator": "annotator_03" }
annot_2: { "annotation": [ "Rewriting_medium" ], "instruction": "I want to rewrite the first sentence.", "annotator": "annotator_09" }
id_source: u9NaukzyJ-
id_target: hh0KECXQLv
index_paragraph: 11
list_sentences_1: [ { "text": "Design A supportstwo sorts of medication entries: drug or phys- ical activity." }, { "text": "Each drug entry in the calendar is labelled with the name of the drug and suffixed with bracketed drug dosage." }, { "text": "The suffix -WF indicates that the drug should be administered wit...
list_sentences_2: [ { "text": "Design A supports medication (or drug) entries and physical activ- ities." }, { "text": "Each drug entry in the calendar is labelled with the name of the drug and suffixed with bracketed drug dosage." }, { "text": "The suffix -WF indicates that the drug should be administered with foo...
id_paragraph: CVRUl83zah.I75TtW0V7.04
parag_1: Because g is permutation-invariant, any ordering for the elements in Y has the same value for L . In the forward pass of the model, the arg min is approximated by running a fixed number of gradient descent steps. In the backward pass,Zhang et al. (2019) backpropagate through the gradient descent iterations in order to c...
parag_2: Because g is permutation-invariant, any ordering for the elements in Y has the same value for L . In the forward pass of the model, the arg min is approximated by running a fixed number of gradient descent steps. In the backward pass, the goal is to differentiate Equation 7 with respect to the input vector z and the par...
annot_1: { "annotation": [ "Rewriting_medium" ], "instruction": "Add a sentence to explain the last sentence.", "annotator": "annotator_09" }
annot_2: { "annotation": [ "Rewriting_medium" ], "instruction": "Improve the logical flow of the last half of the paragraph.", "annotator": "annotator_07" }
id_source: CVRUl83zah
id_target: I75TtW0V7
index_paragraph: 4
list_sentences_1: [ { "text": "Because g is permutation-invariant, any ordering for the elements in Y has the same value for L ." }, { "text": "In the forward pass of the model, the arg min is approximated by running a fixed number of gradient descent steps." }, { "text": "In the backward pass,Zhang et al. (2019) ba...
list_sentences_2: [ { "text": "Because g is permutation-invariant, any ordering for the elements in Y has the same value for L ." }, { "text": "In the forward pass of the model, the arg min is approximated by running a fixed number of gradient descent steps." }, { "text": "In the backward pass, the goal is to differ...
id_paragraph: cW17DDjQa_.6iDdN7-bYz.00
parag_1: We propose an algorithm to solve above optimization problem (3). The optimization problem contains non-continuous indicator function in constraint (3d, 3c), and non-convex constraint (3b), which make the problem difficult to solve. Therefore, we first reformulate the inequality constraints as soft regularizations and int...
parag_2: To address the optimization problem (3), we adopts the alternating direction method of multipliers (ADMM) for the reformulation. In details, the optimization problem contains non-continuous indicator function in constraint (3c, 3d), and non-convex constraint (3b), which make the problem difficult to solve. Therefore, we...
annot_1: { "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_03" }
annot_2: { "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
id_source: cW17DDjQa_
id_target: 6iDdN7-bYz
index_paragraph: 0
list_sentences_1: [ { "text": "We propose an algorithm to solve above optimization problem (3)." }, { "text": "The optimization problem contains non-continuous indicator function in constraint (3d, 3c), and non-convex constraint (3b), which make the problem difficult to solve." }, { "text": "Therefore, we first refor...
list_sentences_2: [ { "text": "To address the optimization problem (3), we adopts the alternating direction method of multipliers (ADMM) for the reformulation." }, { "text": "In details, the optimization problem contains non-continuous indicator function in constraint (3c, 3d), and non-convex constraint (3b), which make th...
id_paragraph: 33RNh69fYq.kMvWVl725x.02
parag_1: Setup . Anomaly detection aims to detect whether an image contains anomalous regions. Theperformance is evaluated on MVTec-AD [3]. The image size is selected as 224 × 224 , and the size forresizing feature maps is set as 14 × 14 . The feature maps from stage-1 to stage-4 of EfficientNet-b4[37] respectively have the cha...
parag_2: Setup . Anomaly detection aims to detect whether an image contains anomalous regions. Theperformance is evaluated on MVTec-AD [4]. The image size is selected as 224 × 224 , and the size forresizing feature maps is set as 14 × 14 . The feature maps from stage-1 to stage-4 of EfficientNet-b4[39] are resized and concatena...
annot_1: { "annotation": [ "Concision" ], "instruction": "Remove some details on model training to make the paragraph more concise.", "annotator": "annotator_04" }
annot_2: { "annotation": [ "Concision" ], "instruction": "Remove unnecessary details to shorten this paragraph.", "annotator": "annotator_07" }
id_source: 33RNh69fYq
id_target: kMvWVl725x
index_paragraph: 2
list_sentences_1: [ { "text": "Setup ." }, { "text": "Anomaly detection aims to detect whether an image contains anomalous regions." }, { "text": "Theperformance is evaluated on MVTec-AD [3]." }, { "text": "The image size is selected as 224 × 224 , and the size forresizing feature maps is set as 14 × 14 ." ...
list_sentences_2: [ { "text": "Setup ." }, { "text": "Anomaly detection aims to detect whether an image contains anomalous regions." }, { "text": "Theperformance is evaluated on MVTec-AD [4]." }, { "text": "The image size is selected as 224 × 224 , and the size forresizing feature maps is set as 14 × 14 ." ...
id_paragraph: MXi6uEx-hp.rdZfFcGyf9.21
parag_1: In the experiment of Fig. 5, we found that in RecSim the relation of items is easy to model such that AGILE could not outperform the ablations whereas AGILE outperformed the ablations in CREATE and Grid World by correctly utilizing the action relation in decision-making. Then, we hypothesized that the existence of the ...
parag_2: In the experiment of Fig. 5, we found that in RecSim, the relation of items is easy to model such that AGILE could not outperform the ablations. In contrast, AGILE outperformed the ablations in CREATE and Grid World by correctly utilizing the action relation in decision-making. We hypothesize that these environments re...
annot_1: { "annotation": [ "Rewriting_medium", "Content_deletion" ], "instruction": "Make this paragraph shorter and easier to understand", "annotator": "annotator_10" }
annot_2: { "annotation": [ "Concision" ], "instruction": "Simplify the less essential ideas of the paragraph to make it more concise.", "annotator": "annotator_03" }
id_source: MXi6uEx-hp
id_target: rdZfFcGyf9
index_paragraph: 21
list_sentences_1: [ { "text": "In the experiment of Fig. 5, we found that in RecSim the relation of items is easy to model such that " }, { "text": "AGILE could not outperform the ablations whereas AGILE outperformed the ablations in CREATE and Grid World by correctly utilizing the action relation in decision-making." },...
list_sentences_2: [ { "text": "In the experiment of Fig. 5, we found that in RecSim, the relation of items is easy to model such that AGILE could not outperform the ablations." }, { "text": "In contrast, AGILE outperformed the ablations in CREATE and Grid World by correctly utilizing the action relation in decision-making....
id_paragraph: NwOG107NKJ.0PPYM22rdB.02
parag_1: Influence on the Github platform can be quantified by the number of followers, stars, mentions, quotes, and up-votes received from other users. Social network metrics such as centrality indicate how broadly influence extends (e.g. geographic interest) Weber and Luo [2014]. Other features includeproject volume, document...
parag_2: Influence on the Github platform can be quantified by the number of followers, stars, mentions, quotes, and up-votes received from other users. Social network metrics such as centrality indicate how broadly influence extends (e.g. geographic interest) [Weber and Luo, 2014]. Other features include project size, file vol...
annot_1: { "annotation": [ "Rewriting_light" ], "instruction": "Make the use of a citation in the second sentence correct. Update the third sentence.", "annotator": "annotator_10" }
annot_2: { "annotation": [ "Rewriting_light" ], "instruction": "Improve the readability of this paragraph.", "annotator": "annotator_03" }
id_source: NwOG107NKJ
id_target: 0PPYM22rdB
index_paragraph: 2
list_sentences_1: [ { "text": "Influence on the Github platform can be quantified by the number of followers, stars, mentions, quotes, and up-votes received from other users." }, { "text": "Social network metrics such as centrality indicate how broadly influence extends (e.g. geographic interest)" }, { "text": "Web...
list_sentences_2: [ { "text": "Influence on the Github platform can be quantified by the number of followers, stars, mentions, quotes, and up-votes received from other users." }, { "text": "Social network metrics such as centrality indicate how broadly influence extends (e.g. geographic interest)" }, { "text": "[We...
id_paragraph: ByZyHzZC-.HktKf7-AW.00
parag_1: The relationship between stochastic gradient descent (SGD) and sampling a posterior distribution via stochastic Langevin methods has been the subject of discussion in a number of papers (Chen et al., 2014; Ding et al., 2014; Vollmer et al., 2015; Welling & Teh, 2011; Shang et al., 2015; Sato & Nakagawa, 2014). In parti...
parag_2: The relationship between stochastic gradient descent (SGD) and sampling a posterior distribution via stochastic Langevin methods has been the subject of discussion in a number of papers (Chen et al., 2014; Ding et al., 2014; Vollmer et al., 2015; Welling & Teh, 2011; Shang et al., 2015; Sato & Nakagawa, 2014). In parti...
annot_1: { "annotation": [ "Development", "Content_addition" ], "instruction": "", "annotator": "annotator_09" }
annot_2: { "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_07" }
id_source: ByZyHzZC-
id_target: HktKf7-AW
index_paragraph: 0
list_sentences_1: [ { "text": "The relationship between stochastic gradient descent (SGD) and sampling a posterior distribution via stochastic Langevin methods has been the subject of discussion in a number of papers (Chen et al., 2014;" }, { "text": "Ding et al., 2014; Vollmer et al., 2015; Welling & Teh, 2011; Shang et a...
list_sentences_2: [ { "text": "The relationship between stochastic gradient descent (SGD) and sampling a posterior distribution via stochastic Langevin methods has been the subject of discussion in a number of papers (Chen et al., 2014;" }, { "text": "Ding et al., 2014; Vollmer et al., 2015; Welling & Teh, 2011; Shang et a...
id_paragraph: 7_CwM-IzWd.zcm6f5HDI.05
parag_1: During training, the uni-modal branch largely focuses on the associated modality. The fusion modules generatecross-modal context information from the uni-modal branches and pass it back to them. Both ˆ y 0 and ˆ y 1 depend on information from both modalities. We end up with two functions, f 0 and f 1 , corresponding to...
parag_2: During training, each uni-modal branch largely focuses on its associate input modality. The fusion modules generate context representation using all modalities and feed such information to the unimodal branches. Both ˆ y 0 and ˆ y 1 depend on information from both modalities. We end up with two functions, f 0 and f 1 ,...
annot_1: { "annotation": [ "Rewriting_medium" ], "instruction": "Make the sentence understandable.", "annotator": "annotator_08" }
annot_2: { "annotation": [ "Rewriting_light" ], "instruction": "Improve the wording of this paragraph.", "annotator": "annotator_02" }
id_source: 7_CwM-IzWd
id_target: zcm6f5HDI
index_paragraph: 5
list_sentences_1: [ { "text": "During training, the uni-modal branch largely focuses on the associated modality." }, { "text": "The fusion modules generatecross-modal context information from the uni-modal branches and pass it back to them." }, { "text": "Both ˆ y 0 and ˆ y 1 depend on information from both modalit...
list_sentences_2: [ { "text": "During training, each uni-modal branch largely focuses on its associate input modality." }, { "text": "The fusion modules generate context representation using all modalities and feed such information to the unimodal branches." }, { "text": "Both ˆ y 0 and ˆ y 1 depend on information ...
id_paragraph: eyheq0JfG.lDLi0nFVcl.00
parag_1: For example, using mixup on top of random scaling and cropping improves the results by 0.4%. This suggests that thanks to the proposed methods, we are getting closer than ever to the capacity of a real-valued model (which is amenable to stronger augmentations).
parag_2: For example, using mixup on top of random scaling and cropping improves the results by 0.4%. In comparison, when we trained Real-to-Bin Martinez et al. (2020) with mixup, the accuracy dropped by 0.25% for Stage I, and 0.8% for Stage II. This suggests that, thanks to the proposed methods, we are getting closer than ever...
annot_1: { "annotation": [ "Content_addition", "Development" ], "instruction": "", "annotator": "annotator_07" }
annot_2: null
id_source: eyheq0JfG
id_target: lDLi0nFVcl
index_paragraph: 0
list_sentences_1: [ { "text": "For example, using mixup on top of random scaling and cropping improves the results by 0.4%." }, { "text": "" }, { "text": "" }, { "text": "This suggests that thanks to the proposed methods, we are getting closer than ever to the capacity of a real-valued model (which is amena...
list_sentences_2: [ { "text": "For example, using mixup on top of random scaling and cropping improves the results by 0.4%." }, { "text": "In comparison, when we trained Real-to-Bin Martinez et al." }, { "text": "(2020) with mixup, the accuracy dropped by 0.25% for Stage I, and 0.8% for Stage II." }, { "tex...
id_paragraph: CVRUl83zah.I75TtW0V7.05
parag_1: Equivariance of DSPN We now discuss the particular form of equivariance that DSPN takes, which has not been done before in the context of multisets. The gradient of the permutation-invariant encoder g is always multiset-equivariant, but depending on the encoder, it is not necessarily setequivariant. Zhang et al. find t...
parag_2: Equivariance of DSPN. We now discuss the particular form of equivariance that DSPN takes, which has not been done before in the context of multisets. The gradient of the permutation-invariant encoder g with respect to the set input Y is always multiset-equivariant, but depending on the encoder, it is not necessarily se...
annot_1: { "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_09" }
annot_2: { "annotation": [ "Rewriting_light", "Development" ], "instruction": "", "annotator": "annotator_07" }
id_source: CVRUl83zah
id_target: I75TtW0V7
index_paragraph: 5
list_sentences_1: [ { "text": "Equivariance of DSPN We now discuss the particular form of equivariance that DSPN takes, which has not been done before in the context of multisets." }, { "text": "The gradient of the permutation-invariant encoder g is always multiset-equivariant, but depending on the encoder, it is not nece...
list_sentences_2: [ { "text": "Equivariance of DSPN. We now discuss the particular form of equivariance that DSPN takes, which has not been done before in the context of multisets." }, { "text": "The gradient of the permutation-invariant encoder g with respect to the set input Y is always multiset-equivariant, but dependin...
id_paragraph: aomiOZE_m2.rxb2TiQ6bq.05
parag_1: Lightweight Image SR Models. Recent years have been rising interest in investigating lightweight image SR models. These approaches try to design lightweight architectures, which mainly take advantage of recursive learning and channel splitting. Kim et al . firstly introduced recursive learning in DRCN to decrease model ...
parag_2: Lightweight Image SR Models. Recent years have been rising interest in investigating lightweight image SR models. These approaches try to design lightweight architectures, which mainly take advantage of recursive learning and channel splitting. Kim et al . firstly decreased parameter number by utilizing recursive learni...
annot_1: { "annotation": [ "Rewriting_medium", "Concision" ], "instruction": "Can you make my paragraph more concise?", "annotator": "annotator_09" }
annot_2: { "annotation": [ "Concision" ], "instruction": "Use shorter formulations and more direct language to make the paragraph more concise.", "annotator": "annotator_04" }
id_source: aomiOZE_m2
id_target: rxb2TiQ6bq
index_paragraph: 5
list_sentences_1: [ { "text": "Lightweight Image SR Models." }, { "text": "Recent years have been rising interest in investigating lightweight image SR models." }, { "text": "These approaches try to design lightweight architectures, which mainly take advantage of recursive learning and channel splitting." }, { ...
list_sentences_2: [ { "text": "Lightweight Image SR Models." }, { "text": "Recent years have been rising interest in investigating lightweight image SR models." }, { "text": "These approaches try to design lightweight architectures, which mainly take advantage of recursive learning and channel splitting." }, { ...
id_paragraph: gIp_U0JsFa.T3RdAsTpzN.00
parag_1: Here, we present two case studies from the healthcare domain in dermatology and in clinical risk prediction using Electronic Health Records in which fairness does not transfer (Fig. 1). As per [32], we consider the setting of ‘dataset shift’, whereby a model is developed on the source data and tested on the target data...
parag_2: Here, we present two case studies from the healthcare domain in dermatology and in clinical risk prediction using Electronic Health Records in which fairness does not transfer (Fig. 1). As per [27], we consider the setting of ‘dataset shift’, whereby a model is developed on the source data and tested on the target data...
annot_1: { "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_03" }
annot_2: { "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_07" }
id_source: gIp_U0JsFa
id_target: T3RdAsTpzN
index_paragraph: 0
list_sentences_1: [ { "text": "Here, we present two case studies from the healthcare domain in dermatology and in clinical risk prediction using Electronic Health Records in which fairness does not transfer (Fig. 1)." }, { "text": "As per [32], we consider the setting of ‘dataset shift’, whereby a model is developed on the...
list_sentences_2: [ { "text": "Here, we present two case studies from the healthcare domain in dermatology and in clinical risk prediction using Electronic Health Records in which fairness does not transfer (Fig. 1)." }, { "text": "As per [27], we consider the setting of ‘dataset shift’, whereby a model is developed on the...
id_paragraph: 7_CwM-IzWd.zcm6f5HDI.22
parag_1: We report means and standard deviations of the models’ test accuracy in Table 1.[- -] The guided algorithm improves the models’ generalization performance over the vanilla algorithm in all four cases.It also outperforms the random algorithm, with the exception of ModelNet40, where their performances are very close.
parag_2: We report means and standard deviations of the models’ test accuracies in Table 1.[- -] 3 RUBi does not show consistent improvement across tasks compared to the vanilla algorithm. The guided algorithm improves the models’ generalization performance over all three other methods in all four cases.
annot_1: { "annotation": [ "Content_substitution", "Development" ], "instruction": "", "annotator": "annotator_06" }
annot_2: { "annotation": [ "Content_substitution", "Rewriting_medium" ], "instruction": "", "annotator": "annotator_07" }
id_source: 7_CwM-IzWd
id_target: zcm6f5HDI
index_paragraph: 22
list_sentences_1: [ { "text": "We report means and standard deviations of the models’ test accuracy in Table 1.[-\n-]" }, { "text": " The guided algorithm improves the models’ generalization performance over the vanilla algorithm in all four cases.It also outperforms the random algorithm, with the exception of ModelNet40, ...
list_sentences_2: [ { "text": "We report means and standard deviations of the models’ test accuracies in Table 1.[-\n-]" }, { "text": "3 RUBi does not show consistent improvement across tasks compared to the vanilla algorithm. The guided algorithm improves the models’ generalization performance over all three other methods...
id_paragraph: S1-LZxvKX.rJ009I8RX.03
parag_1: Most closely related to our work are dynamic sparse reparameterization techniques that emerged only recently. Like ours, these methods adaptively alter, by certain heuristic rules, reparameterization during training. Sparse evolutionary training (Mocanu et al., 2018) used magnitude-based pruning and random growth at th...
parag_2: Most closely related to our work are dynamic sparse reparameterization techniques that emerged only recently. Like ours, these methods adaptively alter, by certain heuristic rules, reparameterization during training. Sparse evolutionary training (Mocanu et al., 2018) used magnitude-based pruning and random growth at th...
annot_1: { "annotation": [ "Concision" ], "instruction": "Edit the last sentence of this paragraph to make it shorter and remove the reference to Section 5.", "annotator": "annotator_02" }
annot_2: { "annotation": [ "Concision" ], "instruction": "Rewrite the last sentence to make it more concise.", "annotator": "annotator_07" }
id_source: S1-LZxvKX
id_target: rJ009I8RX
index_paragraph: 3
list_sentences_1: [ { "text": "Most closely related to our work are dynamic sparse reparameterization techniques that emerged only recently." }, { "text": "Like ours, these methods adaptively alter, by certain heuristic rules, reparameterization during training." }, { "text": "Sparse evolutionary training (Mocanu e...
list_sentences_2: [ { "text": "Most closely related to our work are dynamic sparse reparameterization techniques that emerged only recently." }, { "text": "Like ours, these methods adaptively alter, by certain heuristic rules, reparameterization during training." }, { "text": "Sparse evolutionary training (Mocanu e...
id_paragraph: XXtXW925iG.JHwYPw52XHb.00
parag_1: In the previous section, we showed that the limiting diffusion exists when ⌘ and go to zero witha fixed ratio. However, the situation is more complicated in the general case, i.e. , the intrinsic LR ⌘ ! 0 while ⌘ varies and is only upper bounded by some constant. A concrete example is ⌘ ! 0and being fixed.
parag_2: In the previous section, we showed that the limiting diffusion exists when η and λ go to zero witha fixed ratio. However, the situation is more complicated in the general case, i.e. , the intrinsic LR ηλ → 0 while ηλ is upper bounded by some constant. A concrete example is η → 0 and λ beingfixed.
annot_1: { "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_07" }
annot_2: null
id_source: XXtXW925iG
id_target: JHwYPw52XHb
index_paragraph: 0
list_sentences_1: [ { "text": "In the previous section, we showed that the limiting diffusion exists when ⌘ and \u0000 go to zero witha fixed ratio." }, { "text": "However, the situation is more complicated in the general case, i.e. , the intrinsic LR ⌘\u0000 ! 0 while ⌘\u0000 varies and is only upper bounded by some consta...
list_sentences_2: [ { "text": "In the previous section, we showed that the limiting diffusion exists when η and λ go to zero witha fixed ratio." }, { "text": "However, the situation is more complicated in the general case, i.e. , the intrinsic LR ηλ → 0 while ηλ is upper bounded by some constant." }, { "text": "A c...
id_paragraph: aFWzpdwEna.MCecpd3utK.00
parag_1: In this paper, we find that model-based offline RL’s performance significantly relies on the trade-off between model return and its uncertainty, while determining the optimal trade-off is usually challenging or intractable without access to the environment that the learned policy will be deployed to. To address this probl...
parag_2: In this paper, we find that model-based offline RL’s performance significantly relies on the trade-off between model return and its uncertainty, while determining the optimal trade-off is challenging without access to the realistic environment. To address the problem, we study a bi-objective formulation for model-based of...
annot_1: { "annotation": [ "Concision", "Rewriting_heavy" ], "instruction": "Make this paragraph more concise by rewriting the second half.", "annotator": "annotator_02" }
annot_2: { "annotation": [ "Concision", "Content_deletion" ], "instruction": "Make this paragraph shorter.", "annotator": "annotator_07" }
id_source: aFWzpdwEna
id_target: MCecpd3utK
index_paragraph: 0
list_sentences_1: [ { "text": "In this paper, we find that model-based offline RL’s performance significantly relies on the trade-off between model return and its uncertainty, while determining the optimal trade-off is usually challenging or intractable without access to the environment that the learned policy will be deployed to." ...
list_sentences_2: [ { "text": "In this paper, we find that model-based offline RL’s performance significantly relies on the trade-off between model return and its uncertainty, while determining the optimal trade-off is challenging without access to the realistic environment." }, { "text": "To address the problem, we study a b...
id_paragraph: YkiRt7L93m.jgDbnUD7s.01
parag_1: We introduce a notion of projection between sets of probability measures supported on Euclidean spaces. The proposed definition is applicable between sets of general probability measures with different supports and possesses good computational and statistical properties. Italso provides a unique solution to the projecti...
parag_2: A notion of projection between sets of probability measures should be applicable between any set of general probability measures, replicate geometric properties of the target measure, and possess good computational and statistical properties. We introduce such a notion of projection between sets of general probability ...
annot_1: { "annotation": [ "Rewriting_medium" ], "instruction": "Please, make this paragraph easier to read.", "annotator": "annotator_01" }
annot_2: { "annotation": [ "Rewriting_heavy" ], "instruction": "Rewrite and reorganise this paragraph to improve the english and be more convincing, let the last sentence as it is.", "annotator": "annotator_07" }
id_source: YkiRt7L93m
id_target: jgDbnUD7s
index_paragraph: 1
list_sentences_1: [ { "text": "We introduce a notion of projection between sets of probability measures supported on Euclidean spaces. The proposed definition is applicable between sets of general probability measures with different supports and possesses good computational and statistical properties. " }, { "text": "Italso...
list_sentences_2: [ { "text": "A notion of projection between sets of probability measures should be applicable between any set of general probability measures, replicate geometric properties of the target measure, and possess good computational and statistical properties. We introduce such a notion of projection between sets of g...
id_paragraph: jzQGmT-R1q.ugUt9B3XaO.02
parag_1: In Figure 2 we see that the networks trained in these two experiments both exhibit decreased ability to fit later target functions under a fixed optimization budget. This effect is strongest in small networks with ReLU activations, suggesting that some units may be saturating, but we see a similar trend across most archi...
parag_2: In Figure 2 we see that most networks trained in these two experiments exhibit decreasing ability to fit later target functions under a fixed optimization budget. This effect is strongest in small networks with ReLU activations, suggesting that this capacity loss may be driven by saturated units and that this phenomenon ...
annot_1: { "annotation": [ "Content_substitution" ], "instruction": "", "annotator": "annotator_07" }
annot_2: null
id_source: jzQGmT-R1q
id_target: ugUt9B3XaO
index_paragraph: 2
list_sentences_1: [ { "text": "In Figure 2 we see that the networks trained in these two experiments both exhibit decreased ability to fit later target functions under a fixed optimization budget." }, { "text": "This effect is strongest in small networks with ReLU activations, suggesting that some units may be saturating, bu...
list_sentences_2: [ { "text": "In Figure 2 we see that most networks trained in these two experiments exhibit decreasing ability to fit later target functions under a fixed optimization budget." }, { "text": "This effect is strongest in small networks with ReLU activations, suggesting that this capacity loss may be driven by...
id_paragraph: hegI87bI5S.fL6Q48sfx8.08
parag_1: VZ249HR; 23.8” diagonal, 1920 × 1080 pixels) and its refresh rate was set at 75 Hz. We used an opticalmouse, Logitech gaming mouse (G-PPD-002WLr; 1600 DPI). The mouse-cursor speed via the OS setting was set to the middle of the slider in the control display and ” Enhance pointer precision ” setting was turned on to mat...
parag_2: VZ249HR; 23.8” diagonal, 1920 × 1080 pixels) and its refresh rate was set at 75 Hz. We used an optical mouse (Logitech gaming mouse, G-PPD-002WLr; 1600 DPI, and the mouse-cursor speed based on the OS setting was set to the middle of the slider in the control display and the “ Enhance pointer precision ” setting was tur...
annot_1: { "annotation": [ "Rewriting_light" ], "instruction": "Improve the English of this paragraph", "annotator": "annotator_10" }
annot_2: { "annotation": [ "Rewriting_medium", "Rewriting_light" ], "instruction": "Slightly revise the linking between phrases.", "annotator": "annotator_07" }
id_source: hegI87bI5S
id_target: fL6Q48sfx8
index_paragraph: 8
list_sentences_1: [ { "text": "VZ249HR; 23.8” diagonal, 1920 × 1080 pixels) and its refresh rate was set at 75 Hz." }, { "text": "We used an opticalmouse, Logitech gaming mouse (G-PPD-002WLr; 1600 DPI)." }, { "text": "The mouse-cursor speed via the OS setting was set to the middle of the slider in the control displ...
list_sentences_2: [ { "text": "VZ249HR; 23.8” diagonal, 1920 × 1080 pixels) and its refresh rate was set at 75 Hz." }, { "text": "We used an optical mouse (Logitech gaming mouse," }, { "text": "G-PPD-002WLr; 1600 DPI, and the mouse-cursor speed based on the OS setting was set to the middle of the slider in the cont...
id_paragraph: _nwyDQp-7.85dN7i1zNm.00
parag_1: To this end, first studies in the context of meta-learning relied on probabilistic assumption (Baxter, 2000; Pentina & Lampert, 2014; Maurer et al., 2016; Amit & Meir, 2018; Yin et al., 2020) stating that meta-train and meta-test tasks distributions are all sampled i.i.d. from the same random distribution. This assumpti...
parag_2: To this end, first studies in the context of meta-learning relied on probabilistic assumption (Baxter, 2000; Pentina & Lampert, 2014; Maurer et al., 2016; Amit & Meir, 2018; Yin et al., 2020) stating that meta-train and meta-test tasks distributions are all sampled i.i.d. from the same random distribution. Intuitively, ...
annot_1: { "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_07" }
annot_2: null
id_source: _nwyDQp-7
id_target: 85dN7i1zNm
index_paragraph: 0
list_sentences_1: [ { "text": "To this end, first studies in the context of meta-learning relied on probabilistic assumption (Baxter, 2000; Pentina & Lampert, 2014; Maurer et al., 2016; Amit & Meir, 2018;" }, { "text": "Yin et al., 2020) stating that meta-train and meta-test tasks distributions are all sampled i.i.d." }, ...
list_sentences_2: [ { "text": "To this end, first studies in the context of meta-learning relied on probabilistic assumption (Baxter, 2000; Pentina & Lampert, 2014; Maurer et al., 2016; Amit & Meir, 2018;" }, { "text": "Yin et al., 2020) stating that meta-train and meta-test tasks distributions are all sampled i.i.d." }, ...
id_paragraph: OV5v_wBMHk.bw4cqlpLh.02
parag_1: Estimating ITE with observational data suffers from two primary issues: (1) missing counterfactuals, i . e ., we can only observe one factual outcome out of all potential outcomes; (2) treatment selection bias, i . e ., individuals have their preferences regarding treatment selection, making the population across diffe...
parag_2: Estimating ITE with observational data has two main challenges: (1) missing counterfactuals, i . e ., only one factual outcome out of all potential outcomes can be observed; (2) treatment selection bias, i . e ., individuals have their preferences for treatment selection, making units in different treatment groups hete...
annot_1: { "annotation": [ "Unusable", "Rewriting_light" ], "instruction": "", "annotator": "annotator_07" }
annot_2: null
id_source: OV5v_wBMHk
id_target: bw4cqlpLh
index_paragraph: 2
list_sentences_1: [ { "text": "Estimating ITE with observational data suffers from two primary issues: (1) missing counterfactuals, i ." }, { "text": "e ., we can only observe one factual outcome out of all potential outcomes; (2) treatment selection bias, i ." }, { "text": "e ., individuals have their preferences ...
list_sentences_2: [ { "text": "Estimating ITE with observational data has two main challenges: (1) missing counterfactuals, i ." }, { "text": "e ., only one factual outcome out of all potential outcomes can be observed; (2) treatment selection bias, i ." }, { "text": "e ., individuals have their preferences for tre...
id_paragraph: aomiOZE_m2.rxb2TiQ6bq.07
parag_1: We first give a brief view of the problem setting about deep CNN for image SR. We also observe that there exists heavy redundancy in the networks. To pursue more efficient image SR networks, we then propose structure-regularized pruning (SRP) method to compress them.
parag_2: We first present an overview of the problem setting about deep CNN for image SR. It is also observed that excessive redundancy exists in the SR deep CNNs. Then we move on to proposing our structureregularized pruning (SRP) method attempting to achieve more efficient SR networks.
annot_1: { "annotation": [ "Rewriting_heavy" ], "instruction": "Can you paraphrase the last sentence?", "annotator": "annotator_09" }
annot_2: { "annotation": [ "Rewriting_heavy" ], "instruction": "Rewrite the last sentence preferring passive voice over active.", "annotator": "annotator_04" }
id_source: aomiOZE_m2
id_target: rxb2TiQ6bq
index_paragraph: 7
list_sentences_1: [ { "text": "We first give a brief view of the problem setting about deep CNN for image SR." }, { "text": "We also observe that there exists heavy redundancy in the networks." }, { "text": "To pursue more efficient image SR networks, we then propose structure-regularized pruning (SRP) method to co...
list_sentences_2: [ { "text": "We first present an overview of the problem setting about deep CNN for image SR." }, { "text": "It is also observed that excessive redundancy exists in the SR deep CNNs." }, { "text": "Then we move on to proposing our structureregularized pruning (SRP) method attempting to achieve more...
nCTSF9BQJ.DGhBYSP_sR.02
Recently, deep learning has gained tremendous success in modeling proteins, making data-driven methods more appealing than ever (Rives et al., 2019; Jumper et al., 2021). Nevertheless, challenges exist for developing deep learning-based models to predict mutational effects on protein-protein binding. The major challeng...
Recently, deep learning has shown significant promise in modeling proteins, making data-driven approaches more attractive than ever (Rives et al., 2019; Jumper et al., 2021). However, developing deep learning-based models to predict mutational effects on protein-protein binding is challenging due to the scarcity of exp...
{ "annotation": [ "Rewriting_medium" ], "instruction": "Rewrite the following paragraph using a more formal language.", "annotator": "annotator_01" }
{ "annotation": [ "Rewriting_medium" ], "instruction": "Rewrite this paragraph for better readability.", "annotator": "annotator_07" }
nCTSF9BQJ
DGhBYSP_sR
2
[ { "text": "Recently, deep learning has gained tremendous success in modeling proteins, making data-driven methods more appealing than ever (Rives et al., 2019; Jumper et al., 2021)." }, { "text": "Nevertheless, challenges exist for developing deep learning-based models to predict mutational effects on p...
[ { "text": "Recently, deep learning has shown significant promise in modeling proteins, making data-driven approaches more attractive than ever (Rives et al., 2019; Jumper et al., 2021)." }, { "text": "However, developing deep learning-based models to predict mutational effects on protein-protein binding...
g5N2H6sr7.6J3ec8Dl3p.02
Kernel (MLG) (Kondor & Pan, 2016). In addition, we compare with four unsupervised graph-level representation learning methods: SUB2VEC (Adhikari et al., 2018), GRAPH2VEC (Narayanan et al., 2017), INFOGRAPH (Sun et al., 2020) and MVGRL (Hassani & Khasahmadi, 2020). We denote our framework using (1) GCN (Kipf & Welling, ...
Kernel (MLG) (Kondor & Pan, 2016). In addition, we compare with four unsupervised graph-level representation learning methods: SUB2VEC (Adhikari et al., 2018), GRAPH2VEC (Narayanan et al., 2017), INFOGRAPH (Sun et al., 2020) and MVGRL (Hassani & Khasahmadi, 2020). We also include the results of recent supervised graph ...
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_07" }
null
g5N2H6sr7
6J3ec8Dl3p
2
[ { "text": "Kernel (MLG) (Kondor & Pan, 2016)." }, { "text": "In addition, we compare with four unsupervised graph-level representation learning methods: SUB2VEC (Adhikari et al., 2018), GRAPH2VEC" }, { "text": "(Narayanan et al., 2017), INFOGRAPH (Sun et al., 2020) and MVGRL (Hassani & Khasahmad...
[ { "text": "Kernel (MLG) (Kondor & Pan, 2016)." }, { "text": "In addition, we compare with four unsupervised graph-level representation learning methods: SUB2VEC (Adhikari et al., 2018), GRAPH2VEC" }, { "text": "(Narayanan et al., 2017), INFOGRAPH (Sun et al., 2020) and MVGRL (Hassani & Khasahmad...
hegI87bI5S.fL6Q48sfx8.11
We defined the notch position ( Position ) as the condition. Position = Inside indicated that the notch was placed between the start area and the target, and Position = Outside indicated that the notch was placed to the right of the target. When the angle of entry to a target adjacent to a top edge with respect to the ...
We defined the notch position ( Position ) as the condition. Position = Inside indicates that the notch is placed between the start area and the target, and Position = Outside indicates that the notch is placed to the left of the target. An equivalent effect is observed at angles of entry that are lineally symmetric ab...
{ "annotation": [ "Rewriting_medium", "Content_substitution" ], "instruction": "", "annotator": "annotator_07" }
null
hegI87bI5S
fL6Q48sfx8
11
[ { "text": "We defined the notch position ( Position ) as the condition." }, { "text": "Position =" }, { "text": "Inside indicated that the notch was placed between the start area and the target, and Position = Outside indicated that the notch was placed to the right of the target." }, { ...
[ { "text": "We defined the notch position ( Position ) as the condition." }, { "text": "Position =" }, { "text": "Inside indicates that the notch is placed between the start area and the target, and Position = Outside indicates that the notch is placed to the left of the target." }, { "te...
aVemIPPM7t.-8hV3QV4L9.00
Experiments were conducted on a small number of n1-standard-96 Google Cloud Platform VM instances, with 48 CPU cores on an Intel Skylake processor and 360 GB of RAM. It takes less than a week of compute on a single n1-standard-96 instance to run all the experiments described in this paper.
Experiments were conducted on a workstation (Intel i9-7920X CPU with 64 GB of RAM), and a small number of r5.24xlarge AWS VM instances, with 48 CPU cores on an Intel Skylake processor and 768 GB of RAM. It takes less than a week of compute on a single r5.24xlarge instance to run all the experiments described in this pa...
{ "annotation": [ "Content_substitution" ], "instruction": "", "annotator": "annotator_07" }
null
aVemIPPM7t
-8hV3QV4L9
0
[ { "text": "Experiments were conducted on a small number of n1-standard-96 Google Cloud Platform VM instances, with 48 CPU cores on an Intel Skylake processor and 360 GB of RAM." }, { "text": "It takes less than a week of compute on a single n1-standard-96 instance to run all the experiments described i...
[ { "text": "Experiments were conducted on a workstation (Intel i9-7920X CPU with 64 GB of RAM), and a small number of r5.24xlarge AWS VM instances, with 48 CPU cores on an Intel Skylake processor and 768 GB of RAM." }, { "text": "It takes less than a week of compute on a single r5.24xlarge instance to ru...
SRquLaHRM4.vI2x5N-YHC.00
We solve this problem by introducing the optimal transport theory [51] and formulate the feature setsas a discrete probability distribution where each feature has an equal probability value. Furthermore,to reduce the computational cost and avoid the extra model parameters, we learn the prompts witha two-stage optimizat...
We solve this problem by introducing the optimal transport theory [50] and formulate the feature setsas a discrete probability distribution where each feature has an equal probability value. Furthermore,to reduce the computational cost and avoid the extra model parameters, we learn the prompts witha two-stage optimizat...
{ "annotation": [ "Content_deletion" ], "instruction": "Remove any unessential information in this paragraph.", "annotator": "annotator_03" }
{ "annotation": [ "Content_deletion", "Rewriting_light" ], "instruction": "Please exclude the content related to optimal transport.", "annotator": "annotator_09" }
SRquLaHRM4
vI2x5N-YHC
0
[ { "text": "We solve this problem by introducing the optimal transport theory [51] and formulate the feature setsas a discrete probability distribution where each feature has an equal probability value." }, { "text": "Furthermore,to reduce the computational cost and avoid the extra model parameters, we l...
[ { "text": "We solve this problem by introducing the optimal transport theory [50] and formulate the feature setsas a discrete probability distribution where each feature has an equal probability value." }, { "text": "Furthermore,to reduce the computational cost and avoid the extra model parameters, we l...
aomiOZE_m2.rxb2TiQ6bq.20
Model Size and Mult-Adds. Compared with recent works (e.g., MemNet, CARN, and IMDN), our SRPN-L has the least parameter number. We also provide operations number with Mult-Adds by setting the output size as 3 × 1280 × 720. Our SRPN-L operates less Mult-Adds than most compared methods. Those comparisons indicate that S...
Model Size and Mult-Adds. Our SRPN-Lite has the fewest parameter number in comparison to recent efficient SR works such as MemNet, CARN, and IMDN. The comparison in terms of MultAdds (measured when the output size is set to 3 × 1,280 × 720) is also presented. As seen, our SRPNLite costs fewer Mult-Adds than most compari...
{ "annotation": [ "Rewriting_heavy" ], "instruction": "Give me a more formal version of this paragraph", "annotator": "annotator_01" }
{ "annotation": [ "Rewriting_medium" ], "instruction": "Rephrase the text and change SRPN-L to SRPN-Lite", "annotator": "annotator_06" }
aomiOZE_m2
rxb2TiQ6bq
20
[ { "text": "Model Size and Mult-Adds." }, { "text": "Compared with recent works (e.g., MemNet, CARN, and IMDN), our SRPN-L has the least parameter number." }, { "text": "We also provide operations number with Mult-Adds by setting the output size as 3 × 1280 × 720." }, { "text": "Our SRPN...
[ { "text": "Model Size and Mult-Adds." }, { "text": "Our SRPN-Lite has the fewest parameter number in comparison to recent efficient SR works such as MemNet, CARN, and IMDN." }, { "text": "The comparison in terms of MultAdds (measured when the output size is set to 3 × 1,280 × 720) is also present...
MnewiFDvHZ.iAYttXl-uH.00
• Fixed constraints g t p x q “ g p x q , @ t, where the constraint functions are the same across the timebut they are not necessary to be known when making decision at round t . Note the setting ofknown and fixed constraints in [14, 17, 29, 33] is a special case of ours.• Adversarial constraints g t p x q , where the ...
• Fixed constraints g t p x q “ g p x q , @ t, where the constraint function is known (fixed) when makingdecision at round t as in [15, 12, 30, 26]. • Adversarial constraints g t p x q , where the constraint function g t p x q is unknown when making decision at round t and can be arbitrarily and adversarially chosen, as...
{ "annotation": [ "Concision" ], "instruction": "Make paragraph more concise", "annotator": "annotator_06" }
{ "annotation": [ "Concision" ], "instruction": "Make this paragraph shorter.", "annotator": "annotator_07" }
MnewiFDvHZ
iAYttXl-uH
0
[ { "text": "• Fixed constraints g t p x q “ g p x q , @ t, where the constraint functions are the same across the timebut they are not necessary to be known when making decision at round t . Note the setting ofknown and fixed constraints in [14, 17, 29, 33] is a special case of ours.• Adversarial constraints g t...
[ { "text": "• Fixed constraints g t p x q “ g p x q , @ t, where the constraint function is known (fixed) when makingdecision at round t as in [15, 12, 30, 26]. • Adversarial constraints g t p x q , where the constraint function g t p x q is unknown when making decision at round t and can be arbitrarily and adver...
3686sm4Cs.AJMXMDLVn.01
Results. Table 1(a) compares our SuperWeight Ensembles (SWE-HO) on the CIFAR-100 CIFAR100-C, and CIFAR-10 with prior work in efficient ensembling. Note that all the methods boost performance over a single model without requiring additional model parameters. However, our SuperWeight Ensembles outperforms all other metho...
Results. Table 1(a) compares our SuperWeight Ensembles (SWE-HO) on the CIFAR-100 CIFAR100-C, and CIFAR-10 with prior work in efficient ensembling. Note that all the methods boost performance over a single model without requiring additional model parameters. However, our SuperWeight Ensembles outperforms all other metho...
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_02" }
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_07" }
3686sm4Cs
AJMXMDLVn
1
[ { "text": "Results." }, { "text": "Table 1(a) compares our SuperWeight Ensembles (SWE-HO) on the CIFAR-100 CIFAR100-C, and CIFAR-10 with prior work in efficient ensembling." }, { "text": "Note that all the methods boost performance over a single model without requiring additional model parameter...
[ { "text": "Results." }, { "text": "Table 1(a) compares our SuperWeight Ensembles (SWE-HO) on the CIFAR-100 CIFAR100-C, and CIFAR-10 with prior work in efficient ensembling." }, { "text": "Note that all the methods boost performance over a single model without requiring additional model parameter...
OV5v_wBMHk.bw4cqlpLh.08
However, as neural network estimators mainly update parameters with stochastic gradient methods, only a subset of the representation’s distribution is accessible within each iteration. As such, a shortcut (Liuyi et al., 2018) is to calculate the group discrepancy at a stochastic mini-batch level:
However, since prevalent neural estimators mainly update parameters with stochastic gradient methods, only a fraction of the units is accessible within each iteration. A shortcut in this context is to calculate the group discrepancy at a stochastic mini-batch level:
{ "annotation": [ "Rewriting_light" ], "instruction": "check the wordings but keep the original content as much as possible", "annotator": "annotator_05" }
{ "annotation": [ "Rewriting_light" ], "instruction": "Improve the language to make it more formal.", "annotator": "annotator_07" }
OV5v_wBMHk
bw4cqlpLh
8
[ { "text": "However, as neural network estimators mainly update parameters with stochastic gradient methods, only a subset of the representation’s distribution is accessible within each iteration." }, { "text": "As such, a shortcut (Liuyi et al., 2018) is to calculate the group discrepancy at a stochasti...
[ { "text": "However, since prevalent neural estimators mainly update parameters with stochastic gradient methods, only a fraction of the units is accessible within each iteration." }, { "text": "A shortcut in this context is to calculate the group discrepancy at a stochastic mini-batch level:" } ]
5Eyr2crzI.s502diDSt.00
We also display the trade-off between inference speed and coverage from hierarchical refinement in Fig. From 16 upsampled points at the last iteration and lower, coverage performance starts to diminish while little speed gains are made. We still kept a relatively high N=64 in our model as we wanted to insure a wide cov...
We also display the trade-off between inference speed and coverage from hierarchical refinement in Fig. 7, evaluated on the Interpret multi-agent dataset with marginal MissRate 6 . The curve is obtained setting the number N of upsampled points at the last refinement iteration from 2 to 128. From N = 16 and lower, coverag...
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
null
5Eyr2crzI
s502diDSt
0
[ { "text": "We also display the trade-off between inference speed and coverage from hierarchical refinement in " }, { "text": "Fig." }, { "text": "From 16 upsampled points at the last iteration and lower, coverage performance starts to diminish while little speed gains are made." }, { "te...
[ { "text": "We also display the trade-off between inference speed and coverage from hierarchical refinement in Fig." }, { "text": "7, evaluated on the Interpret multi-agent dataset with marginal MissRate 6 . The curve is obtained setting the number N of upsampled points at the last refinement iteration fro...
atxti8SVk.3K9AmPwALM.15
Pascal: Image tag annotations. On Pascal VOC dataset, our method outperforms others by a large margin. Table 2 shows that, without additional saliency labels, our method still achieves SOTA. Compared to (Chang et al., 2020), we improves mIoU bya sizable 4 . 5% .
Pascal: Image tag annotations. Table 2 shows that, without using additional saliency labels, our method outperforms existing methods with saliency by 4 . 4% , and those without saliency by 5 .
{ "annotation": [ "Content_deletion", "Content_substitution" ], "instruction": "", "annotator": "annotator_02" }
{ "annotation": [ "Content_substitution" ], "instruction": "", "annotator": "annotator_07" }
atxti8SVk
3K9AmPwALM
15
[ { "text": "Pascal: Image tag annotations." }, { "text": "On Pascal VOC dataset, our method outperforms others by a large margin." }, { "text": "Table 2 shows that, without additional saliency labels, our method still achieves SOTA. Compared to (Chang et al., 2020), we improves mIoU bya sizable ...
[ { "text": "Pascal: Image tag annotations." }, { "text": "" }, { "text": "Table 2 shows that, without using additional saliency labels, our method outperforms existing methods with saliency by 4 . 4% , and those without saliency by 5 ." } ]
OzYyHKPyj7.O9Mk1uqXra.01
The stack of Joulin & Mikolov (2015) simulatespartial pushes and pops by making each stack element a convex combination, or “superposition,” of the elements immediately above and below it (resulting from pushing and popping, respectively). In this model, stack elements are again vectors, and 𝑎 𝑡 = ( a 𝑡 , v 𝑡 ) , ...
The stack of Joulin & Mikolov (2015) simulates a combination of partial stack actions by computing three new, separate stacks: one with all cells shifted down (push), kept the same (no-op), and shifted up (pop). The new stack is then an element-wise interpolation (“superposition”) of these three stacks. In this model, ...
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_10" }
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_03" }
OzYyHKPyj7
O9Mk1uqXra
1
[ { "text": "The stack of Joulin & Mikolov (2015) simulatespartial pushes and pops by making each stack element a convex combination, or “superposition,” of the elements immediately above and below it (resulting from pushing and popping, respectively)." }, { "text": "In this model, stack elements are aga...
[ { "text": "The stack of Joulin & Mikolov (2015) simulates a combination of partial stack actions by computing three new, separate stacks: one with all cells shifted down (push), kept the same (no-op), and shifted up (pop). The new stack is then an element-wise interpolation (“superposition”) of these three stac...
BkwlK_dPB.SJfZLu8oB.00
It is worth noting that while the sample complexity is bounded, the above result implies that its complexity varies according to problem-specific properties, which are encapsulated in the value of ˆ a and ˆ b . Intuitively, ˆ a depends on the scale of the problem such asvolume of the goal set |F RLgoal | and how complex...
It is worth noting that while the sample complexity is bounded, the above result implies that its complexity varies according to problem-specific properties, which are encapsulated in the value of ˆ a and ˆ b . Intuitively, ˆ a depends on the scale of the problem. It grows as |F RLgoal | becomes smaller or as the length...
{ "annotation": [ "Development", "Rewriting_medium" ], "instruction": "", "annotator": "annotator_04" }
{ "annotation": [ "Rewriting_medium" ], "instruction": "Rephrase the text to make it more direct and readable when necessary.", "annotator": "annotator_07" }
BkwlK_dPB
SJfZLu8oB
0
[ { "text": "It is worth noting that while the sample complexity is bounded, the above result implies that its complexity varies according to problem-specific properties, which are encapsulated in the value of ˆ a and ˆ b ." }, { "text": "Intuitively, ˆ a depends on the scale of the problem such asvolume o...
[ { "text": "It is worth noting that while the sample complexity is bounded, the above result implies that its complexity varies according to problem-specific properties, which are encapsulated in the value of ˆ a and ˆ b ." }, { "text": "Intuitively, ˆ a depends on the scale of the problem. It grows as |F...
URRc6L6nmE.yUoqIf6zGY.00
A less conservative approach consists of using the off-line data to train multiple neural networks, each for a certain region of the state space. Finally, the discontinuities of (4), (12) might be problematic and create chattering when implemented in real actuators. A continuous approximation that has shown to yield s...
A less conservative and more robust approach consists of using the off-line data to train multiple neural networks, each for a certain region of the state space; such an approach constitutes part of our future work. Finally, the discontinuities of (4), (12) might be problematic and create chattering when implemented in...
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_09" }
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
URRc6L6nmE
yUoqIf6zGY
0
[ { "text": "A less conservative approach consists of using the off-line data to train multiple neural networks, each for a certain region of the state space." }, { "text": "Finally, the discontinuities of (4), (12) might be problematic and create chattering when implemented in real actuators." }, { ...
[ { "text": "A less conservative and more robust approach consists of using the off-line data to train multiple neural networks, each for a certain region of the state space; such an approach constitutes part of our future work." }, { "text": "Finally, the discontinuities of (4), (12) might be problematic...
kAwMEYEIN.RlDWAM6qF.00
HJB equation is stable only if p is sufficiently large. Such a theoretical finding reveals that the widelyused L 2 loss is not suitable for training PINN on high-dimensional HJB equations, while L ∞ loss is abetter choice. The theory also inspires us to develop a novel PINN training algorithm to minimize the L ∞ loss f...
HJB equation is stable only if p is sufficiently large. Such a theoretical finding reveals that the widelyused L 2 loss is not suitable for training PINN on high-dimensional HJB equations, while L ∞ loss isa better choice. The theory also inspires us to develop a novel PINN training algorithm to minimize the L ∞ loss f...
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_02" }
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_07" }
kAwMEYEIN
RlDWAM6qF
0
[ { "text": "HJB equation is stable only if p is sufficiently large." }, { "text": "Such a theoretical finding reveals that the widelyused L 2 loss is not suitable for training PINN on high-dimensional HJB equations, while L ∞ loss is abetter choice." }, { "text": "The theory also inspires us to d...
[ { "text": "HJB equation is stable only if p is sufficiently large." }, { "text": "Such a theoretical finding reveals that the widelyused L 2 loss is not suitable for training PINN on high-dimensional HJB equations, while L ∞ loss isa better choice." }, { "text": "The theory also inspires us to d...
YCmehaMzt.kHwUIOFr_.00
In addition, we combine EM and our proposed OPS together to craft a kind of composed unlearnable examples. Since OPS only modified a single pixel, after being applied to EM perturbed images, the imperceptibility can still be guaranteed. We evaluate the effectiveness of this composing method under different training st...
Naturally, for the purpose of complementing each other, we can combine EM and our proposed OPS together to craft a kind of ensemble shortcut. Since OPS only modified a single pixel, after being applied to EM perturbed images, the imperceptibility can still be guaranteed. We evaluate the effectiveness of this ensemble m...
{ "annotation": [ "Rewriting_medium" ], "instruction": "Change the idea of \"composition\" to \"ensemble\" if this paragraph. Fix any spelling mistake.", "annotator": "annotator_03" }
{ "annotation": [ "Rewriting_medium" ], "instruction": "Rewrite the first sentence. Improve English in this paragraph.", "annotator": "annotator_07" }
YCmehaMzt
kHwUIOFr_
0
[ { "text": "In addition, we combine EM and our proposed OPS together to craft a kind of composed unlearnable examples." }, { "text": "Since OPS only modified a single pixel, after being applied to EM perturbed images, the imperceptibility can still be guaranteed." }, { "text": "We evaluate the e...
[ { "text": "Naturally, for the purpose of complementing each other, we can combine EM and our proposed OPS together to craft a kind of ensemble shortcut." }, { "text": "Since OPS only modified a single pixel, after being applied to EM perturbed images, the imperceptibility can still be guaranteed." }, ...
NcdK3bdqnA.kF_TmXY8G0.00
The results in Table 6 demonstrate that adopting image-specific linear projections outperforms directly sharing the contextual projections. The two types of image-specific linear projections do not lead to substantial performance differences. Thus, we take the strategy of only adding additional linear bias for augmente...
The results in Table 6 demonstrate that adopting image-specific projection bias outperforms directly sharing the contextual projection bias. Introducing additional image-specific linear projection weights does not lead to further performance increase. Thus, we take the strategy of only adding additional linear bias for...
{ "annotation": [ "Rewriting_medium", "Content_substitution" ], "instruction": "", "annotator": "annotator_07" }
null
NcdK3bdqnA
kF_TmXY8G0
0
[ { "text": "The results in Table 6 demonstrate that adopting image-specific linear projections outperforms directly sharing the contextual projections." }, { "text": "The two types of image-specific linear projections do not lead to substantial performance differences." }, { "text": "Thus, we tak...
[ { "text": "The results in Table 6 demonstrate that adopting image-specific projection bias outperforms directly sharing the contextual projection bias." }, { "text": "Introducing additional image-specific linear projection weights does not lead to further performance increase." }, { "text": "Thu...
mS4xvgSiEH.i-a3xp3usm.00
The central novelty of our work is in realizing that by learning a discrete representation, we can perform structured search on two levels. To ensure that the discrete latent space is necessary, we introduce two ablative baselines, which replace the VQ-VAE with either a generic autoencoder or a VAE.
The central novelty of our work is in realizing that by learning a discrete representation, we can perform structured search on two levels. We introduce two ablative baselines, which replace the VQ-VAE with either a generic autoencoder or a VAE.
{ "annotation": [ "Concision" ], "instruction": "Make this paragraph more concise.", "annotator": "annotator_02" }
{ "annotation": [ "Concision" ], "instruction": "Make this paragraph shorter.", "annotator": "annotator_07" }
mS4xvgSiEH
i-a3xp3usm
0
[ { "text": "The central novelty of our work is in realizing that by learning a discrete representation, we can perform structured search on two levels." }, { "text": "To ensure that the discrete latent space is necessary, we introduce two ablative baselines, which replace the VQ-VAE with either a generic...
[ { "text": "The central novelty of our work is in realizing that by learning a discrete representation, we can perform structured search on two levels." }, { "text": "We introduce two ablative baselines, which replace the VQ-VAE with either a generic autoencoder or a VAE." } ]
g5N2H6sr7.6J3ec8Dl3p.04
Running time We observe that our model runs significantly faster than INFOGRAPH and MVGRL, i.e., it is 10 times faster than INFOGRAPH and 15 times faster than MVGRL on PROTEINS. This is because our model neglects the tedious process of negative sampling used in both INFOGRAPH and MVGRL.
Running time We observe that our model runs significantly faster than INFOGRAPH and MVGRL. Our model takes 10s to train one epoch of PORTEINS on Tesla P40 24G, while INFOGRAPH needs 127s and MVGRL needs 193s. This is because our model neglects the tedious process of negative sampling used in both INFOGRAPH and MVGRL.
{ "annotation": [ "Rewriting_medium", "Development" ], "instruction": "", "annotator": "annotator_07" }
null
g5N2H6sr7
6J3ec8Dl3p
4
[ { "text": "Running time We observe that our model runs significantly faster than INFOGRAPH and MVGRL, i.e., it is 10 times faster than INFOGRAPH and 15 times faster than MVGRL on PROTEINS." }, { "text": "This is because our model neglects the tedious process of negative sampling used in both INFOGRAPH a...
[ { "text": "Running time We observe that our model runs significantly faster than INFOGRAPH and MVGRL. Our model takes 10s to train one epoch of PORTEINS on Tesla P40 24G, while INFOGRAPH needs 127s and MVGRL needs 193s." }, { "text": "This is because our model neglects the tedious process of negative sam...
aomiOZE_m2.rxb2TiQ6bq.06
Neural Network Pruning. Pruning aims to remove parameters in a neural network without compromising its performance seriously (Reed, 1993; Sze et al., 2017). It mainly falls into two groups: filter pruning (a.k.a. structured pruning) 1 and weight-element pruning (a.k.a. unstructured pruning). The former aims to remove we...
Neural Network Pruning. Network pruning aims to eliminate redundant parameters in a neural network without compromising its performance seriously (Reed, 1993; Sze et al., 2017). The methodology of pruning mainly falls into two groups: filter pruning (or more generally known as structured pruning) * and weight-element pr...
{ "annotation": [ "Development", "Concision" ], "instruction": "", "annotator": "annotator_09" }
{ "annotation": [ "Concision", "Rewriting_medium" ], "instruction": "Rewrite the last sentence to make it more concise by removing shortcomings of other work.", "annotator": "annotator_04" }
aomiOZE_m2
rxb2TiQ6bq
6
[ { "text": "Neural Network Pruning." }, { "text": "Pruning aims to remove parameters in a neural network without compromising its performance seriously (Reed, 1993; Sze et al., 2017)." }, { "text": "It mainly falls into two groups: filter pruning (a.k.a. structured pruning) 1 and weight-element pr...
[ { "text": "Neural Network Pruning." }, { "text": "Network pruning aims to eliminate redundant parameters in a neural network without compromising its performance seriously (Reed, 1993; Sze et al., 2017)." }, { "text": "The methodology of pruning mainly falls into two groups: filter pruning (or mo...
7_CwM-IzWd.zcm6f5HDI.21
Improved generalization performance We compare the generalization ability of the three algorithms (guided, random and vanilla). For each algorithm, we train three repetitions of each model using the same learning rate: 0.01, 0.1 and 0.01 for Colored-and-gray-MNIST, ModelNet40 and
Improved generalization performance We compare the generalization ability of multi-modal DNNs trained by the three algorithms (guided, random and vanilla) and the RUBi learning strategy (Cadene et al., 2019). For each algorithm, we train each model three times with the same learning rate. We use 0.01, 0.1 and 0.01 as l...
{ "annotation": [ "Development", "Rewriting_medium" ], "instruction": "", "annotator": "annotator_06" }
{ "annotation": [ "Development", "Rewriting_medium" ], "instruction": "", "annotator": "annotator_08" }
7_CwM-IzWd
zcm6f5HDI
21
[ { "text": "Improved generalization performance We compare the generalization ability of the three algorithms (guided, random and vanilla)." }, { "text": "For each algorithm, we train three repetitions of each model using the same learning rate: 0.01, 0.1 and 0.01 for Colored-and-gray-MNIST, ModelNet40...
[ { "text": "Improved generalization performance We compare the generalization ability of multi-modal DNNs trained by the three algorithms (guided, random and vanilla) and the RUBi learning strategy (Cadene et al., 2019)." }, { "text": "For each algorithm, we train each model three times with the same lea...
sIqSoZ9KiO.KLlOZMoJ9G.01
To study its performance impact in a more constrained setting, SDN was paired with a VAE architecturally much simpler than IAF-VAE. Apart from the implementation simplicity and shorter training time, a non-hierarchical VAE is more suitable for representation learning – there is a single stochastic vector and not a hie...
To study its performance impact in a more constrained setting, SDN was paired with a VAE architecturally much simpler than IAF-VAE. Apart from the implementation simplicity and shorter training time, non-hierarchical VAE is more suitable for disentangled representation learning, at least in the sense of (Higgins et al....
{ "annotation": [ "Rewriting_medium" ], "instruction": "Make sentence precise.", "annotator": "annotator_08" }
{ "annotation": [ "Rewriting_light" ], "instruction": "Rephrase the second sentence, mostly focusing on the second half.", "annotator": "annotator_07" }
sIqSoZ9KiO
KLlOZMoJ9G
1
[ { "text": "To study its performance impact in a more constrained setting, SDN was paired with a VAE architecturally much simpler than IAF-VAE." }, { "text": "Apart from the implementation simplicity and shorter training time, a non-hierarchical VAE is more suitable for representation learning – there i...
[ { "text": "To study its performance impact in a more constrained setting, SDN was paired with a VAE architecturally much simpler than IAF-VAE." }, { "text": "Apart from the implementation simplicity and shorter training time, non-hierarchical VAE is more suitable for disentangled representation learning...
q4rMz7ZfFG.uyxGiQeMP.01
We give two cases of the GraphCodeBERT output for this task in Figure 6. In the first example, the model successfully finds Python source code that correctly matches the sementic of the query “Scans through a string for substrings matched some patterns”. The source code finds all substrings by calling re.findall () build-...
We use GraphCodeBERT to separately encode query and source code with data flow, and calculate inner product of their representations of the special token [ CLS ] as relevance scores to rank candidate codes. In the fine-turning step, we set the learning rate as 2e-5, the batch size as 32, the max sequence length of querie...
{ "annotation": [ "Rewriting_heavy", "Content_substitution" ], "instruction": "", "annotator": "annotator_10" }
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_03" }
q4rMz7ZfFG
uyxGiQeMP
1
[ { "text": "We give two cases of the GraphCodeBERT output for this task in Figure 6. In the first example, the model successfully finds Python source code that correctly matches the sementic of the query “Scans through a string for substrings matched some patterns”." }, { "text": "The source code finds all ...
[ { "text": "We use GraphCodeBERT to separately encode query and source code with data flow, and calculate inner product of their representations of the special token [ CLS ] as relevance scores to rank candidate codes." }, { "text": "In the fine-turning step, we set the learning rate as 2e-5, the batch siz...

ParaRev: Building a dataset for Scientific Paragraph Revision annotated with revision instruction

About

This repository contains ParaRev, a dataset of 48k revised scientific paragraphs with an evaluation subset of 641 paragraphs manually annotated with revision instructions. The dataset is extracted from the CASIMIR corpus; the extraction and annotation process is described in:

ParaRev : Building a dataset for Scientific Paragraph Revision annotated with revision instruction (Jourdan et al., WRAICOGS 2025)

Content

The dataset is composed of two subsets:

  • pararev_full: The full dataset, composed of 48k pairs of revised paragraphs without annotations.
  • pararev_annot_subset: The manually annotated subset, composed of 641 paragraphs; each paragraph has two annotations. These paragraphs are also included in pararev_full.
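
Both subsets can be loaded with the Hugging Face datasets library. The sketch below is a minimal example that assumes the configuration names match the subset names above; "<repo-id>" is a placeholder for this dataset's actual repository id.

# Minimal loading sketch using the Hugging Face `datasets` library.
# Assumptions: the config names mirror the subset names listed above,
# and "<repo-id>" stands in for this dataset's repository id.
from datasets import load_dataset

full = load_dataset("<repo-id>", name="pararev_full")
annot = load_dataset("<repo-id>", name="pararev_annot_subset")
print(full)
print(annot)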

The data in pararev_full follow this distribution:

Distribution   # chars Src   # chars Tgt   # words Src   # words Tgt   # sents Src   # sents Tgt   % words deleted   % words added   Lev dist
Min                     47            48             7             7             1             1                 0               0          0
Avg                 680.16        715.58        125.54        132.99          5.26          5.50             21.54           25.63     194.80
Max                   5202          5588          1003          1147            70            68             96.51           97.90       2265
Avg                 374.11        394.20         69.04         73.32          3.07          3.19             18.19           18.15     160.10
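
The word-level statistics above can be reproduced per pair along the lines of the sketch below, assuming whitespace tokenization (the exact tokenizer behind the card's numbers is not specified here); the Lev dist column is the character-level Levenshtein distance and would be computed analogously.

# Sketch of the "% words deleted" / "% words added" columns for one pair,
# assuming whitespace tokenization; `src` is parag_1 and `tgt` is parag_2.
import difflib

def edit_stats(src: str, tgt: str) -> tuple[float, float]:
    src_w, tgt_w = src.split(), tgt.split()
    matcher = difflib.SequenceMatcher(a=src_w, b=tgt_w)
    # Words kept unchanged between source and target.
    kept = sum(block.size for block in matcher.get_matching_blocks())
    deleted = 100.0 * (len(src_w) - kept) / max(len(src_w), 1)
    added = 100.0 * (len(tgt_w) - kept) / max(len(tgt_w), 1)
    return deleted, added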

The data in pararev_annot_subset are labelled with the following distribution:

Label              Prct %
Rewriting light     15.44
Rewriting medium    14.27
Rewriting heavy      4.13
Development         19.07
Content add         12.99
Content subs         6.47
Concision           12.83
Content del          4.72
Unusable            10.06
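
These percentages can be recomputed from the annotated subset roughly as follows (field names as in the data fields list below; the split name "train" and the placeholder repository id are assumptions):

# Sketch: recount label occurrences over both annotations of each pair.
from collections import Counter
from datasets import load_dataset

annot = load_dataset("<repo-id>", name="pararev_annot_subset")
counts = Counter()
for ex in annot["train"]:
    for a in (ex["annot_1"], ex["annot_2"]):
        if a:  # an annotation slot may be null
            counts.update(a["annotation"])
total = sum(counts.values())
for label, c in counts.most_common():
    print(f"{label}: {100 * c / total:.2f}%")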

The following data fields are available:

  • id_paragraph: unique identifier of the paragraph pair, built from id_source, id_target, and index_paragraph
  • parag_1 / parag_2: the paragraph before and after revision
  • annot_1 / annot_2: manual annotations when available (list of revision labels, optional free-text revision instruction, annotator id); null otherwise
  • id_source / id_target: identifiers of the source and revised versions of the document
  • index_paragraph: index of the paragraph within the document
  • list_sentences_1 / list_sentences_2: sentence-level segmentation of parag_1 and parag_2
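
As an illustration, one annotated pair can be inspected as follows (same placeholder repository id and assumed split name as in the sketches above):

# Sketch: look at a single annotated example.
from datasets import load_dataset

annot = load_dataset("<repo-id>", name="pararev_annot_subset")
ex = annot["train"][0]   # split name "train" is an assumption
print(ex["parag_1"])     # paragraph before revision
print(ex["parag_2"])     # paragraph after revision
print(ex["annot_1"])     # e.g. {"annotation": [...], "instruction": "...", "annotator": "..."}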

Please cite this work as:

@inproceedings{jourdan-etal-2025-pararev,
    title = "{P}ara{R}ev : Building a dataset for Scientific Paragraph Revision annotated with revision instruction",
    author = "Jourdan, L{\'e}ane  and
      Boudin, Florian  and
      Dufour, Richard  and
      Hernandez, Nicolas  and
      Aizawa, Akiko",
    editor = "Zock, Michael  and
      Inui, Kentaro  and
      Yuan, Zheng",
    booktitle = "Proceedings of the First Workshop on Writing Aids at the Crossroads of AI, Cognitive Science and NLP (WRAICOGS 2025)",
    month = jan,
    year = "2025",
    address = "Abu Dhabi, UAE",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2025.wraicogs-1.4/",
    pages = "35--44",
    abstract = "Revision is a crucial step in scientific writing, where authors refine their work to improve clarity, structure, and academic quality. Existing approaches to automated writing assistance often focus on sentence-level revisions, which fail to capture the broader context needed for effective modification. In this paper, we explore the impact of shifting from sentence-level to paragraph-level scope for the task of scientific text revision. The paragraph level definition of the task allows for more meaningful changes, and is guided by detailed revision instructions rather than general ones. To support this task, we introduce ParaRev, the first dataset of revised scientific paragraphs with an evaluation subset manually annotated with revision instructions. Our experiments demonstrate that using detailed instructions significantly improves the quality of automated revisions compared to general approaches, no matter the model or the metric considered."
}