Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown below.
The dataset generation failed because of a cast error
Error code:   DatasetGenerationCastError
Exception:    DatasetGenerationCastError
Message:      An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 53 new columns ({'col_2,480', 'col_xiaohua', 'col_alla', 'col_cj', 'col_â', 'col_tm', 'col_715', 'col_andre', 'col_6:', 'col_dependency-driven', 'col_it', 'col_is', 'col_y', 'col_dynamic', 'col_robert', 'col_s2d', 'col_content', 'col_dt', 'col_10', 'col_local', 'col_daniel', 'col_vasin', 'col_pm', 'col_appendix:', 'col_agreement', 'col_computational', 'col_90', 'col_an', 'col_to', 'col_625', 'col_d2:', 'col_x', 'col_as', 'col_i', 'col_joint', 'col_conference', 'col_a', 'col_empirical', 'col_30', 'col_francis', 'col_h', 'col_60', 'col_na-rae', 'col_b', 'col_koby', 'col_joel', 'col_north', 'col_pl', 'col_a1', 'col_b1', 'col_sebastian', 'col_kevin', 'col_p3'}) and 14 missing columns ({'col_p', 'col_for', 'col_other', 'col_25%', 'col_he', 'col_in', 'col_arabic', 'col_latin', 'col_10%', 'col_3+', 'col_devanagari', 'col_appendix', 'col_cyrillic', 'col_50%'}).

This happened while the csv dataset builder was generating data using

hf://datasets/iaadlab/FutureGen/ACL_13_updated.csv (at revision 9d0d4d25b86d544f247161375abc3e9b4af1f385)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
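The "multiple configurations" fix suggested here can be sketched in the dataset card's YAML header. A hypothetical fragment, assuming the Hub's `configs` syntax from the linked docs; the config names are illustrative:

```yaml
configs:
  - config_name: acl_2012
    data_files: "ACL_2012.csv"
  - config_name: acl_2013
    data_files: "ACL_13_updated.csv"
```

Each config is then loaded and validated separately, so files with differing column sets no longer trigger a cast error against a single shared schema.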
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1831, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 644, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2272, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2218, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              col_1: string
              col_2: string
              col_3: string
              col_4: string
              col_5: string
              col_6: string
              col_acknowledgments: string
              abstract: string
              authors: string
              id: string
              references: string
              col_acknowledgements: string
              col_7: string
              col_8: string
              col_9: string
              col_acknowledgement: string
              col_acknowledgment: string
              col_10: string
              col_i: string
              col_joint: string
              col_conference: string
              col_empirical: string
              col_north: string
              col_computational: string
              col_â: string
              col_a: string
              col_b1: string
              col_715: string
              col_625: string
              col_30: string
              col_60: string
              col_90: string
              col_an: string
              col_francis: string
              col_koby: string
              col_daniel: string
              col_robert: string
              col_na-rae: string
              col_kevin: string
              col_xiaohua: string
              col_andre: string
              col_y: string
              col_vasin: string
              col_sebastian: string
              col_alla: string
              col_joel: string
              col_2,480: string
              col_s2d: string
              col_cj: string
              col_is: string
              col_pm: string
              col_pl: string
              col_appendix:: string
              col_h: string
              col_d2:: string
              col_6:: string
              col_to: string
              col_x: string
              col_agreement: string
              col_a1: string
              col_p3: string
              col_dt: string
              col_it: string
              col_b: string
              col_content: string
              col_local: string
              col_dependency-driven: string
              col_dynamic: string
              col_tm: string
              col_as: string
              Concatenated Text: string
              Future_Work: string
              -- schema metadata --
              pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 8697
              to
              {'col_1': Value('string'), 'col_2': Value('string'), 'col_3': Value('string'), 'col_4': Value('string'), 'col_5': Value('string'), 'abstract': Value('string'), 'authors': Value('string'), 'id': Value('string'), 'references': Value('string'), 'col_6': Value('string'), 'col_acknowledgements': Value('string'), 'col_7': Value('string'), 'col_acknowledgments': Value('string'), 'col_8': Value('string'), 'col_acknowledgement': Value('string'), 'col_10%': Value('string'), 'col_25%': Value('string'), 'col_50%': Value('string'), 'col_9': Value('string'), 'col_in': Value('string'), 'col_he': Value('string'), 'col_p': Value('string'), 'col_appendix': Value('string'), 'col_for': Value('string'), 'col_cyrillic': Value('string'), 'col_arabic': Value('string'), 'col_other': Value('string'), 'col_latin': Value('string'), 'col_devanagari': Value('string'), 'col_3+': Value('string'), 'col_acknowledgment': Value('string'), 'Concatenated Text': Value('string'), 'Future_Work': Value('string')}
              because column names don't match
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1456, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1055, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 894, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 970, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1702, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1833, in _prepare_split_single
                  raise DatasetGenerationCastError.from_cast_error(
              datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
              


Preview columns and inferred types:

  • string: col_1, col_2, col_3, col_4, col_5, col_6, col_7, abstract, authors, id, references, col_acknowledgements, Concatenated Text, Future_Work
  • null (empty in this preview): col_8, col_9, col_acknowledgment, col_acknowledgments, col_acknowledgement, col_10%, col_25%, col_50%, col_in, col_he, col_p, col_appendix, col_for, col_cyrillic, col_arabic, col_other, col_latin, col_devanagari, col_3+
The preview shows one row per paper; most of the stray col_* columns are null. For example, the first row (Gesmundo, Satta and Henderson, on Cube Pruning) contains:

  • col_1–col_5: section texts, e.g. "1 introduction :Since its first appearance in (Huang and Chiang, 2005), the Cube Pruning (CP) algorithm has quickly gained popularity in statistical natural language processing. ..." through "5 experiments :We implement Linear CP (LCP) on top of Cdec (Dyer et al., 2010), a widely-used hierarchical MT system ..."
  • abstract: "We propose a novel heuristic algorithm for Cube Pruning running in linear time in the beam size. Empirically, we show a gain in running time of a standard machine translation system, at a small loss in accuracy."
  • authors: [{"affiliations": [], "name": "Andrea Gesmundo"}, {"affiliations": [], "name": "Giorgio Satta"}, {"affiliations": [], "name": "James Henderson"}]
  • id: SP:314f6dada911571f98ada4fc471cd0d2da046314
  • references: a JSON list of cited works (authors, title, venue, year)
  • Concatenated Text: the full paper text; Future_Work: null for this row

Nine further rows follow the same pattern, covering fine-grained information status classification (Markert, Hou and Strube), an affective lexicon of stereotypes (Veale), SAT-style sentence completion (Zweig et al.), child language development metrics (Sahakian et al.), Bayesian tree insertion grammars (Yamangil et al.), multi-view domain adaptation (Yang, Gao et al.), sparse mixed-effects models for historical analysis (Wang et al.), dependency hashing for CCG reranking (Ng et al.), and multilingual dependency parsing (Naseem et al.).
End of preview.

📚 ACL & NeurIPS Future Work Dataset

This dataset contains curated "Future Work" sections from ACL and NeurIPS research papers. It is designed to support tasks like scientific document understanding, future work generation, citation intent analysis, and summarization of research directions.

📦 Dataset Details

🔍 Dataset Description

This dataset includes:

  • ACL_2012.csv to ACL_2024.csv: Tabular data where each row is a paper and each column represents a paper section (e.g., Abstract, Introduction, Future Work).
  • NeurIPS_2021.csv, NeurIPS_2022.csv: Same format as the ACL .csv files.
  • ACL_2023.json, ACL_2024.json: Each file contains per-paper parsed output, including section headings and content; the "Future Work" section is extracted and included when found.

Each record is either a paper (in .csv) or a structured section-by-section breakdown of a paper (in .json). Papers that do not contain a "Future Work" section are excluded from the .json files.

  • Languages: English
  • Total Papers (after filtering): Varies by year (see Statistics section on HF for breakdown)
  • Data format: .csv, .json

✍️ Curated by

Ibrahim Al Azher, Northern Illinois University, DATALab

📑 Dataset Structure

For .csv Files:

Each file contains:

  • Columns: 'title', 'abstract', 'introduction', 'related work', ..., 'future work'
  • Rows: One paper per row
  • Year-specific files (ACL_2012.csv to ACL_2024.csv)
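Because the yearly CSVs do not all share the same columns (the cause of the viewer's cast error above), one way to combine them is to align on the union of columns before concatenating. A minimal sketch with pandas, using small in-memory frames with hypothetical column subsets rather than the real files:

```python
import pandas as pd

# Sketch only (not the maintainers' code): two yearly frames whose column
# sets diverge, mimicking the cast error reported by the viewer. The column
# names below are hypothetical stand-ins for the real per-year columns.
df_2012 = pd.DataFrame({"abstract": ["a1"], "Future_Work": ["f1"], "col_1": ["x"]})
df_2013 = pd.DataFrame({"abstract": ["a2"], "Future_Work": ["f2"], "col_a1": ["y"]})

# Align on the union of columns, then concatenate; missing cells become NaN
# instead of raising a schema mismatch.
all_columns = sorted(set(df_2012.columns) | set(df_2013.columns))
aligned = pd.concat(
    [df.reindex(columns=all_columns) for df in (df_2012, df_2013)],
    ignore_index=True,
)
```

With the real files, one would read each year's CSV via pd.read_csv and apply the same union-of-columns alignment before concatenating.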

For .json Files:

Each key is a paper ID (e.g., "ACL23_1.pdf") and its value includes:

{
  "abstractText": "string",
  "sections": [
    { "heading": "Introduction", "text": "..." },
    ...
    { "heading": "Future Work", "text": "..." }
  ],
  "title": "string",
  "year": "int"
}
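Given this structure, pulling out the "Future Work" text for a paper is a short scan over its sections. A sketch, where the record is a hypothetical example following the schema above, not real data:

```python
# Hypothetical record shaped like the documented .json schema.
record = {
    "abstractText": "A short abstract.",
    "sections": [
        {"heading": "Introduction", "text": "We study future-work generation."},
        {"heading": "Future Work", "text": "We plan to extend the model to more domains."},
    ],
    "title": "Example Paper",
    "year": 2023,
}

def future_work(paper):
    """Return the text of the 'Future Work' section, or None if absent."""
    for section in paper.get("sections", []):
        if section.get("heading", "").strip().lower() == "future work":
            return section["text"]
    return None

fw = future_work(record)  # "We plan to extend the model to more domains."
```

Matching on a normalized heading (strip + lowercase) keeps the lookup robust to minor heading variations such as "future work" vs. "Future Work".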
Downloads last month: 30