Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed because of a cast error
Error code:   DatasetGenerationCastError
Exception:    DatasetGenerationCastError
Message:      An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 25 new columns ({'extracted_with_lim_Div7', 'Concatenated Text', 'review_summary_Div6', 'col_6', 'extracted_with_lim_Div6', 'extracted_with_lim_Div3', 'col_7', 'extracted_with_lim_Div4', 'review_summary_Div5', 'review_summary_Div4', 'forum', 'col_5', 'extracted_with_lim_Div5', 'extracted_with_lim_Div2', 'col_3', 'review_summary_Div3', 'col_8', 'review_summary_Div2', 'review_summary_Div7', 'col_4', 'col_2', 'Lim_word_count', 'Abstract', 'keywords', 'Concatenated_Limitations'}) and 6 missing columns ({'id', 'Future_Work', 'references', 'authors', 'abstract', 'category'}).
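The "25 new / 6 missing" counts can be reproduced directly from the two column lists in this report: a plain set difference between the offending file's header and the reference schema. A self-contained sketch (both column lists are copied from the cast error in the traceback below):

```python
# Reference schema the builder inferred from the first data file
# (the 14 columns listed in the error's target schema).
reference = {
    "title", "authors", "id", "abstract", "Introduction", "Related_Work",
    "Methodology", "Dataset", "Conclusion", "Experiment_and_Results",
    "Future_Work", "Limitation", "references", "category",
}

# Header of the offending file, "NeurIps 21_22.csv" (33 columns).
offending = {
    "forum", "title", "keywords", "Abstract", "Introduction", "Related_Work",
    "Methodology", "Dataset", "Conclusion", "Experiment_and_Results",
    "col_2", "col_3", "col_4", "col_5", "col_6", "col_7", "col_8",
    "Concatenated Text", "Limitation",
    "extracted_with_lim_Div2", "extracted_with_lim_Div3",
    "extracted_with_lim_Div4", "extracted_with_lim_Div5",
    "extracted_with_lim_Div6", "extracted_with_lim_Div7",
    "review_summary_Div2", "review_summary_Div3", "review_summary_Div4",
    "review_summary_Div5", "review_summary_Div6", "review_summary_Div7",
    "Lim_word_count", "Concatenated_Limitations",
}

new_columns = offending - reference      # columns the builder has never seen
missing_columns = reference - offending  # columns the schema expects but lacks
print(len(new_columns), len(missing_columns))  # 25 6
```

Note that `Abstract` lands in the "new" set while `abstract` lands in the "missing" set: the cast is case-sensitive, so even a capitalization mismatch counts as two different columns.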

This happened while the csv dataset builder was generating data using

hf://datasets/datalab2/Limitation_generation_dataset/NeurIps 21_22.csv (at revision b1220f19db7f0a7c6c64e9c22cb6e77e37c0c391)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
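One way to take the first option (editing the data files to match) is to normalize each CSV to the reference schema before re-uploading. A minimal sketch with Python's standard `csv` module; the row data is made up, and the `Abstract` → `abstract` rename is an assumption (whether those two columns really hold the same field must be checked against the actual data):

```python
import csv
import io

# Reference schema expected by the builder (taken from the error above).
REFERENCE = ["title", "authors", "id", "abstract", "Introduction",
             "Related_Work", "Methodology", "Dataset", "Conclusion",
             "Experiment_and_Results", "Future_Work", "Limitation",
             "references", "category"]

# Stand-in for the offending CSV: extra columns, differently cased "Abstract".
raw = io.StringIO(
    "forum,title,keywords,Abstract,Limitation\n"
    "xyz,Some Paper,llm;quantization,An abstract.,A limitation.\n"
)

normalized = []
for rec in csv.DictReader(raw):
    # Rename columns that plausibly map onto the reference schema (assumption),
    rec = {("abstract" if k == "Abstract" else k): v for k, v in rec.items()}
    # then keep only reference columns, filling the missing ones with "".
    normalized.append({col: rec.get(col, "") for col in REFERENCE})

print(list(normalized[0]) == REFERENCE)  # True
```

In practice the loop would read and rewrite each real CSV file; dropping the 25 extra columns loses information, so splitting the files into separate configurations (the second option) may be preferable.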
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1871, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 643, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2293, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2241, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              forum: string
              title: string
              keywords: string
              Abstract: string
              Introduction: string
              Related_Work: string
              Methodology: string
              Dataset: string
              Conclusion: string
              Experiment_and_Results: string
              col_2: string
              col_3: string
              col_4: string
              col_5: string
              col_6: string
              col_7: string
              col_8: string
              Concatenated Text: string
              Limitation: string
              extracted_with_lim_Div2: string
              extracted_with_lim_Div3: string
              extracted_with_lim_Div4: string
              extracted_with_lim_Div5: string
              extracted_with_lim_Div6: string
              extracted_with_lim_Div7: string
              review_summary_Div2: string
              review_summary_Div3: string
              review_summary_Div4: string
              review_summary_Div5: string
              review_summary_Div6: string
              review_summary_Div7: string
              Lim_word_count: int64
              Concatenated_Limitations: string
              -- schema metadata --
              pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 4463
              to
              {'title': Value(dtype='string', id=None), 'authors': Value(dtype='string', id=None), 'id': Value(dtype='string', id=None), 'abstract': Value(dtype='string', id=None), 'Introduction': Value(dtype='string', id=None), 'Related_Work': Value(dtype='string', id=None), 'Methodology': Value(dtype='string', id=None), 'Dataset': Value(dtype='string', id=None), 'Conclusion': Value(dtype='string', id=None), 'Experiment_and_Results': Value(dtype='string', id=None), 'Future_Work': Value(dtype='string', id=None), 'Limitation': Value(dtype='string', id=None), 'references': Value(dtype='string', id=None), 'category': Value(dtype='string', id=None)}
              because column names don't match
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1433, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1050, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 925, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1001, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1742, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1873, in _prepare_split_single
                  raise DatasetGenerationCastError.from_cast_error(
              datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
              
              All the data files must have the same columns, but at some point there are 25 new columns ({'extracted_with_lim_Div7', 'Concatenated Text', 'review_summary_Div6', 'col_6', 'extracted_with_lim_Div6', 'extracted_with_lim_Div3', 'col_7', 'extracted_with_lim_Div4', 'review_summary_Div5', 'review_summary_Div4', 'forum', 'col_5', 'extracted_with_lim_Div5', 'extracted_with_lim_Div2', 'col_3', 'review_summary_Div3', 'col_8', 'review_summary_Div2', 'review_summary_Div7', 'col_4', 'col_2', 'Lim_word_count', 'Abstract', 'keywords', 'Concatenated_Limitations'}) and 6 missing columns ({'id', 'Future_Work', 'references', 'authors', 'abstract', 'category'}).
              
              This happened while the csv dataset builder was generating data using
              
              hf://datasets/datalab2/Limitation_generation_dataset/NeurIps 21_22.csv (at revision b1220f19db7f0a7c6c64e9c22cb6e77e37c0c391)
              
              Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
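Following the manual-configuration docs linked above, the mismatched file can instead be isolated in its own configuration via the `configs` field of the dataset's README YAML header. A hedged sketch: only `NeurIps 21_22.csv` is named in the error, so the other config name and file name here are illustrative assumptions.

```yaml
configs:
- config_name: acl_23              # illustrative name for the matching files
  data_files:
  - "acl_23.csv"                   # hypothetical file name
- config_name: neurips_21_22       # isolates the file named in the error
  data_files:
  - "NeurIps 21_22.csv"
```

Each configuration is then built with its own schema, so the cast error no longer occurs.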


Columns (all of type string): title, authors, id, abstract, Introduction, Related_Work, Methodology, Dataset, Conclusion, Experiment_and_Results, Future_Work, Limitation, references, category
Quantized Side Tuning: Fast and Memory-Efficient Tuning of Quantized Large Language Models
[{"affiliations": [], "name": "Zhengxin Zhang"}, {"affiliations": [], "name": "Dan Zhao"}, {"affiliations": [], "name": "Xupeng Miao"}, {"affiliations": [], "name": "Gabriele Oliaro"}, {"affiliations": [], "name": "Zhihao Zhang"}, {"affiliations": [], "name": "Qing Li"}, {"affiliations": [], "name": "Yong Jiang"}, {"af...
SP:5360da9e3f49cc7e048171d345a89bf468de8c61
Finetuning large language models (LLMs) has been empirically effective on a variety of downstream tasks. Existing approaches to finetuning an LLM either focus on parameter-efficient finetuning, which only updates a small number of trainable parameters, or attempt to reduce the memory footprint during the training phase...
1 introduction :Recent advancements in large language models (LLMs), including GPT (Brown et al., 2020; Floridi and Chiriatti, 2020; OpenAI, 2023), PaLM (Chowdhery et al., 2022), OPT (Zhang et al., 2022), and LLaMA (Touvron et al., 2023), have showcased remarkable taskgeneralization capabilities across diverse applicat...
Finetuning allows an LLM to adapt to specialized domains and tasks (Devlin et al., 2018; Radford et al., 2019; Brown et al., 2020). However, fully finetuning an LLM comes with high computation costs due to the rapidly increasing LLM sizes. Parameter-efficient finetuning (PEFT) methods are proposed to solve this issue. ...
null
null
In this paper, we propose Quantized Side Tuing (QST), a novel fast and memory-efficient finetuning framework. QST operates through a dual-stage process: first, QST quantizes the LLM into 4-bit to reduce the memory footprint of the weights in LLM; then QST introduces a side network separated from the LLM, which utilizes...
In this section, we empirically validate the effectiveness of our QST method by examining its performance for LLMs with different types (e.g., OPT and LLaMA 2), sizes (from 1.3B to 70B), and benchmarks. Datasets. We evaluate the performance of QST and several baselines on natural language understanding (NLU) and natura...
null
null
[{"authors": ["Amanda Askell", "Yuntao Bai", "Anna Chen", "Dawn Drain", "Deep Ganguli", "Tom Henighan", "Andy Jones", "Nicholas Joseph", "Ben Mann", "Nova DasSarma"], "title": "A general language assistant as a laboratory", "year": 2021}, {"authors": ["Yuntao Bai", "Andy Jones", "Kamal Ndousse", "Amanda Askell", "Anna ...
acl_23
Unsupervised Multimodal Clustering for Semantics Discovery in Multimodal Utterances
[{"affiliations": [], "name": "Hanlei Zhang"}, {"affiliations": [], "name": "Hua Xu"}, {"affiliations": [], "name": "Fei Long"}, {"affiliations": [], "name": "Xin Wang"}, {"affiliations": [], "name": "Kai Gao"}]
SP:061eb9278d3e424db94c192bbbaa458e16607908
Discovering the semantics of multimodal utterances is essential for understanding human language and enhancing human-machine interactions. Existing methods manifest limitations in leveraging nonverbal information for discerning complex semantics in unsupervised scenarios. This paper introduces a novel unsupervised mult...
1 introduction :Discovering the semantics of dialogue utterances in unsupervised multimodal data requires integrating various modalities (i.e., text, video, and audio) to effectively mine the complicated semantics inherent in multimodal language. Conventional methods for semantics discovery typically focus solely on th...
Unsupervised clustering is fundamental in machine learning. Classic clustering methods like KMeans (MacQueen et al., 1967) and Agglomerative Clustering (Gowda and Krishna, 1978) iteratively assign clusters until convergence based on features. Deep clustering methods, like DEC (Xie et al., 2016) and DCN (Yang et al., 20...
For the task of multimodal semantics discovery, we are provided with a multimodal intent or dialogue act dataset Dmm = {(sTi , sAi , sVi )|yi ∈ I, i = 1, ..., N}, where each ith instance si contains multimodal utterances, including sTi , audio sAi , and video s V i . Here, N represents the total number of instances. Th...
null
We conduct extensive ablation studies and show the results in Table 3. (1) w/o Step 1: Removing Step 1 results in performance drops of 11-14%, 12-15%, and 8-15% across the MIntRec, MELD-DA, and IEMOCAP-DA datasets, emphasizing the importance of our proposed non-verbal modality masking strategy in enhancing subsequent c...
We use MIntRec, MELD-DA, and IEMOCAP-DA as benchmark datasets for the multimodal semantics discovery task. The rationale for using these datasets is that the defined intents or dialogue acts typically exhibit a variety of distinct sentence-level semantics and possess properties of uncertainty in the open world, making ...
null
There are two limitations in this work. Firstly, given the complexity of real-world multimodal intent datasets, the achieved clustering performance still suggests significant potential for further improvements. Secondly, while this study establishes a foundational approach for automatically determining the Knear parame...
[{"authors": ["Humam Alwassel", "Dhruv Mahajan", "Bruno Korbar", "Lorenzo Torresani", "Bernard Ghanem", "Du Tran."], "title": "Self-supervised learning by cross-modal audiovideo clustering", "venue": "Proc. of NeurIPS, pages 9758\u2013 9770.", "year": 2020}, {"authors": ["David Arthur", "Sergei Vassilvitskii."], "title...
acl_23
MAGE: Machine-generated Text Detection in the Wild
[{"affiliations": [], "name": "Yafu Li"}, {"affiliations": [], "name": "Qintong Li"}, {"affiliations": [], "name": "Leyang Cui"}, {"affiliations": [], "name": "Wei Bi"}, {"affiliations": [], "name": "Zhilin Wang"}, {"affiliations": [], "name": "Longyue Wang"}, {"affiliations": [], "name": "Linyi Yang"}, {"affiliations"...
SP:6952f84c7a41e5cfab4df45f691ddcec2e513811
Large language models (LLMs) have achieved human-level text generation, emphasizing the need for effective AI-generated text detection to mitigate risks like the spread of fake news and plagiarism. Existing research has been constrained by evaluating detection methods on specific domains or particular language models. ...
1 introduction :With constant advancements in artificial intelligence generated content (AIGC) technology (Rombach et al., 2022; Zhang and Agrawala, 2023; Shi et al., 2023; Brown et al., 2020; OpenAI, 2023b), texts generated by large language models (LLMs) (Brown et al., 2020; OpenAI, 2023b; Touvron et al., 2023; Taori...
A line of work explores the linguistic patterns to achieve automatic machine-writing detection, which has gone through n-gram frequencies (Badaskar et al., 2008), entropy (Lavergne et al., 2008; Gehrmann et al., 2019), perplexity (Beresneva, 2016), and negative curvature regions of the model’s log probability (Mitchell...
A detection system labels a text as either machinegenerated or human-written, or outputs a probability distribution. In this work, we consider a set of commonly used detection methods. To showcase detection difficulty, we first consider naive baselines, i.e., human detection and ask ChatGPT, by asking human and query C...
Data Sourcing. We collect human-written texts from a set of benchmark datasets, which cover diverse writing tasks including: (1) Opinion statement: 804 opinion statements from the /r/ChangeMyView (CMV) Reddit subcommunity (Tan et al., 2016) and 1,000 reviews from Yelp dataset (Zhang et al., 2015); (2) News article writ...
We proposed a comprehensive testbed for machinegenerated text detection, by gathering texts from various writing tasks and machine-generated texts generated by different LLMs. Empirical results on commonly used detection methods demonstrated the challenge of AI-generated text detection. Outof-distribution posed a great...
We consider each benchmark dataset as separate domains, such as CMV, XSum, SciXGen, etc. We group the LLMs into 7 sets based on their source: OpenAI GPT set, LLaMA set, GLM-130B set, FLAT-T5 set, OPT set, BigScience set, and EleutherAI set. To investigate whether machinegenerated text can be distinguished from humanwri...
Nevertheless, our dataset aims to serve as a testbed to select the best-performing detectors, which encounter sufficiently diverse machine-generated texts and can deal with texts from newly-developed LLMs in future. In the future, we plan to gather new online texts that have not been previously seen by LLMs to study su...
Although we are the first to propose a comprehensive testbed for AI-generated text detection and validate the detection effectiveness on frontier test sets, there are two major limitations
[{"authors": ["Sameer Badaskar", "Sachin Agarwal", "Shilpa Arora."], "title": "Identifying real or fake articles: Towards better language modeling", "venue": "Proceedings of the Third International Joint Conference on Natural Language Processing: Volume-II.", "year": 2008}, {"authors": ["Anton Bakhtin", "Sam Gross", "M...
acl_23
PrivLM-Bench: A Multi-level Privacy Evaluation Benchmark for Language Models
[{"affiliations": [], "name": "Haoran Li"}, {"affiliations": [], "name": "Dadi Guo"}, {"affiliations": [], "name": "Donghao Li"}, {"affiliations": [], "name": "Wei Fan"}, {"affiliations": [], "name": "Qi Hu"}, {"affiliations": [], "name": "Xin Liu"}, {"affiliations": [], "name": "Chunkit Chan"}, {"affiliations": [], "n...
SP:bfb711759e990d387fcfcd9e8427e8b231a4c7fa
The rapid development of language models (LMs) brings unprecedented accessibility and usage for both models and users. On the one hand, powerful LMs achieve state-of-theart performance over numerous downstream NLP tasks. On the other hand, more and more attention is paid to unrestricted model accesses that may bring ma...
1 introduction :The accelerating evolution of language models (LMs) ushers a new era for both modern natural language processing and the whole society. Currently, generative large language models (LLMs) exhibit surprising capability and integrate previous tasks into a unified text generation formulation. As *Equal cont...
To analyze differential privacy implementations on language models, we first introduce the formal definition of DP (Dwork and Roth, 2014): Definition 1 (Differential Privacy). A randomized algorithm mechanism M with domain D and range R satisfies (ϵ, δ)-differential privacy if for any two neighboring datasets D,D′ and ...
null
null
In this paper, we introduce PrivLM-Bench, a benchmark designed to assess and contrast LMs’ multifaceted privacy objectives. By integrating a variety of masked and generative LMs with diverse tuning algorithms, PrivLM-Bench facilitates an extensive evaluation that encompasses both utility metrics and empirical privacy a...
Existing PPLMs evaluate their claimed improvement over tailored downstream tasks. These spe- cific tasks may not be feasible for other PPLMs. Instead, PrivLM-Bench evaluates PPLMs in a more fundamental aspect for natural language understanding (NLU). NLU is essential for general LMs to identify the meaning of given tex...
For future work, we advocate for more potent privacy attacks and utility-enhanced defense strategies that relax the worst-case restriction in accordance with empirical attacks to improve the privacy utility trade-off. This omission represents an area for potential future exploration to provide a more comprehensive unde...
In our evaluation of language model privacy from an adversarial standpoint, we acknowledge certain limitations in the covered scope and effectiveness of the proposed attacks. Firstly, our study does not encompass all attack methodologies, notably excluding the recent trend of prompt injection attacks, which are signifi...
[{"authors": ["Mart\u00edn Abadi", "Andy Chu", "Ian J. Goodfellow", "H.B. McMahan", "Ilya Mironov", "Kunal Talwar", "Li Zhang."], "title": "Deep learning with differential privacy", "venue": "Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security.", "year": 2016}, {"authors": ["Vincent Bi...
acl_23
GenTranslate: Large Language Models are Generative Multilingual Speech and Machine Translators
[{"affiliations": [], "name": "Yuchen Hu"}, {"affiliations": [], "name": "Chen Chen"}, (...TRUNCATED)
SP:655dbbdc1661d224fff5e595e5a7ecf423061827
Recent advances in large language models (LLMs) have stepped forward the development of multilingua(...TRUNCATED)
1 introduction :Recent advances in large language models (LLMs) have attracted a surge of research (...TRUNCATED)
There is recently a surge of research interests in Transformer-based large language models, such as(...TRUNCATED)
In this section, we introduce the proposed method. First, we describe the latest foundational trans(...TRUNCATED)
null
In this paper, we propose a generative paradigm for translation tasks, namely GenTranslate, which l(...TRUNCATED)
LLMs. We select the popular LLaMA-2 (Touvron et al., 2023b) for our paradigm. Specifically, we empl(...TRUNCATED)
Therefore, future work may focus on how to better engage LLMs into the translation part
null
[{"authors": ["Rohan Anil", "Andrew M Dai", "Orhan Firat", "Melvin Johnson", "Dmitry Lep(...TRUNCATED)
acl_23
Exploring Chain-of-Thought for Multi-modal Metaphor Detection
[{"affiliations": [], "name": "Yanzhi Xu"}, {"affiliations": [], "name": "Yueying Hua"}(...TRUNCATED)
SP:96198fcd7c5a5d8b3280fa7864a1ac0892482d13
Metaphors are commonly found in advertising and internet memes. However, the free form of internet (...TRUNCATED)
1 introduction :Metaphors are highly prevalent in our everyday expressions and writings, which can (...TRUNCATED)
Early metaphor detection tasks were confined to a single modality and employed methods based on rul(...TRUNCATED)
We propose a novel framework called C4MMD using MLLMs to enhance metaphor detection. We first intro(...TRUNCATED)
null
Our study aimed to tackle the challenges of multimodal metaphor interpretation by leveraging advanc(...TRUNCATED)
In this section, we begin by introducing the dataset used to validate our method, as well as the ex(...TRUNCATED)
This work not only advances multi-modal metaphor detection but also paves the way for future resear(...TRUNCATED)
We believe the main limitation of our work lies in only testing our metaphor detection ability with(...TRUNCATED)
[{"authors": ["Khalid Alnajjar", "Mika H\u00e4m\u00e4l\u00e4inen", "Shuo Zhang."], "tit(...TRUNCATED)
acl_23
BitDistiller: Unleashing the Potential of Sub-4-Bit LLMs via Self-Distillation
[{"affiliations": [], "name": "Dayou Du"}, {"affiliations": [], "name": "Yijia Zhang"},(...TRUNCATED)
SP:13d666c5a178a1336a6787584c5f570d397ef3a9
The upscaling of Large Language Models (LLMs) has yielded impressive advances in natural language p(...TRUNCATED)
1 introduction :Scaling up model sizes has been pivotal to the success of large language models (LL(...TRUNCATED)
PTQ and QAT PTQ is directly applied to pretrained models without additional training. PTQ for LLMs (...TRUNCATED)
In this section, we introduce BitDistiller, a QAT with self-distillation framework for LLMs, as ill(...TRUNCATED)
null
BitDistiller leverages QAT with self-distillation to boost sub-4-bit LLM performance. The asymmetri(...TRUNCATED)
We evaluate BitDistiller on the LLaMA-2 (Touvron et al., 2023) families and domain-specific LLMs wi(...TRUNCATED)
Limitations\nDespite the promising results demonstrated by BitDistiller, it is important to acknowl(...TRUNCATED)
Despite the promising results demonstrated by BitDistiller, it is important to acknowledge certain (...TRUNCATED)
[{"authors": ["Rishabh Agarwal", "Nino Vieillard", "Piotr Stanczyk", "Sabela Ramos", "Ma(...TRUNCATED)
acl_23
A Unified Temporal Knowledge Graph Reasoning Model Towards Interpolation and Extrapolation
[{"affiliations": [], "name": "Kai Chen"}, {"affiliations": [], "name": "Ye Wang"}, {"(...TRUNCATED)
SP:54d8b9e0de9f41beef2487127ba550ff6e4b275a
Temporal knowledge graph (TKG) reasoning has two settings: interpolation reasoning and extrapolatio(...TRUNCATED)
1 introduction :Knowledge graph (KG) is a semantic network that represents real-world facts in a st(...TRUNCATED)
Static KG reasoning methods can be summarized into three classes: the translation models (Bordes et(...TRUNCATED)
In this section, we introduce a novel temporal pathbased reasoning model with a neural-driven symbo(...TRUNCATED)
null
We propose a temporal path-based reasoning (TPAR) model with a neural-symbolic fashion that can be (...TRUNCATED)
Link prediction task that aims to infer incomplete time-wise fact with a missing entity ((s, r, ?, (...TRUNCATED)
To test the hypothesis that completing missing knowledge about the past can enhance the accuracy of(...TRUNCATED)
We identify that there may be some possible limitations in this study. First, our reasoning results(...TRUNCATED)
[{"authors": ["John S. Baras", "George Theodorakopoulos."], "title": "Path Problems in Net(...TRUNCATED)
acl_23
Unsupervised Information Refinement Training of Large Language Models for Retrieval-Augmented Gener(...TRUNCATED)
[{"affiliations": [], "name": "Shicheng Xu"}, {"affiliations": [], "name": "Liang Pang"(...TRUNCATED)
SP:f538754179e8c3ef2a97459734a985f148f10524
Retrieval-augmented generation (RAG) enhances large language models (LLMs) by incorporating additio(...TRUNCATED)
1 introduction :Retrieval-augmented generation (RAG) is a popular framework in modern NLP systems t(...TRUNCATED)
Retrieval Augmented Generation Retrieval augmented generation (RAG) aims to provide additional k(...TRUNCATED)
null
null
This paper proposes a novel perspective to reassess the role of LLMs in RAG that considers LLMs as (...TRUNCATED)
To demonstrate the generality of our unsupervised training method, we evaluate the performance of I(...TRUNCATED)
null
null
[{"authors": ["Akari Asai", "Zeqiu Wu", "Yizhong Wang", "Avirup Sil", "Hannaneh Hajishir(...TRUNCATED)
acl_23
CSCD-NS: a Chinese Spelling Check Dataset for Native Speakers
[{"affiliations": [], "name": "Yong Hu"}, {"affiliations": [], "name": "Fandong Meng"},(...TRUNCATED)
SP:cd01bfaf6e7e593db2fb361479620de7a5e80a98
In this paper, we present CSCD-NS, the first Chinese spelling check (CSC) dataset designed for nati(...TRUNCATED)
1 introduction :Chinese spelling check (CSC) is a task to detect and correct spelling errors in Chi(...TRUNCATED)
CSC Datasets: The existing CSC datasets, such as the SIGHAN series (Wu et al., 2013; Yu et al., 201(...TRUNCATED)
License: CSCD-NS and the constructed pseudodata LCSTS-IME-2M are based on LCSTS (Hu et al., 2015), (...TRUNCATED)
The manual annotation of CSC dataset is very expensive, therefore, how to construct pseudo data has(...TRUNCATED)
In this paper, we focus on CSC for native speakers. For this scenario, we propose a new dataset, CS(...TRUNCATED)
In this section, we evaluate the performance of different models on CSCD-NS and compare different p(...TRUNCATED)
Consequently, enabling controlled text generation, addressing complex word-level and grammatical er(...TRUNCATED)
Limitation of the CSCD-NS dataset
[{"authors": ["Baichuan."], "title": "Baichuan 2: Open large-scale language models", "venu(...TRUNCATED)
acl_23
End of preview.
  • Curated by: [Will be disclosed later]
  • Funded by [optional]: [Will be disclosed later]
  • Shared by [optional]: [Will be disclosed later]
Downloads last month: 2