Each record has five string fields: `paper_id` (ACL Anthology identifier, e.g. `zhang-etal-2023-investigating`), `title`, `url` (ACL Anthology link), `abstract`, and `ocr_markdown` (the full paper text as OCR'd Markdown).
# ACL 2023 Papers in Markdown after OCR
This dataset contains 2150 papers from the Association for Computational Linguistics (ACL) 2023:

- Long Papers (912 papers)
- Short Papers (185 papers)
- System Demonstrations (59 papers)
- Student Research Workshop (35 papers)
- Industry Track (77 papers)
- Tutorial Abstracts (7 papers)
- Findings (902 papers)
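The records can be loaded with the Hugging Face `datasets` library. A minimal sketch follows; the repo id is a placeholder (the actual id is not stated here), the `load_dataset` call is commented out so the snippet runs offline, and the sample record is abbreviated from one of the dataset's rows:

```python
# from datasets import load_dataset
# ds = load_dataset("your-namespace/acl-2023-ocr", split="train")  # repo id is a placeholder

# Each record follows the five-field schema described above; this sample
# is abbreviated from an actual row of the dataset.
record = {
    "paper_id": "ravfogel-etal-2023-conformal",
    "title": "Conformal Nucleus Sampling",
    "url": "https://aclanthology.org/2023.findings-acl.3",
    "abstract": "Language models generate text based on successively sampling the next word. ...",
    "ocr_markdown": "# Conformal Nucleus Sampling\n\n## Abstract\n\nLanguage models generate text ...",
}

def first_heading(md: str) -> str:
    """Return the first Markdown heading of an OCR'd paper, without the '#' markers."""
    for line in md.splitlines():
        if line.startswith("#"):
            return line.lstrip("# ").strip()
    return ""

# A quick sanity check: the first heading of the OCR output is usually the paper title.
print(first_heading(record["ocr_markdown"]))  # → Conformal Nucleus Sampling
```

Since the OCR output is plain Markdown, simple line-based parsing like this is often enough to pull out titles or section headers without extra dependencies.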
This dataset was processed and compiled by @hu_yifei as part of an open-source effort from the Open Research Assistant Project.
## OCR process
The papers were processed using Marker (by @VikParuchuri). For commercial usage, please see Marker's License and Restrictions for more details.