Dataset columns (name, feature type, min/max or number of distinct values):

bibtex_url                    stringlengths   41      53
acl_proceedings               stringlengths   38      50
bibtext                       stringlengths   528     3.02k
abstract                      stringlengths   17      2.35k
authors                       listlengths     1       44
title                         stringlengths   18      190
id                            stringlengths   7       19
arxiv_id                      stringlengths   10      10
GitHub                        listlengths     1       1
paper_page                    stringclasses   528 values
n_linked_authors              int64           -1      15
upvotes                       int64           -1      77
num_comments                  int64           -1      10
n_authors                     int64           -1      52
Models                        listlengths     0       100
Datasets                      listlengths     0       15
Spaces                        listlengths     0       46
paper_page_exists_pre_conf    int64           0       1
type                          stringclasses   2 values
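Below is a minimal Python sketch of how one row of this preview could be represented, assuming the column order above. The PaperRecord class name and field defaults are invented for illustration, "null" arxiv_id values are mapped to None, and the recurring -1 values are read as a sentinel for rows without a linked Hugging Face paper page; that reading is an inference from the preview rows, not a documented convention.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative record type for one row of this dataset.
# Field names mirror the column listing above; this is a sketch, not an official schema.
@dataclass
class PaperRecord:
    bibtex_url: str                # e.g. "https://aclanthology.org/2023.emnlp-main.1.bib"
    acl_proceedings: str           # e.g. "https://aclanthology.org/2023.emnlp-main.1/"
    bibtext: str                   # full BibTeX entry (truncated in the preview)
    abstract: str                  # paper abstract (truncated in the preview)
    authors: List[str]
    title: str
    id: str                        # e.g. "emnlp-main.1"
    arxiv_id: Optional[str]        # None where the preview shows "null"
    GitHub: List[str]              # [""] when no repository is linked
    paper_page: Optional[str]      # Hugging Face papers URL, if one exists
    n_linked_authors: int          # -1 appears to mean "no paper page data"
    upvotes: int                   # -1 appears to mean "no paper page data"
    num_comments: int              # -1 appears to mean "no paper page data"
    n_authors: int                 # -1 appears to mean "no paper page data"
    Models: List[str] = field(default_factory=list)
    Datasets: List[str] = field(default_factory=list)
    Spaces: List[str] = field(default_factory=list)
    paper_page_exists_pre_conf: int = 0   # 0/1 flag
    type: str = "Poster"           # "Poster" or "Oral"

# The first row of the preview (2023.emnlp-main.1), reconstructed as an example;
# long string fields are shortened with "..." exactly as they are truncated above.
example = PaperRecord(
    bibtex_url="https://aclanthology.org/2023.emnlp-main.1.bib",
    acl_proceedings="https://aclanthology.org/2023.emnlp-main.1/",
    bibtext="@inproceedings{zhang-etal-2023-iag, title = ...}",
    abstract="Retrieval-Augmented Generation (RAG), ...",
    authors=["Zhang, Zhebin", "Zhang, Xinyu", "Ren, Yuanhang", "Shi, Saijiang",
             "Han, Meng", "Wu, Yongkang", "Lai, Ruofei", "Cao, Zhao"],
    title="IAG: Induction-Augmented Generation Framework for Answering Reasoning Questions",
    id="emnlp-main.1",
    arxiv_id="2311.18397",
    GitHub=[""],
    paper_page=None,               # no paper_page value is shown for this row
    n_linked_authors=-1,
    upvotes=-1,
    num_comments=-1,
    n_authors=-1,
    paper_page_exists_pre_conf=0,
    type="Poster",
)
```

The records that follow are listed one field per line in the column order given above.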
https://aclanthology.org/2023.emnlp-main.1.bib
https://aclanthology.org/2023.emnlp-main.1/
@inproceedings{zhang-etal-2023-iag, title = "{IAG}: Induction-Augmented Generation Framework for Answering Reasoning Questions", author = "Zhang, Zhebin and Zhang, Xinyu and Ren, Yuanhang and Shi, Saijiang and Han, Meng and Wu, Yongkang and Lai, Ruofei and Cao, Z...
Retrieval-Augmented Generation (RAG), by incorporating external knowledge with parametric memory of language models, has become the state-of-the-art architecture for open-domain QA tasks. However, common knowledge bases are inherently constrained by limited coverage and noisy information, making retrieval-based approac...
[ "Zhang, Zhebin", "Zhang, Xinyu", "Ren, Yuanhang", "Shi, Saijiang", "Han, Meng", "Wu, Yongkang", "Lai, Ruofei", "Cao, Zhao" ]
IAG: Induction-Augmented Generation Framework for Answering Reasoning Questions
emnlp-main.1
2311.18397
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.2.bib
https://aclanthology.org/2023.emnlp-main.2/
@inproceedings{yamamoto-matsuzaki-2023-absolute, title = "Absolute Position Embedding Learns Sinusoid-like Waves for Attention Based on Relative Position", author = "Yamamoto, Yuji and Matsuzaki, Takuya", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Procee...
Attention weight is a clue to interpret how a Transformer-based model makes an inference. In some attention heads, the attention focuses on the neighbors of each token. This allows the output vector of each token to depend on the surrounding tokens and contributes to make the inference context-dependent. We analyze the...
[ "Yamamoto, Yuji", "Matsuzaki, Takuya" ]
Absolute Position Embedding Learns Sinusoid-like Waves for Attention Based on Relative Position
emnlp-main.2
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.3.bib
https://aclanthology.org/2023.emnlp-main.3/
@inproceedings{qiang-etal-2023-chinese, title = "{C}hinese Lexical Substitution: Dataset and Method", author = "Qiang, Jipeng and Liu, Kang and Li, Ying and Li, Yun and Zhu, Yi and Yuan, Yun-Hao and Hu, Xiaocheng and Ouyang, Xiaoye", editor = "Bouamor, Houda ...
Existing lexical substitution (LS) benchmarks were collected by asking human annotators to think of substitutes from memory, resulting in benchmarks with limited coverage and relatively small scales. To overcome this problem, we propose a novel annotation method to construct an LS dataset based on human and machine col...
[ "Qiang, Jipeng", "Liu, Kang", "Li, Ying", "Li, Yun", "Zhu, Yi", "Yuan, Yun-Hao", "Hu, Xiaocheng", "Ouyang, Xiaoye" ]
Chinese Lexical Substitution: Dataset and Method
emnlp-main.3
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.4.bib
https://aclanthology.org/2023.emnlp-main.4/
@inproceedings{sun-etal-2023-decoding, title = "Decoding the Silent Majority: Inducing Belief Augmented Social Graph with Large Language Model for Response Forecasting", author = "Sun, Chenkai and Li, Jinning and Fung, Yi and Chan, Hou and Abdelzaher, Tarek and Zhai, ChengXian...
Automatic response forecasting for news media plays a crucial role in enabling content producers to efficiently predict the impact of news releases and prevent unexpected negative outcomes such as social conflict and moral injury. To effectively forecast responses, it is essential to develop measures that leverage the ...
[ "Sun, Chenkai", "Li, Jinning", "Fung, Yi", "Chan, Hou", "Abdelzaher, Tarek", "Zhai, ChengXiang", "Ji, Heng" ]
Decoding the Silent Majority: Inducing Belief Augmented Social Graph with Large Language Model for Response Forecasting
emnlp-main.4
2310.13297
[ "https://github.com/chenkaisun/socialsense" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.5.bib
https://aclanthology.org/2023.emnlp-main.5/
@inproceedings{yao-etal-2023-fine, title = "Fine-grained Conversational Decoding via Isotropic and Proximal Search", author = "Yao, Yuxuan and Wu, Han and Xu, Qiling and Song, Linqi", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings o...
General-purpose text decoding approaches are usually adopted for dialogue response generation. Although the quality of the generated responses can be improved with dialogue-specific encoding methods, conversational decoding methods are still under-explored. Inspired by SimDRC that a good dialogue feature space should f...
[ "Yao, Yuxuan", "Wu, Han", "Xu, Qiling", "Song, Linqi" ]
Fine-grained Conversational Decoding via Isotropic and Proximal Search
emnlp-main.5
2310.08130
[ "https://github.com/starrYYxuan/IPS" ]
https://huggingface.co/papers/2310.08130
0
0
0
4
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.6.bib
https://aclanthology.org/2023.emnlp-main.6/
@inproceedings{stefanovitch-piskorski-2023-holistic, title = "Holistic Inter-Annotator Agreement and Corpus Coherence Estimation in a Large-scale Multilingual Annotation Campaign", author = "Stefanovitch, Nicolas and Piskorski, Jakub", editor = "Bouamor, Houda and Pino, Juan and Bali, K...
In this paper we report on the complexity of persuasion technique annotation in the context of a large multilingual annotation campaign involving 6 languages and approximately 40 annotators. We highlight the techniques that appear to be difficult for humans to annotate and elaborate on our findings on the causes of thi...
[ "Stefanovitch, Nicolas", "Piskorski, Jakub" ]
Holistic Inter-Annotator Agreement and Corpus Coherence Estimation in a Large-scale Multilingual Annotation Campaign
emnlp-main.6
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.7.bib
https://aclanthology.org/2023.emnlp-main.7/
@inproceedings{borenstein-etal-2023-phd, title = "{PHD}: Pixel-Based Language Modeling of Historical Documents", author = "Borenstein, Nadav and Rust, Phillip and Elliott, Desmond and Augenstein, Isabelle", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", boo...
The digitisation of historical documents has provided historians with unprecedented research opportunities. Yet, the conventional approach to analysing historical documents involves converting them from images to text using OCR, a process that overlooks the potential benefits of treating them as images and introduces h...
[ "Borenstein, Nadav", "Rust, Phillip", "Elliott, Desmond", "Augenstein, Isabelle" ]
PHD: Pixel-Based Language Modeling of Historical Documents
emnlp-main.7
2310.18343
[ "https://github.com/nadavborenstein/pixel-bw" ]
https://huggingface.co/papers/2310.18343
1
1
0
4
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.8.bib
https://aclanthology.org/2023.emnlp-main.8/
@inproceedings{wang-etal-2023-primacy, title = "Primacy Effect of {C}hat{GPT}", author = "Wang, Yiwei and Cai, Yujun and Chen, Muhao and Liang, Yuxuan and Hooi, Bryan", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 20...
Instruction-tuned large language models (LLMs), such as ChatGPT, have led to promising zero-shot performance in discriminative natural language understanding (NLU) tasks. This involves querying the LLM using a prompt containing the question, and the candidate labels to choose from. The question-answering capabilities o...
[ "Wang, Yiwei", "Cai, Yujun", "Chen, Muhao", "Liang, Yuxuan", "Hooi, Bryan" ]
Primacy Effect of ChatGPT
emnlp-main.8
2310.13206
[ "https://github.com/wangywust/primacyeffectgpt" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.9.bib
https://aclanthology.org/2023.emnlp-main.9/
@inproceedings{kawabata-sugawara-2023-evaluating, title = "Evaluating the Rationale Understanding of Critical Reasoning in Logical Reading Comprehension", author = "Kawabata, Akira and Sugawara, Saku", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedin...
To precisely evaluate a language model{'}s capability for logical reading comprehension, we present a dataset for testing the understanding of the rationale behind critical reasoning. For questions taken from an existing multiple-choice logical reading comprehension dataset, we crowdsource rationale texts that explain ...
[ "Kawabata, Akira", "Sugawara, Saku" ]
Evaluating the Rationale Understanding of Critical Reasoning in Logical Reading Comprehension
emnlp-main.9
2311.18353
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.10.bib
https://aclanthology.org/2023.emnlp-main.10/
@inproceedings{muller-etal-2023-evaluating, title = "Evaluating and Modeling Attribution for Cross-Lingual Question Answering", author = "Muller, Benjamin and Wieting, John and Clark, Jonathan and Kwiatkowski, Tom and Ruder, Sebastian and Soares, Livio and Aharoni, Roee...
Trustworthy answer content is abundant in many high-resource languages and is instantly accessible through question answering systems {---} yet this content can be hard to access for those that do not speak these languages. The leap forward in cross-lingual modeling quality offered by generative language models offers ...
[ "Muller, Benjamin", "Wieting, John", "Clark, Jonathan", "Kwiatkowski, Tom", "Ruder, Sebastian", "Soares, Livio", "Aharoni, Roee", "Herzig, Jonathan", "Wang, Xinyi" ]
Evaluating and Modeling Attribution for Cross-Lingual Question Answering
emnlp-main.10
2305.14332
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.11.bib
https://aclanthology.org/2023.emnlp-main.11/
@inproceedings{oladipo-etal-2023-better, title = "Better Quality Pre-training Data and T5 Models for {A}frican Languages", author = "Oladipo, Akintunde and Adeyemi, Mofetoluwa and Ahia, Orevaoghene and Owodunni, Abraham and Ogundepo, Odunayo and Adelani, David and Lin, ...
In this study, we highlight the importance of enhancing the quality of pretraining data in multilingual language models. Existing web crawls have demonstrated quality issues, particularly in the context of low-resource languages. Consequently, we introduce a new multilingual pretraining corpus for 16 African languages,...
[ "Oladipo, Akintunde", "Adeyemi, Mofetoluwa", "Ahia, Orevaoghene", "Owodunni, Abraham", "Ogundepo, Odunayo", "Adelani, David", "Lin, Jimmy" ]
Better Quality Pre-training Data and T5 Models for African Languages
emnlp-main.11
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.12.bib
https://aclanthology.org/2023.emnlp-main.12/
@inproceedings{tan-etal-2023-sparse, title = "Sparse Universal Transformer", author = "Tan, Shawn and Shen, Yikang and Chen, Zhenfang and Courville, Aaron and Gan, Chuang", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of th...
The Universal Transformer (UT) is a variant of the Transformer that shares parameters across its layers and is Turing-complete under certain assumptions. Empirical evidence also shows that UTs have better compositional generalization than Vanilla Transformers (VTs) in formal language tasks. The parameter-sharing also a...
[ "Tan, Shawn", "Shen, Yikang", "Chen, Zhenfang", "Courville, Aaron", "Gan, Chuang" ]
Sparse Universal Transformer
emnlp-main.12
2310.07096
[ "" ]
https://huggingface.co/papers/2310.07096
1
0
0
5
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.13.bib
https://aclanthology.org/2023.emnlp-main.13/
@inproceedings{li-etal-2023-theory, title = "Theory of Mind for Multi-Agent Collaboration via Large Language Models", author = "Li, Huao and Chong, Yu and Stepputtis, Simon and Campbell, Joseph and Hughes, Dana and Lewis, Charles and Sycara, Katia", editor = "Bouamo...
While Large Language Models (LLMs) have demonstrated impressive accomplishments in both reasoning and planning, their abilities in multi-agent collaborations remains largely unexplored. This study evaluates LLM-based agents in a multi-agent cooperative text game with Theory of Mind (ToM) inference tasks, comparing thei...
[ "Li, Huao", "Chong, Yu", "Stepputtis, Simon", "Campbell, Joseph", "Hughes, Dana", "Lewis, Charles", "Sycara, Katia" ]
Theory of Mind for Multi-Agent Collaboration via Large Language Models
emnlp-main.13
2310.10701
[ "https://github.com/romanlee6/multi_LLM_comm" ]
https://huggingface.co/papers/2310.10701
0
0
0
7
[]
[]
[ "agentharbor/agenta" ]
1
Poster
https://aclanthology.org/2023.emnlp-main.14.bib
https://aclanthology.org/2023.emnlp-main.14/
@inproceedings{litschko-etal-2023-establishing, title = "Establishing Trustworthiness: Rethinking Tasks and Model Evaluation", author = {Litschko, Robert and M{\"u}ller-Eberstein, Max and van der Goot, Rob and Weber-Genzel, Leon and Plank, Barbara}, editor = "Bouamor, Houda and ...
Language understanding is a multi-faceted cognitive capability, which the Natural Language Processing (NLP) community has striven to model computationally for decades. Traditionally, facets of linguistic intelligence have been compartmentalized into tasks with specialized model architectures and corresponding evaluatio...
[ "Litschko, Robert", "M{\\\"u}ller-Eberstein, Max", "van der Goot, Rob", "Weber-Genzel, Leon", "Plank, Barbara" ]
Establishing Trustworthiness: Rethinking Tasks and Model Evaluation
emnlp-main.14
2310.05442
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.15.bib
https://aclanthology.org/2023.emnlp-main.15/
@inproceedings{himakunthala-etal-2023-lets, title = "Let{'}s Think Frame by Frame with {VIP}: A Video Infilling and Prediction Dataset for Evaluating Video Chain-of-Thought", author = "Himakunthala, Vaishnavi and Ouyang, Andy and Rose, Daniel and He, Ryan and Mei, Alex and Lu,...
Despite exciting recent results showing vision-language systems{'} capacity to reason about images using natural language, their capacity for video reasoning remains underexplored. We motivate framing video reasoning as the sequential understanding of a small number of keyframes, thereby leveraging the power and robust...
[ "Himakunthala, Vaishnavi", "Ouyang, Andy", "Rose, Daniel", "He, Ryan", "Mei, Alex", "Lu, Yujie", "Sonar, Chinmay", "Saxon, Michael", "Wang, William" ]
Let's Think Frame by Frame with VIP: A Video Infilling and Prediction Dataset for Evaluating Video Chain-of-Thought
emnlp-main.15
2305.13903
[ "https://github.com/vaishnavihimakunthala/vip" ]
https://huggingface.co/papers/2305.13903
2
0
0
9
[]
[ "ryanhe/VIP" ]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.16.bib
https://aclanthology.org/2023.emnlp-main.16/
@inproceedings{khondaker-etal-2023-gptaraeval, title = "{GPTA}ra{E}val: A Comprehensive Evaluation of {C}hat{GPT} on {A}rabic {NLP}", author = "Khondaker, Md Tawkat Islam and Waheed, Abdul and Nagoudi, El Moatez Billah and Abdul-Mageed, Muhammad", editor = "Bouamor, Houda and Pin...
ChatGPT{'}s emergence heralds a transformative phase in NLP, particularly demonstrated through its excellent performance on many English benchmarks. However, the model{'}s efficacy across diverse linguistic contexts remains largely uncharted territory. This work aims to bridge this knowledge gap, with a primary focus o...
[ "Khondaker, Md Tawkat Islam", "Waheed, Abdul", "Nagoudi, El Moatez Billah", "Abdul-Mageed, Muhammad" ]
GPTAraEval: A Comprehensive Evaluation of ChatGPT on Arabic NLP
emnlp-main.16
2305.14976
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.17.bib
https://aclanthology.org/2023.emnlp-main.17/
@inproceedings{li-etal-2023-dual-channel, title = "Dual-Channel Span for Aspect Sentiment Triplet Extraction", author = "Li, Pan and Li, Ping and Zhang, Kai", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empiric...
Aspect Sentiment Triplet Extraction (ASTE) is one of the compound tasks of fine-grained aspect-based sentiment analysis (ABSA), aiming at extracting the triplets of aspect terms, corresponding opinion terms and the associated sentiment orientation. Recent efforts in exploiting span-level semantic interaction shown supe...
[ "Li, Pan", "Li, Ping", "Zhang, Kai" ]
Dual-Channel Span for Aspect Sentiment Triplet Extraction
emnlp-main.17
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.18.bib
https://aclanthology.org/2023.emnlp-main.18/
@inproceedings{li-zhang-2023-cultural, title = "Cultural Concept Adaptation on Multimodal Reasoning", author = "Li, Zhi and Zhang, Yin", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Languag...
Developing cultural adaptation methods is important, which can improve the model performance on the low-resource ones and provide more equitable opportunities for everyone to benefit from advanced technology. Past methods primarily focused on multilingual and multimodal capabilities, and the improvement of multicultura...
[ "Li, Zhi", "Zhang, Yin" ]
Cultural Concept Adaptation on Multimodal Reasoning
emnlp-main.18
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.19.bib
https://aclanthology.org/2023.emnlp-main.19/
@inproceedings{samir-silfverberg-2023-understanding, title = "Understanding Compositional Data Augmentation in Typologically Diverse Morphological Inflection", author = "Samir, Farhan and Silfverberg, Miikka", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "P...
Data augmentation techniques are widely used in low-resource automatic morphological inflection to address the issue of data sparsity. However, the full implications of these techniques remain poorly understood. In this study, we aim to shed light on the theoretical aspects of the data augmentation strategy StemCorrupt...
[ "Samir, Farhan", "Silfverberg, Miikka" ]
Understanding Compositional Data Augmentation in Typologically Diverse Morphological Inflection
emnlp-main.19
2305.13658
[ "https://github.com/smfsamir/understanding-augmentation-morphology" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.20.bib
https://aclanthology.org/2023.emnlp-main.20/
@inproceedings{li-etal-2023-evaluating, title = "Evaluating Object Hallucination in Large Vision-Language Models", author = "Li, Yifan and Du, Yifan and Zhou, Kun and Wang, Jinpeng and Zhao, Xin and Wen, Ji-Rong", editor = "Bouamor, Houda and Pino, Juan and B...
Inspired by the superior language abilities of large language models (LLM), large vision-language models (LVLM) have been recently proposed by integrating powerful LLMs for improving the performance on complex multimodal tasks. Despite the promising progress on LVLMs, we find that they suffer from object hallucinations...
[ "Li, Yifan", "Du, Yifan", "Zhou, Kun", "Wang, Jinpeng", "Zhao, Xin", "Wen, Ji-Rong" ]
Evaluating Object Hallucination in Large Vision-Language Models
emnlp-main.20
2305.10355
[ "https://github.com/rucaibox/pope" ]
https://huggingface.co/papers/2305.10355
0
0
0
6
[ "google/paligemma-3b-pt-224", "google/paligemma-3b-pt-896", "google/paligemma-3b-mix-448", "google/paligemma-3b-mix-224", "google/paligemma-3b-pt-448", "google/paligemma-3b-ft-ocrvqa-896", "google/paligemma-3b-ft-vqav2-448", "google/paligemma-3b-ft-refcoco-seg-896", "google/paligemma-3b-ft-ocrvqa-44...
[ "HuggingFaceM4/POPE_modif" ]
[ "big-vision/paligemma-hf", "manu/ColPali-demo", "merve/paligemma-doc", "merve/paligemma-tracking", "agentsea/paligemma-waveui", "Justinrune/LLaMA-Factory", "Saee/vQA-exploration", "dwb2023/model_explorer2", "dwb2023/model_explorer4", "rynmurdock/Blue_Tigers", "beingcognitive/Image_to_Music", "...
1
Poster
https://aclanthology.org/2023.emnlp-main.21.bib
https://aclanthology.org/2023.emnlp-main.21/
@inproceedings{cao-etal-2023-event, title = "Event Ontology Completion with Hierarchical Structure Evolution Networks", author = "Cao, Pengfei and Hao, Yupu and Chen, Yubo and Liu, Kang and Xu, Jiexin and Li, Huaijun and Jiang, Xiaojian and Zhao, Jun", editor...
Traditional event detection methods require predefined event schemas. However, manually defining event schemas is expensive and the coverage of schemas is limited. To this end, some works study the event type induction (ETI) task, which discovers new event types via clustering. However, the setting of ETI suffers from ...
[ "Cao, Pengfei", "Hao, Yupu", "Chen, Yubo", "Liu, Kang", "Xu, Jiexin", "Li, Huaijun", "Jiang, Xiaojian", "Zhao, Jun" ]
Event Ontology Completion with Hierarchical Structure Evolution Networks
emnlp-main.21
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.22.bib
https://aclanthology.org/2023.emnlp-main.22/
@inproceedings{jin-etal-2023-parameter, title = "Parameter-efficient Tuning for Large Language Model without Calculating Its Gradients", author = "Jin, Feihu and Zhang, Jiajun and Zong, Chengqing", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Procee...
Fine-tuning all parameters of large language models (LLMs) requires significant computational resources and is time-consuming. Recent parameter-efficient tuning methods such as Adapter tuning, Prefix tuning, and LoRA allow for updating a small subset of parameters in large language models. However, they can only save a...
[ "Jin, Feihu", "Zhang, Jiajun", "Zong, Chengqing" ]
Parameter-efficient Tuning for Large Language Model without Calculating Its Gradients
emnlp-main.22
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.23.bib
https://aclanthology.org/2023.emnlp-main.23/
@inproceedings{lei-huang-2023-discourse, title = "Discourse Structures Guided Fine-grained Propaganda Identification", author = "Lei, Yuanyuan and Huang, Ruihong", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical...
Propaganda is a form of deceptive narratives that instigate or mislead the public, usually with a political purpose. In this paper, we aim to identify propaganda in political news at two fine-grained levels: sentence-level and token-level. We observe that propaganda content is more likely to be embedded in sentences th...
[ "Lei, Yuanyuan", "Huang, Ruihong" ]
Discourse Structures Guided Fine-grained Propaganda Identification
emnlp-main.23
2310.18544
[ "https://github.com/yuanyuanlei-nlp/propaganda_emnlp_2023" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.24.bib
https://aclanthology.org/2023.emnlp-main.24/
@inproceedings{minixhofer-etal-2023-compoundpiece, title = "{C}ompound{P}iece: Evaluating and Improving Decompounding Performance of Language Models", author = "Minixhofer, Benjamin and Pfeiffer, Jonas and Vuli{\'c}, Ivan", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika...
While many languages possess processes of joining two or more words to create compound words, previous studies have been typically limited only to languages with excessively productive compound formation (e.g., German, Dutch) and there is no public dataset containing compound and non-compound words across a large numbe...
[ "Minixhofer, Benjamin", "Pfeiffer, Jonas", "Vuli{\\'c}, Ivan" ]
CompoundPiece: Evaluating and Improving Decompounding Performance of Language Models
emnlp-main.24
2305.14214
[ "https://github.com/bminixhofer/compoundpiece" ]
https://huggingface.co/papers/2305.14214
1
0
0
3
[ "benjamin/compoundpiece", "benjamin/compoundpiece-stage1" ]
[ "benjamin/compoundpiece" ]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.25.bib
https://aclanthology.org/2023.emnlp-main.25/
@inproceedings{wang-etal-2023-improving, title = "Improving Image Captioning via Predicting Structured Concepts", author = "Wang, Ting and Chen, Weidong and Tian, Yuanhe and Song, Yan and Mao, Zhendong", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", ...
Having the difficulty of solving the semantic gap between images and texts for the image captioning task, conventional studies in this area paid some attention to treating semantic concepts as a bridge between the two modalities and improved captioning performance accordingly. Although promising results on concept pred...
[ "Wang, Ting", "Chen, Weidong", "Tian, Yuanhe", "Song, Yan", "Mao, Zhendong" ]
Improving Image Captioning via Predicting Structured Concepts
emnlp-main.25
2311.08223
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.26.bib
https://aclanthology.org/2023.emnlp-main.26/
@inproceedings{jones-etal-2023-gatitos, title = "{GATITOS}: Using a New Multilingual Lexicon for Low-resource Machine Translation", author = "Jones, Alexander and Caswell, Isaac and Firat, Orhan and Saxena, Ishank", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika"...
Modern machine translation models and language models are able to translate without having been trained on parallel data, greatly expanding the set of languages that they can serve. However, these models still struggle in a variety of predictable ways, a problem that cannot be overcome without at least some trusted bil...
[ "Jones, Alex", "er", "Caswell, Isaac", "Firat, Orhan", "Saxena, Ishank" ]
GATITOS: Using a New Multilingual Lexicon for Low-resource Machine Translation
emnlp-main.26
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.27.bib
https://aclanthology.org/2023.emnlp-main.27/
@inproceedings{gao-etal-2023-continually, title = "Continually Improving Extractive {QA} via Human Feedback", author = "Gao, Ge and Chen, Hung-Ting and Artzi, Yoav and Choi, Eunsol", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of...
We study continually improving an extractive question answering (QA) system via human user feedback. We design and deploy an iterative approach, where information-seeking users ask questions, receive model-predicted answers, and provide feedback. We conduct experiments involving thousands of user interactions under div...
[ "Gao, Ge", "Chen, Hung-Ting", "Artzi, Yoav", "Choi, Eunsol" ]
Continually Improving Extractive QA via Human Feedback
emnlp-main.27
2305.12473
[ "https://github.com/lil-lab/qa-from-hf" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.28.bib
https://aclanthology.org/2023.emnlp-main.28/
@inproceedings{chen-etal-2023-using, title = "Using Interpretation Methods for Model Enhancement", author = "Chen, Zhuo and Jiang, Chengyue and Tu, Kewei", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical ...
In the age of neural natural language processing, there are plenty of works trying to derive interpretations of neural models. Intuitively, when gold rationales exist during training, one can additionally train the model to match its interpretation with the rationales. However, this intuitive idea has not been fully ex...
[ "Chen, Zhuo", "Jiang, Chengyue", "Tu, Kewei" ]
Using Interpretation Methods for Model Enhancement
emnlp-main.28
2404.02068
[ "https://github.com/chord-chen-30/uimer" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.29.bib
https://aclanthology.org/2023.emnlp-main.29/
@inproceedings{zhang-etal-2023-expression, title = "An Expression Tree Decoding Strategy for Mathematical Equation Generation", author = "Zhang, Wenqi and Shen, Yongliang and Nong, Qingpeng and Tan, Zeqi and Ma, Yanna and Lu, Weiming", editor = "Bouamor, Houda and P...
Generating mathematical equations from natural language requires an accurate understanding of the relations among math expressions. Existing approaches can be broadly categorized into token-level and expression-level generation. The former treats equations as a mathematical language, sequentially generating math tokens...
[ "Zhang, Wenqi", "Shen, Yongliang", "Nong, Qingpeng", "Tan, Zeqi", "Ma, Yanna", "Lu, Weiming" ]
An Expression Tree Decoding Strategy for Mathematical Equation Generation
emnlp-main.29
2310.09619
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.30.bib
https://aclanthology.org/2023.emnlp-main.30/
@inproceedings{yang-etal-2023-bootstrapping, title = "Bootstrapping Small {\&} High Performance Language Models with Unmasking-Removal Training Policy", author = "Yang, Yahan and Sulem, Elior and Lee, Insup and Roth, Dan", editor = "Bouamor, Houda and Pino, Juan and Bali, ...
BabyBERTa, a language model trained on small-scale child-directed speech while none of the words are unmasked during training, has been shown to achieve a level of grammaticality comparable to that of RoBERTa-base, which is trained on 6,000 times more words and 15 times more parameters. Relying on this promising result...
[ "Yang, Yahan", "Sulem, Elior", "Lee, Insup", "Roth, Dan" ]
Bootstrapping Small & High Performance Language Models with Unmasking-Removal Training Policy
emnlp-main.30
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.31.bib
https://aclanthology.org/2023.emnlp-main.31/
@inproceedings{yoon-bak-2023-diversity, title = "Diversity Enhanced Narrative Question Generation for Storybooks", author = "Yoon, Hokeun and Bak, JinYeong", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Metho...
Question generation (QG) from a given context can enhance comprehension, engagement, assessment, and overall efficacy in learning or conversational environments. Despite recent advancements in QG, the challenge of enhancing or measuring the diversity of generated questions often remains unaddressed. In this paper, we i...
[ "Yoon, Hokeun", "Bak, JinYeong" ]
Diversity Enhanced Narrative Question Generation for Storybooks
emnlp-main.31
2310.16446
[ "https://github.com/hkyoon95/mqg" ]
https://huggingface.co/papers/2310.16446
0
0
0
2
[]
[]
[]
1
Oral
https://aclanthology.org/2023.emnlp-main.32.bib
https://aclanthology.org/2023.emnlp-main.32/
@inproceedings{dong-etal-2023-debiasing, title = "Debiasing Made State-of-the-art: Revisiting the Simple Seed-based Weak Supervision for Text Classification", author = "Dong, Chengyu and Wang, Zihan and Shang, Jingbo", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", ...
Recent advances in weakly supervised text classification mostly focus on designing sophisticated methods to turn high-level human heuristics into quality pseudo-labels. In this paper, we revisit the seed matching-based method, which is arguably the simplest way to generate pseudo-labels, and show that its power was gre...
[ "Dong, Chengyu", "Wang, Zihan", "Shang, Jingbo" ]
Debiasing Made State-of-the-art: Revisiting the Simple Seed-based Weak Supervision for Text Classification
emnlp-main.32
2305.14794
[ "https://github.com/shwinshaker/simseed" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.33.bib
https://aclanthology.org/2023.emnlp-main.33/
@inproceedings{chen-etal-2023-enhance, title = "How to Enhance Causal Discrimination of Utterances: A Case on Affective Reasoning", author = "Chen, Hang and Yang, Xinyu and Luo, Jing and Zhu, Wenjing", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitl...
Our investigation into the Affective Reasoning in Conversation (ARC) task highlights the challenge of causal discrimination. Almost all existing models, including large language models (LLMs), excel at capturing semantic correlations within utterance embeddings but fall short in determining the specific causal relation...
[ "Chen, Hang", "Yang, Xinyu", "Luo, Jing", "Zhu, Wenjing" ]
How to Enhance Causal Discrimination of Utterances: A Case on Affective Reasoning
emnlp-main.33
2305.02615
[ "https://github.com/zodiark-ch/mater-of-our-emnlp2023-paper" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.34.bib
https://aclanthology.org/2023.emnlp-main.34/
@inproceedings{si-etal-2023-compressing, title = "Compressing and Debiasing Vision-Language Pre-Trained Models for Visual Question Answering", author = "Si, Qingyi and Liu, Yuanxin and Lin, Zheng and Fu, Peng and Cao, Yanan and Wang, Weiping", editor = "Bouamor, Houda and...
Despite the excellent performance of vision-language pre-trained models (VLPs) on conventional VQA task, they still suffer from two problems: First, VLPs tend to rely on language biases in datasets and fail to generalize to out-of-distribution (OOD) data. Second, they are inefficient in terms of memory footprint and co...
[ "Si, Qingyi", "Liu, Yuanxin", "Lin, Zheng", "Fu, Peng", "Cao, Yanan", "Wang, Weiping" ]
Compressing and Debiasing Vision-Language Pre-Trained Models for Visual Question Answering
emnlp-main.34
2210.14558
[ "https://github.com/phoebussi/compress-robust-vqa" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.35.bib
https://aclanthology.org/2023.emnlp-main.35/
@inproceedings{cole-etal-2023-selectively, title = "Selectively Answering Ambiguous Questions", author = "Cole, Jeremy and Zhang, Michael and Gillick, Daniel and Eisenschlos, Julian and Dhingra, Bhuwan and Eisenstein, Jacob", editor = "Bouamor, Houda and Pino, Juan ...
Trustworthy language models should abstain from answering questions when they do not know the answer. However, the answer to a question can be unknown for a variety of reasons. Prior research has focused on the case in which the question is clear and the answer is unambiguous but possibly unknown. However, the answer t...
[ "Cole, Jeremy", "Zhang, Michael", "Gillick, Daniel", "Eisenschlos, Julian", "Dhingra, Bhuwan", "Eisenstein, Jacob" ]
Selectively Answering Ambiguous Questions
emnlp-main.35
2305.14613
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.36.bib
https://aclanthology.org/2023.emnlp-main.36/
@inproceedings{lee-etal-2023-temporal, title = "Temporal Knowledge Graph Forecasting Without Knowledge Using In-Context Learning", author = "Lee, Dong-Ho and Ahrabian, Kian and Jin, Woojeong and Morstatter, Fred and Pujara, Jay", editor = "Bouamor, Houda and Pino, Juan an...
Temporal knowledge graph (TKG) forecasting benchmarks challenge models to predict future facts using knowledge of past facts. In this paper, we develop an approach to use in-context learning (ICL) with large language models (LLMs) for TKG forecasting. Our extensive evaluation compares diverse baselines, including both ...
[ "Lee, Dong-Ho", "Ahrabian, Kian", "Jin, Woojeong", "Morstatter, Fred", "Pujara, Jay" ]
Temporal Knowledge Graph Forecasting Without Knowledge Using In-Context Learning
emnlp-main.36
2305.10613
[ "https://github.com/usc-isi-i2/isi-tkg-icl" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.37.bib
https://aclanthology.org/2023.emnlp-main.37/
@inproceedings{hwang-etal-2023-knowledge, title = "Knowledge Graph Compression Enhances Diverse Commonsense Generation", author = "Hwang, EunJeong and Thost, Veronika and Shwartz, Vered and Ma, Tengfei", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", bookti...
Generating commonsense explanations requires reasoning about commonsense knowledge beyond what is explicitly mentioned in the context. Existing models use commonsense knowledge graphs such as ConceptNet to extract a subgraph of relevant knowledge pertaining to concepts in the input. However, due to the large coverage a...
[ "Hwang, EunJeong", "Thost, Veronika", "Shwartz, Vered", "Ma, Tengfei" ]
Knowledge Graph Compression Enhances Diverse Commonsense Generation
emnlp-main.37
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.38.bib
https://aclanthology.org/2023.emnlp-main.38/
@inproceedings{li-etal-2023-pragmatic, title = "Pragmatic Reasoning Unlocks Quantifier Semantics for Foundation Models", author = "Li, Yiyuan and Menon, Rakesh and Ghosh, Sayan and Srivastava, Shashank", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", bookti...
Generalized quantifiers (e.g., $\textit{few}$, $\textit{most}$) are used to indicate the proportions predicates satisfy (for example, $\textit{some}$ apples are red). One way to interpret quantifier semantics is to explicitly bind these satisfactions with percentage scopes (e.g., 30{\%}-40{\%} of apples are red). This ...
[ "Li, Yiyuan", "Menon, Rakesh", "Ghosh, Sayan", "Srivastava, Shashank" ]
Pragmatic Reasoning Unlocks Quantifier Semantics for Foundation Models
emnlp-main.38
2311.04659
[ "https://github.com/nativeatom/presque" ]
https://huggingface.co/papers/2311.04659
0
0
0
4
[]
[ "billli/QuRe" ]
[]
1
Oral
https://aclanthology.org/2023.emnlp-main.39.bib
https://aclanthology.org/2023.emnlp-main.39/
@inproceedings{liu-etal-2023-llm, title = "{LLM}-{FP}4: 4-Bit Floating-Point Quantized Transformers", author = "Liu, Shih-yang and Liu, Zechun and Huang, Xijie and Dong, Pingcheng and Cheng, Kwang-Ting", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", ...
We propose LLM-FP4 for quantizing both weights and activations in large language models (LLMs) down to 4-bit floating-point values, in a post-training manner. Existing post-training quantization (PTQ) solutions are primarily integer-based and struggle with bit widths below 8 bits. Compared to integer quantization, floa...
[ "Liu, Shih-yang", "Liu, Zechun", "Huang, Xijie", "Dong, Pingcheng", "Cheng, Kwang-Ting" ]
LLM-FP4: 4-Bit Floating-Point Quantized Transformers
emnlp-main.39
2310.16836
[ "https://github.com/nbasyl/llm-fp4" ]
https://huggingface.co/papers/2310.16836
3
13
0
5
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.40.bib
https://aclanthology.org/2023.emnlp-main.40/
@inproceedings{tang-etal-2023-improving, title = "Improving Biomedical Abstractive Summarisation with Knowledge Aggregation from Citation Papers", author = "Tang, Chen and Wang, Shun and Goldsack, Tomas and Lin, Chenghua", editor = "Bouamor, Houda and Pino, Juan and Bali, ...
Abstracts derived from biomedical literature possess distinct domain-specific characteristics, including specialised writing styles and biomedical terminologies, which necessitate a deep understanding of the related literature. As a result, existing language models struggle to generate technical summaries that are on p...
[ "Tang, Chen", "Wang, Shun", "Goldsack, Tomas", "Lin, Chenghua" ]
Improving Biomedical Abstractive Summarisation with Knowledge Aggregation from Citation Papers
emnlp-main.40
2310.15684
[ "https://github.com/tangg555/biomed-sum" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.41.bib
https://aclanthology.org/2023.emnlp-main.41/
@inproceedings{ye-durrett-2023-explanation, title = "Explanation Selection Using Unlabeled Data for Chain-of-Thought Prompting", author = "Ye, Xi and Durrett, Greg", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empiric...
Recent work has shown how to prompt large language models with explanations to obtain strong performance on textual reasoning tasks, i.e., the chain-of-thought paradigm. However, subtly different explanations can yield widely varying downstream task accuracy. Explanations that have not been {``}tuned{''} for a task, su...
[ "Ye, Xi", "Durrett, Greg" ]
Explanation Selection Using Unlabeled Data for Chain-of-Thought Prompting
emnlp-main.41
2302.04813
[ "https://github.com/xiye17/explselection" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.42.bib
https://aclanthology.org/2023.emnlp-main.42/
@inproceedings{dale-etal-2023-halomi, title = "{H}al{O}mi: A Manually Annotated Benchmark for Multilingual Hallucination and Omission Detection in Machine Translation", author = "Dale, David and Voita, Elena and Lam, Janice and Hansanti, Prangthip and Ropers, Christophe and Ka...
Hallucinations in machine translation are translations that contain information completely unrelated to the input. Omissions are translations that do not include some of the input information. While both cases tend to be catastrophic errors undermining user trust, annotated data with these types of pathologies is extre...
[ "Dale, David", "Voita, Elena", "Lam, Janice", "Hansanti, Prangthip", "Ropers, Christophe", "Kalbassi, Elahe", "Gao, Cynthia", "Barrault, Loic", "Costa-juss{\\`a}, Marta" ]
HalOmi: A Manually Annotated Benchmark for Multilingual Hallucination and Omission Detection in Machine Translation
emnlp-main.42
2305.11746
[ "https://github.com/facebookresearch/stopes" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.43.bib
https://aclanthology.org/2023.emnlp-main.43/
@inproceedings{he-etal-2023-gradient, title = "Gradient-based Gradual Pruning for Language-Specific Multilingual Neural Machine Translation", author = "He, Dan and Pham, Minh-Quang and Ha, Thanh-Le and Turchi, Marco", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalik...
Multilingual neural machine translation (MNMT) offers the convenience of translating between multiple languages with a single model. However, MNMT often suffers from performance degradation in high-resource languages compared to bilingual counterparts. This degradation is commonly attributed to parameter interference, ...
[ "He, Dan", "Pham, Minh-Quang", "Ha, Thanh-Le", "Turchi, Marco" ]
Gradient-based Gradual Pruning for Language-Specific Multilingual Neural Machine Translation
emnlp-main.43
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.44.bib
https://aclanthology.org/2023.emnlp-main.44/
@inproceedings{whitehouse-etal-2023-llm, title = "{LLM}-powered Data Augmentation for Enhanced Cross-lingual Performance", author = "Whitehouse, Chenxi and Choudhury, Monojit and Aji, Alham Fikri", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Procee...
This paper explores the potential of leveraging Large Language Models (LLMs) for data augmentation in multilingual commonsense reasoning datasets where the available training data is extremely limited. To achieve this, we utilise several LLMs, namely Dolly-v2, StableVicuna, ChatGPT, and GPT-4, to augment three datasets...
[ "Whitehouse, Chenxi", "Choudhury, Monojit", "Aji, Alham Fikri" ]
LLM-powered Data Augmentation for Enhanced Cross-lingual Performance
emnlp-main.44
2305.14288
[ "https://github.com/mbzuai-nlp/gen-X" ]
https://huggingface.co/papers/2305.14288
2
0
0
3
[]
[ "coref-data/gen_winograd_raw" ]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.45.bib
https://aclanthology.org/2023.emnlp-main.45/
@inproceedings{wang-etal-2023-prompt-based, title = "Prompt-based Logical Semantics Enhancement for Implicit Discourse Relation Recognition", author = "Wang, Chenxu and Jian, Ping and Huang, Mu", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedi...
Implicit Discourse Relation Recognition (IDRR), which infers discourse relations without the help of explicit connectives, is still a crucial and challenging task for discourse parsing. Recent works tend to exploit the hierarchical structure information from the annotated senses, which demonstrate enhanced discourse re...
[ "Wang, Chenxu", "Jian, Ping", "Huang, Mu" ]
Prompt-based Logical Semantics Enhancement for Implicit Discourse Relation Recognition
emnlp-main.45
2311.00367
[ "https://github.com/lalalamdbf/plse_idrr" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.46.bib
https://aclanthology.org/2023.emnlp-main.46/
@inproceedings{chung-yu-2023-vlis, title = "{VLIS}: Unimodal Language Models Guide Multimodal Language Generation", author = "Chung, Jiwan and Yu, Youngjae", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Metho...
Multimodal language generation, which leverages the synergy of language and vision, is a rapidly expanding field. However, existing vision-language models face challenges in tasks that require complex linguistic understanding. To address this issue, we introduce Visual-Language models as Importance Sampling weights (VL...
[ "Chung, Jiwan", "Yu, Youngjae" ]
VLIS: Unimodal Language Models Guide Multimodal Language Generation
emnlp-main.46
2310.09767
[ "https://github.com/jiwanchung/vlis" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.47.bib
https://aclanthology.org/2023.emnlp-main.47/
@inproceedings{suresh-etal-2023-conceptual, title = "Conceptual structure coheres in human cognition but not in large language models", author = "Suresh, Siddharth and Mukherjee, Kushin and Yu, Xizheng and Huang, Wei-Chun and Padua, Lisa and Rogers, Timothy", editor = "Bou...
Neural network models of language have long been used as a tool for developing hypotheses about conceptual representation in the mind and brain. For many years, such use involved extracting vector-space representations of words and using distances among these to predict or understand human behavior in various semantic ...
[ "Suresh, Siddharth", "Mukherjee, Kushin", "Yu, Xizheng", "Huang, Wei-Chun", "Padua, Lisa", "Rogers, Timothy" ]
Conceptual structure coheres in human cognition but not in large language models
emnlp-main.47
2304.02754
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.48.bib
https://aclanthology.org/2023.emnlp-main.48/
@inproceedings{feng-etal-2023-towards, title = "Towards {LLM}-driven Dialogue State Tracking", author = "Feng, Yujie and Lu, Zexin and Liu, Bo and Zhan, Liming and Wu, Xiao-Ming", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceeding...
Dialogue State Tracking (DST) is of paramount importance in ensuring accurate tracking of user goals and system actions within task-oriented dialogue systems. The emergence of large language models (LLMs) such as GPT3 and ChatGPT has sparked considerable interest in assessing their efficacy across diverse applications....
[ "Feng, Yujie", "Lu, Zexin", "Liu, Bo", "Zhan, Liming", "Wu, Xiao-Ming" ]
Towards LLM-driven Dialogue State Tracking
emnlp-main.48
2310.14970
[ "https://github.com/woodscene/ldst" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.49.bib
https://aclanthology.org/2023.emnlp-main.49/
@inproceedings{zhang-etal-2023-learning-language, title = "Learning Language-guided Adaptive Hyper-modality Representation for Multimodal Sentiment Analysis", author = "Zhang, Haoyu and Wang, Yu and Yin, Guanghao and Liu, Kejun and Liu, Yuanyuan and Yu, Tianshu", editor = ...
Though Multimodal Sentiment Analysis (MSA) proves effective by utilizing rich information from multiple sources (*e.g.,* language, video, and audio), the potential sentiment-irrelevant and conflicting information across modalities may hinder the performance from being further improved. To alleviate this, we present Ada...
[ "Zhang, Haoyu", "Wang, Yu", "Yin, Guanghao", "Liu, Kejun", "Liu, Yuanyuan", "Yu, Tianshu" ]
Learning Language-guided Adaptive Hyper-modality Representation for Multimodal Sentiment Analysis
emnlp-main.49
2310.05804
[ "https://github.com/Haoyu-ha/ALMT" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.50.bib
https://aclanthology.org/2023.emnlp-main.50/
@inproceedings{pantazopoulos-etal-2023-multitask, title = "Multitask Multimodal Prompted Training for Interactive Embodied Task Completion", author = "Pantazopoulos, Georgios and Nikandrou, Malvina and Parekh, Amit and Hemanthage, Bhathiya and Eshghi, Arash and Konstas, Ioanni...
Interactive and embodied tasks pose at least two fundamental challenges to existing Vision {\&} Language (VL) models, including 1) grounding language in trajectories of actions and observations, and 2) referential disambiguation. To tackle these challenges, we propose an Embodied MultiModal Agent (EMMA): a unified enco...
[ "Pantazopoulos, Georgios", "Nik", "rou, Malvina", "Parekh, Amit", "Hemanthage, Bhathiya", "Eshghi, Arash", "Konstas, Ioannis", "Rieser, Verena", "Lemon, Oliver", "Suglia, Aless", "ro" ]
Multitask Multimodal Prompted Training for Interactive Embodied Task Completion
emnlp-main.50
2311.04067
[ "" ]
https://huggingface.co/papers/2311.04067
1
1
0
9
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.51.bib
https://aclanthology.org/2023.emnlp-main.51/
@inproceedings{liu-etal-2023-afraid, title = "We{'}re Afraid Language Models Aren{'}t Modeling Ambiguity", author = "Liu, Alisa and Wu, Zhaofeng and Michael, Julian and Suhr, Alane and West, Peter and Koller, Alexander and Swayamdipta, Swabha and Smith, Noah and...
Ambiguity is an intrinsic feature of natural language. Managing ambiguity is a key part of human language understanding, allowing us to anticipate misunderstanding as communicators and revise our interpretations as listeners. As language models are increasingly employed as dialogue interfaces and writing aids, handling...
[ "Liu, Alisa", "Wu, Zhaofeng", "Michael, Julian", "Suhr, Alane", "West, Peter", "Koller, Alex", "er", "Swayamdipta, Swabha", "Smith, Noah", "Choi, Yejin" ]
We're Afraid Language Models Aren't Modeling Ambiguity
emnlp-main.51
2304.14399
[ "https://github.com/alisawuffles/ambient" ]
https://huggingface.co/papers/2304.14399
1
0
0
9
[]
[ "metaeval/ambient" ]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.52.bib
https://aclanthology.org/2023.emnlp-main.52/
@inproceedings{liu-etal-2023-linear, title = "Linear-Time Modeling of Linguistic Structure: An Order-Theoretic Perspective", author = "Liu, Tianyu and Amini, Afra and Sachan, Mrinmaya and Cotterell, Ryan", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", book...
Tasks that model the relation between pairs of tokens in a string are a vital part of understanding natural language. Such tasks, in general, require exhaustive pair-wise comparisons of tokens, thus having a quadratic runtime complexity in the length of the string. We show that these exhaustive comparisons can be avoid...
[ "Liu, Tianyu", "Amini, Afra", "Sachan, Mrinmaya", "Cotterell, Ryan" ]
Linear-Time Modeling of Linguistic Structure: An Order-Theoretic Perspective
emnlp-main.52
2305.15057
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.53.bib
https://aclanthology.org/2023.emnlp-main.53/
@inproceedings{bao-etal-2023-gemini, title = "{GEMINI}: Controlling The Sentence-Level Summary Style in Abstractive Text Summarization", author = "Bao, Guangsheng and Ou, Zebin and Zhang, Yue", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceeding...
Human experts write summaries using different techniques, including extracting a sentence from the document and rewriting it, or fusing various information from the document to abstract it. These techniques are flexible and thus difficult to be imitated by any single method. To address this issue, we propose an adaptiv...
[ "Bao, Guangsheng", "Ou, Zebin", "Zhang, Yue" ]
GEMINI: Controlling The Sentence-Level Summary Style in Abstractive Text Summarization
emnlp-main.53
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.54.bib
https://aclanthology.org/2023.emnlp-main.54/
@inproceedings{chen-etal-2023-fidelity, title = "Fidelity-Enriched Contrastive Search: Reconciling the Faithfulness-Diversity Trade-Off in Text Generation", author = "Chen, Wei-Lin and Wu, Cheng-Kuang and Chen, Hsin-Hsi and Chen, Chung-Chi", editor = "Bouamor, Houda and Pino, Jua...
In this paper, we address the hallucination problem commonly found in natural language generation tasks. Language models often generate fluent and convincing content but can lack consistency with the provided source, resulting in potential inaccuracies. We propose a new decoding method called Fidelity-Enriched Contrast...
[ "Chen, Wei-Lin", "Wu, Cheng-Kuang", "Chen, Hsin-Hsi", "Chen, Chung-Chi" ]
Fidelity-Enriched Contrastive Search: Reconciling the Faithfulness-Diversity Trade-Off in Text Generation
emnlp-main.54
2310.14981
[ "https://github.com/ntunlplab/fecs" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.55.bib
https://aclanthology.org/2023.emnlp-main.55/
@inproceedings{moon-etal-2023-analyzing, title = "Analyzing Norm Violations in Live-Stream Chat", author = "Moon, Jihyung and Lee, Dong-Ho and Cho, Hyundong and Jin, Woojeong and Park, Chan and Kim, Minwoo and May, Jonathan and Pujara, Jay and Park, Sungjo...
Toxic language, such as hate speech, can deter users from participating in online communities and enjoying popular platforms. Previous approaches to detecting toxic language and norm violations have been primarily concerned with conversations from online forums and social media, such as Reddit and Twitter. These approa...
[ "Moon, Jihyung", "Lee, Dong-Ho", "Cho, Hyundong", "Jin, Woojeong", "Park, Chan", "Kim, Minwoo", "May, Jonathan", "Pujara, Jay", "Park, Sungjoon" ]
Analyzing Norm Violations in Live-Stream Chat
emnlp-main.55
2305.10731
[ "" ]
https://huggingface.co/papers/2305.10731
0
0
0
9
[]
[]
[]
1
Oral
https://aclanthology.org/2023.emnlp-main.56.bib
https://aclanthology.org/2023.emnlp-main.56/
@inproceedings{singh-etal-2023-coarse, title = "Coarse-to-Fine Contrastive Learning in Image-Text-Graph Space for Improved Vision-Language Compositionality", author = "Singh, Harman and Zhang, Pengchuan and Wang, Qifan and Wang, Mengjiao and Xiong, Wenhan and Du, Jingfei and ...
Contrastively trained vision-language models have achieved remarkable progress in vision and language representation learning. However, recent research has highlighted severe limitations of these models in their ability to perform compositional reasoning over objects, attributes, and relations. Scene graphs have emerge...
[ "Singh, Harman", "Zhang, Pengchuan", "Wang, Qifan", "Wang, Mengjiao", "Xiong, Wenhan", "Du, Jingfei", "Chen, Yu" ]
Coarse-to-Fine Contrastive Learning in Image-Text-Graph Space for Improved Vision-Language Compositionality
emnlp-main.56
2305.13812
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.57.bib
https://aclanthology.org/2023.emnlp-main.57/
@inproceedings{han-etal-2023-reading, title = "Reading Books is Great, But Not if You Are Driving! Visually Grounded Reasoning about Defeasible Commonsense Norms", author = "Han, Seungju and Kim, Junhyeok and Hessel, Jack and Jiang, Liwei and Chung, Jiwan and Son, Yejin and ...
Commonsense norms are defeasible by context: reading books is usually great, but not when driving a car. While contexts can be explicitly described in language, in embodied scenarios, contexts are often provided visually. This type of visually grounded reasoning about defeasible commonsense norms is generally easy for ...
[ "Han, Seungju", "Kim, Junhyeok", "Hessel, Jack", "Jiang, Liwei", "Chung, Jiwan", "Son, Yejin", "Choi, Yejin", "Yu, Youngjae" ]
Reading Books is Great, But Not if You Are Driving! Visually Grounded Reasoning about Defeasible Commonsense Norms
emnlp-main.57
2310.10418
[ "https://github.com/wade3han/normlens" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.58.bib
https://aclanthology.org/2023.emnlp-main.58/
@inproceedings{zhang-etal-2023-enhancing-uncertainty, title = "Enhancing Uncertainty-Based Hallucination Detection with Stronger Focus", author = "Zhang, Tianhang and Qiu, Lin and Guo, Qipeng and Deng, Cheng and Zhang, Yue and Zhang, Zheng and Zhou, Chenghu and W...
Large Language Models (LLMs) have gained significant popularity for their impressive performance across diverse fields. However, LLMs are prone to hallucinate untruthful or nonsensical outputs that fail to meet user expectations in many real-world applications. Existing works for detecting hallucinations in LLMs either...
[ "Zhang, Tianhang", "Qiu, Lin", "Guo, Qipeng", "Deng, Cheng", "Zhang, Yue", "Zhang, Zheng", "Zhou, Chenghu", "Wang, Xinbing", "Fu, Luoyi" ]
Enhancing Uncertainty-Based Hallucination Detection with Stronger Focus
emnlp-main.58
2311.13230
[ "https://github.com/zthang/focus" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.59.bib
https://aclanthology.org/2023.emnlp-main.59/
@inproceedings{feng-etal-2023-factkb, title = "{F}act{KB}: Generalizable Factuality Evaluation using Language Models Enhanced with Factual Knowledge", author = "Feng, Shangbin and Balachandran, Vidhisha and Bai, Yuyang and Tsvetkov, Yulia", editor = "Bouamor, Houda and Pino, Juan...
Evaluating the factual consistency of automatically generated summaries is essential for the progress and adoption of reliable summarization systems. Despite recent advances, existing factuality evaluation models are not robust, being especially prone to entity and relation errors in new domains. We propose FactKB{---}...
[ "Feng, Shangbin", "Balach", "ran, Vidhisha", "Bai, Yuyang", "Tsvetkov, Yulia" ]
FactKB: Generalizable Factuality Evaluation using Language Models Enhanced with Factual Knowledge
emnlp-main.59
2305.08281
[ "https://github.com/bunsenfeng/factkb" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.60.bib
https://aclanthology.org/2023.emnlp-main.60/
@inproceedings{he-etal-2023-mitigating, title = "Mitigating Backdoor Poisoning Attacks through the Lens of Spurious Correlation", author = "He, Xuanli and Xu, Qiongkai and Wang, Jun and Rubinstein, Benjamin and Cohn, Trevor", editor = "Bouamor, Houda and Pino, Juan and ...
Modern NLP models are often trained over large untrusted datasets, raising the potential for a malicious adversary to compromise model behaviour. For instance, backdoors can be implanted through crafting training instances with a specific textual trigger and a target label. This paper posits that backdoor poisoning att...
[ "He, Xuanli", "Xu, Qiongkai", "Wang, Jun", "Rubinstein, Benjamin", "Cohn, Trevor" ]
Mitigating Backdoor Poisoning Attacks through the Lens of Spurious Correlation
emnlp-main.60
2305.11596
[ "https://github.com/xlhex/emnlp2023_z-defence" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.61.bib
https://aclanthology.org/2023.emnlp-main.61/
@inproceedings{wei-etal-2023-symbol, title = "Symbol tuning improves in-context learning in language models", author = "Wei, Jerry and Hou, Le and Lampinen, Andrew and Chen, Xiangning and Huang, Da and Tay, Yi and Chen, Xinyun and Lu, Yifeng and Zhou, Denn...
We present symbol tuning - finetuning language models on in-context input-label pairs where natural language labels (e.g., {``}positive/negative sentiment{''}) are replaced with arbitrary symbols (e.g., {``}foo/bar{''}). Symbol tuning leverages the intuition that when a model cannot use instructions or natural language...
[ "Wei, Jerry", "Hou, Le", "Lampinen, Andrew", "Chen, Xiangning", "Huang, Da", "Tay, Yi", "Chen, Xinyun", "Lu, Yifeng", "Zhou, Denny", "Ma, Tengyu", "Le, Quoc" ]
Symbol tuning improves in-context learning in language models
emnlp-main.61
2305.08298
[ "" ]
https://huggingface.co/papers/2305.08298
4
3
0
11
[]
[ "tasksource/icl-symbol-tuning-instruct", "euclaise/symtune_mini" ]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.62.bib
https://aclanthology.org/2023.emnlp-main.62/
@inproceedings{gauthier-levy-2023-neural, title = "The neural dynamics of word recognition and integration", author = "Gauthier, Jon and Levy, Roger", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in N...
Listeners recognize and integrate words in rapid and noisy everyday speech by combining expectations about upcoming content with incremental sensory evidence. We present a computational model of word recognition which formalizes this perceptual process in Bayesian decision theory. We fit this model to explain scalp EEG...
[ "Gauthier, Jon", "Levy, Roger" ]
The neural dynamics of word recognition and integration
emnlp-main.62
2305.13388
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.63.bib
https://aclanthology.org/2023.emnlp-main.63/
@inproceedings{kim-etal-2023-tree, title = "Tree of Clarifications: Answering Ambiguous Questions with Retrieval-Augmented Large Language Models", author = "Kim, Gangwoo and Kim, Sungdong and Jeon, Byeongguk and Park, Joonsuk and Kang, Jaewoo", editor = "Bouamor, Houda and ...
Questions in open-domain question answering are often ambiguous, allowing multiple interpretations. One approach to handling them is to identify all possible interpretations of the ambiguous question (AQ) and to generate a long-form answer addressing them all, as suggested by Stelmakh et al., (2022). While it provides ...
[ "Kim, Gangwoo", "Kim, Sungdong", "Jeon, Byeongguk", "Park, Joonsuk", "Kang, Jaewoo" ]
Tree of Clarifications: Answering Ambiguous Questions with Retrieval-Augmented Large Language Models
emnlp-main.63
2310.14696
[ "https://github.com/gankim/tree-of-clarifications" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.64.bib
https://aclanthology.org/2023.emnlp-main.64/
@inproceedings{huang-etal-2023-incorporating, title = "Incorporating Worker Perspectives into {MT}urk Annotation Practices for {NLP}", author = "Huang, Olivia and Fleisig, Eve and Klein, Dan", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings...
Current practices regarding data collection for natural language processing on Amazon Mechanical Turk (MTurk) often rely on a combination of studies on data quality and heuristics shared among NLP researchers. However, without considering the perspectives of MTurk workers, these approaches are susceptible to issues reg...
[ "Huang, Olivia", "Fleisig, Eve", "Klein, Dan" ]
Incorporating Worker Perspectives into MTurk Annotation Practices for NLP
emnlp-main.64
2311.02802
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.65.bib
https://aclanthology.org/2023.emnlp-main.65/
@inproceedings{guo-etal-2023-predict, title = "Predict the Future from the Past? On the Temporal Data Distribution Shift in Financial Sentiment Classifications", author = "Guo, Yue and Hu, Chenxi and Yang, Yi", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", bookti...
Temporal data distribution shift is prevalent in the financial text. How can a financial sentiment analysis system be trained in a volatile market environment that can accurately infer sentiment and be robust to temporal data distribution shifts? In this paper, we conduct an empirical study on the financial sentiment a...
[ "Guo, Yue", "Hu, Chenxi", "Yang, Yi" ]
Predict the Future from the Past? On the Temporal Data Distribution Shift in Financial Sentiment Classifications
emnlp-main.65
2310.12620
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.66.bib
https://aclanthology.org/2023.emnlp-main.66/
@inproceedings{xu-etal-2023-look, title = "Look-back Decoding for Open-Ended Text Generation", author = "Xu, Nan and Zhou, Chunting and Celikyilmaz, Asli and Ma, Xuezhe", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Co...
Given a prefix (context), open-ended generation aims to decode texts that are coherent, which do not abruptly drift from previous topics, and informative, which do not suffer from undesired repetitions. In this paper, we propose Look-back, an improved decoding algorithm that leverages the Kullback{--}Leibler divergence...
[ "Xu, Nan", "Zhou, Chunting", "Celikyilmaz, Asli", "Ma, Xuezhe" ]
Look-back Decoding for Open-Ended Text Generation
emnlp-main.66
2305.13477
[ "https://github.com/xunannancy/lookbackdecoding" ]
https://huggingface.co/papers/2305.13477
1
0
0
4
[]
[]
[]
1
Oral
https://aclanthology.org/2023.emnlp-main.67.bib
https://aclanthology.org/2023.emnlp-main.67/
@inproceedings{huang-etal-2023-large, title = "Large Language Models Can Self-Improve", author = "Huang, Jiaxin and Gu, Shixiang and Hou, Le and Wu, Yuexin and Wang, Xuezhi and Yu, Hongkun and Han, Jiawei", editor = "Bouamor, Houda and Pino, Juan and B...
Large Language Models (LLMs) have achieved excellent performance in various tasks. However, fine-tuning an LLM requires extensive supervision. Humans, on the other hand, may improve their reasoning abilities by self-thinking without external inputs. In this work, we demonstrate that an LLM is also capable of self-impro...
[ "Huang, Jiaxin", "Gu, Shixiang", "Hou, Le", "Wu, Yuexin", "Wang, Xuezhi", "Yu, Hongkun", "Han, Jiawei" ]
Large Language Models Can Self-Improve
emnlp-main.67
2405.20309
[ "" ]
https://huggingface.co/papers/2405.20309
2
1
0
6
[]
[]
[]
1
Oral
https://aclanthology.org/2023.emnlp-main.68.bib
https://aclanthology.org/2023.emnlp-main.68/
@inproceedings{wang-etal-2023-codet5, title = "{C}ode{T}5+: Open Code Large Language Models for Code Understanding and Generation", author = "Wang, Yue and Le, Hung and Gotmare, Akhilesh and Bui, Nghi and Li, Junnan and Hoi, Steven", editor = "Bouamor, Houda and Pin...
Large language models (LLMs) pretrained on vast source code have achieved prominent progress in code intelligence. However, existing code LLMs have two main limitations. First, they often adopt a specific architecture (encoder-only or decoder-only) or rely on a unified encoder-decoder network for different downstream t...
[ "Wang, Yue", "Le, Hung", "Gotmare, Akhilesh", "Bui, Nghi", "Li, Junnan", "Hoi, Steven" ]
CodeT5+: Open Code Large Language Models for Code Understanding and Generation
emnlp-main.68
2305.07922
[ "https://github.com/salesforce/codet5" ]
https://huggingface.co/papers/2305.07922
3
4
2
6
[ "Salesforce/codet5p-16b", "Salesforce/instructcodet5p-16b", "Salesforce/codet5p-110m-embedding", "Salesforce/codet5p-2b", "Salesforce/codet5p-220m", "Salesforce/codet5p-770m-py", "Salesforce/codet5p-770m", "Salesforce/codet5p-6b", "Salesforce/codet5p-220m-py", "michaelfeil/ct2fast-codet5p-770m-py"...
[]
[ "TIGER-Lab/GenAI-Arena", "Sharathhebbar24/One-stop-for-Open-source-models", "ZhangYuhan/3DGen-Arena", "alKoGolik/codellama-CodeLlama-7b-hf", "li-qing/FIRE", "tianleliphoebe/visual-arena", "Ashmal/MobiLlama", "jeevavijay10/code-gen", "alKoGolik/asd", "K00B404/One-stop-till-you-drop", "lb1064/Sale...
1
Oral
https://aclanthology.org/2023.emnlp-main.69.bib
https://aclanthology.org/2023.emnlp-main.69/
@inproceedings{petit-etal-2023-structural, title = "Structural generalization in {COGS}: Supertagging is (almost) all you need", author = "Petit, Alban and Corro, Caio and Yvon, Fran{\c{c}}ois", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedin...
In many Natural Language Processing applications, neural networks have been found to fail to generalize on out-of-distribution examples. In particular, several recent semantic parsing datasets have put forward important limitations of neural networks in cases where compositional generalization is required. In this work...
[ "Petit, Alban", "Corro, Caio", "Yvon, Fran{\\c{c}}ois" ]
Structural generalization in COGS: Supertagging is (almost) all you need
emnlp-main.69
2310.14124
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.70.bib
https://aclanthology.org/2023.emnlp-main.70/
@inproceedings{pei-etal-2023-biot5, title = "{B}io{T}5: Enriching Cross-modal Integration in Biology with Chemical Knowledge and Natural Language Associations", author = "Pei, Qizhi and Zhang, Wei and Zhu, Jinhua and Wu, Kehan and Gao, Kaiyuan and Wu, Lijun and Xia, Yin...
Recent advancements in biological research leverage the integration of molecules, proteins, and natural language to enhance drug discovery. However, current models exhibit several limitations, such as the generation of invalid molecular SMILES, underutilization of contextual information, and equal treatment of structur...
[ "Pei, Qizhi", "Zhang, Wei", "Zhu, Jinhua", "Wu, Kehan", "Gao, Kaiyuan", "Wu, Lijun", "Xia, Yingce", "Yan, Rui" ]
BioT5: Enriching Cross-modal Integration in Biology with Chemical Knowledge and Natural Language Associations
emnlp-main.70
2310.07276
[ "https://github.com/QizhiPei/BioT5" ]
https://huggingface.co/papers/2310.07276
1
5
0
8
[ "QizhiPei/biot5-base", "QizhiPei/biot5-base-text2mol", "QizhiPei/biot5-base-mol2text", "QizhiPei/biot5-base-peer-solubility", "QizhiPei/biot5-base-dti-human", "QizhiPei/biot5-base-dti-biosnap", "QizhiPei/biot5-base-dti-bindingdb", "QizhiPei/biot5-base-peer-binloc", "QizhiPei/biot5-base-peer-human_pp...
[ "QizhiPei/BioT5_finetune_dataset" ]
[ "ndhieunguyen/Lang2mol-Diff" ]
1
Poster
https://aclanthology.org/2023.emnlp-main.71.bib
https://aclanthology.org/2023.emnlp-main.71/
@inproceedings{wen-yi-mimno-2023-hyperpolyglot, title = "Hyperpolyglot {LLM}s: Cross-Lingual Interpretability in Token Embeddings", author = "Wen-Yi, Andrea W and Mimno, David", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conferenc...
Cross-lingual transfer learning is an important property of multilingual large language models (LLMs). But how do LLMs represent relationships between languages? Every language model has an input layer that maps tokens to vectors. This ubiquitous layer of language models is often overlooked. We find that similarities b...
[ "Wen-Yi, Andrea W", "Mimno, David" ]
Hyperpolyglot LLMs: Cross-Lingual Interpretability in Token Embeddings
emnlp-main.71
2311.18034
[ "https://github.com/andreawwenyi/hyperpolyglot" ]
https://huggingface.co/papers/2311.18034
1
0
0
2
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.72.bib
https://aclanthology.org/2023.emnlp-main.72/
@inproceedings{wang-etal-2023-target, title = "Target-oriented Proactive Dialogue Systems with Personalization: Problem Formulation and Dataset Curation", author = "Wang, Jian and Cheng, Yi and Lin, Dongding and Leong, Chak and Li, Wenjie", editor = "Bouamor, Houda and Pin...
Target-oriented dialogue systems, designed to proactively steer conversations toward predefined targets or accomplish specific system-side goals, are an exciting area in conversational AI. In this work, by formulating a {\textless}dialogue act, topic{\textgreater} pair as the conversation target, we explore a novel pro...
[ "Wang, Jian", "Cheng, Yi", "Lin, Dongding", "Leong, Chak", "Li, Wenjie" ]
Target-oriented Proactive Dialogue Systems with Personalization: Problem Formulation and Dataset Curation
emnlp-main.72
2310.07397
[ "https://github.com/iwangjian/topdial" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.73.bib
https://aclanthology.org/2023.emnlp-main.73/
@inproceedings{wang-etal-2023-seqxgpt, title = "{S}eq{XGPT}: Sentence-Level {AI}-Generated Text Detection", author = "Wang, Pengyu and Li, Linyang and Ren, Ke and Jiang, Botian and Zhang, Dong and Qiu, Xipeng", editor = "Bouamor, Houda and Pino, Juan and Bali...
Widely applied large language models (LLMs) can generate human-like content, raising concerns about the abuse of LLMs. Therefore, it is important to build strong AI-generated text (AIGT) detectors. Current works only consider document-level AIGT detection; therefore, in this paper, we first introduce a sentence-level d...
[ "Wang, Pengyu", "Li, Linyang", "Ren, Ke", "Jiang, Botian", "Zhang, Dong", "Qiu, Xipeng" ]
SeqXGPT: Sentence-Level AI-Generated Text Detection
emnlp-main.73
2310.08903
[ "https://github.com/jihuai-wpy/seqxgpt" ]
https://huggingface.co/papers/2310.08903
2
1
0
6
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.74.bib
https://aclanthology.org/2023.emnlp-main.74/
@inproceedings{zhao-etal-2023-qtsumm, title = "{QTS}umm: Query-Focused Summarization over Tabular Data", author = "Zhao, Yilun and Qi, Zhenting and Nan, Linyong and Mi, Boyu and Liu, Yixin and Zou, Weijin and Han, Simeng and Chen, Ruizhe and Tang, Xiangru ...
People primarily consult tables to conduct data analysis or answer specific questions. Text generation systems that can provide accurate table summaries tailored to users{'} information needs can facilitate more efficient access to relevant data insights. Motivated by this, we define a new query-focused table summariza...
[ "Zhao, Yilun", "Qi, Zhenting", "Nan, Linyong", "Mi, Boyu", "Liu, Yixin", "Zou, Weijin", "Han, Simeng", "Chen, Ruizhe", "Tang, Xiangru", "Xu, Yumo", "Radev, Dragomir", "Cohan, Arman" ]
QTSumm: Query-Focused Summarization over Tabular Data
emnlp-main.74
2305.14303
[ "https://github.com/yilunzhao/qtsumm" ]
https://huggingface.co/papers/2305.14303
4
0
0
11
[ "yale-nlp/bart-large-finetuned-qtsumm", "yale-nlp/flan-t5-large-finetuned-qtsumm", "yale-nlp/t5-large-finetuned-qtsumm", "yale-nlp/omnitab-large-finetuned-qtsumm", "yale-nlp/tapex-large-finetuned-qtsumm", "yale-nlp/reastap-large-finetuned-qtsumm" ]
[ "yale-nlp/QTSumm", "faizalbs777/research" ]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.75.bib
https://aclanthology.org/2023.emnlp-main.75/
@inproceedings{ge-etal-2023-wrong, title = "From Wrong To Right: A Recursive Approach Towards Vision-Language Explanation", author = "Ge, Jiaxin and Subramanian, Sanjay and Darrell, Trevor and Li, Boyi", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", bookti...
Addressing the challenge of adapting pre-trained vision-language models for generating insightful explanations for visual reasoning tasks with limited annotations, we present ReVisE: a Recursive Visual Explanation algorithm. Our method iteratively computes visual features (conditioned on the text input), an answer, and...
[ "Ge, Jiaxin", "Subramanian, Sanjay", "Darrell, Trevor", "Li, Boyi" ]
From Wrong To Right: A Recursive Approach Towards Vision-Language Explanation
emnlp-main.75
2311.12391
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.76.bib
https://aclanthology.org/2023.emnlp-main.76/
@inproceedings{cardenas-etal-2023-dont, title = "{`}Don{'}t Get Too Technical with Me{'}: A Discourse Structure-Based Framework for Automatic Science Journalism", author = "Cardenas, Ronald and Yao, Bingsheng and Wang, Dakuo and Hou, Yufang", editor = "Bouamor, Houda and Pino, Ju...
Science journalism refers to the task of reporting technical findings of a scientific paper as a less technical news article to the general public audience. We aim to design an automated system to support this real-world task (i.e., automatic science journalism ) by 1) introducing a newly-constructed and real-world dat...
[ "Cardenas, Ronald", "Yao, Bingsheng", "Wang, Dakuo", "Hou, Yufang" ]
'Don't Get Too Technical with Me': A Discourse Structure-Based Framework for Automatic Science Journalism
emnlp-main.76
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.77.bib
https://aclanthology.org/2023.emnlp-main.77/
@inproceedings{yang-etal-2023-lacma, title = "{LACMA}: Language-Aligning Contrastive Learning with Meta-Actions for Embodied Instruction Following", author = "Yang, Cheng-Fu and Chen, Yen-Chun and Yang, Jianwei and Dai, Xiyang and Yuan, Lu and Wang, Yu-Chiang and Chang,...
End-to-end Transformers have demonstrated an impressive success rate for Embodied Instruction Following when the environment has been seen in training. However, they tend to struggle when deployed in an unseen environment. This lack of generalizability is due to the agent{'}s insensitivity to subtle changes in natural ...
[ "Yang, Cheng-Fu", "Chen, Yen-Chun", "Yang, Jianwei", "Dai, Xiyang", "Yuan, Lu", "Wang, Yu-Chiang", "Chang, Kai-Wei" ]
LACMA: Language-Aligning Contrastive Learning with Meta-Actions for Embodied Instruction Following
emnlp-main.77
2310.12344
[ "https://github.com/joeyy5588/lacma" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.78.bib
https://aclanthology.org/2023.emnlp-main.78/
@inproceedings{zhu-etal-2023-penalty, title = "Penalty Decoding: Well Suppress the Self-Reinforcement Effect in Open-Ended Text Generation", author = "Zhu, Wenhong and Hao, Hongkun and Wang, Rui", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceed...
The decoding algorithm is critical for open-ended text generation, transforming latent representations into coherent and meaningful outputs. This paper investigates the self-reinforcement effect in text generation and the effectiveness of a repetition penalty to mitigate it. However, determining the optimal repetition ...
[ "Zhu, Wenhong", "Hao, Hongkun", "Wang, Rui" ]
Penalty Decoding: Well Suppress the Self-Reinforcement Effect in Open-Ended Text Generation
emnlp-main.78
2310.14971
[ "https://github.com/zwhong714/penalty_decoding" ]
https://huggingface.co/papers/2310.14971
0
0
0
3
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.79.bib
https://aclanthology.org/2023.emnlp-main.79/
@inproceedings{li-etal-2023-towards-robust, title = "Towards Robust Pruning: An Adaptive Knowledge-Retention Pruning Strategy for Language Models", author = "Li, Jianwei and Lei, Qi and Cheng, Wei and Xu, Dongkuan", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika"...
The pruning objective has recently extended beyond accuracy and sparsity to robustness in language models. Despite this, existing methods struggle to enhance robustness against adversarial attacks when continually increasing model sparsity and require a retraining process. As humans step into the era of large language ...
[ "Li, Jianwei", "Lei, Qi", "Cheng, Wei", "Xu, Dongkuan" ]
Towards Robust Pruning: An Adaptive Knowledge-Retention Pruning Strategy for Language Models
emnlp-main.79
2310.13191
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.80.bib
https://aclanthology.org/2023.emnlp-main.80/
@inproceedings{makhervaks-etal-2023-clinical, title = "Clinical Contradiction Detection", author = "Makhervaks, Dave and Gillis, Plia and Radinsky, Kira", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical M...
Detecting contradictions in text is essential in determining the validity of the literature and sources that we consume. Medical corpora are riddled with conflicting statements. This is due to the large throughput of new studies and the difficulty in replicating experiments, such as clinical trials. Detecting contradic...
[ "Makhervaks, Dave", "Gillis, Plia", "Radinsky, Kira" ]
Clinical Contradiction Detection
emnlp-main.80
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.81.bib
https://aclanthology.org/2023.emnlp-main.81/
@inproceedings{liu-etal-2023-vera, title = "Vera: A General-Purpose Plausibility Estimation Model for Commonsense Statements", author = "Liu, Jiacheng and Wang, Wenya and Wang, Dianzhuo and Smith, Noah and Choi, Yejin and Hajishirzi, Hannaneh", editor = "Bouamor, Houda an...
Today{'}s language models can be remarkably intelligent yet still produce text that contains trivial commonsense errors. Therefore, we seek a retrospective verification approach that can reflect on the commonsense plausibility of the machine text, and introduce Vera, a general-purpose model that learns to estimate the ...
[ "Liu, Jiacheng", "Wang, Wenya", "Wang, Dianzhuo", "Smith, Noah", "Choi, Yejin", "Hajishirzi, Hannaneh" ]
Vera: A General-Purpose Plausibility Estimation Model for Commonsense Statements
emnlp-main.81
2305.03695
[ "https://github.com/liujch1998/vera" ]
https://huggingface.co/papers/2305.03695
3
3
0
6
[ "liujch1998/vera" ]
[ "liujch1998/vera_contrib" ]
[ "liujch1998/vera" ]
1
Oral
https://aclanthology.org/2023.emnlp-main.82.bib
https://aclanthology.org/2023.emnlp-main.82/
@inproceedings{lin-etal-2023-text, title = "Text-Transport: Toward Learning Causal Effects of Natural Language", author = "Lin, Victoria and Morency, Louis-Philippe and Ben-Michael, Eli", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of t...
As language technologies gain prominence in real-world settings, it is important to understand *how* changes to language affect reader perceptions. This can be formalized as the *causal effect* of varying a linguistic attribute (e.g., sentiment) on a reader{'}s response to the text. In this paper, we introduce Text-Tra...
[ "Lin, Victoria", "Morency, Louis-Philippe", "Ben-Michael, Eli" ]
Text-Transport: Toward Learning Causal Effects of Natural Language
emnlp-main.82
2310.20697
[ "https://github.com/torylin/text-transport" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.83.bib
https://aclanthology.org/2023.emnlp-main.83/
@inproceedings{pradeep-etal-2023-generative, title = "How Does Generative Retrieval Scale to Millions of Passages?", author = "Pradeep, Ronak and Hui, Kai and Gupta, Jai and Lelkes, Adam and Zhuang, Honglei and Lin, Jimmy and Metzler, Donald and Tran, Vinh", ...
The emerging paradigm of generative retrieval re-frames the classic information retrieval problem into a sequence-to-sequence modeling task, forgoing external indices and encoding an entire document corpus within a single Transformer. Although many different approaches have been proposed to improve the effectiveness of...
[ "Pradeep, Ronak", "Hui, Kai", "Gupta, Jai", "Lelkes, Adam", "Zhuang, Honglei", "Lin, Jimmy", "Metzler, Donald", "Tran, Vinh" ]
How Does Generative Retrieval Scale to Millions of Passages?
emnlp-main.83
2305.11841
[ "" ]
https://huggingface.co/papers/2305.11841
1
3
0
8
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.84.bib
https://aclanthology.org/2023.emnlp-main.84/
@inproceedings{wen-etal-2023-unveiling, title = "Unveiling the Implicit Toxicity in Large Language Models", author = "Wen, Jiaxin and Ke, Pei and Sun, Hao and Zhang, Zhexin and Li, Chengfei and Bai, Jinfeng and Huang, Minlie", editor = "Bouamor, Houda and Pin...
The open-endedness of large language models (LLMs) combined with their impressive capabilities may lead to new safety issues when being exploited for malicious use. While recent studies primarily focus on probing toxic outputs that can be easily detected with existing toxicity classifiers, we show that LLMs can generat...
[ "Wen, Jiaxin", "Ke, Pei", "Sun, Hao", "Zhang, Zhexin", "Li, Chengfei", "Bai, Jinfeng", "Huang, Minlie" ]
Unveiling the Implicit Toxicity in Large Language Models
emnlp-main.84
2311.17391
[ "https://github.com/thu-coai/implicit-toxicity" ]
https://huggingface.co/papers/2311.17391
0
0
0
7
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.85.bib
https://aclanthology.org/2023.emnlp-main.85/
@inproceedings{qin-etal-2023-chatgpt, title = "Is {C}hat{GPT} a General-Purpose Natural Language Processing Task Solver?", author = "Qin, Chengwei and Zhang, Aston and Zhang, Zhuosheng and Chen, Jiaao and Yasunaga, Michihiro and Yang, Diyi", editor = "Bouamor, Houda and ...
Spurred by advancements in scale, large language models (LLMs) have demonstrated the ability to perform a variety of natural language processing (NLP) tasks zero-shot{---}i.e., without adaptation on downstream data. Recently, the debut of ChatGPT has drawn a great deal of attention from the natural language processing ...
[ "Qin, Chengwei", "Zhang, Aston", "Zhang, Zhuosheng", "Chen, Jiaao", "Yasunaga, Michihiro", "Yang, Diyi" ]
Is ChatGPT a General-Purpose Natural Language Processing Task Solver?
emnlp-main.85
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.86.bib
https://aclanthology.org/2023.emnlp-main.86/
@inproceedings{xiao-etal-2023-length, title = "Length is a Curse and a Blessing for Document-level Semantics", author = "Xiao, Chenghao and Li, Yizhi and Hudson, G and Lin, Chenghua and Al Moubayed, Noura", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", ...
In recent years, contrastive learning (CL) has been extensively utilized to recover sentence and document-level encoding capability from pre-trained language models. In this work, we question the length generalizability of CL-based models, i.e., their vulnerability towards length-induced semantic shift. We verify not o...
[ "Xiao, Chenghao", "Li, Yizhi", "Hudson, G", "Lin, Chenghua", "Al Moubayed, Noura" ]
Length is a Curse and a Blessing for Document-level Semantics
emnlp-main.86
2310.16193
[ "https://github.com/gowitheflow-1998/la-ser-cubed" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.87.bib
https://aclanthology.org/2023.emnlp-main.87/
@inproceedings{yin-etal-2023-alcuna, title = "{ALCUNA}: Large Language Models Meet New Knowledge", author = "Yin, Xunjian and Huang, Baizhou and Wan, Xiaojun", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empiri...
With the rapid development of NLP, large-scale language models (LLMs) excel in various tasks across multiple domains now. However, existing benchmarks may not adequately measure these models{'} capabilities, especially when faced with new knowledge. In this paper, we address the lack of benchmarks to evaluate LLMs{'} a...
[ "Yin, Xunjian", "Huang, Baizhou", "Wan, Xiaojun" ]
ALCUNA: Large Language Models Meet New Knowledge
emnlp-main.87
2310.14820
[ "https://github.com/arvid-pku/alcuna" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.88.bib
https://aclanthology.org/2023.emnlp-main.88/
@inproceedings{suwono-etal-2023-location, title = "Location-Aware Visual Question Generation with Lightweight Models", author = "Suwono, Nicholas and Chen, Justin and Hung, Tun and Huang, Ting-Hao and Liao, I-Bin and Li, Yung-Hui and Ku, Lun-Wei and Sun, Shao-Hua...
This work introduces a novel task, location-aware visual question generation (LocaVQG), which aims to generate engaging questions from data relevant to a particular geographical location. Specifically, we represent such location-aware information with surrounding images and a GPS coordinate. To tackle this task, we pre...
[ "Suwono, Nicholas", "Chen, Justin", "Hung, Tun", "Huang, Ting-Hao", "Liao, I-Bin", "Li, Yung-Hui", "Ku, Lun-Wei", "Sun, Shao-Hua" ]
Location-Aware Visual Question Generation with Lightweight Models
emnlp-main.88
2310.15129
[ "https://github.com/academiasinicanlplab/locavqg" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.89.bib
https://aclanthology.org/2023.emnlp-main.89/
@inproceedings{hwang-shwartz-2023-memecap, title = "{M}eme{C}ap: A Dataset for Captioning and Interpreting Memes", author = "Hwang, EunJeong and Shwartz, Vered", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical M...
Memes are a widely popular tool for web users to express their thoughts using visual metaphors. Understanding memes requires recognizing and interpreting visual metaphors with respect to the text inside or around the meme, often while employing background knowledge and reasoning abilities. We present the task of meme c...
[ "Hwang, EunJeong", "Shwartz, Vered" ]
MemeCap: A Dataset for Captioning and Interpreting Memes
emnlp-main.89
2305.13703
[ "https://github.com/eujhwang/meme-cap" ]
https://huggingface.co/papers/2305.13703
0
0
0
2
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.90.bib
https://aclanthology.org/2023.emnlp-main.90/
@inproceedings{choshen-etal-2023-start, title = "Where to start? Analyzing the potential value of intermediate models", author = "Choshen, Leshem and Venezian, Elad and Don-Yehiya, Shachar and Slonim, Noam and Katz, Yoav", editor = "Bouamor, Houda and Pino, Juan and ...
Previous studies observed that finetuned models may be better base models than the vanilla pretrained model. Such a model, finetuned on some source dataset, may provide a better starting point for a new finetuning process on a desired target dataset. Here, we perform a systematic analysis of this \textit{intertraining}...
[ "Choshen, Leshem", "Venezian, Elad", "Don-Yehiya, Shachar", "Slonim, Noam", "Katz, Yoav" ]
Where to start? Analyzing the potential value of intermediate models
emnlp-main.90
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.91.bib
https://aclanthology.org/2023.emnlp-main.91/
@inproceedings{tay-etal-2023-transcending, title = "Transcending Scaling Laws with 0.1{\%} Extra Compute", author = "Tay, Yi and Wei, Jason and Chung, Hyung and Tran, Vinh and So, David and Shakeri, Siamak and Garcia, Xavier and Zheng, Steven and Rao, Jinf...
Scaling language models improves performance but comes with significant computational costs. This paper proposes UL2R, a method that substantially improves existing language models and their scaling curves with a relatively tiny amount of extra compute. The key idea is to continue training a state-of-the-art large lang...
[ "Tay, Yi", "Wei, Jason", "Chung, Hyung", "Tran, Vinh", "So, David", "Shakeri, Siamak", "Garcia, Xavier", "Zheng, Steven", "Rao, Jinfeng", "Chowdhery, Aakanksha", "Zhou, Denny", "Metzler, Donald", "Petrov, Slav", "Houlsby, Neil", "Le, Quoc", "Dehghani, Mostafa" ]
Transcending Scaling Laws with 0.1% Extra Compute
emnlp-main.91
2210.11399
[ "" ]
https://huggingface.co/papers/2210.11399
1
0
0
16
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.92.bib
https://aclanthology.org/2023.emnlp-main.92/
@inproceedings{li-etal-2023-coannotating, title = "{C}o{A}nnotating: Uncertainty-Guided Work Allocation between Human and Large Language Models for Data Annotation", author = "Li, Minzhi and Shi, Taiwei and Ziems, Caleb and Kan, Min-Yen and Chen, Nancy and Liu, Zhengyuan and ...
Annotated data plays a critical role in Natural Language Processing (NLP) in training models and evaluating their performance. Given recent developments in Large Language Models (LLMs), models such as ChatGPT demonstrate zero-shot capability on many text-annotation tasks, comparable with or even exceeding human annotat...
[ "Li, Minzhi", "Shi, Taiwei", "Ziems, Caleb", "Kan, Min-Yen", "Chen, Nancy", "Liu, Zhengyuan", "Yang, Diyi" ]
CoAnnotating: Uncertainty-Guided Work Allocation between Human and Large Language Models for Data Annotation
emnlp-main.92
2310.15638
[ "https://github.com/salt-nlp/coannotating" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.93.bib
https://aclanthology.org/2023.emnlp-main.93/
@inproceedings{berchansky-etal-2023-optimizing, title = "Optimizing Retrieval-augmented Reader Models via Token Elimination", author = "Berchansky, Moshe and Izsak, Peter and Caciularu, Avi and Dagan, Ido and Wasserblat, Moshe", editor = "Bouamor, Houda and Pino, Juan and...
Fusion-in-Decoder (FiD) is an effective retrieval-augmented language model applied across a variety of open-domain tasks, such as question answering, fact checking, etc. In FiD, supporting passages are first retrieved and then processed using a generative model (Reader), which can cause a significant bottleneck in deco...
[ "Berchansky, Moshe", "Izsak, Peter", "Caciularu, Avi", "Dagan, Ido", "Wasserblat, Moshe" ]
Optimizing Retrieval-augmented Reader Models via Token Elimination
emnlp-main.93
2310.13682
[ "https://github.com/mosheber/token_elimination" ]
https://huggingface.co/papers/2310.13682
2
1
0
5
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.94.bib
https://aclanthology.org/2023.emnlp-main.94/
@inproceedings{yang-etal-2023-wsdms, title = "{WSDMS}: Debunk Fake News via Weakly Supervised Detection of Misinforming Sentences with Contextualized Social Wisdom", author = "Yang, Ruichao and Gao, Wei and Ma, Jing and Lin, Hongzhan and Yang, Zhiwei", editor = "Bouamor, Houda a...
Fake news debunking primarily focuses on determining the truthfulness of news articles, which oversimplifies the issue as fake news often combines elements of both truth and falsehood. Thus, it becomes crucial to identify specific instances of misinformation within the articles. In this research, we investigate a novel...
[ "Yang, Ruichao", "Gao, Wei", "Ma, Jing", "Lin, Hongzhan", "Yang, Zhiwei" ]
WSDMS: Debunk Fake News via Weakly Supervised Detection of Misinforming Sentences with Contextualized Social Wisdom
emnlp-main.94
2310.16579
[ "https://github.com/hkbunlp/wsdms-emnlp2023" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.95.bib
https://aclanthology.org/2023.emnlp-main.95/
@inproceedings{li-etal-2023-robust, title = "Robust Prompt Optimization for Large Language Models Against Distribution Shifts", author = "Li, Moxin and Wang, Wenjie and Feng, Fuli and Cao, Yixin and Zhang, Jizhi and Chua, Tat-Seng", editor = "Bouamor, Houda and Pino...
Large Language Model (LLM) has demonstrated significant ability in various Natural Language Processing tasks. However, their effectiveness is highly dependent on the phrasing of the task prompt, leading to research on automatic prompt optimization using labeled task data. We reveal that these prompt optimization techni...
[ "Li, Moxin", "Wang, Wenjie", "Feng, Fuli", "Cao, Yixin", "Zhang, Jizhi", "Chua, Tat-Seng" ]
Robust Prompt Optimization for Large Language Models Against Distribution Shifts
emnlp-main.95
2305.13954
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.96.bib
https://aclanthology.org/2023.emnlp-main.96/
@inproceedings{josifoski-etal-2023-exploiting, title = "Exploiting Asymmetry for Synthetic Training Data Generation: {S}ynth{IE} and the Case of Information Extraction", author = "Josifoski, Martin and Sakota, Marija and Peyrard, Maxime and West, Robert", editor = "Bouamor, Houda and ...
Large language models (LLMs) have great potential for synthetic data generation. This work shows that useful data can be synthetically generated even for tasks that cannot be solved directly by LLMs: for problems with structured outputs, it is possible to prompt an LLM to perform the task in the reverse direction, by g...
[ "Josifoski, Martin", "Sakota, Marija", "Peyrard, Maxime", "West, Robert" ]
Exploiting Asymmetry for Synthetic Training Data Generation: SynthIE and the Case of Information Extraction
emnlp-main.96
2303.04132
[ "https://github.com/epfl-dlab/synthie" ]
https://huggingface.co/papers/2303.04132
0
0
0
4
[ "martinjosifoski/SynthIE" ]
[ "martinjosifoski/SynthIE" ]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.97.bib
https://aclanthology.org/2023.emnlp-main.97/
@inproceedings{xu-etal-2023-condensing, title = "Condensing Multilingual Knowledge with Lightweight Language-Specific Modules", author = "Xu, Haoran and Tan, Weiting and Li, Shuyue and Chen, Yunmo and Van Durme, Benjamin and Koehn, Philipp and Murray, Kenton", edito...
Incorporating language-specific (LS) modules or Mixture-of-Experts (MoE) is a proven way to boost multilingual model performance, but the scalability of these approaches to hundreds of languages or experts is hard to manage. We present Language-specific Matrix Synthesis (LMS), a novel method ...
[ "Xu, Haoran", "Tan, Weiting", "Li, Shuyue", "Chen, Yunmo", "Van Durme, Benjamin", "Koehn, Philipp", "Murray, Kenton" ]
Condensing Multilingual Knowledge with Lightweight Language-Specific Modules
emnlp-main.97
2305.13993
[ "https://github.com/fe1ixxu/lms_fd" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.98.bib
https://aclanthology.org/2023.emnlp-main.98/
@inproceedings{fernandez-etal-2023-framework, title = "The Framework Tax: Disparities Between Inference Efficiency in {NLP} Research and Deployment", author = "Fernandez, Jared and Kahn, Jacob and Na, Clara and Bisk, Yonatan and Strubell, Emma", editor = "Bouamor, Houda and ...
Increased focus on the computational efficiency of systems in natural language processing has motivated the design of efficient model architectures and improvements to underlying hardware accelerators. However, the resulting increases in computational throughput and reductions in floating point operations have not dire...
[ "Fern", "ez, Jared", "Kahn, Jacob", "Na, Clara", "Bisk, Yonatan", "Strubell, Emma" ]
The Framework Tax: Disparities Between Inference Efficiency in NLP Research and Deployment
emnlp-main.98
2302.06117
[ "https://github.com/jaredfern/framework-tax" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.99.bib
https://aclanthology.org/2023.emnlp-main.99/
@inproceedings{pourreza-rafiei-2023-evaluating, title = "Evaluating Cross-Domain Text-to-{SQL} Models and Benchmarks", author = "Pourreza, Mohammadreza and Rafiei, Davood", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on ...
Text-to-SQL benchmarks play a crucial role in evaluating the progress made in the field and the ranking of different models. However, accurately matching a model-generated SQL query to a reference SQL query in a benchmark fails for various reasons, such as underspecified natural language queries, inherent assumptions i...
[ "Pourreza, Mohammadreza", "Rafiei, Davood" ]
Evaluating Cross-Domain Text-to-SQL Models and Benchmarks
emnlp-main.99
2310.18538
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.100.bib
https://aclanthology.org/2023.emnlp-main.100/
@inproceedings{conia-etal-2023-increasing, title = "Increasing Coverage and Precision of Textual Information in Multilingual Knowledge Graphs", author = "Conia, Simone and Li, Min and Lee, Daniel and Minhas, Umar and Ilyas, Ihab and Li, Yunyao", editor = "Bouamor, Houda a...
Recent work in Natural Language Processing and Computer Vision has been using textual information {--} e.g., entity names and descriptions {--} available in knowledge graphs to ground neural models to high-quality structured data. However, when it comes to non-English languages, the quantity and quality of textual info...
[ "Conia, Simone", "Li, Min", "Lee, Daniel", "Minhas, Umar", "Ilyas, Ihab", "Li, Yunyao" ]
Increasing Coverage and Precision of Textual Information in Multilingual Knowledge Graphs
emnlp-main.100
2311.15781
[ "https://github.com/apple/ml-kge" ]
https://huggingface.co/papers/2311.15781
1
0
0
6
[]
[ "davanstrien/ml-kge" ]
[]
1
Poster