Datasets:
| xml | proceedings | year | url | language documentation | has non-English? | topics | language coverage | title | abstract |
|---|---|---|---|---|---|---|---|---|---|
2024.nlperspectives.xml | Proceedings of the 3rd Workshop on Perspectivist Approaches to NLP (NLPerspectives) @ LREC-COLING 2024 | 2024 | https://aclanthology.org/2024.nlperspectives-1.15 | x | 0 | general safety, LLM alignment | null | Intersectionality in AI Safety: Using Multilevel Models to Understand Diverse Perceptions of Safety in Conversational AI | State-of-the-art conversational AI exhibits a level of sophistication that promises to have profound impacts on many aspects of daily life, including how people seek information, create content, and find emotional support. It has also shown a propensity for bias, offensive language, and false information. Consequently,... |
2024.clinicalnlp.xml | Proceedings of the 6th Clinical Natural Language Processing Workshop | 2024 | https://aclanthology.org/2024.clinicalnlp-1.3 | x | 0 | hallucination, factuality | null | Gemini Goes to Med School: Exploring the Capabilities of Multimodal Large Language Models on Medical Challenge Problems & Hallucinations | Large language models have the potential to be valuable in the healthcare industry, but it’s crucial to verify their safety and effectiveness through rigorous evaluation. In our study, we evaluated LLMs, including Google’s Gemini, across various medical tasks. Despite Gemini’s capabilities, it underperformed compared t... |
2024.clinicalnlp.xml | Proceedings of the 6th Clinical Natural Language Processing Workshop | 2024 | https://aclanthology.org/2024.clinicalnlp-1.12 | x | 0 | others | null | DERA: Enhancing Large Language Model Completions with Dialog-Enabled Resolving Agents | Large language models (LLMs) have emerged as valuable tools for many natural language understanding tasks. In safety-critical applications such as healthcare, the utility of these models is governed by their ability to generate factually accurate and complete outputs. In this work, we present dialog-enabled resolving a... |
2024.clinicalnlp.xml | Proceedings of the 6th Clinical Natural Language Processing Workshop | 2024 | https://aclanthology.org/2024.clinicalnlp-1.32 | x | 0 | others | null | MediFact at MEDIQA-CORR 2024: Why AI Needs a Human Touch | Accurate representation of medical information is crucial for patient safety, yet artificial intelligence (AI) systems, such as Large Language Models (LLMs), encounter challenges in error-free clinical text interpretation. This paper presents a novel approach submitted to the MEDIQA-CORR 2024 shared task (Ben Abacha ... |
2024.clinicalnlp.xml | Proceedings of the 6th Clinical Natural Language Processing Workshop | 2024 | https://aclanthology.org/2024.clinicalnlp-1.59 | x | 0 | others | null | WangLab at MEDIQA-CORR 2024: Optimized LLM-based Programs for Medical Error Detection and Correction | Medical errors in clinical text pose significant risks to patient safety. The MEDIQA-CORR 2024 shared task focuses on detecting and correcting these errors across three subtasks: identifying the presence of an error, extracting the erroneous sentence, and generating a corrected sentence. In this paper, we present our a... |
2024.tacl.xml | Transactions of the Association for Computational Linguistics, Volume 12 | 2024 | https://aclanthology.org/2024.tacl-1.10 | x | 0 | others | null | Red Teaming Language Model Detectors with Language Models | The prevalence and strong capability of large language models (LLMs) present significant safety and ethical risks if exploited by malicious users. To prevent the potentially deceptive usage of LLMs, recent work has proposed algorithms to detect LLM-generated text and protect LLMs. In this paper, we investigate the robu... |
2024.trustnlp.xml | Proceedings of the 4th Workshop on Trustworthy Natural Language Processing (TrustNLP 2024) | 2024 | https://aclanthology.org/2024.trustnlp-1.2 | x | 0 | toxicity, bias | null | Automated Adversarial Discovery for Safety Classifiers | Safety classifiers are critical in mitigating toxicity on online forums such as social media and in chatbots. Still, they continue to be vulnerable to emergent, and often innumerable, adversarial attacks. Traditional automated adversarial data generation methods, however, tend to produce attacks that are not diverse, bu... |
2024.trustnlp.xml | Proceedings of the 4th Workshop on Trustworthy Natural Language Processing (TrustNLP 2024) | 2024 | https://aclanthology.org/2024.trustnlp-1.9 | null | 1 | jailbreaking attacks | English, Chinese | Cross-Task Defense: Instruction-Tuning LLMs for Content Safety | Recent studies reveal that Large Language Models (LLMs) face challenges in balancing safety with utility, particularly when processing long texts for NLP tasks like summarization and translation. Despite defenses against malicious short questions, the ability of LLMs to safely handle dangerous long content, such as man... |
2024.trustnlp.xml | Proceedings of the 4th Workshop on Trustworthy Natural Language Processing (TrustNLP 2024) | 2024 | https://aclanthology.org/2024.trustnlp-1.15 | x | 0 | toxicity, bias | null | Flatness-Aware Gradient Descent for Safe Conversational AI | As generative dialog models become ubiquitous in real-world applications, it is paramount to ensure a harmless generation. There are two major challenges when enforcing safety to open-domain chatbots. Firstly, it is impractical to provide training data reflecting the desired response to all emerging forms of toxicity (... |
2024.trustnlp.xml | Proceedings of the 4th Workshop on Trustworthy Natural Language Processing (TrustNLP 2024) | 2024 | https://aclanthology.org/2024.trustnlp-1.18 | x | 1 | jailbreaking attacks | English, Thai, Kannada, Arabic, Gujarati, and Vietnamese | Sandwich attack: Multi-language Mixture Adaptive Attack on LLMs | A significant challenge in reliable deployment of Large Language Models (LLMs) is malicious manipulation via adversarial prompting techniques such as jailbreaks. Employing mechanisms such as safety training have proven useful in addressing this challenge. However, in multilingual LLMs, adversaries can exploit the imbal... |
2024.gebnlp.xml | Proceedings of the 5th Workshop on Gender Bias in Natural Language Processing (GeBNLP) | 2024 | https://aclanthology.org/2024.gebnlp-1.2 | null | 1 | toxicity, bias | Chinese | Do PLMs and Annotators Share the Same Gender Bias? Definition, Dataset, and Framework of Contextualized Gender Bias | Pre-trained language models (PLMs) have achieved success in various of natural language processing (NLP) tasks. However, PLMs also introduce some disquieting safety problems, such as gender bias. Gender bias is an extremely complex issue, because different individuals may hold disparate opinions on whether the same sen... |
2024.gebnlp.xml | Proceedings of the 5th Workshop on Gender Bias in Natural Language Processing (GeBNLP) | 2024 | https://aclanthology.org/2024.gebnlp-1.3 | null | 1 | toxicity, bias | English, Swedish | We Don’t Talk About That: Case Studies on Intersectional Analysis of Social Bias in Large Language Models | Despite concerns that Large Language Models (LLMs) are vectors for reproducing and amplifying social biases such as sexism, transphobia, islamophobia, and racism, there is a lack of work qualitatively analyzing how such patterns of bias are generated by LLMs. We use mixed-methods approaches and apply a feminist, inters... |
2024.nlpcss.xml | Proceedings of the Sixth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS 2024) | 2024 | https://aclanthology.org/2024.nlpcss-1.2 | null | 0 | toxicity, bias | null | Connecting the Dots in News Analysis: Bridging the Cross-Disciplinary Disparities in Media Bias and Framing | The manifestation and effect of bias in news reporting have been central topics in the social sciences for decades, and have received increasing attention in the NLP community recently. While NLP can help to scale up analyses or contribute automatic procedures to investigate the impact of biased news in society, we arg... |
2024.findings.xml | Findings of the Association for Computational Linguistics: EACL 2024 | 2024 | https://aclanthology.org/2024.findings-eacl.61 | null | 0 | general safety, LLM alignment | null | Do-Not-Answer: Evaluating Safeguards in LLMs | With the rapid evolution of large language models (LLMs), new and hard-to-predict harmful capabilities are emerging. This requires developers to identify potential risks through the evaluation of “dangerous capabilities” in order to responsibly deploy LLMs. Here we aim to facilitate this process. In particular, we coll... |
2024.findings.xml | Findings of the Association for Computational Linguistics: EACL 2024 | 2024 | https://aclanthology.org/2024.findings-eacl.109 | x | 0 | toxicity, bias | null | GrounDial: Human-norm Grounded Safe Dialog Response Generation | Current conversational AI systems based on large language models (LLMs) are known to generate unsafe responses agreeing to offensive user input or including toxic content. Previous research aimed to alleviate the toxicity by fine-tuning LLM with manually annotated safe dialogue histories. However, the dependency on add... |
2024.findings.xml | Findings of the Association for Computational Linguistics: EACL 2024 | 2024 | https://aclanthology.org/2024.findings-naacl.224 | null | 1 | jailbreaking attacks | null | Cognitive Overload: Jailbreaking Large Language Models with Overloaded Logical Thinking | While large language models (LLMs) have demonstrated increasing power, they have also called upon studies on their vulnerabilities. As representatives, jailbreak attacks can provoke harmful or unethical responses from LLMs, even after safety alignment. In this paper, we investigate a novel category of jailbreak attacks... |
2024.findings.xml | Findings of the Association for Computational Linguistics: EACL 2024 | 2024 | https://aclanthology.org/2024.findings-acl.102 | x | 0 | jailbreaking attacks | null | UNIWIZ: A Unified Large Language Model Orchestrated Wizard for Safe Knowledge Grounded Conversations | Large Language Models (LLMs) have made significant progress in integrating safety and knowledge alignment. However, adversarial actors can manipulate these models into generating unsafe responses, and excessive safety alignment can lead to unintended hallucinations. To address these challenges, we introduce UniWiz, a n... |
2024.findings.xml | Findings of the Association for Computational Linguistics: EACL 2024 | 2024 | https://aclanthology.org/2024.findings-acl.144 | null | 1 | others | null | Integrating Physician Diagnostic Logic into Large Language Models: Preference Learning from Process Feedback | The utilization of large language models for medical dialogue generation has attracted considerable attention due to its potential to enhance response richness and coherence. While previous studies have made strides in optimizing model performance, there is a pressing need to bolster the model’s capacity for diagnostic... |
2024.findings.xml | Findings of the Association for Computational Linguistics: EACL 2024 | 2024 | https://aclanthology.org/2024.findings-acl.156 | null | 1 | jailbreaking attacks | null | The Language Barrier: Dissecting Safety Challenges of LLMs in Multilingual Contexts | As the influence of large language models (LLMs) spans across global communities, their safety challenges in multilingual settings become paramount for alignment research. This paper examines the variations in safety challenges faced by LLMs across different languages and discusses approaches to alleviating such concer... |
2024.findings.xml | Findings of the Association for Computational Linguistics: EACL 2024 | 2024 | https://aclanthology.org/2024.findings-acl.162 | null | 0 | hallucination, factuality | null | Simulated Misinformation Susceptibility (SMISTS): Enhancing Misinformation Research with Large Language Model Simulations | Psychological inoculation, a strategy designed to build resistance against persuasive misinformation, has shown efficacy in curbing its spread and mitigating its adverse effects at early stages. Despite its effectiveness, the design and optimization of these inoculations typically demand substantial human and financial... |
2024.findings.xml | Findings of the Association for Computational Linguistics: EACL 2024 | 2024 | https://aclanthology.org/2024.findings-acl.184 | null | 1 | jailbreaking attacks | null | A Chinese Dataset for Evaluating the Safeguards in Large Language Models | Many studies have demonstrated that large language models (LLMs) can produce harmful responses, exposing users to unexpected risks. Previous studies have proposed comprehensive taxonomies of LLM risks, as well as corresponding prompts that can be used to examine LLM safety. However, the focus has been almost exclusivel... |
2024.findings.xml | Findings of the Association for Computational Linguistics: EACL 2024 | 2024 | https://aclanthology.org/2024.findings-acl.198 | x | 0 | jailbreaking attacks | null | Red Teaming Visual Language Models | VLMs (Vision-Language Models) extend the capabilities of LLMs (Large Language Models) to accept multimodal inputs. Since it has been verified that LLMs can be induced to generate harmful or inaccurate content through specific test cases (termed as Red Teaming), how VLMs perform in similar scenarios, especially with the... |
2024.findings.xml | Findings of the Association for Computational Linguistics: EACL 2024 | 2024 | https://aclanthology.org/2024.findings-acl.219 | x | 0 | jailbreaking attacks | null | A + B: A General Generator-Reader Framework for Optimizing LLMs to Unleash Synergy Potential | Retrieval-Augmented Generation (RAG) is an effective solution to supplement necessary knowledge to large language models (LLMs). Targeting its bottleneck of retriever performance, “generate-then-read” pipeline is proposed to replace the retrieval stage with generation from the LLM itself. Although promising, this resea... |
2024.findings.xml | Findings of the Association for Computational Linguistics: EACL 2024 | 2024 | https://aclanthology.org/2024.findings-acl.235 | x | 0 | jailbreaking attacks | null | SALAD-Bench: A Hierarchical and Comprehensive Safety Benchmark for Large Language Models | In the rapidly evolving landscape of Large Language Models (LLMs), ensuring robust safety measures is paramount. To meet this crucial need, we propose SALAD-Bench, a safety benchmark specifically designed for evaluating LLMs, attack, and defense methods. Distinguished by its breadth, SALAD-Bench transcends conventional... |
2024.findings.xml | Findings of the Association for Computational Linguistics: EACL 2024 | 2024 | https://aclanthology.org/2024.findings-acl.270 | null | 0 | others | null | SyntaxShap: Syntax-aware Explainability Method for Text Generation | To harness the power of large language models in safety-critical domains, we need to ensure the explainability of their predictions. However, despite the significant attention to model interpretability, there remains an unexplored domain in explaining sequence-to-sequence tasks using methods tailored for textual data. ... |
2024.findings.xml | Findings of the Association for Computational Linguistics: EACL 2024 | 2024 | https://aclanthology.org/2024.findings-acl.292 | x | 0 | jailbreaking attacks | null | Evaluating the Validity of Word-level Adversarial Attacks with Large Language Models | Deep neural networks exhibit vulnerability to word-level adversarial attacks in natural language processing. Most of these attack methods adopt synonymous substitutions to perturb original samples for crafting adversarial examples while attempting to maintain semantic consistency with the originals. Some of them claim ... |
2024.findings.xml | Findings of the Association for Computational Linguistics: EACL 2024 | 2024 | https://aclanthology.org/2024.findings-acl.344 | null | 0 | jailbreaking attacks | null | Asymmetric Bias in Text-to-Image Generation with Adversarial Attacks | The widespread use of Text-to-Image (T2I) models in content generation requires careful examination of their safety, including their robustness to adversarial attacks. Despite extensive research on adversarial attacks, the reasons for their effectiveness remain underexplored. This paper presents an empirical study on a... |
2024.findings.xml | Findings of the Association for Computational Linguistics: EACL 2024 | 2024 | https://aclanthology.org/2024.findings-acl.349 | null | 1 | jailbreaking attacks | null | All Languages Matter: On the Multilingual Safety of LLMs | Safety lies at the core of developing and deploying large language models (LLMs). However, previous safety benchmarks only concern the safety in one language, e.g. the majority language in the pretraining data such as English. In this work, we build the first multilingual safety benchmark for LLMs, XSafety, in response... |
2024.findings.xml | Findings of the Association for Computational Linguistics: EACL 2024 | 2024 | https://aclanthology.org/2024.findings-acl.357 | x | ? | others | null | When to Trust LLMs: Aligning Confidence with Response Quality | Despite the success of large language models (LLMs) in natural language generation, much evidence shows that LLMs may produce incorrect or nonsensical text. This limitation highlights the importance of discerning when to trust LLMs, especially in safety-critical domains. Existing methods often express reliability by co... |
2024.findings.xml | Findings of the Association for Computational Linguistics: EACL 2024 | 2024 | https://aclanthology.org/2024.findings-acl.367 | x | ? | general safety, LLM alignment | null | Decoding the Narratives: Analyzing Personal Drug Experiences Shared on Reddit | Online communities such as drug-related subreddits serve as safe spaces for people who use drugs (PWUD), fostering discussions on substance use experiences, harm reduction, and addiction recovery. Users’ shared narratives on these forums provide insights into the likelihood of developing a substance use disorder (SUD) ... |
2024.findings.xml | Findings of the Association for Computational Linguistics: EACL 2024 | 2024 | https://aclanthology.org/2024.findings-acl.441 | null | ? | others | null | “My Answer is C”: First-Token Probabilities Do Not Match Text Answers in Instruction-Tuned Language Models | The open-ended nature of language generation makes the evaluation of autoregressive large language models (LLMs) challenging. One common evaluation approach uses multiple-choice questions to limit the response space. The model is then evaluated by ranking the candidate answers by the log probability of the first token ... |
2024.findings.xml | Findings of the Association for Computational Linguistics: EACL 2024 | 2024 | https://aclanthology.org/2024.findings-acl.443 | x | 0 | jailbreaking attacks | null | A Comprehensive Study of Jailbreak Attack versus Defense for Large Language Models | Large Language Models (LLMs) have increasingly become central to generating content with potential societal impacts. Notably, these models have demonstrated capabilities for generating content that could be deemed harmful. To mitigate these risks, researchers have adopted safety training techniques to align model outpu... |
2024.findings.xml | Findings of the Association for Computational Linguistics: EACL 2024 | 2024 | https://aclanthology.org/2024.findings-acl.492 | x | 0 | privacy | null | Large Language Models Relearn Removed Concepts | Advances in model editing through neuron pruning hold promise for removing undesirable concepts from large language models. However, it remains unclear whether models have the capacity to reacquire pruned concepts after editing. To investigate this, we evaluate concept relearning in models by tracking concept saliency ... |
2024.findings.xml | Findings of the Association for Computational Linguistics: EACL 2024 | 2024 | https://aclanthology.org/2024.findings-acl.549 | x | 0 | jailbreaking attacks | null | On the Vulnerability of Safety Alignment in Open-Access LLMs | Large language models (LLMs) possess immense capabilities but are susceptible to malicious exploitation. To mitigate the risk, safety alignment is employed to align LLMs with ethical standards. However, safety-aligned LLMs may remain vulnerable to carefully crafted jailbreak attacks, but these attacks often face high r... |
2024.findings.xml | Findings of the Association for Computational Linguistics: EACL 2024 | 2024 | https://aclanthology.org/2024.findings-acl.591 | x | 0 | others | null | Boosting LLM Agents with Recursive Contemplation for Effective Deception Handling | Recent advances in large language models (LLMs) have led to significant success in using LLMs as agents. Nevertheless, a common assumption that LLMs always process honest information neglects the widespread deceptive or misleading content in human and AI-generated material. This oversight might expose LLMs to malicious... |
2024.findings.xml | Findings of the Association for Computational Linguistics: EACL 2024 | 2024 | https://aclanthology.org/2024.findings-acl.596 | x | 0 | jailbreaking attacks | null | SpeechGuard: Exploring the Adversarial Robustness of Multi-modal Large Language Models | Integrated Speech and Large Language Models (SLMs) that can follow speech instructions and generate relevant text responses have gained popularity lately. However, the safety and robustness of these models remains largely unclear. In this work, we investigate the potential vulnerabilities of such instruction-following ... |
2024.findings.xml | Findings of the Association for Computational Linguistics: EACL 2024 | 2024 | https://aclanthology.org/2024.findings-acl.630 | x | 0 | general safety, LLM alignment | null | Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization | A single language model, even when aligned with labelers through reinforcement learning from human feedback (RLHF), may not suit all human preferences. Recent approaches therefore prefer customization, gathering multi-dimensional feedback, and creating distinct reward models for each dimension. Different language models... |
2024.findings.xml | Findings of the Association for Computational Linguistics: EACL 2024 | 2024 | https://aclanthology.org/2024.findings-acl.633 | x | 0 | hallucination, factuality | null | Evaluating Robustness of Generative Search Engine on Adversarial Factoid Questions | Generative search engines have the potential to transform how people seek information online, but generated responses from existing large language models (LLMs)-backed generative search engines may not always be accurate. Nonetheless, retrieval-augmented generation exacerbates safety concerns, since adversaries may suc... |
2024.findings.xml | Findings of the Association for Computational Linguistics: EACL 2024 | 2024 | https://aclanthology.org/2024.findings-acl.667 | x | 0 | others | null | Enhancing Adverse Drug Event Detection with Multimodal Dataset: Corpus Creation and Model Development | The mining of adverse drug events (ADEs) is pivotal in pharmacovigilance, enhancing patient safety by identifying potential risks associated with medications, facilitating early detection of adverse events, and guiding regulatory decision-making. Traditional ADE detection methods are reliable but slow, not easily adapt... |
2024.findings.xml | Findings of the Association for Computational Linguistics: EACL 2024 | 2024 | https://aclanthology.org/2024.findings-acl.679 | x | 0 | jailbreaking attacks | null | CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion | The rapid advancement of Large Language Models (LLMs) has brought about remarkable generative capabilities but also raised concerns about their potential misuse. While strategies like supervised fine-tuning and reinforcement learning from human feedback have enhanced their safety, these methods primarily focus on natur... |
2024.findings.xml | Findings of the Association for Computational Linguistics: EACL 2024 | 2024 | https://aclanthology.org/2024.findings-acl.776 | null | 0 | jailbreaking attacks | null | The Art of Defending: A Systematic Evaluation and Analysis of LLM Defense Strategies on Safety and Over-Defensiveness | As Large Language Models (LLMs) play an increasingly pivotal role in natural language processing applications, their safety concerns become critical areas of NLP research. This has resulted in the development of various LLM defense strategies. Unfortunately, despite the shared goal of improving the safety of LLMs, the ... |
2024.findings.xml | Findings of the Association for Computational Linguistics: EACL 2024 | 2024 | https://aclanthology.org/2024.findings-acl.814 | null | 1 | jailbreaking attacks | Chinese, English | ROSE Doesn’t Do That: Boosting the Safety of Instruction-Tuned Large Language Models with Reverse Prompt Contrastive Decoding | With the development of instruction-tuned large language models (LLMs), improving the safety of LLMs has become more critical. However, the current approaches for aligning the LLMs output with expected safety usually require substantial training efforts, e.g., high-quality safety data and expensive computational resour... |
2024.findings.xml | Findings of the Association for Computational Linguistics: EACL 2024 | 2024 | https://aclanthology.org/2024.findings-acl.893 | null | 1 | toxicity, bias | English, Italian, French, Portuguese, Spanish, Russian, Arabic, Hindi, Korean | From One to Many: Expanding the Scope of Toxicity Mitigation in Language Models | To date, toxicity mitigation in language models has almost entirely been focused on single-language settings. As language models embrace multilingual capabilities, it’s crucial our safety measures keep pace. Recognizing this research gap, our approach expands the scope of conventional toxicity mitigation to address the... |
2024.findings.xml | Findings of the Association for Computational Linguistics: EACL 2024 | 2024 | https://aclanthology.org/2024.findings-acl.927 | x | 0 | jailbreaking attacks | null | From Representational Harms to Quality-of-Service Harms: A Case Study on Llama 2 Safety Safeguards | Recent progress in large language models (LLMs) has led to their widespread adoption in various domains. However, these advancements have also introduced additional safety risks and raised concerns regarding their detrimental impact on already marginalized populations. Despite growing mitigation efforts to develop safet... |
2024.findings.xml | Findings of the Association for Computational Linguistics: EACL 2024 | 2024 | https://aclanthology.org/2024.findings-acl.960 | x | 0 | general safety, LLM alignment | null | Sowing the Wind, Reaping the Whirlwind: The Impact of Editing Language Models | In the rapidly advancing field of artificial intelligence, the concept of ‘Red-Teaming’ or ‘Jailbreaking’ large language models (LLMs) has emerged as a crucial area of study. This approach is especially significant in terms of assessing and enhancing the safety and robustness of these models. This paper investigates th... |
2024.wassa.xml | Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, & Social Media Analysis | 2024 | https://aclanthology.org/2024.wassa-1.9 | null | 0 | toxicity, bias | null | MBIAS: Mitigating Bias in Large Language Models While Retaining Context | The deployment of Large Language Models (LLMs) in diverse applications necessitates an assurance of safety without compromising the contextual integrity of the generated content. Traditional approaches, including safety-specific fine-tuning or adversarial testing, often yield safe outputs at the expense of contextual m... |
2024.lrec.xml | Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) | 2024 | https://aclanthology.org/2024.lrec-main.314 | null | null | others | null | Combining Discourse Coherence with Large Language Models for More Inclusive, Equitable, and Robust Task-Oriented Dialogue | Large language models (LLMs) are capable of generating well-formed responses, but using LLMs to generate responses on the fly is not yet feasible for many task-oriented systems. Modular architectures are often still required for safety and privacy guarantees on the output. We hypothesize that an offline generation appr... |
2024.lrec.xml | Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) | 2024 | https://aclanthology.org/2024.lrec-main.855 | null | null | others | null | Knowledge-augmented Graph Neural Networks with Concept-aware Attention for Adverse Drug Event Detection | Adverse drug events (ADEs) are an important aspect of drug safety. Various texts such as biomedical literature, drug reviews, and user posts on social media and medical forums contain a wealth of information about ADEs. Recent studies have applied word embedding and deep learning-based natural language processing to au... |
2024.lrec.xml | Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) | 2024 | https://aclanthology.org/2024.lrec-main.869 | null | null | general safety, LLM alignment | null | Korean Disaster Safety Information Sign Language Translation Benchmark Dataset | Sign language is a crucial means of communication for deaf communities. However, those outside deaf communities often lack understanding of sign language, leading to inadequate communication accessibility for the deaf. Therefore, sign language translation is a significantly important research area. In this context, we ... |
2024.lrec.xml | Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) | 2024 | https://aclanthology.org/2024.lrec-main.924 | null | 0 | others | null | Linguistic Rule Induction Improves Adversarial and OOD Robustness in Large Language Models | Ensuring robustness is especially important when AI is deployed in responsible or safety-critical environments. ChatGPT can perform brilliantly in both adversarial and out-of-distribution (OOD) robustness, while other popular large language models (LLMs), like LLaMA-2, ERNIE and ChatGLM, do not perform satisfactorily i... |
2024.lrec.xml | Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) | 2024 | https://aclanthology.org/2024.lrec-main.986 | null | 1 | hallucination, factuality | English, Turkish | MiDe22: An Annotated Multi-Event Tweet Dataset for Misinformation Detection | The rapid dissemination of misinformation through online social networks poses a pressing issue with harmful consequences jeopardizing human health, public safety, democracy, and the economy; therefore, urgent action is required to address this problem. In this study, we construct a new human-annotated dataset, called ... |
2024.lrec.xml | Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) | 2024 | https://aclanthology.org/2024.lrec-main.1510 | null | null | others | null | VI-OOD: A Unified Framework of Representation Learning for Textual Out-of-distribution Detection | Out-of-distribution (OOD) detection plays a crucial role in ensuring the safety and reliability of deep neural networks in various applications. While there has been a growing focus on OOD detection in visual data, the field of textual OOD detection has received less attention. Only a few attempts have been made to dir... |
2024.lrec.xml | Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) | 2024 | https://aclanthology.org/2024.lrec-main.1542 | null | null | toxicity, bias | null | XAI-Attack: Utilizing Explainable AI to Find Incorrectly Learned Patterns for Black-Box Adversarial Example Creation | Adversarial examples, capable of misleading machine learning models into making erroneous predictions, pose significant risks in safety-critical domains such as crisis informatics, medicine, and autonomous driving. To counter this, we introduce a novel textual adversarial example method that identifies falsely learned ... |
2024.naacl.xml | Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers) | 2024 | https://aclanthology.org/2024.naacl-long.65 | x | 0 | jailbreaking attacks | null | Ensuring Safe and High-Quality Outputs: A Guideline Library Approach for Language Models | Large Language Models (LLMs) exhibit impressive capabilities but also present risks such as biased content generation and privacy issues. One of the current alignment techniques includes principle-driven integration, but it faces challenges arising from the imprecision of manually crafted rules and inadequate risk perc... |
2024.naacl.xml | Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers) | 2024 | https://aclanthology.org/2024.naacl-long.78 | x | 0 | jailbreaking attacks | null | IterAlign: Iterative Constitutional Alignment of Large Language Models | With the rapid development of large language models (LLMs), aligning LLMs with human values and societal norms to ensure their reliability and safety has become crucial. Reinforcement learning with human feedback (RLHF) and Constitutional AI (CAI) have been proposed for LLM alignment. However, these methods require eit... |
2024.naacl.xml | Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers) | 2024 | https://aclanthology.org/2024.naacl-long.92 | x | 0 | jailbreaking attacks | null | SELF-GUARD: Empower the LLM to Safeguard Itself | With the increasing risk posed by jailbreak attacks, recent studies have investigated various methods to improve the safety of large language models (LLMs), mainly falling into two strategies: safety training and safeguards. Safety training involves fine-tuning the LLM with adversarial samples, which activate the LLM’s... |
2024.naacl.xml | Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers) | 2024 | https://aclanthology.org/2024.naacl-long.107 | x | 0 | jailbreaking attacks | null | MART: Improving LLM Safety with Multi-round Automatic Red-Teaming | Red-teaming is a common practice for mitigating unsafe behaviors in Large Language Models (LLMs), which involves thoroughly assessing LLMs to identify potential flaws and addressing them with responsible and accurate responses. While effective, manual red-teaming is costly, and existing automatic red-teaming typically d... |
2024.naacl.xml | Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers) | 2024 | https://aclanthology.org/2024.naacl-long.152 | x | 0 | jailbreaking attacks | null | How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities | The rapid progress in open-source Large Language Models (LLMs) is significantly driving AI development forward. However, there is still a limited understanding of their trustworthiness. Deploying these models at scale without sufficient trustworthiness can pose significant risks, highlighting the need to uncover these ... |
2024.naacl.xml | Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers) | 2024 | https://aclanthology.org/2024.naacl-long.190 | null | null | others | null | GRASP: A Disagreement Analysis Framework to Assess Group Associations in Perspectives | Human annotation plays a core role in machine learning — annotations for supervised models, safety guardrails for generative models, and human feedback for reinforcement learning, to cite a few avenues. However, the fact that many of these human annotations are inherently subjective is often overlooked. Recent work has... |
2024.naacl.xml | Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers) | 2024 | https://aclanthology.org/2024.naacl-long.256 | null | 1 | general safety, LLM alignment | Chinese | Flames: Benchmarking Value Alignment of LLMs in Chinese | The widespread adoption of large language models (LLMs) across various regions underscores the urgent need to evaluate their alignment with human values. Current benchmarks, however, fall short of effectively uncovering safety vulnerabilities in LLMs. Despite numerous models achieving high scores and ‘topping the chart... |
2024.naacl.xml | Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers) | 2024 | https://aclanthology.org/2024.naacl-long.263 | null | 1 | general safety, LLM alignment | Chinese | Fake Alignment: Are LLMs Really Aligned Well? | The growing awareness of safety concerns in large language models (LLMs) has sparked considerable interest in the evaluation of safety. This study investigates an under-explored issue about the evaluation of LLMs, namely the substantial discrepancy in performance between multiple-choice questions and open-ended questio... |
2024.naacl.xml | Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers) | 2024 | https://aclanthology.org/2024.naacl-long.276 | x | 0 | jailbreaking attacks | null | Stealthy and Persistent Unalignment on Large Language Models via Backdoor Injections | Recent developments in Large Language Models (LLMs) have manifested significant advancements. To facilitate safeguards against malicious exploitation, a body of research has concentrated on aligning LLMs with human preferences and inhibiting their generation of inappropriate content. Unfortunately, such alignments are ... |
2024.naacl.xml | Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers) | 2024 | https://aclanthology.org/2024.naacl-long.288 | null | null | others | null | MAGID: An Automated Pipeline for Generating Synthetic Multi-modal Datasets | Development of multimodal interactive systems is hindered by the lack of rich, multimodal (text, images) conversational data, which is needed in large quantities for LLMs. Previous approaches augment textual dialogues with retrieved images, posing privacy, diversity, and quality constraints. In this work, we introduce ... |
2024.naacl.xml | Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers) | 2024 | https://aclanthology.org/2024.naacl-long.301 | null | 0 | jailbreaking attacks | null | XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models | Without proper safeguards, large language models will readily follow malicious instructions and generate toxic content. This risk motivates safety efforts such as red-teaming and large-scale feedback learning, which aim to make models both helpful and harmless. However, there is a tension between these two objectives, ... |
2024.naacl.xml | Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers) | 2024 | https://aclanthology.org/2024.naacl-long.375 | null | null | jailbreaking attacks | English, Chinese | Attacks, Defenses and Evaluations for LLM Conversation Safety: A Survey | Large Language Models (LLMs) are now commonplace in conversation applications. However, their risks of misuse for generating harmful responses have raised serious societal concerns and spurred recent research on LLM conversation safety. Therefore, in this survey, we provide a comprehensive overview of recent studies, c... |
2024.naacl.xml | Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers) | 2024 | https://aclanthology.org/2024.naacl-long.422 | x | 0 | general safety, LLM alignment | null | Safer-Instruct: Aligning Language Models with Automated Preference Data | Reinforcement learning from human feedback (RLHF) is a vital strategy for enhancing model capability in language models. However, annotating preference data for RLHF is a resource-intensive and creativity-demanding process, while existing automatic generation methods face limitations in data diversity and quality. In r... |
2024.naacl.xml | Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers) | 2024 | https://aclanthology.org/2024.naacl-short.60 | x | 0 | toxicity, bias | null | LifeTox: Unveiling Implicit Toxicity in Life Advice | As large language models become increasingly integrated into daily life, detecting implicit toxicity across diverse contexts is crucial. To this end, we introduce LifeTox, a dataset designed for identifying implicit toxicity within a broad range of advice-seeking scenarios. Unlike existing safety datasets, ... |
2024.naacl.xml | Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers) | 2024 | https://aclanthology.org/2024.naacl-industry.8 | null | null | others | null | An NLP-Focused Pilot Training Agent for Safe and Efficient Aviation Communication | Aviation communication significantly influences the success of flight operations, ensuring safety of lives and efficient air transportation. In day-to-day flight operations, air traffic controllers (ATCos) would timely communicate instructions to pilots using specific phraseology for aircraft manipulation. However, pi... |
2024.splurobonlp.xml | Proceedings of the 4th Workshop on Spatial Language Understanding and Grounded Communication for Robotics (SpLU-RoboNLP 2024) | 2024 | https://aclanthology.org/2024.splurobonlp-1.1 | x | 0 | general safety, LLM alignment | null | Language-guided World Models: A Model-based Approach to AI Control | Developing internal world models for artificial agents opens an efficient channel for humans to communicate with and control them. In addition to updating policies, humans can modify the world models of these agents in order to influence their decisions. The challenge, however, is that currently existing world models ar... |
2024.safety4convai.xml | Proceedings of Safety4ConvAI: The Third Workshop on Safety for Conversational AI @ LREC-COLING 2024 | 2024 | https://aclanthology.org/2024.safety4convai-1.1 | x | 0 | hallucination, factuality | null | Grounding LLMs to In-prompt Instructions: Reducing Hallucinations Caused by Static Pre-training Knowledge | When deploying LLMs in certain commercial or research settings, domain specific knowledge must be explicitly provided within the prompt. This in-prompt knowledge can conflict with an LLM’s static world knowledge learned at pre-training, causing model hallucination (see examples in Table 1). In safety-critical settings,... |
2024.safety4convai.xml | Proceedings of Safety4ConvAI: The Third Workshop on Safety for Conversational AI @ LREC-COLING 2024 | 2024 | https://aclanthology.org/2024.safety4convai-1.2 | x | 0 | general safety, LLM alignment | null | Diversity-Aware Annotation for Conversational AI Safety | How people interpret content is deeply influenced by their socio-cultural backgrounds and lived experiences. This is especially crucial when evaluating AI systems for safety, where accounting for such diversity in interpretations and potential impacts on human users will make them both more successful and inclusive. Wh... |
2024.safety4convai.xml | Proceedings of Safety4ConvAI: The Third Workshop on Safety for Conversational AI @ LREC-COLING 2024 | 2024 | https://aclanthology.org/2024.safety4convai-1.3 | null | 0 | toxicity, bias | null | Using Information Retrieval Techniques to Automatically Repurpose Existing Dialogue Datasets for Safe Chatbot Development | There has been notable progress in the development of open-domain dialogue systems (chatbots) especially with the rapid advancement of the capabilities of Large Language Models. Chatbots excel at holding conversations in a manner that keeps a user interested and engaged. However, their responses can be unsafe, as they ... |
2024.safety4convai.xml | Proceedings of Safety4ConvAI: The Third Workshop on Safety for Conversational AI @ LREC-COLING 2024 | 2024 | https://aclanthology.org/2024.safety4convai-1.5 | null | 0 | jailbreaking attacks | null | Learning To See But Forgetting To Follow: Visual Instruction Tuning Makes LLMs More Prone To Jailbreak Attacks | Augmenting Large Language Models (LLMs) with image-understanding capabilities has resulted in a boom of high-performing Vision-Language models (VLMs). While studying the alignment of LLMs to human values has received widespread attention, the safety of VLMs has not received the same attention. In this paper, we explore... |
2024.bionlp.xml | Proceedings of the 23rd Workshop on Biomedical Natural Language Processing | 2024 | https://aclanthology.org/2024.bionlp-1.23 | null | null | others | null | Large Language Models for Biomedical Knowledge Graph Construction: Information extraction from EMR notes | The automatic construction of knowledge graphs (KGs) is an important research area in medicine, with far-reaching applications spanning drug discovery and clinical trial design. These applications hinge on the accurate identification of interactions among medical and biological entities. In this study, we propose an en... |
2024.bionlp.xml | Proceedings of the 23rd Workshop on Biomedical Natural Language Processing | 2024 | https://aclanthology.org/2024.bionlp-1.55 | null | null | others | null | SICAR at RRG2024: GPU Poor’s Guide to Radiology Report Generation | Radiology report generation (RRG) aims to create free-text radiology reports from clinical imaging. Our solution employs a lightweight multimodal language model (MLLM) enhanced with a two-stage post-processing strategy, utilizing a Large Language Model (LLM) to boost diagnostic accuracy and ensure patient safety. We in... |
2024.smm4h.xml | Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks | 2024 | https://aclanthology.org/2024.smm4h-1.38 | null | null | others | null | FORCE: A Benchmark Dataset for Foodborne Disease Outbreak and Recall Event Extraction from News | The escalating prevalence of food safety incidents within the food supply chain necessitates immediate action to protect consumers. These incidents encompass a spectrum of issues, including food product contamination and deliberate food and feed adulteration for economic gain leading to outbreaks and recalls. Understan... |
2024.privatenlp.xml | Proceedings of the Fifth Workshop on Privacy in Natural Language Processing | 2024 | https://aclanthology.org/2024.privatenlp-1.17 | x | 0 | jailbreaking attacks | null | Reinforcement Learning-Driven LLM Agent for Automated Attacks on LLMs | Recently, there has been a growing focus on conducting attacks on large language models (LLMs) to assess LLMs’ safety. Yet, existing attack methods face challenges, including the need to access model weights or merely ensuring LLMs output harmful information without controlling the specific content of their output. Exa... |
2024.woah.xml | Proceedings of the 8th Workshop on Online Abuse and Harms (WOAH 2024) | 2024 | https://aclanthology.org/2024.woah-1.12 | x | 0 | jailbreaking attacks | null | Robust Safety Classifier Against Jailbreaking Attacks: Adversarial Prompt Shield | Large Language Models’ safety remains a critical concern due to their vulnerability to jailbreaking attacks, which can prompt these systems to produce harmful and malicious responses. Safety classifiers, computational models trained to discern and mitigate potentially harmful, offensive, or unethical outputs, offer a p... |
2024.knowllm.xml | Proceedings of the 1st Workshop on Towards Knowledgeable Language Models (KnowLLM 2024) | 2024 | https://aclanthology.org/2024.knowllm-1.13 | null | 0 | general safety, LLM alignment | null | Safe-Embed: Unveiling the Safety-Critical Knowledge of Sentence Encoders | Despite the impressive capabilities of Large Language Models (LLMs) in various tasks, their vulnerability to unsafe prompts remains a critical issue. These prompts can lead LLMs to generate responses on illegal or sensitive topics, posing a significant threat to their safe and ethical use. Existing approaches address t... |
2024.arabicnlp.xml | Proceedings of The Second Arabic Natural Language Processing Conference | 2024 | https://aclanthology.org/2024.arabicnlp-1.18 | null | 1 | toxicity, bias | Arabic, English, and Russian | John vs. Ahmed: Debate-Induced Bias in Multilingual LLMs | Large language models (LLMs) play a crucial role in a wide range of real world applications. However, concerns about their safety and ethical implications are growing. While research on LLM safety is expanding, there is a noticeable gap in evaluating safety across multiple languages, especially in Arabic and Russian. W... |
2024.case.xml | Proceedings of the 7th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2024) | 2024 | https://aclanthology.org/2024.case-1.30 | null | 1 | toxicity, bias | Turkish, Arabic | Team Curie at HSD-2Lang 2024: Hate Speech Detection in Turkish and Arabic Tweets using BERT-based models | Team Curie at HSD-2Lang 2024: Hate Speech Detection in Turkish and Arabic Tweets using BERT-based models. This paper has presented our methodologies and findings in tackling hate speech detection in Turkish and Arabic tweets as part of the HSD-2Lang 2024 contest. Through innovative approach... |
2024.acl.xml | Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2024 | https://aclanthology.org/2024.acl-long.30 | x | 0 | jailbreaking attacks | null | GradSafe: Detecting Jailbreak Prompts for LLMs via Safety-Critical Gradient Analysis | Large Language Models (LLMs) face threats from jailbreak prompts. Existing methods for detecting jailbreak prompts are primarily online moderation APIs or finetuned LLMs. These strategies, however, often require extensive and resource-intensive data collection and training processes. In this study, we propose GradSafe,... |
2024.acl.xml | Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2024 | https://aclanthology.org/2024.acl-long.58 | null | 0 | general safety, LLM alignment | null | Self-Distillation Bridges Distribution Gap in Language Model Fine-Tuning | The surge in Large Language Models (LLMs) has revolutionized natural language processing, but fine-tuning them for specific tasks often encounters challenges in balancing performance and preserving general instruction-following abilities. In this paper, we posit that the distribution gap between task datasets and the L... |
2024.acl.xml | Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2024 | https://aclanthology.org/2024.acl-long.99 | x | 0 | general safety, LLM alignment | null | Dissecting Human and LLM Preferences | As a relative quality comparison of model responses, human and Large Language Model (LLM) preferences serve as common alignment goals in model fine-tuning and criteria in evaluation. Yet, these preferences merely reflect broad tendencies, resulting in less explainable and controllable models with potential safety risks... |
2024.acl.xml | Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2024 | https://aclanthology.org/2024.acl-long.119 | x | 0 | jailbreaking attacks | null | ToolSword: Unveiling Safety Issues of Large Language Models in Tool Learning Across Three Stages | Tool learning is widely acknowledged as a foundational approach for deploying large language models (LLMs) in real-world scenarios. While current research primarily emphasizes leveraging tools to augment LLMs, it frequently neglects emerging safety considerations tied to their application. To fill this gap, we present T... |
2024.acl.xml | Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2024 | https://aclanthology.org/2024.acl-long.140 | x | 0 | jailbreaking attacks | null | RLHFPoison: Reward Poisoning Attack for Reinforcement Learning with Human Feedback in Large Language Models | Reinforcement Learning with Human Feedback (RLHF) is a methodology designed to align Large Language Models (LLMs) with human preferences, playing an important role in LLMs alignment. Despite its advantages, RLHF relies on human annotators to rank the text, which can introduce potential security vulnerabilities if any a... |
2024.acl.xml | Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2024 | https://aclanthology.org/2024.acl-long.198 | null | 0 | general safety, LLM alignment | null | Relying on the Unreliable: The Impact of Language Models’ Reluctance to Express Uncertainty | As natural language becomes the default interface for human-AI interaction, there is a need for LMs to appropriately communicate uncertainties in downstream applications. In this work, we investigate how LMs incorporate confidence in responses via natural language and how downstream users behave in response to LM-artic... |
2024.acl.xml | Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2024 | https://aclanthology.org/2024.acl-long.253 | x | 0 | jailbreaking attacks | null | Navigating the OverKill in Large Language Models | Large language models are meticulously aligned to be both helpful and harmless. However, recent research points to a potential overkill which means models may refuse to answer benign queries. In this paper, we investigate the factors for overkill by exploring how models handle and determine the safety of queries. Our f... |
2024.acl.xml | Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2024 | https://aclanthology.org/2024.acl-long.303 | x | 0 | jailbreaking attacks | null | SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding | As large language models (LLMs) become increasingly integrated into real-world applications such as code generation and chatbot assistance, extensive efforts have been made to align LLM behavior with human values, including safety. Jailbreak attacks, which aim to provoke unintended and unsafe behaviors from LLMs, remai... |
2024.acl.xml | Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2024 | https://aclanthology.org/2024.acl-long.461 | null | 1 | jailbreaking attacks | English, Chinese | Safety Alignment in NLP Tasks: Weakly Aligned Summarization as an In-Context Attack | Recent developments in balancing the usefulness and safety of Large Language Models (LLMs) have raised a critical question: Are mainstream NLP tasks adequately aligned with safety consideration? Our study, focusing on safety-sensitive documents obtained through adversarial attacks, reveals significant disparities in th... |
2024.acl.xml | Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2024 | https://aclanthology.org/2024.acl-long.481 | x | 0 | jailbreaking attacks | null | Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization | While significant attention has been dedicated to exploiting weaknesses in LLMs through jailbreaking attacks, there remains a paucity of effort in defending against these attacks. We point out a pivotal factor contributing to the success of jailbreaks: the intrinsic conflict between the goals of being helpful and ensur... |
2024.acl.xml | Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2024 | https://aclanthology.org/2024.acl-long.538 | null | 1 | general safety, LLM alignment | Chinese | NewsBench: A Systematic Evaluation Framework for Assessing Editorial Capabilities of Large Language Models in Chinese Journalism | We present NewsBench, a novel evaluation framework to systematically assess the capabilities of Large Language Models (LLMs) for editorial capabilities in Chinese journalism. Our constructed benchmark dataset is focused on four facets of writing proficiency and six facets of safety adherence, and it comprises manually ... |
2024.acl.xml | Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2024 | https://aclanthology.org/2024.acl-long.572 | null | null | general safety, LLM alignment | null | Aligning Large Language Models with Human Preferences through Representation Engineering | Aligning large language models (LLMs) with human preferences is crucial for enhancing their utility in terms of helpfulness, truthfulness, safety, harmlessness, and interestingness. Existing methods for achieving this alignment often involve employing reinforcement learning from human feedback (RLHF) to fine-tune LLMs ... |
2024.acl.xml | Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2024 | https://aclanthology.org/2024.acl-long.698 | null | null | general safety, LLM alignment | null | SOTOPIA-π: Interactive Learning of Socially Intelligent Language Agents | Humans learn social skills through both imitation and social interaction. This social learning process is largely understudied by existing research on building language agents. Motivated by this gap, we propose an interactive learning method, SOTOPIA-π, that improves the social intelligence of language agents. This met... |
2024.acl.xml | Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2024 | https://aclanthology.org/2024.acl-long.741 | null | 0 | privacy | null | Reducing Privacy Risks in Online Self-Disclosures with Language Models | Self-disclosure, while being common and rewarding in social media interaction, also poses privacy risks. In this paper, we take the initiative to protect the user-side privacy associated with online self-disclosure through detection and abstraction. We develop a taxonomy of 19 self-disclosure categories and curate a la... |
2024.acl.xml | Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2024 | https://aclanthology.org/2024.acl-long.762 | null | 1 | jailbreaking attacks | English, Chinese, Hindi | Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic | We propose RESTA to perform LLM realignment towards safety, which gets compromised due to downstream task fine-tuning. RESTA stands for REstoring Safety through Task Arithmetic. At its core, it involves a simple arithmetic addition of a safety vector to the weights of the compromised model. We demonstrate the effective... |
2024.acl.xml | Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2024 | https://aclanthology.org/2024.acl-long.773 | x | 0 | jailbreaking attacks | null | How Johnny Can Persuade LLMs to Jailbreak Them: Rethinking Persuasion to Challenge AI Safety by Humanizing LLMs | Most traditional AI safety research views models as machines and centers on algorithm-focused attacks developed by security experts. As large language models (LLMs) become increasingly common and competent, non-expert users can also impose risks during daily interactions. Observing this, we shift the perspective, by tr... |
2024.acl.xml | Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2024 | https://aclanthology.org/2024.acl-long.793 | null | 0 | hallucination, factuality | null | Disinformation Capabilities of Large Language Models | Automated disinformation generation is often listed as one of the risks of large language models (LLMs). The theoretical ability to flood the information space with disinformation content might have dramatic consequences for democratic societies around the world. This paper presents a comprehensive study of the disinfo... |
2024.acl.xml | Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2024 | https://aclanthology.org/2024.acl-long.796 | null | x | general safety, LLM alignment | Indonesian | Cendol: Open Instruction-tuned Generative Large Language Models for Indonesian Languages | Large language models (LLMs) show remarkable human-like capability in various domains and languages. To bridge this quality gap, we introduce Cendol, a collection of Indonesian LLMs encompassing both decoder-only and encoder-decoder architectures across a range of model sizes. We highlight Cendol’s effectiveness across... |
2024.acl.xml | Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2024 | https://aclanthology.org/2024.acl-long.809 | x | 0 | jailbreaking attacks | null | ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs | Safety is critical to the usage of large language models (LLMs). Multiple techniques such as data filtering and supervised fine-tuning have been developed to strengthen LLM safety. However, currently known techniques presume that corpora used for safety alignment of LLMs are solely interpreted by semantics. This assump... |
The State of Multilingual LLM Safety Research: From Measuring the Language Gap to Mitigating It
We present a comprehensive analysis of the linguistic diversity of LLM safety research, highlighting the English-centric nature of the field. Through a systematic review of nearly 300 publications from 2020–2024 across major NLP conferences and workshops at *ACL, we identify a significant and growing language gap in LLM safety research, with even high-resource non-English languages receiving minimal attention.
Dataset Description
The current version of the dataset consists of annotations for conference and workshop papers collected from *ACL venues between 2020 and 2024, using the keywords "safe" and "safety" in abstracts to identify relevant literature. The data source is https://github.com/acl-org/acl-anthology/tree/master/data, and the papers were curated by Zheng-Xin Yong, Beyza Ermis, Marzieh Fadaee, and Julia Kreutzer.
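The snippet below is a minimal sketch of this keyword-based retrieval step, assuming the ACL Anthology XML layout in which a volume file contains `<paper>` elements with `<title>` and `<abstract>` children. The element names, local file path, and plain substring matching are illustrative assumptions, not the exact curation script used for this dataset.

```python
# Sketch of filtering ACL Anthology papers by safety keywords in the abstract.
# Assumption: files such as data/xml/2024.acl.xml expose <paper> elements with
# <title> and <abstract> children (nested markup like <fixed-case> may appear).
import xml.etree.ElementTree as ET
from pathlib import Path

KEYWORDS = ("safe", "safety")  # simple substring match; stricter matching is possible

def find_safety_papers(xml_path: Path):
    """Yield (title, abstract) pairs whose abstract mentions a safety keyword."""
    root = ET.parse(xml_path).getroot()
    for paper in root.iter("paper"):
        abstract_el = paper.find("abstract")
        if abstract_el is None:
            continue
        # itertext() flattens nested markup inside the abstract element.
        abstract = "".join(abstract_el.itertext())
        if any(kw in abstract.lower() for kw in KEYWORDS):
            title_el = paper.find("title")
            title = "".join(title_el.itertext()) if title_el is not None else ""
            yield title, abstract

# Example: list candidate safety papers from one volume file.
for title, _ in find_safety_papers(Path("data/xml/2024.acl.xml")):
    print(title)
```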
Dataset Structure
- xml: name of the source XML file from the ACL Anthology data repository
- proceedings: proceedings of the conference or workshop in which the paper was published
- year: year of publication
- url: paper url on ACL Anthology
- language documentation: whether the paper explicitly reports the languages studied in the work ("x" indicates the paper does not report them)
- has non-English?: whether the work studies at least one non-English language (0: English-only, 1: at least one non-English language)
- topics: topic of the safety work ('jailbreaking attacks'; 'toxicity, bias'; 'hallucination, factuality'; 'privacy'; 'policy'; 'general safety, LLM alignment'; 'others')
- language coverage: languages covered in the work (null means English only)
- title: title of the paper
- abstract: abstract of the paper
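As a small usage sketch based on the fields above, the following loads the annotations into pandas and computes the yearly share of papers with non-English coverage plus the topic distribution of the English-only subset. The file name is a placeholder (this card does not fix an export format), and only the column names come from the schema above.

```python
# Usage sketch: basic language-gap statistics over the annotation table.
# Assumption: "multilingual_llm_safety.csv" is a local export of this dataset.
import pandas as pd

df = pd.read_csv("multilingual_llm_safety.csv")

# "has non-English?" is 0/1 per the schema; coerce stray/unannotated values to NaN.
df["has non-English?"] = pd.to_numeric(df["has non-English?"], errors="coerce")

# Share of safety papers per year that study at least one non-English language.
gap_by_year = (
    df.dropna(subset=["has non-English?"])
      .groupby("year")["has non-English?"]
      .mean()
)
print(gap_by_year)

# Topic distribution of the English-only subset.
english_only = df[df["has non-English?"] == 0]
print(english_only["topics"].value_counts())
```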
Citation
@article{yong2025safetysurvey,
title={The State of Multilingual LLM Safety Research: From Measuring the Language Gap to Mitigating It},
author={Zheng-Xin Yong and Beyza Ermis and Marzieh Fadaee and Stephen H. Bach and Julia Kreutzer},
year={2025},
journal={arXiv preprint arXiv:2505.24119},
}
Dataset Card Authors
Zheng-Xin Yong, Beyza Ermis, Marzieh Fadaee, and Julia Kreutzer