Abstract (string, 379–1.97k characters) | Class (string, 21 classes) |
---|---|
Although previous research on Aspect-based Sentiment Analysis (ABSA) for Indonesian reviews in hotel domain has been conducted using CNN and XGBoost, its model did not generalize well in test data and high number of OOV words contributed to misclassification cases. Nowadays, most state-of-the-art results for wide array... | Aspect-Based Sentiment Analysis (ABSA) |
Aspect-based sentiment analysis (ABSA), a task in sentiment analysis, predicts the sentiment polarity of specific aspects mentioned in the input sentence. Recent research has demonstrated the effectiveness of Bidirectional Encoder Representation from Transformers (BERT) and its variants in improving the performance of ... | Aspect-Based Sentiment Analysis (ABSA) |
Due to the breathtaking growth of social media or newspaper user comments, online product reviews comments, sentiment analysis (SA) has captured substantial interest from the researchers. With the fast increase of domain, SA work aims not only to predict the sentiment of a sentence or document but also to give the nece... | Aspect-Based Sentiment Analysis (ABSA) |
This study aims to gain a deeper understanding of online student reviews regarding the learning process at a private university in Indonesia and to compare the effectiveness of several algorithms: Naive Bayes, K-NN, Decision Tree, and Indo-Bert. Traditional Sentiment Analysis methods can only analyze sentences as a who... | Aspect-Based Sentiment Analysis (ABSA) |
Aspect-Based Sentiment Analysis (ABSA) is increasingly crucial in Natural Language Processing (NLP) for applications such as customer feedback analysis and product recommendation systems. ABSA goes beyond traditional sentiment analysis by extracting sentiments related to specific aspects mentioned in the text; existing ... | Aspect-Based Sentiment Analysis (ABSA) |
Sentiment analysis is a natural language processing (NLP) task of identifying or extracting the sentiment content of a text unit. This task has become an active research topic since the early 2000s. During the last two editions of the VLSP workshop series, the shared task on Sentiment Analysis (SA) for Vietnamese has be... | Aspect-Based Sentiment Analysis (ABSA) |
Aspect-based sentiment analysis (ABSA) is a task in natural language processing (NLP) that involves predicting the sentiment polarity towards a specific aspect in text. Graph neural networks (GNNs) have been shown to be effective tools for sentiment analysis tasks, but current research often overlooks affective informa... | Aspect-Based Sentiment Analysis (ABSA) |
Sentiment analysis (SA), also known as opinion mining, is the process of gathering and analyzing people's opinions about a particular service, good, or company on websites like Twitter, Facebook, Instagram, LinkedIn, and blogs, among other places. This article covers a thorough analysis of SA and its levels. This ... | Aspect-Based Sentiment Analysis (ABSA) |
Aspect-based sentiment analysis (ABSA) is currently among the most vigorous areas in natural language processing (NLP). Individuals, private and government institutions are increasingly using media sources for decision making. In the last decade, aspect extraction has been the most essential phase of sentiment analysis... | Aspect-Based Sentiment Analysis (ABSA) |
Sentiment analysis has become one of the most important tools in natural language processing, since it opens many possibilities to understand people's opinions on different topics. Aspect-based sentiment analysis aims to take this a step further and find out what exactly someone is talking about, and if he likes or dis... | Aspect-Based Sentiment Analysis (ABSA) |
Recent research on dialog state tracking (DST) focuses on methods that allow few- and zero-shot transfer to new domains or schemas. However, performance gains heavily depend on aggressive data augmentation and fine-tuning of ever larger language model based architectures. In contrast, general purpose language models, t... | Dialogue State Tracking (DST) |
Dialogue State Tracking (DST) is a sub-task of task-based dialogue systems where the user intention is tracked through a set of (domain, slot, slot-value) triplets. Existing DST models can be difficult to extend for new datasets with larger domains/slots mainly due to either of the two reasons- i) prediction of domain-... | Dialogue State Tracking (DST) |
The dialogue state tracking module is a crucial component of task-oriented dialogue systems. Recently, some Dialogue State Tracking (DST) methods have used the previous dialogue state as auxiliary input, resulting in errors that propagate and subsequently affect predictions. This paper proposes utilizing dialogue-level... | Dialogue State Tracking (DST) |
Sequence-to-sequence state-of-the-art systems for dialogue state tracking (DST) use the full dialogue history as input, represent the current state as a list with all the slots, and generate the entire state from scratch at each dialogue turn. This approach is inefficient, especially when the number of slots is large a... | Dialogue State Tracking (DST) |
Recently proposed dialogue state tracking (DST) approaches predict the dialogue state of a target turn sequentially based on the previous dialogue state. During the training time, the ground-truth previous dialogue state is utilized as the historical context. However, only the previously predicted dialogue state can be... | Dialogue State Tracking (DST) |
We present a method for performing zero-shot Dialogue State Tracking (DST) by casting the task as a learning-to-ask-questions framework. The framework learns to pair the best question generation (QG) strategy with in-domain question answering (QA) methods to extract slot values from a dialogue without any human interve... | Dialogue State Tracking (DST) |
Different from traditional task-oriented and open-domain dialogue systems, insurance agents aim to engage customers for helping them satisfy specific demands and emotional companionship. As a result, customer-to-agent dialogues are usually very long, and many turns of them are pure chit-chat without any useful marketin... | Dialogue State Tracking (DST) |
Few-shot dialogue state tracking (DST) model tracks user requests in dialogue with reliable accuracy even with a small amount of data. In this paper, we introduce an ontology-free few-shot DST with self-feeding belief state input. The self-feeding belief state input increases the accuracy in multi-turn dialogue by summ... | Dialogue State Tracking (DST) |
Task-oriented dialogue systems depend on dialogue state tracking to keep track of the intentions of users in the course of conversations. Although recent models in dialogue state tracking exhibit good performance, the errors in predicting the value of each slot at the current dialogue turn of these models are easily ca... | Dialogue State Tracking (DST) |
This paper focuses on end-to-end task-oriented dialogue systems, which jointly handle dialogue state tracking (DST) and response generation. Traditional methods usually adopt a supervised paradigm to learn DST from a manually labeled corpus. However, the annotation of the corpus is costly, time-consuming, and cannot co... | Dialogue State Tracking (DST) |
The technological development in current era demands the need of Artificial Intelligence (AI) in all fields. The AI in medical field is not an exception for various real time applications as per user demands. The applications are medical report summarization, image captioning, Visual Question Answering (VQA) and Visual... | Visual QA (VQA) |
Earth vision research typically focuses on extracting geospatial object locations and categories but neglects the exploration of relations between objects and comprehensive reasoning. Based on city planning needs, we develop a multi-modal multi-task VQA dataset (EarthVQA) to advance relational reasoning-based judging, ... | Visual QA (VQA) |
Text-VQA aims at answering questions that require understanding the textual cues in an image. Despite the great progress of existing Text-VQA methods, their performance suffers from insufficient human-labeled question-answer (QA) pairs. However, we observe that, in general, the scene text is not fully exploited in the ... | Visual QA (VQA) |
Visual Question Answering can be a functionally relevant task if purposed as such. In this paper, we aim to investigate and evaluate its efficacy in terms of localization-based question answering. We do this specifically in the context of autonomous driving where this functionality is important. To achieve our aim, we ... | Visual QA (VQA) |
Recently, 3D vision-and-language tasks have attracted increasing research interest. Compared to other vision-and-language tasks, the 3D visual question answering (VQA) task is less exploited and is more susceptible to language priors and co-reference ambiguity. Meanwhile, a couple of recently proposed 3D VQA datasets d... | Visual QA (VQA) |
To contribute to automating the medical vision-language model, we propose a novel Chest-Xray Different Visual Question Answering (VQA) task. Given a pair of main and reference images, this task attempts to answer several questions on both diseases and, more importantly, the differences between them. This is consistent ... | Visual QA (VQA) |
Visual Question Answering (VQA) is one of the most important tasks in autonomous driving, which requires accurate recognition and complex situation evaluations. However, datasets annotated in a QA format, which guarantees precise language generation and scene recognition from driving scenes, have not been established ... | Visual QA (VQA) |
Visual Question Answering (VQA) deep-learning systems tend to capture superficial statistical correlations in the training data because of strong language priors and fail to generalize to test data with a significantly different question-answer (QA) distribution. To address this issue, we introduce a self-critical trai... | Visual QA (VQA) |
Although Visual Question Answering (VQA) has realized impressive progress over the last few years, today's VQA models tend to capture superficial linguistic correlations in the train set and fail to generalize to the test set with different QA distributions. To reduce the language biases, several recent works introduce ... | Visual QA (VQA) |
While models for Visual Question Answering (VQA) have steadily improved over the years, interacting with one quickly reveals that these models lack consistency. For instance, if a model answers “red” to “What color is the balloon?”, it might answer “no” if asked, “Is the balloon red?”. These responses violate simple no... | Visual QA (VQA) |
Recent state-of-the-art open-domain QA models are typically based on a two stage retriever-reader approach in which the retriever first finds the relevant knowledge/passages and the reader then leverages that to predict the answer. Prior work has shown that the performance of the reader usually tends to improve with th... | Open-Domain QA |
The goal of the open-domain table QA task is to answer a question based on retrieving and extracting information from a large corpus of structured tables. Currently, the accuracy of the most popular framework in open-domain QA: the two-stage retrieval, is limited by the table retriever. Inspired by the research on Text... | Open-Domain QA |
Although counterfactual reasoning is a fundamental aspect of intelligence, the lack of large-scale counterfactual open-domain question-answering (QA) benchmarks makes it difficult to evaluate and improve models on this ability. To address this void, we introduce the first such dataset, named IfQA, where each question i... | Open-Domain QA |
Existing state-of-the-art methods for open-domain question-answering (ODQA) use an open book approach in which information is first retrieved from a large text corpus or knowledge base (KB) and then reasoned over to produce an answer. A recent alternative is to retrieve from a collection of previously-generated questio... | Open-Domain QA |
In recent years, extensive state-of-the-art research has been conducted on natural language processing (NLP) issues. This includes improved text generation and text comprehension models. These solutions are deeply data dependent, as models use high-quality data. The need for more data in a particular language severely ... | Open-Domain QA |
Deep NLP models have been shown to be brittle to input perturbations. Recent work has shown that data augmentation using counterfactuals — i.e. minimally perturbed inputs — can help ameliorate this weakness. We focus on the task of creating counterfactuals for question answering, which presents unique challenges relate... | Open-Domain QA |
While research on explaining predictions of open-domain QA systems (ODQA) is gaining momentum, most works do not evaluate whether these explanations improve user trust. Furthermore, many users interact with ODQA using voice-assistants, yet prior works exclusively focus on visual displays, risking (as we also show) inc... | Open-Domain QA |
Question answering (QA) is a critical task for speech-based retrieval from knowledge sources, by sifting only the answers without requiring to read supporting documents. Specifically, open-domain QA aims to answer user questions on unrestricted knowledge sources. Ideally, adding a source should not decrease the accurac... | Open-Domain QA |
Although open-domain question answering (QA) draws great attention in recent years, it requires large amounts of resources for building the full system and it is often difficult to reproduce previous results due to complex configurations. In this paper, we introduce SF-QA: simple and fair evaluation framework for open-... | Open-Domain QA |
Ambiguous questions persist in open-domain question answering, because formulating a precise question with a unique answer is often challenging. Previously, Min et al. (2020) have tackled this issue by generating disambiguated questions for all possible interpretations of the ambiguous question. This can be effective, ... | Open-Domain QA |
In recent years, multiple-choice Visual Question Answering (VQA) has become topical and achieved remarkable progress. However, most pioneer multiple-choice VQA models are heavily driven by statistical correlations in datasets, which cannot perform well on multimodal understanding and suffer from poor generalization. In... | Multiple Choice QA (MCQA) |
Question answer (QA) system is closely related to NLP and IR tasks. An automated QA system should understand the semantics of question and derive answers relevant to it. In case of MCQ system this tasks becomes difficult as the model needs to understand the semantics and select an answer from a given choice. In this pa... | Multiple Choice QA (MCQA) |
The recent success of machine learning systems on various QA datasets could be interpreted as a significant improvement in models’ language understanding abilities. However, using various perturbations, multiple recent works have shown that good performance on a dataset might not indicate performance that correlates we... | Multiple Choice QA (MCQA) |
Open-domain question answering (QA) involves many knowledge and reasoning challenges, but are successful QA models actually learning such knowledge when trained on benchmark QA tasks? We investigate this via several new diagnostic tasks probing whether multiple-choice QA models know definitions and taxonomic reasoning—... | Multiple Choice QA (MCQA) |
Data contamination in model evaluation has become increasingly prevalent with the growing popularity of large language models. It allows models to "cheat" via memorisation instead of displaying true capabilities. Therefore, contamination analysis has become a crucial part of reliable model evaluation to validate results... | Multiple Choice QA (MCQA) |
This paper introduces FrenchMedMCQA, the first publicly available Multiple-Choice Question Answering (MCQA) dataset in French for medical domain. It is composed of 3,105 questions taken from real exams of the French medical specialization diploma in pharmacy, mixing single and multiple answers. Each instance of the dat... | Multiple Choice QA (MCQA) |
This paper introduces MedMCQA, a new large-scale, Multiple-Choice Question Answering (MCQA) dataset designed to address real-world medical entrance exam questions. More than 194k high-quality AIIMS & NEET PG entrance exam MCQs covering 2.4k healthcare topics and 21 medical subjects are collected with an average token l... | Multiple Choice QA (MCQA) |
In a spoken multiple-choice question answering (MCQA) task, where passages, questions, and choices are given in the form of speech, usually only the auto-transcribed text is considered in system development. The acoustic-level information may contain useful cues for answer prediction. However, to the best of our knowle... | Multiple Choice QA (MCQA) |
We introduce KorMedMCQA, the first Korean multiple-choice question answering (MCQA) benchmark derived from Korean healthcare professional licensing examinations, covering from the year 2012 to year 2023. This dataset consists of a selection of questions from the license examinations for doctors, nurses, and pharmacists... | Multiple Choice QA (MCQA) |
Unsupervised question answering is a promising yet challenging task, which alleviates the burden of building large-scale annotated data in a new domain. It motivates us to study the unsupervised multiple-choice question answering (MCQA) problem. In this paper, we propose a novel framework designed to generate synthetic... | Multiple Choice QA (MCQA) |
Due to the enormous and exponential advancement in the online social network, the triad of Facebook, Twitter and Whatsapp posed a great challenge in the form of fake news in front of us. In recent years many events like false propaganda of the ‘US presidential election’, opinion spamming in ‘Brexit referendum’, and lon... | NLP for Social Media |
One prominent dark side of online information behavior is the spreading of rumors. The feature analysis and crowd identification of social media rumor refuters based on machine learning methods can shed light on the rumor refutation process. This paper analyzed the association between user features and rumor refuting b... | NLP for Social Media |
Social media has become a major source of information for healthcare professionals but due to the growing volume of data in unstructured format, analyzing these resources accurately has become a challenge. In this study, we trained health related NER and classification models on different datasets published within the ... | NLP for Social Media |
Information about individuals can help to better understand what they say, particularly in social media where texts are short. Current approaches to modelling social media users pay attention to their social connections, but exploit this information in a static way, treating all connections uniformly. This ignores the ... | NLP for Social Media |
From the day internet came into existence, the era of social networking sprouted. In the beginning, no one may have thought the internet would be a host of numerous amazing services the social networking. Today we can say that online applications and social networking websites have become a non-separable part of one’s ... | NLP for Social Media |
Social media data become an integral part in the business data and should be integrated into the decisional process for better decision making based on information which reflects better the true situation of business in any field. However, social media data are unstructured and generated in very high frequency which ex... | NLP for Social Media |
Participatory moments on social media platforms increasingly add up to something more substantial. Communicating our thoughts and feelings about the book through shared observations, appraisals, and illustrative examples. For instance, the data posted on social media platforms like Twitter can be mined for insights int... | NLP for Social Media |
Social media is an appropriate source for analyzing public attitudes towards the COVID-19 vaccine and various brands. Nevertheless, there are few relevant studies. In the research, we collected tweet posts by the UK and US residents from the Twitter API during the pandemic and designed experiments to answer three main ... | NLP for Social Media |
Profanity is socially offensive language, which may also be called cursing, cussing, swearing, or expletives. Nowadays where everything is digitally managed, there are lots of online platforms and forums which people use. If we take an example of any social media platform like Twitter, their privacy policy suggests tha... | NLP for Social Media |
Despite its relevance, the maturity of NLP for social media pales in comparison with general-purpose models, metrics and benchmarks. This fragmented landscape makes it hard for the community to know, for instance, given a task, which is the best performing model and how it compares with others. To alleviate this issue,... | NLP for Social Media |
The Amount of legal information that is being produced on a daily basis in the law courts is increasing enormously and nowadays this information is available in electronic form also. The application of various machine learning and deep learning methods for processing of legal documents has been receiving considerate at... | NLP for the Legal Domain |
Claims, disputes, and litigations are major legal issues in construction projects, which often result in cost overruns, delays, and adverse working relationships among the contracting parties. Recent advances in natural language processing (NLP) techniques offer great potentials that can process voluminous unstructured... | NLP for the Legal Domain |
LexNLP is an open source Python package focused on natural language processing and machine learning for legal and regulatory text. The package includes functionality to (i) segment documents, (ii) identify key text such as titles and section headings, (iii) extract over eighteen types of structured information like dis... | NLP for the Legal Domain |
With the evolution of time, problem, and expectation of human beings, advancement of science and technology has facilitated scientific analysis of bulk dataset to generate desired output. This approach of bulk data analysis may be specifically implemented using Machine Learning and Data Analytics, which are the sub-dom... | NLP for the Legal Domain |
Natural language processing (NLP) methods for analyzing legal text offer legal scholars and practitioners a range of tools allowing to empirically analyze law on a large scale. However, researchers seem to struggle when it comes to identifying ethical limits to using NLP systems for acquiring genuine insights both abou... | NLP for the Legal Domain |
The EU-funded project Lynx focuses on the creation of a knowledge graph for the legal domain (Legal Knowledge Graph, LKG) and its use for the semantic processing, analysis and enrichment of documents from the legal domain. This article describes the use cases covered in the project, the entire developed platform and th... | NLP for the Legal Domain |
In the last years, the legal domain has been revolutionized by the use of Information and Communication Technologies, producing large amount of digital information. Legal practitioners’ needs, then, in browsing these repositories has required to investigate more efficient retrieval methods, that assume more relevance b... | NLP for the Legal Domain |
We present LEDGAR, a multilabel corpus of legal provisions in contracts. The corpus was crawled and scraped from the public domain (SEC filings) and is, to the best of our knowledge, the first freely available corpus of its kind. Since the corpus was constructed semi-automatically, we apply and discuss various approach... | NLP for the Legal Domain |
Legal documents are unstructured, use legal jargon, and have considerable length, making them difficult to process automatically via conventional text processing techniques. A legal document processing system would benefit substantially if the documents could be segmented into coherent information units. This paper pro... | NLP for the Legal Domain |
We evaluated the capability of a state-of-the-art generative pretrained transformer (GPT) model to perform semantic annotation of short text snippets (one to few sentences) coming from legal documents of various types. Discussions of potential uses (e.g., document drafting, summarization) of this emerging technology in... | NLP for the Legal Domain |
Prompt engineering, as an efficient and effective way to leverage Large Language Models (LLM), has drawn a lot of attention from the research community. The existing research primarily emphasizes the importance of adapting prompts to specific tasks, rather than specific LLMs. However, a good prompt is not solely define... | Prompt Engineering |
Software requirement classification is a longstanding and important problem in requirement engineering. Previous studies have applied various machine learning techniques for this problem, including Support Vector Machine (SVM) and decision trees. With the recent popularity of NLP technique, the state-of-the-art approac... | Prompt Engineering |
In recent years, the advancement of Large Language Models (LLMs) has garnered significant attention in the field of Artificial Intelligence (AI), exhibiting exceptional performance across a wide variety of natural language processing (NLP) tasks. However, despite the high generality of LLMs, there exists a problem in c... | Prompt Engineering |
Abstract Previous work in prompt engineering for large language models has introduced different gradient-free probability-based prompt selection methods that aim to choose the optimal prompt among the candidates for a given task but have failed to provide a comprehensive and fair comparison between each other. In this ... | Prompt Engineering |
In the domain of Natural Language Processing (NLP), the technique of prompt engineering is a strategic method utilized to guide the responses of models such as ChatGPT. This research explores the intricacies of prompt engineering, with a specific focus on its effects on the quality of summaries generated by ChatGPT 3.5... | Prompt Engineering |
Inspired by human cognition, Jiang et al. (2023c) create a benchmark for assessing LLMs' lateral thinking, i.e., thinking outside the box. Building upon this benchmark, we investigate how different prompting methods enhance LLMs' performance on this task to reveal their inherent power for outside-the-box thinking ability. Thro... | Prompt Engineering |
Automated theorem proving can benefit a lot from methods employed in natural language processing, knowledge graphs and information retrieval: this non-trivial task combines formal languages understanding, reasoning, similarity search. We tackle this task by enhancing semantic similarity ranking with prompt engineering,... | Prompt Engineering |
Foundation AI models have emerged as powerful pre-trained models on a large scale, capable of seamlessly handling diverse tasks across multiple domains with minimal or no fine-tuning. These models, exemplified by the impressive achievements of GPT-3 and BERT in natural language processing (NLP), as well as CLIP and DAL... | Prompt Engineering |
Large-scale pre-trained language models have contributed significantly to natural language processing by demonstrating remarkable abilities as few-shot learners. However, their effectiveness depends mainly on scaling the model parameters and prompt design, hindering their implementation in most real-world applications.... | Prompt Engineering |
State-of-the-art neural language models can now be used to solve ad-hoc language tasks through zero-shot prompting without the need for supervised training. This approach has gained popularity in recent years, and researchers have demonstrated prompts that achieve strong accuracy on specific NLP tasks. However, finding... | Prompt Engineering |
Automatic identification and expansion of ambiguous abbreviations are essential for biomedical natural language processing applications, such as information retrieval and question answering systems. In this paper, we present the DEep Contextualized Biomedical Abbreviation Expansion (DECBAE) model. DECBAE automatically col... | Acronyms and Abbreviations Detection and Expansion |
Acronyms are commonly used in human language as alternative forms of concepts to increase recognition, to reduce duplicate references to the same concept, and to stress important concepts. There are no standard rules for acronym creation; therefore, both machine-based acronym identification and acronym resolution are h... | Acronyms and Abbreviations Detection and Expansion |
Hypernym and synonym matching are among the mainstream Natural Language Processing (NLP) tasks. In this paper, we present systems that attempt to solve this problem. We designed these systems to participate in the FinSim-3, a shared task of FinNLP workshop at IJCAI-2021. The shared task is focused on solving this prob... | Acronyms and Abbreviations Detection and Expansion |
The current study aimed to explore the linguistic analysis of neologism related to Coronavirus (COVID-19). Recently, a new coronavirus disease COVID-19 has emerged as a respiratory infection with significant concern for global public health hazards. However, with each passing day, more and more confirmed cases are bein... | Acronyms and Abbreviations Detection and Expansion |
The prevalence of ambiguous acronyms make scientific documents harder to understand for humans and machines alike, presenting a need for models that can automatically identify acronyms in text and disambiguate their meaning. We introduce new methods for acronym identification and disambiguation: our acronym identificat... | Acronyms and Abbreviations Detection and Expansion |
Abbreviations and acronyms are shortened forms of words or phrases that are commonly used in technical writing. In this study we focus specifically on abbreviations and introduce a corpus-based method for their expansion. The method divides the processing into three key stages: abbreviation identification, full form ca... | Acronyms and Abbreviations Detection and Expansion |
Acronyms are the short forms of phrases that facilitate conveying lengthy sentences in documents and serve as one of the mainstays of writing. Due to their importance, identifying acronyms and corresponding phrases (i.e., acronym identification (AI)) and finding the correct meaning of each acronym (i.e., acronym disamb... | Acronyms and Abbreviations Detection and Expansion |
Nowadays, there is an increasing tendency for using acronyms in technical texts, which has led to ambiguous acronyms with different possible expansions. Diversity of expansions of a single acronym makes recognizing its expansion a challenging task. Replacing acronyms with incorrect expansions will lead to problems in t... | Acronyms and Abbreviations Detection and Expansion |
In biomedical domain, abbreviations are appearing more and more frequently in various data sets, which has caused significant obstacles to biomedical big data analysis. The dictionary-based approach has been adopted to process abbreviations, but it cannot handle ad hoc abbreviations, and it is impossible to cover all a... | Acronyms and Abbreviations Detection and Expansion |
The adoption of Electronic Health Record (EHR) and other e-health infrastructures over the years has been characterized by an increase in medical errors. This is primarily a result of the widespread usage of medical acronyms and abbreviations with multiple possible senses (i.e., ambiguous acronyms). The advent of Artif... | Acronyms and Abbreviations Detection and Expansion |
The article is focused on automatic development and ranking of a large corpus for Russian paraphrase generation which proves to be the first corpus of such type in Russian computational linguistics. Existing manually annotated paraphrase datasets for Russian are limited to small-sized ParaPhraser corpus and ParaPlag wh... | Paraphrase and Rephrase Generation |
Paraphrase generation is a fundamental problem in natural language processing. Due to the significant success of transfer learning, the “pre-training → fine-tuning” approach has become the standard. However, popular general pre-training methods typically require extensive datasets and great computational resources, and... | Paraphrase and Rephrase Generation |
Paraphrasing is a process to restate the meaning of a text or a passage using different words in the same language to give a clearer understanding of the original sentence to the readers. Paraphrasing is important in many natural language processing tasks such as plagiarism detection, information retrieval, and machine... | Paraphrase and Rephrase Generation |
Paraphrase generation is a fundamental and long-standing task in natural language processing. In this paper, we concentrate on two contributions to the task: (1) we propose Retrieval Augmented Prompt Tuning (RAPT) as a parameter-efficient method to adapt large pre-trained language models for paraphrase generation; (2) ... | Paraphrase and Rephrase Generation |
A noun compound is a sequence of contiguous nouns that acts as a single noun, although the predicate denoting the semantic relation between its components is dropped. Noun Compound Interpretation is the task of uncovering the relation, in the form of a preposition or a free paraphrase. Prepositional paraphrasing refers... | Paraphrase and Rephrase Generation |
This article presents a method extending an existing French corpus of paraphrases of medical terms ANONYMOUS with new data from Web archives created during the Covid-19 pandemic. Our method semi-automatically detects new terms and paraphrase markers introducing paraphrases from these Web archives, followed by a manual ... | Paraphrase and Rephrase Generation |
Inducing diversity in the task of paraphrasing is an important problem in NLP with applications in data augmentation and conversational agents. Previous paraphrasing approaches have mainly focused on the issue of generating semantically similar paraphrases while paying little attention towards diversity. In fact, most ... | Paraphrase and Rephrase Generation |
In this work, we propose TGLS, a novel framework for unsupervised Text Generation by Learning from Search. We start by applying a strong search algorithm (in particular, simulated annealing) towards a heuristically defined objective that (roughly) estimates the quality of sentences. Then, a conditional generative model... | Paraphrase and Rephrase Generation |
In phrase generation (PG), a sentence in the natural language is changed into a new one with a different syntactic structure but having the same semantic meaning. The present sequence-to-sequence strategy aims to recall the words and structures from the training dataset rather than learning the words' semantics. As a r... | Paraphrase and Rephrase Generation |
Existing methods for Dialogue Response Generation (DRG) in Task-oriented Dialogue Systems (TDSs) can be grouped into two categories: template-based and corpus-based. The former prepare a collection of response templates in advance and fill the slots with system actions to produce system responses at runtime. The latter... | Paraphrase and Rephrase Generation |
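The table above pairs truncated abstract strings with one of 21 class labels. A minimal, self-contained Python sketch of how such rows could be represented, tallied per class, and filtered by label; the sample rows below are short illustrative excerpts from the table, not the full dataset:

```python
# Each row mirrors the two columns above: an abstract snippet and its class label.
# These three rows are illustrative excerpts only, not the complete data.
from collections import Counter

rows = [
    ("Aspect-based sentiment analysis (ABSA), a task in sentiment analysis, "
     "predicts the sentiment polarity of specific aspects...",
     "Aspect-Based Sentiment Analysis (ABSA)"),
    ("Dialogue State Tracking (DST) is a sub-task of task-based dialogue "
     "systems where the user intention is tracked...",
     "Dialogue State Tracking (DST)"),
    ("Visual Question Answering (VQA) is one of the most important tasks "
     "in autonomous driving...",
     "Visual QA (VQA)"),
]

# Count how many abstracts carry each class label.
label_counts = Counter(label for _, label in rows)
print(label_counts["Visual QA (VQA)"])  # → 1

# Filter abstracts belonging to a single class.
absa = [text for text, label in rows if label == "Aspect-Based Sentiment Analysis (ABSA)"]
print(len(absa))  # → 1
```

The same two-column shape (text, label) is what a text-classification loader would expect, so grouping or stratified splitting by the label column follows directly from this representation.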