# Description

With nearly 1.4 billion people, India is the second-most populated country in the world. Yet Indian languages, such as Hindi and Tamil, are underrepresented on the web. Popular Natural Language Understanding (NLU) models perform worse on Indian languages than on English, which leads to subpar experiences in downstream web applications for Indian users. With more attention from the Kaggle community and your novel machine learning solutions, we can help Indian users make the most of the web.

Predicting answers to questions is a common NLU task, but not for Hindi and Tamil. Current progress on multilingual modeling requires a concentrated effort to generate high-quality datasets and modeling improvements. Additionally, for languages that are typically underrepresented in public datasets, it can be difficult to build trustworthy evaluations. We hope the dataset provided for this competition (and additional datasets generated by participants) will enable future machine learning for Indian languages.

In this competition, your goal is to predict answers to real questions about Wikipedia articles. You will use chaii-1, a new question-answering dataset with question-answer pairs. The dataset covers Hindi and Tamil, collected without the use of translation. It provides a realistic information-seeking task, with questions written by native-speaking expert data annotators. You will be provided with a baseline model and inference code to build upon.

If successful, you'll improve upon the baseline performance of NLU models in Indian languages. The results could improve the web experience for many of the nearly 1.4 billion people of India. Additionally, you'll contribute to multilingual NLP, which could be applied beyond the languages in this competition.

# Acknowledgments

Google Research India contributes fundamental advances in computer science and applies its research to big problems impacting India, Google, and communities around the world.
The Natural Language Understanding group at Google Research India applies ML to the unique challenges of the Indian context (such as code mixing in Search and the diversity of languages, dialects, and accents in Assistant), to learning from limited resources, and to advancing multilingual models. chaii (Challenge in AI for India) is a Google Research India initiative created to spark AI applications that address some of the pressing problems in India and to find unique ways of solving them. Starting with a focus on NLU, chaii hopes to make progress towards multilingual modeling, as language diversity is significantly underserved on the web. Google Research India is working on transformational approaches to healthcare, agriculture, and education, and also on improving apps and services such as Search, Assistant, and payments, e.g., to deal with challenges arising from the diversity of languages in India.

We also acknowledge the support of the AI4Bharat Team at the Indian Institute of Technology Madras.

# Evaluation

The metric in this competition is the word-level Jaccard score. A good description of Jaccard similarity for strings is here. A Python implementation based on the links above, and matched with the output of the C# implementation on the back end, is provided below:

```python
def jaccard(str1, str2):
    a = set(str1.lower().split())
    b = set(str2.lower().split())
    c = a.intersection(b)
    return float(len(c)) / (len(a) + len(b) - len(c))
```

The formula for the overall metric, then, is:

\[ \text{score} = \frac{1}{n} \sum_{i=1}^n \text{jaccard}( \text{gt}_i, \text{dt}_i ) \]

where:

- \( n \) = the number of documents
- \( \text{jaccard} \) = the function provided above
- \( \text{gt}_i \) = the ith ground truth
- \( \text{dt}_i \) = the ith prediction

# Submission File

For each ID in the test set, you must predict the string that best answers the provided question based on the context.
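To make the averaging in the evaluation formula concrete, here is a minimal self-contained sketch of the overall score. The `jaccard` function is the one given in the Evaluation section; the example strings and the `overall_score` helper are illustrative, not part of the official scoring code:

```python
def jaccard(str1, str2):
    # Word-level Jaccard: overlap of lowercased, whitespace-split token sets.
    a = set(str1.lower().split())
    b = set(str2.lower().split())
    c = a.intersection(b)
    return float(len(c)) / (len(a) + len(b) - len(c))

def overall_score(ground_truths, predictions):
    # The competition metric: mean jaccard over all (gt, dt) pairs.
    n = len(ground_truths)
    return sum(jaccard(gt, dt) for gt, dt in zip(ground_truths, predictions)) / n

# Hypothetical example: one partial overlap, one exact match.
gts = ["15 अगस्त 1947", "चार"]
dts = ["1947", "चार"]
score = overall_score(gts, dts)  # (1/3 + 1.0) / 2
```

Note that because the splitting is on whitespace only, punctuation attached to a word counts as part of that word, which is why predictions should reproduce the context text exactly.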
Note that the selected text needs to be quoted and complete to work correctly. Include punctuation, etc.; the above code splits ONLY on whitespace. The file should contain a header and have the following format:

```
id,PredictionString
8c8ee6504,"1"
3163c22d0,"2 string"
66aae423b,"4 word 6"
722085a7b,"1"
```

# Dataset Description

In this competition, you will be predicting the answers to questions in Hindi and Tamil. The answers are drawn directly from a limited context (see the Evaluation page for details). We have provided a small number of samples to check your code with. There is also a hidden test set. All files should be encoded as UTF-8.

Files:

- train.csv - the training set, containing contexts, questions, and answers. Also includes the start character of each answer, for disambiguation.
- test.csv - the test set, containing contexts and questions.
- sample_submission.csv - a sample submission file in the correct format

Columns:

- id - a unique identifier
- context - the text of the Hindi/Tamil sample from which answers should be derived
- question - the question, in Hindi/Tamil
- answer_text (train only) - the answer to the question (manual annotation) (note: for test, this is what you are attempting to predict)
- answer_start (train only) - the starting character in context for the answer (determined using substring match during data preparation)
- language - whether the text in question is in Tamil or Hindi

# Data Annotation Details

The chaii 2021 dataset was prepared following the same two-step process as TyDiQA. In the question elicitation step, annotators were shown snippets of Wikipedia text and asked to come up with interesting questions that they might be genuinely interested in knowing the answers to. They were also asked to make sure the elicited question was not answerable from the snippet of wiki text shown. Annotators were asked to elicit questions that were likely to have precise, unambiguous answers.
In the answer labeling step, for each question elicited in the previous step, the first Wikipedia page in the Google search results for that question was selected. For Hindi questions, the selection was restricted to Hindi Wikipedia documents, and similarly for Tamil. Annotators were then asked to select the answer for the question in the document, choosing the first valid answer in the document as the correct one. Questions that were not answerable from the selected document were marked as non-answerable, and these question-document pairs were not included in the chaii 2021 dataset.

With (question, wiki_document, answer) now in place, the first substring occurrence of the answer in the wiki_document was automatically computed and provided as answer_start in the dataset. Since this step was automatic, some amount of inaccuracy is possible. The offset is included only for convenience, and participants may consider ignoring it during model development (or come up with their own mechanism for offset selection). Note that at test time, the model is only required to predict the answer string, not its span offset.

Answers in the training data were produced by one annotator, while those in the test set were produced by three annotators, with majority voting used to arrive at the final answer. For test items with minor disagreements, a separate annotator pass was done to select the final answer. For both train and test answer labeling, sampling-based quality checks were carried out, and answer accuracy was routinely observed to be quite high. In spite of these multi-step checks, some amount of noise in the training data is likely. This is expected and is meant to reflect real-world settings, where slight noise in the training data may be unavoidable if larger volumes are to be collected. It may also encourage the development of more robust methods that are noise-tolerant during training.
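Since answer_start was derived automatically by substring matching, it is easy to sanity-check (or recompute) for each training row. A minimal sketch, operating on the context/answer_text/answer_start columns described above; the toy row is hypothetical, not taken from the real dataset:

```python
def first_occurrence(context, answer_text):
    # Recompute answer_start the way the dataset did: index of the first
    # substring match. Returns -1 if the answer does not occur verbatim.
    return context.find(answer_text)

def check_row(context, answer_text, answer_start):
    # True if the provided offset really points at the answer text.
    return context[answer_start:answer_start + len(answer_text)] == answer_text

# Hypothetical toy row (not from the real dataset).
context = "भारत को स्वतंत्रता 15 अगस्त 1947 को मिली।"
answer_text = "15 अगस्त 1947"
start = first_occurrence(context, answer_text)
ok = check_row(context, answer_text, start)  # True
```

Rows where the check fails (or where the answer occurs more than once and the first occurrence is not the annotated one) are exactly the cases where you may want your own offset-selection mechanism.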
Update: We ran a few random sampling-based quality checks on the datasets. Based on these checks, we found the Hindi train and test datasets to be 93.8% and 97.8% accurate, respectively. No issues were identified in the sampled Tamil train and test instances.
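Finally, when writing your submission, the standard library's csv module takes care of the quoting and UTF-8 encoding that the Submission File section calls for. A minimal sketch; the ids and answer strings below are hypothetical placeholders (quoting every field, as done here, is also valid CSV even though the sample file quotes only PredictionString):

```python
import csv

# Hypothetical predictions: {id: predicted answer string}.
predictions = {
    "8c8ee6504": "नरेंद्र मोदी",
    "3163c22d0": "15 அகஸ்ட் 1947",
}

with open("submission.csv", "w", encoding="utf-8", newline="") as f:
    writer = csv.writer(f, quoting=csv.QUOTE_ALL)
    writer.writerow(["id", "PredictionString"])  # required header
    for row_id, answer in predictions.items():
        writer.writerow([row_id, answer])
```

Letting the csv module handle quoting avoids broken rows when a predicted answer itself contains commas or quote characters.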