Model Card for LingoIITGN/COMI-LINGUA-MLI
Model Description
This is a fine-tuned version of aya-expanse-8b for Matrix Language Identification (MLI) on Hinglish (Hindi-English code-mixed) text. It classifies each sentence by the dominant matrix language governing its grammatical structure: hi (Hindi) or en (English).
The model handles mixed Roman and Devanagari scripts and is optimized for identifying the primary syntactic framework in natural code-mixed sentences.
It achieves 98.77 macro F1 on the COMI-LINGUA MLI test set (5K instances), outperforming zero-shot closed LLMs (e.g., gpt-4o at ~98.0 F1) and traditional language-identification tools, and setting the state of the art among open-weight models.
- Model type: LoRA-adapted Transformer LLM (8B params, ~32M trainable)
- Language(s) (NLP): Hindi, English
- License: apache-2.0
- Finetuned from model: CohereForAI/aya-expanse-8b
Model Sources
- Paper: COMI-LINGUA: Expert Annotated Large-Scale Dataset for Multitask NLP in Hindi-English Code-Mixing
- Demo: Integrated in Demo Portal
Uses
Sentence-level MLI in Hinglish pipelines (e.g., preprocessing for downstream tasks like translation, sentiment analysis, or code-switching detection in social media/news).
Helps determine the dominant language structure for better handling of code-mixed content.
Example inference prompt:
Identify the matrix language (hi = Hindi matrix, en = English matrix) in: "PM Narendra Modi ne Google CEO Sundar Pichai se mulakat ki."
Output: 'hi'
Training Details
Training Data
Training Procedure
Preprocessing
Tokenized with the base model's tokenizer, using instruction templates with few-shot examples. Filtering: sentences with at least 5 tokens, no hate speech or non-Hinglish content, and balanced matrix-language classes.
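The length and class-balance filters can be sketched as follows. This is illustrative only: the downsampling strategy is an assumption, and the hate-speech and non-Hinglish filters require external classifiers, so they are omitted here.

```python
import random

def filter_and_balance(examples, seed=0):
    """Apply the card's stated filters to (sentence, label) pairs:
    keep sentences with >= 5 whitespace tokens, then downsample the
    majority class so 'hi' and 'en' are equally represented."""
    kept = [(s, y) for s, y in examples if len(s.split()) >= 5]
    by_label = {"hi": [], "en": []}
    for s, y in kept:
        by_label[y].append((s, y))
    n = min(len(by_label["hi"]), len(by_label["en"]))
    rng = random.Random(seed)  # fixed seed for reproducible sampling
    balanced = []
    for y in ("hi", "en"):
        balanced += rng.sample(by_label[y], n)
    return balanced
```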
Training Hyperparameters
- Regime: PEFT LoRA (rank=32, alpha=64, dropout=0.1)
- Epochs: 3
- Batch: 4 (accum=8, effective=32)
- LR: 2e-4 (cosine + warmup=0.1)
- Weight decay: 0.01
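The hyperparameters above can be collected into one configuration sketch. The dict keys mirror the field names of `peft.LoraConfig` and `transformers.TrainingArguments`, but this is plain Python with no library dependency:

```python
# LoRA adapter settings from this card (keys as in peft.LoraConfig).
LORA_CONFIG = {"r": 32, "lora_alpha": 64, "lora_dropout": 0.1}

# Optimization settings from this card (keys as in TrainingArguments).
TRAINING_CONFIG = {
    "num_train_epochs": 3,
    "per_device_train_batch_size": 4,
    "gradient_accumulation_steps": 8,
    "learning_rate": 2e-4,
    "lr_scheduler_type": "cosine",
    "warmup_ratio": 0.1,
    "weight_decay": 0.01,
}

# Effective batch size = per-device batch * accumulation steps.
effective_batch = (TRAINING_CONFIG["per_device_train_batch_size"]
                   * TRAINING_CONFIG["gradient_accumulation_steps"])
```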
Evaluation
Testing Data
COMI-LINGUA MLI test set (5K instances).
Metrics
Macro Precision / Recall / F1 (sentence-level).
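For reference, sentence-level macro precision/recall/F1 over the two matrix-language classes can be computed without external libraries; a minimal sketch:

```python
def macro_prf(gold, pred, labels=("hi", "en")):
    """Macro-averaged precision, recall, and F1: compute each metric
    per class, then average the per-class values uniformly."""
    ps, rs, fs = [], [], []
    for lab in labels:
        tp = sum(g == lab and p == lab for g, p in zip(gold, pred))
        fp = sum(g != lab and p == lab for g, p in zip(gold, pred))
        fn = sum(g == lab and p != lab for g, p in zip(gold, pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        ps.append(prec); rs.append(rec); fs.append(f1)
    n = len(labels)
    return sum(ps) / n, sum(rs) / n, sum(fs) / n
```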
Results
| Setting | P | R | F1 |
|---|---|---|---|
| Zero-shot | 98.71 | 59.56 | 74.25 |
| One-shot | 98.35 | 81.36 | 89.00 |
| Fine-tuned | 98.90 | 98.77 | 98.77 |
Summary: Fine-tuning lifts the open-weight model to near-ceiling accuracy on MLI, well ahead of zero- and one-shot prompting, and establishes the state of the art for Hinglish matrix language detection.
Bias, Risks, and Limitations
This model is a research preview and is subject to ongoing iterative updates. As such, it provides only limited safety measures.
The model may struggle with highly balanced code-mixing (no clear matrix language), rare syntactic patterns, or domain shift beyond news and social media. Some zero-shot LLMs showed output-format instability (e.g., emitting 'Mixed' instead of 'hi'/'en'); fine-tuning resolves this.
Model Card Contact
Lingo Research Group at IIT Gandhinagar, India
Email: lingo@iitgn.ac.in
Citation
If you use this model, please cite the following work:
@inproceedings{sheth-etal-2025-comi,
title = "{COMI}-{LINGUA}: Expert Annotated Large-Scale Dataset for Multitask {NLP} in {H}indi-{E}nglish Code-Mixing",
author = "Sheth, Rajvee and
Beniwal, Himanshu and
Singh, Mayank",
editor = "Christodoulopoulos, Christos and
Chakraborty, Tanmoy and
Rose, Carolyn and
Peng, Violet",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2025",
month = nov,
year = "2025",
address = "Suzhou, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2025.findings-emnlp.422/",
pages = "7973--7992",
ISBN = "979-8-89176-335-7",
}