---
license: apache-2.0
task_categories:
- text-classification
language:
- multilingual
tags:
- language-identification
- unigram
- tokenizer
- tinyaya
pretty_name: TinyAya LID Experiment Logs
---

# TinyAya LID — Models, Eval Data & Training Artifacts

Artifacts for the **Contrastive UniLID** project: language identification using LLM tokenizer vocabularies (TinyAya 261k BPE→Unigram), trained on GlotLID-C, evaluated on CommonLID.

Source code: [github.com/divyanshsinghvi/tinyAyaLid](https://github.com/divyanshsinghvi/tinyAyaLid)

> **Note**: The GlotLID-C training corpus is **not included** here — it can be re-downloaded from [`cis-lmu/glotlid-corpus`](https://huggingface.co/datasets/cis-lmu/glotlid-corpus). This repo only ships the eval data, models, training weights, and LLM cache.

---

## Structure

```
.
├── models/                          # Trained .unilid model files + eval JSONs
│   ├── tinyaya_v3_200k/             # Best TinyAya model — 200k samples/lang
│   ├── tinyaya_v3_100k/             # TinyAya, 100k samples/lang
│   ├── tinyaya_soft_full/           # TinyAya, full GlotLID-C corpus
│   ├── mistral_v3_200k/             # Mistral-Nemo 131k tokenizer comparison
│   ├── scratch_v3_200k/             # Scratch 100k vocab comparison
│   ├── commonlid_20pct/             # Trained on 20% CommonLID split (TinyAya)
│   ├── commonlid_50pct/             # Trained on 50% CommonLID split (TinyAya)
│   ├── commonlid_20pct_mistral/     # 20% CommonLID split (Mistral)
│   ├── commonlid_50pct_mistral/     # 50% CommonLID split (Mistral)
│   ├── commonlid_20pct_scratch/     # 20% CommonLID split (Scratch)
│   └── commonlid_50pct_scratch/     # 50% CommonLID split (Scratch)
│
├── data/
│   ├── commonlid/                   # CommonLID evaluation corpus (fastText format)
│   │   ├── commonlid_full.txt           # Full test set (373k samples, 109 tags)
│   │   ├── commonlid_train.txt          # Train split
│   │   ├── commonlid_test.txt           # Test split
│   │   ├── commonlid_50pct_test.txt     # 50% split
│   │   ├── commonlid_80pct_test.txt     # 80% split
│   │   ├── commonlid_50perlang.txt      # 50 samples/lang subsample
│   │   ├── commonlid_150perlang.txt     # 150 samples/lang subsample
│   │   ├── commonlid_200perlang.txt     # 200 samples/lang subsample
│   │   ├── commonlid_20pct_by_lang/     # Per-language files (20% split)
│   │   └── commonlid_50pct_by_lang/     # Per-language files (50% split)
│   │
│   └── misc/                        # Small training experiment files
│       ├── train_quick.txt
│       ├── train_quick_test.txt
│       ├── train_1k.txt
│       ├── train_1k_test.txt
│       └── train_test.txt
│
├── training_weights/                # Per-language unigram log-prob dists from soft EM (compressed)
│   └── *.tar.gz                     # One tarball per experiment config
│
└── cache/                           # Cached LLM API responses (two-stage eval)
    └── cache.tar.gz
```
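
The `*perlang.txt` files are fixed-size per-language subsamples of the eval set. A minimal sketch of drawing such a subsample from a fastText-format file (the function name, seed, and the reservoir-sampling approach are illustrative, not taken from the source repo):

```python
import random
from collections import defaultdict

def subsample_per_lang(lines, n, seed=0):
    """Keep at most n fastText-format lines per __label__ tag (reservoir sampling)."""
    rng = random.Random(seed)
    kept = defaultdict(list)   # label -> retained lines
    seen = defaultdict(int)    # label -> lines seen so far
    for line in lines:
        label = line.split(" ", 1)[0]
        seen[label] += 1
        if len(kept[label]) < n:
            kept[label].append(line)
        else:
            # Replace an existing pick with probability n / seen[label]
            j = rng.randrange(seen[label])
            if j < n:
                kept[label][j] = line
    return [line for bucket in kept.values() for line in bucket]
```

Languages with fewer than `n` samples simply keep everything they have, which matches the uneven per-language counts in the eval corpus.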

## Data Formats

- **fastText format** (`__label__<lang_Script> <text>`): all CommonLID files
- **Plain text** (one sentence per line): misc training files
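
A fastText-format line splits into label and text at the first space; a throwaway parsing helper (not part of the repo) makes the convention concrete:

```python
def parse_fasttext_line(line):
    """Split '__label__<lang_Script> <text>' into (lang_Script, text)."""
    label, _, text = line.rstrip("\n").partition(" ")
    if not label.startswith("__label__"):
        raise ValueError(f"not a fastText-format line: {line!r}")
    return label[len("__label__"):], text

parse_fasttext_line("__label__eng_Latn hello world")  # → ("eng_Latn", "hello world")
```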

## Languages

- **CommonLID eval**: 109 language tags (373,230 samples in `commonlid_full.txt`)
- **Alias mapping** (CommonLID macro code → the model's individual language code):
  `ara→arb, aze→azj, bik→bcl, est→ekk, lav→lvs, mlg→plt, msa→zsm, orm→gaz, swa→swh, tgl→fil, uzb→uzn, zho→cmn`
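
When scoring model predictions against CommonLID labels, the macro codes above have to be normalized first. One way to apply the mapping (the dict mirrors the table above; the helper name is illustrative):

```python
# Macro-code → individual-code aliases from the mapping above
ALIASES = {
    "ara": "arb", "aze": "azj", "bik": "bcl", "est": "ekk",
    "lav": "lvs", "mlg": "plt", "msa": "zsm", "orm": "gaz",
    "swa": "swh", "tgl": "fil", "uzb": "uzn", "zho": "cmn",
}

def normalize(tag):
    """Map a CommonLID tag like 'ara_Arab' to the model's code ('arb_Arab')."""
    code, sep, script = tag.partition("_")
    return ALIASES.get(code, code) + sep + script

normalize("ara_Arab")  # → "arb_Arab"
normalize("eng_Latn")  # → "eng_Latn" (unmapped codes pass through)
```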

## Reproducing Training

To retrain a model, download GlotLID-C separately:

```python
from datasets import load_dataset

ds = load_dataset("cis-lmu/glotlid-corpus")
```

Then run `train.py` from the source repo using the desired tokenizer.
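
If the trainer consumes fastText-style lines (as the eval data does), downloaded rows can be serialized accordingly. This is a hypothetical helper, and it assumes each corpus row carries a `text` field (check the actual schema of `cis-lmu/glotlid-corpus` before relying on it):

```python
def rows_to_fasttext(rows, lang_tag):
    """Serialize corpus rows (assumed dicts with a 'text' field) as fastText lines."""
    return [f"__label__{lang_tag} {row['text'].strip()}" for row in rows if row.get("text")]
```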

## Contributors

Divyansh Singhvi, Megha Agarwal. Mentored by Julia Kreutzer.