Request access to KyRuBench-20K
Access is granted automatically upon agreeing to the terms.
By requesting access to KyRuBench-20K, you agree to the following terms:
- You will use this dataset for non-commercial research and educational purposes only.
- You will cite the dataset in any publications, reports, or systems that use it.
- You will not redistribute the dataset or any derivative of it without prior written permission from the authors.
- You will not create derivative datasets based on this benchmark without permission.
- You will use this dataset responsibly and in accordance with applicable laws.
- You accept that the authors provide this dataset as-is and bear no liability for downstream use.
- You agree that your access information (name, email, affiliation) may be stored by the dataset maintainers for record-keeping purposes.
KyRuBench-20K: A Domain-Balanced Benchmark for Kyrgyz→Russian Machine Translation
Overview
KyRuBench-20K is a curated evaluation benchmark for Kyrgyz (кыргызча) to Russian (русский) machine translation. It contains 20,000 parallel sentence pairs drawn from three distinct domains with equal representation, enabling fair cross-domain evaluation of translation systems.
This benchmark addresses the lack of standardized, balanced evaluation resources for Kyrgyz — a low-resource Turkic language with ~5 million speakers.
Dataset Summary
| Property | Value |
|---|---|
| Language pair | Kyrgyz (ky) → Russian (ru) |
| Total examples | 20,000 |
| Domains | 3 (equally balanced) |
| Script | Cyrillic (both languages) |
| Split | Test only (evaluation benchmark) |
Domain Distribution
| Domain | Examples | Description |
|---|---|---|
| General Language | 6,667 | Parallel text from news, government, and general web sources |
| Mighty Kyrgyz (Yudahin) | 6,666 | Yudahin dictionary-based parallel corpus |
| Literature | 6,667 | Literary text (novels, short stories, poetry) |
Quality Filtering
The benchmark was constructed through a rigorous multi-stage filtering pipeline:
- Deduplication — removed duplicate source and target sentences
- Length filtering — retained sentences with 20–300 characters on both sides
- Length ratio — source/target character ratio between 0.5 and 2.0
- Script validation — Cyrillic ratio ≥ 0.6 on both sides
- Domain balancing — equal random sampling from each domain (seed=42)
After filtering, 20,000 sentences were sampled with equal domain representation.
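The filtering stages above can be sketched as follows. This is an illustrative reconstruction under the stated thresholds, not the authors' actual pipeline code; all function names are hypothetical.

```python
import random


def cyrillic_ratio(text: str) -> float:
    """Fraction of alphabetic characters that are Cyrillic (U+0400–U+04FF)."""
    letters = [c for c in text if c.isalpha()]
    if not letters:
        return 0.0
    return sum("\u0400" <= c <= "\u04FF" for c in letters) / len(letters)


def keep_pair(src: str, tgt: str) -> bool:
    """Apply the length, length-ratio, and script filters to one sentence pair."""
    if not (20 <= len(src) <= 300 and 20 <= len(tgt) <= 300):
        return False
    if not (0.5 <= len(src) / len(tgt) <= 2.0):
        return False
    return cyrillic_ratio(src) >= 0.6 and cyrillic_ratio(tgt) >= 0.6


def balance_domains(pairs_by_domain: dict, per_domain: int, seed: int = 42) -> list:
    """Equal random sampling from each domain (deduplication assumed upstream)."""
    rng = random.Random(seed)
    sample = []
    for _, pairs in sorted(pairs_by_domain.items()):
        sample.extend(rng.sample(pairs, per_domain))
    return sample
```

Sampling with a fixed seed, as the card specifies, keeps the benchmark composition reproducible.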
Baseline Results
We evaluate 7 systems across three categories: dedicated MT, frontier LLMs, and web translators. All LLM-based systems use identical translation prompts for fair comparison.
| Rank | System | Type | BLEU | chrF++ | General | Yudahin | Literature |
|---|---|---|---|---|---|---|---|
| 1 | AirUn Translator + LLM | Dedicated MT | 53.32 | 71.81 | 45.92 | 64.61 | 52.27 |
| 2 | AirUn Translator | Dedicated MT | 52.37 | 71.29 | 43.18 | 64.67 | 52.95 |
| 3 | Claude Opus 4.6 | Frontier LLM | 45.49 | 67.83 | 48.99 | 49.64 | 36.66 |
| 4 | GPT-5.4 | Frontier LLM | 43.53 | 66.59 | 47.58 | 48.60 | 33.68 |
| 5 | Aitil Translate | Web Translator | 42.15 | 64.35 | 46.91 | 47.27 | 31.33 |
| 6 | Claude Sonnet 4.6 | Frontier LLM | 38.85 | 63.75 | 41.56 | 44.97 | 30.08 |
| 7 | Grok 4.20 | Frontier LLM | 37.39 | 62.53 | 40.63 | 43.39 | 27.92 |
Key Findings
- Dedicated MT systems significantly outperform frontier LLMs on Kyrgyz→Russian, with a gap of 8–16 BLEU points
- Literature domain exposes the largest quality gap between dedicated MT (BLEU ~52) and frontier LLMs (BLEU ~28–37)
- General Language domain is the most competitive: Claude Opus 4.6 (48.99) even edges out the dedicated MT systems (43.18–45.92) there
- Claude Opus 4.6 is the strongest LLM for Kyrgyz, followed by GPT-5.4
- Kyrgyz remains a challenging low-resource language even for state-of-the-art LLMs
Usage
```python
from datasets import load_dataset

dataset = load_dataset("BDigit/KyRuBench-20K")

# Access the test split
for example in dataset["test"]:
    print(example["source"])  # Kyrgyz sentence
    print(example["target"])  # Russian reference translation
```
Evaluation
We recommend evaluating with sacreBLEU for reproducibility:
```python
import sacrebleu

# After generating hypotheses (list of str) against references (list of str)
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
chrf = sacrebleu.corpus_chrf(hypotheses, [references], word_order=2)  # word_order=2 gives chrF++
print(f"BLEU: {bleu.score:.2f}")
print(f"chrF++: {chrf.score:.2f}")
```
Important: Always report per-domain scores alongside overall metrics. The three domains test fundamentally different translation challenges.
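Per-domain scoring amounts to grouping hypothesis/reference pairs by domain and scoring each group separately. A minimal sketch, assuming each example carries a `domain` field (the Data Fields section lists only `source` and `target`, so check the released schema for how domain membership is actually exposed):

```python
from collections import defaultdict


def per_domain_scores(examples, hypotheses, score_fn):
    """Group hypothesis/reference pairs by domain and score each group.

    `score_fn(hypotheses, references)` can be e.g.
    lambda h, r: sacrebleu.corpus_bleu(h, [r]).score.
    The `domain` key is hypothetical, not confirmed by the card.
    """
    groups = defaultdict(lambda: ([], []))
    for ex, hyp in zip(examples, hypotheses):
        hyps, refs = groups[ex["domain"]]
        hyps.append(hyp)
        refs.append(ex["target"])
    return {domain: score_fn(h, r) for domain, (h, r) in groups.items()}
```

Reporting these three numbers alongside the overall score makes domain-specific regressions visible that a single corpus-level metric would hide.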
Data Fields
- source (str): Sentence in Kyrgyz (Cyrillic script)
- target (str): Reference translation in Russian (Cyrillic script)
Intended Use
- Benchmarking machine translation systems for Kyrgyz→Russian
- Evaluating multilingual LLMs on low-resource Turkic language translation
- Comparing dedicated MT systems against general-purpose language models
- Studying domain effects on translation quality for low-resource languages
Limitations
- Single reference: Each source has one reference translation. Multi-reference evaluation would give more robust scores.
- Direction: Currently Kyrgyz→Russian only. Russian→Kyrgyz evaluation requires separate references.
- Literature domain: Literary text availability is limited, constraining future expansion of this domain.
- Reference quality: References may contain occasional noise from the original parallel corpus.
Citation
```bibtex
@misc{kyrubench2026,
  title={KyRuBench-20K: A Domain-Balanced Benchmark for Kyrgyz-Russian Machine Translation},
  author={Alibekov, Nurtilek and Kumarbai uulu, Bektemir and Uvalieva, Zarina and Tashbaltaev, Tynchtykbek and Metinov, Adilet},
  year={2026},
  organization={Bdigital LLC},
  howpublished={\url{https://huggingface.co/datasets/BDigit/KyRuBench-20K}},
}
```
License
CC-BY-NC-ND-4.0 (Attribution, Non-Commercial, No Derivatives)
For commercial licensing inquiries, contact an@bdigital.kg.