Llama 70B English-to-Tatar Translation (Synthetic Data)

  • Developed by: KickItLikeShika
  • License: apache-2.0
  • Finetuned from model: unsloth/llama-3.3-70b-instruct-unsloth-bnb-4bit

The model was trained as part of the Ninth Workshop on Technologies for Machine Translation of Low Resource Languages (LoResMT 2026, co-located with EACL 2026): https://www.loresmt.org/

It was fine-tuned on a new synthetic English-Tatar dataset introduced as part of my research for the workshop.

Training data: https://huggingface.co/datasets/KickItLikeShika/english-tatar-translation
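A minimal usage sketch, not an official snippet from this card: it loads the checkpoint with Hugging Face `transformers` and asks it to translate one sentence. The helper name `build_translation_prompt` and the exact instruction wording are assumptions; adjust the prompt to match the format the model was fine-tuned with.

```python
def build_translation_prompt(english_text: str) -> list[dict]:
    """Wrap an English sentence in a chat-style translation request.

    The instruction phrasing here is a guess; use whatever prompt
    template the fine-tuning data actually used.
    """
    return [
        {
            "role": "user",
            "content": f"Translate the following English text to Tatar: {english_text}",
        }
    ]


if __name__ == "__main__":
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "KickItLikeShika/llama-3.3-70B-Instruct-en-tt"

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # 4-bit bnb weights; device_map="auto" spreads them across available GPUs.
    model = AutoModelForCausalLM.from_pretrained(
        model_id, device_map="auto", torch_dtype=torch.bfloat16
    )

    messages = build_translation_prompt("Hello, how are you?")
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=128)
    # Decode only the newly generated tokens (the Tatar translation).
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Note that loading a 70B checkpoint, even in 4-bit, requires substantial GPU memory; for constrained setups, Unsloth's `FastLanguageModel.from_pretrained` is an alternative loader for this quantization format.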

Citation

@inproceedings{khamis-2026-navigating,
    title = "Navigating Data Scarcity in Low-Resource {E}nglish-{T}atar Translation using {LLM} Fine-Tuning",
    author = "Khamis, Ahmed Khaled",
    editor = "Ojha, Atul Kr.  and
      Liu, Chao-hong  and
      Vylomova, Ekaterina  and
      Pirinen, Flammie  and
      Washington, Jonathan  and
      Oco, Nathaniel  and
      Zhao, Xiaobing",
    booktitle = "Proceedings for the Ninth Workshop on Technologies for Machine Translation of Low Resource Languages ({L}o{R}es{MT} 2026)",
    month = mar,
    year = "2026",
    address = "Rabat, Morocco",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2026.loresmt-1.16/",
    pages = "198--202",
    ISBN = "979-8-89176-366-1"
}
  • Format: Safetensors
  • Model size: 71B params
  • Tensor type: BF16