## How to Get Started with the Model

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

model_id = "snjev310/aya-101-english-angika"
base_model_id = "CohereForAI/aya-101"

# Load tokenizer and base model
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(
    base_model_id, torch_dtype=torch.float16, device_map="auto"
)

# Load the Angika adapter
model = PeftModel.from_pretrained(model, model_id)

# Inference example: move inputs to wherever device_map placed the model
text = "translate English to Angika: How are you today?"
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
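For translating many sentences, it can help to build the task-prefixed prompts and batch them before calling the tokenizer. The helpers below are a minimal sketch, not part of the model card itself; they assume the `translate English to Angika: ` prefix shown in the example above.

```python
def make_prompts(sentences, src="English", tgt="Angika"):
    """Wrap raw sentences in the task prefix used in the example above."""
    return [f"translate {src} to {tgt}: {s}" for s in sentences]

def batched(items, batch_size):
    """Yield successive fixed-size batches for tokenizer/model calls."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]
```

Each batch can then be passed to `tokenizer(batch, return_tensors="pt", padding=True)` and on to `model.generate` as in the single-sentence example.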

## Citation

```bibtex
@inproceedings{kumar-etal-2026-srcmix,
    title = "{S}rc{M}ix: Mixing of Related Source Languages Benefits Extremely Low-resource Machine Translation",
    author = "Kumar, Sanjeev  and
      Jyothi, Preethi  and
      Bhattacharyya, Pushpak",
    editor = "Demberg, Vera  and
      Inui, Kentaro  and
      Marquez, Llu{\'i}s",
    booktitle = "Findings of the {A}ssociation for {C}omputational {L}inguistics: {EACL} 2026",
    month = mar,
    year = "2026",
    address = "Rabat, Morocco",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2026.findings-eacl.332/",
    doi = "10.18653/v1/2026.findings-eacl.332",
    pages = "6306--6323",
    ISBN = "979-8-89176-386-9",
}
```

## Framework versions

- PEFT 0.9.0
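Only the PEFT version is stated above; the other packages and their versions below are assumptions for running the getting-started code (`accelerate` for `device_map="auto"`, `sentencepiece` for the aya-101 tokenizer):

```shell
pip install "peft==0.9.0" transformers accelerate sentencepiece torch
```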