# qwen2.5-1.5b-sa-10k-0

This model is a vocabulary-expanded version of Qwen2.5-1.5B for Sanskrit.

## Training Details

| Parameter | Value |
|---|---|
| Base Model | Qwen2.5-1.5B |
| Target Language | Sanskrit |
| Training Samples | 10,000 |
| Added Tokens | 0 |
| Training Data | CC-100 (Sanskrit) |

## Method

  1. Stage 1: Initialize new token embeddings
  2. Stage 2: Full model fine-tuning using LoRA
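
The card does not say how Stage 1 initializes the new token embeddings. A common heuristic (an assumption here, not this model's documented method) is to grow the embedding matrix and fill the new rows with the mean of the existing rows, so new tokens start near the embedding centroid. A minimal torch-only sketch:

```python
import torch

# Toy dimensions for illustration; the real model uses the Qwen2.5-1.5B
# vocabulary and hidden size.
vocab_size, added, dim = 8, 4, 16
emb = torch.nn.Embedding(vocab_size, dim)

with torch.no_grad():
    # Mean-initialize: new rows start at the centroid of the existing
    # embeddings, keeping new tokens in-distribution for the base model.
    mean_vec = emb.weight.mean(dim=0, keepdim=True)
    new_weight = torch.cat([emb.weight, mean_vec.expand(added, dim)], dim=0)

new_emb = torch.nn.Embedding.from_pretrained(new_weight, freeze=False)
print(new_emb.num_embeddings)  # 12 rows: 8 original + 4 new tokens
```

With `transformers`, the equivalent resize step is `model.resize_token_embeddings(len(tokenizer))` after adding tokens to the tokenizer.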

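For Stage 2, the card names LoRA but not its configuration (rank, target modules), so the following is only an illustrative torch-only sketch of the core idea: a frozen weight `W` receives a trainable low-rank update `B @ A`, which is a no-op at initialization because `B` starts at zero.

```python
import torch

torch.manual_seed(0)
dim, rank = 16, 4                     # rank is illustrative, not from the card
W = torch.randn(dim, dim)             # frozen pretrained weight (stays fixed)
A = torch.randn(rank, dim) * 0.01     # trainable down-projection
B = torch.zeros(dim, rank)            # trainable up-projection, zero-initialized

x = torch.randn(2, dim)
base = x @ W.T                        # base model forward
adapted = x @ (W + B @ A).T           # LoRA forward: W + B A, a rank-4 update

# Zero-initialized B makes the adapter a no-op before training begins.
print(torch.allclose(base, adapted))  # True
```

Only `A` and `B` (2 × rank × dim parameters per adapted matrix) are updated during fine-tuning, which is what makes LoRA cheap relative to full fine-tuning.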
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Intellexus/qwen2.5-1.5b-sa-10k-0")
tokenizer = AutoTokenizer.from_pretrained("Intellexus/qwen2.5-1.5b-sa-10k-0")

text = "Your text here"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Citations

### Qwen 2.5 (Base Model)

```bibtex
@article{qwen2.5,
    title  = "Qwen2.5 Technical Report",
    author = "{Qwen Team, Alibaba}",
    year   = "2024",
    url    = "https://qwenlm.github.io/blog/qwen2.5/",
}
```

### CC-100 (Training Data)

```bibtex
@inproceedings{conneau-etal-2020-unsupervised,
    title     = "Unsupervised Cross-lingual Representation Learning at Scale",
    author    = "Conneau, Alexis and Khandelwal, Kartikay and Goyal, Naman and Chaudhary, Vishrav and Wenzek, Guillaume and Guzman, Francisco and Grave, Edouard and Ott, Myle and Zettlemoyer, Luke and Stoyanov, Veselin",
    booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
    year      = "2020",
    url       = "https://aclanthology.org/2020.acl-main.747",
}
```

### Saamayik (Sanskrit Parallel Data)

```bibtex
@inproceedings{maheshwari-etal-2024-samayik,
    title     = "Sāmayik: A Benchmark and Dataset for {E}nglish-{S}anskrit Translation",
    author    = "Maheshwari, Ayush and Gupta, Ashim and Krishna, Amrith and Singh, Atul Kumar and Ramakrishnan, Ganesh and Kumar, G. Anil and Singla, Jitin",
    booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
    year      = "2024",
    url       = "https://aclanthology.org/2024.lrec-main.1245",
}
```