# qwen2.5-1.5b-bo-50k-256

This model is a vocabulary-expanded version of qwen2.5-1.5b for Tibetan.

## Training Details

| Parameter | Value |
|---|---|
| Base Model | qwen2.5-1.5b |
| Target Language | Tibetan |
| Training Samples | 50,000 |
| Added Tokens | 256 |

## Method

  1. Stage 1: Initialize new token embeddings
  2. Stage 2: Fine-tune the full model with LoRA adapters
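
Stage 1 can be sketched in isolation. A minimal, hypothetical illustration of mean-initializing 256 new token embeddings (the dimensions below are illustrative, and the card does not specify the actual initialization scheme used):

```python
import torch

# Stand-in embedding matrix; a real model exposes it via
# model.get_input_embeddings().weight.
old_vocab, hidden, n_new = 1000, 64, 256
embeddings = torch.randn(old_vocab, hidden)

# Initialize each new row to the mean of the existing embeddings,
# a common heuristic that keeps logits for new tokens in a sane range.
mean_embedding = embeddings.mean(dim=0, keepdim=True)
new_rows = mean_embedding.repeat(n_new, 1)
expanded = torch.cat([embeddings, new_rows], dim=0)

print(expanded.shape)  # torch.Size([1256, 64])
```

In practice this corresponds to calling `tokenizer.add_tokens(...)` followed by `model.resize_token_embeddings(len(tokenizer))`, then overwriting the new rows.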

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Intellexus/qwen2.5-1.5b-bo-50k-256")
tokenizer = AutoTokenizer.from_pretrained("Intellexus/qwen2.5-1.5b-bo-50k-256")

text = "Your text here"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Citations

### Qwen 2.5 (Base Model)

```bibtex
@article{qwen2.5,
    title = "Qwen2.5 Technical Report",
    author = "{Qwen Team, Alibaba}",
    year = "2024",
    url = "https://qwenlm.github.io/blog/qwen2.5/",
}
```

### CC-100 (Training Data)

```bibtex
@inproceedings{conneau-etal-2020-unsupervised,
    title = "Unsupervised Cross-lingual Representation Learning at Scale",
    author = "Conneau, Alexis and Khandelwal, Kartikay and Goyal, Naman and Chaudhary, Vishrav and Wenzek, Guillaume and Guzman, Francisco and Grave, Edouard and Ott, Myle and Zettlemoyer, Luke and Stoyanov, Veselin",
    booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
    year = "2020",
    url = "https://aclanthology.org/2020.acl-main.747",
}

@inproceedings{wenzek-etal-2020-ccnet,
    title = "{CCN}et: Extracting High Quality Monolingual Datasets from Web Crawl Data",
    author = "Wenzek, Guillaume and Lachaux, Marie-Anne and Conneau, Alexis and Chaudhary, Vishrav and Guzman, Francisco and Joulin, Armand and Grave, Edouard",
    booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
    year = "2020",
    url = "https://aclanthology.org/2020.lrec-1.494",
}
```

### NLLB-200 (Tibetan Parallel Data)

```bibtex
@inproceedings{schwenk-etal-2021-ccmatrix,
    title = "{CCM}atrix: Mining Billions of High-Quality Parallel Sentences on the Web",
    author = "Schwenk, Holger and others",
    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics",
    year = "2021",
    url = "https://aclanthology.org/2021.acl-long.507",
}

@article{heffernan2022bitext,
    title = "Bitext Mining Using Distilled Sentence Representations for Low-Resource Languages",
    author = "Heffernan, Kevin and others",
    journal = "arXiv preprint arXiv:2205.12654",
    year = "2022",
}

@article{nllb2022,
    title = "No Language Left Behind: Scaling Human-Centered Machine Translation",
    author = "{NLLB Team}",
    journal = "arXiv preprint arXiv:2207.04672",
    year = "2022",
}
```

### Model Citation

```bibtex
@misc{intellexus-qwen2.5-1.5b-bo-50k-256,
  author = {Intellexus},
  title = {qwen2.5-1.5b-bo-50k-256},
  year = {2025},
  publisher = {HuggingFace},
  url = {https://huggingface.co/Intellexus/qwen2.5-1.5b-bo-50k-256}
}
```