# qwen2.5-1.5b-bo-100k-512

This model is a vocabulary-expanded version of Qwen2.5-1.5B adapted for Tibetan: 512 Tibetan tokens were added to the vocabulary, and the model was further trained on 100,000 samples from CC-100.

## Training Details

| Parameter | Value |
|---|---|
| Base Model | Qwen2.5-1.5B |
| Target Language | Tibetan |
| Training Samples | 100,000 |
| Added Tokens | 512 |
| Training Data | CC-100 (Tibetan) |

## Method

1. **Stage 1:** Initialize embeddings for the newly added tokens
2. **Stage 2:** Fine-tune the model using LoRA
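The card does not say how the new embeddings are initialized in Stage 1. A common heuristic is to set each new token's embedding to the mean of the existing embedding rows; the dependency-free sketch below illustrates that idea with toy sizes (plain Python lists standing in for the embedding matrix), and should be read as an assumption, not this model's documented recipe.

```python
# Toy stand-in for Stage 1: grow the embedding matrix and initialize the new
# rows with the mean of the existing rows ("mean init" heuristic).
# Sizes are illustrative; the real model has a ~150k-row, 1536-dim matrix
# and 512 added tokens.
OLD_VOCAB, NEW_TOKENS, DIM = 4, 2, 3

# Existing embedding matrix: one row per token.
embeddings = [[float(r + d) for d in range(DIM)] for r in range(OLD_VOCAB)]

# Mean over the existing rows, computed per dimension.
mean_vec = [sum(row[d] for row in embeddings) / OLD_VOCAB for d in range(DIM)]

# Append one mean-initialized row for each added token.
for _ in range(NEW_TOKENS):
    embeddings.append(list(mean_vec))
```

With `transformers`, the corresponding step would be `tokenizer.add_tokens(...)` followed by `model.resize_token_embeddings(len(tokenizer))` and then overwriting the newly created rows.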

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Intellexus/qwen2.5-1.5b-bo-100k-512")
tokenizer = AutoTokenizer.from_pretrained("Intellexus/qwen2.5-1.5b-bo-100k-512")

text = "Your text here"
inputs = tokenizer(text, return_tensors="pt")

# Generate up to 100 new tokens and decode, dropping special tokens.
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
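The point of the 512 added tokens is to cover Tibetan spans that the base vocabulary would otherwise split into many small pieces. The sketch below illustrates the effect with a greedy longest-match segmenter over a toy vocabulary; the real tokenizer is BPE, and the multi-character "tokens" here are invented examples, not entries from this model's vocabulary.

```python
def greedy_tokenize(text, vocab):
    """Greedy longest-match segmentation; single characters are the fallback."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest piece first
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:  # character not in vocab: emit it on its own
            tokens.append(text[i])
            i += 1
    return tokens

# Toy vocabularies: character-level base vs. base plus two Tibetan chunks.
base_vocab = set("བཀྲ་ཤིས")
expanded_vocab = base_vocab | {"བཀྲ་", "ཤིས་"}

print(len(greedy_tokenize("བཀྲ་ཤིས་", base_vocab)))      # 8 tokens
print(len(greedy_tokenize("བཀྲ་ཤིས་", expanded_vocab)))  # 2 tokens
```

Fewer tokens per Tibetan string means shorter sequences, which lowers generation cost and leaves more of the context window for actual content.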
Model size: 2B params (BF16, safetensors)