Model Description
This model is a continually pre-trained (CPT) version of URajinda/ShweYon_Qwen2.5-Burmese-1.5B-v1.2.4. It is optimized to further enhance Burmese (Myanmar) language capabilities, especially for spoken and formal text patterns.

Key Features
Base Model: URajinda/ShweYon_Qwen2.5-Burmese-1.5B-v1.2.4 (modified tokenizer)

Language: Myanmar (Burmese)

Training Method: LoRA (Low-Rank Adaptation); see the configuration sketch after this list.

Specialization: Optimized for Burmese vocabulary and token efficiency using the ShweYon base.
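Since the adapter was produced with LoRA, the following is a minimal sketch of how such a setup is typically wired together with the PEFT library. The rank, alpha, dropout, and target modules are illustrative assumptions, not the configuration actually used to train this adapter.

```python
# Hypothetical LoRA continual pre-training setup using the PEFT library.
# All hyperparameters below are assumptions for illustration only.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained(
    "URajinda/ShweYon_Qwen2.5-Burmese-1.5B-v1.2.4"
)

lora_config = LoraConfig(
    r=16,                      # assumed rank
    lora_alpha=32,             # assumed scaling factor
    lora_dropout=0.05,         # assumed dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the LoRA matrices are trainable
```

Training then proceeds on a Burmese pre-training corpus with the standard causal language modeling objective; only the adapter weights are updated.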

Intended Use
Enhanced Burmese text generation (see the loading and generation sketch after this list).

Foundation for Burmese instruction tuning (SFT).

Efficient token processing for Myanmar script.
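For the text generation use case, a minimal sketch of loading this adapter on top of its base model is shown below. The repository IDs come from this card; the prompt and generation settings are illustrative assumptions.

```python
# Minimal sketch: load the ShweYon base model, attach this PEFT adapter,
# and generate Burmese text. Prompt and generation settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "URajinda/ShweYon_Qwen2.5-Burmese-1.5B-v1.2.4"
adapter_id = "URajinda/qwen1.5b-myanmar-cpt-final"

tokenizer = AutoTokenizer.from_pretrained(base_id)  # tokenizer ships with the base model
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "မြန်မာနိုင်ငံသည်"  # "Myanmar is ..."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The adapter can also be merged into the base weights with model.merge_and_unload() if a standalone checkpoint is preferred, for example before SFT.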

Technical Details
Tokenizer: Uses the custom Burmese-optimized tokenizer from the ShweYon base model (see the token-count sketch below).

Format: PEFT Adapter (LoRA).
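Because the tokenizer is tuned for Myanmar script, it is expected to split Burmese text into fewer tokens than the stock Qwen2.5 tokenizer. A rough way to check this, assuming both tokenizers are loadable from the Hub and using an arbitrary example sentence:

```python
# Rough token-count comparison on a Burmese sentence.
# The sentence and the stock Qwen2.5 repo ID are assumptions for illustration.
from transformers import AutoTokenizer

shweyon_tok = AutoTokenizer.from_pretrained("URajinda/ShweYon_Qwen2.5-Burmese-1.5B-v1.2.4")
stock_tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B")

text = "မင်္ဂလာပါ၊ နေကောင်းပါသလား။"  # "Hello, how are you?"

print("ShweYon tokens:", len(shweyon_tok(text)["input_ids"]))
print("Stock Qwen2.5 tokens:", len(stock_tok(text)["input_ids"]))
```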
