
---
license: apache-2.0
base_model: Qwen/Qwen3-0.6B
library_name: transformers
pipeline_tag: text-generation
tags:
  - tajik
  - central-asian
  - education
  - ameena
  - sovereign-ai
  - qwen3
  - lightweight
  - edge-ai
  - offline
  - text-generation
---

# Ameena-Qwen3-0.6B-Tajik-v8

A lightweight, fully fine-tuned Qwen3-0.6B model trained on 2B+ tokens of native Tajik educational content over 4 epochs. Optimized for offline use on low-resource devices (4 GB of RAM) in rural Central Asia. Powers **Ameena.tj**’s mobile-first AI tutor.
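A quick back-of-envelope check of why a 0.6B-parameter model suits 4 GB devices (the parameter count is approximate; quantization ratios are typical figures, not measurements of this checkpoint):

```python
# Rough memory estimate for the raw weights of a 0.6B-parameter model.
params = 0.6e9          # approximate parameter count
bytes_per_param = 2     # BF16 stores 2 bytes per parameter

weights_gb = params * bytes_per_param / 1e9
print(f"BF16 weights: ~{weights_gb:.1f} GB")   # ~1.2 GB

# A typical 4-bit GGUF quantization (~0.5 bytes/param) shrinks this further:
q4_gb = params * 0.5 / 1e9
print(f"Q4 GGUF (approx.): ~{q4_gb:.1f} GB")   # ~0.3 GB
```

The ~1.2 GB BF16 footprint leaves headroom for the KV cache and the OS on a 4 GB phone, and a quantized build leaves even more.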

## Model Details

  • Developed by: Saidzoda AI Research Lab (IT Park Tajikistan)
  • Base model: Qwen/Qwen3-0.6B
  • Training data: 2B+ tokens from textbooks, exams, literature, and technical manuals
  • Languages: Tajik (primary), with cross-lingual support for Uzbek, Russian, English
  • Epochs: 4
  • License: Apache 2.0

## Use Case

  • Offline AI tutoring on Android phones
  • Homework explanation in low-connectivity regions
  • Lightweight course generation for teachers
  • Edge deployment via llama.cpp or ONNX Runtime

## Training Infrastructure

  • Hardware: NVIDIA H100 (80GB VRAM)
  • Provider: RunPod
  • Precision: BF16 mixed precision
  • Duration: ~60 hours

## Environmental Impact

  • Cloud Provider: RunPod (EU data centers)
  • Region: Europe
  • Carbon Emitted: ~0.6 kg CO₂eq
    (Estimated via ML CO2 Impact Calculator)
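The ML CO2 Impact methodology estimates emissions as energy consumed (power draw × runtime) times grid carbon intensity. The sketch below uses illustrative inputs only (an assumed ~0.7 kW H100 board power and an assumed low-carbon grid intensity of 0.014 kg CO₂eq/kWh, which would be consistent with the reported ~0.6 kg total); these are not official measurements:

```python
# Sketch of the ML CO2 Impact estimate: energy (kWh) x grid intensity.
# All three inputs are illustrative assumptions, not measured values.
gpu_power_kw = 0.7            # assumed H100 board power under load
hours = 60                    # reported training duration
intensity_kg_per_kwh = 0.014  # assumed low-carbon EU grid intensity

energy_kwh = gpu_power_kw * hours
co2_kg = energy_kwh * intensity_kg_per_kwh
print(f"{energy_kwh:.0f} kWh -> ~{co2_kg:.2f} kg CO2eq")  # 42 kWh -> ~0.59 kg
```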

## How to Use

### With Transformers (online)

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "SaidzodaEng/Ameena_Qwen3-0.6B_e8"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype="auto",
)
```
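Qwen-family chat models expect a ChatML-style prompt; in practice `tokenizer.apply_chat_template` builds it for you, but the sketch below shows the underlying format for a single-turn prompt (the system message and Tajik question are illustrative):

```python
# Minimal sketch of the ChatML-style prompt format used by Qwen-family
# chat templates. In practice, call tokenizer.apply_chat_template instead.
def build_prompt(user_message: str, system: str = "You are a helpful tutor.") -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# "Explain the Pythagorean theorem." (illustrative Tajik prompt)
prompt = build_prompt("Теоремаи Пифагорро фаҳмонед.")
print(prompt)
```

With the model loaded as above, pass `tokenizer(prompt, return_tensors="pt")` into `model.generate(...)` to produce a completion.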
### Offline Use (GGUF version coming soon)

This model is being quantized to GGUF format for CPU-only inference. Follow our organization for updates.

## Out-of-Scope Use

  • Not for medical, legal, or financial advice
  • Not intended for high-stakes decision-making
## Citation

APA:
Saidzoda AI Research Lab. (2025). *Ameena-Qwen3-0.6B-Tajik-v8*. Hugging Face. https://huggingface.co/SaidzodaEng/Ameena_Qwen3-0.6B_e8

BibTeX:

```bibtex
@misc{saidzoda_ameena_qwen3_06b_2025,
  author = {Saidzoda AI Research Lab},
  title = {Ameena-Qwen3-0.6B-Tajik-v8},
  year = {2025},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/SaidzodaEng/Ameena_Qwen3-0.6B_e8}}
}
```