Qwen-TS-500M

A 500M parameter language model specialised in 3GPP and ETSI telecommunications standards, trained via full fine-tuning on TeleSpec-Data.

Part of the tele-SLMs series: small language models adapted exclusively to telecommunications standards documents, with zero arXiv or web content in the training corpus.

Instruction-tuned version coming soon: Qwen-TS-500M-it


Model Details

  • Base model: Qwen/Qwen2.5-0.5B
  • Parameters: 494M
  • Training: full fine-tuning on TeleSpec-Data
  • Pretraining data: TeleSpec-Data (1.87B tokens)
  • Context length: 4096 tokens
  • Hardware: 2× NVIDIA RTX 6000 Ada Generation (48 GB each) + DeepSpeed ZeRO-2

Training

Full fine-tuning of all model weights on 409,117 packed 4096-token blocks (1.67B tokens) drawn from 38,302 standards documents: 15,054 3GPP documents (Rel-8 to Rel-19) and 23,248 ETSI documents spanning 15 working groups (2000–2024). Zero arXiv or web content; 100% standards text.
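
The exact preprocessing pipeline is not released; below is a minimal sketch of the concatenate-and-chunk packing described above, assuming an EOS token is inserted between documents and the trailing remainder is dropped (pack_documents is a hypothetical helper):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B")
BLOCK_SIZE = 4096

def pack_documents(texts):
    # Concatenate tokenised documents, separated by EOS, then split
    # into fixed-length 4096-token training blocks.
    ids = []
    for text in texts:
        ids.extend(tokenizer(text)["input_ids"])
        ids.append(tokenizer.eos_token_id)  # document boundary
    n_blocks = len(ids) // BLOCK_SIZE  # drop the partial final block
    return [ids[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE] for i in range(n_blocks)]

Key hyperparameters: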

  • Epochs: 2
  • Effective batch size: 128
  • Learning rate: 5e-5 (cosine schedule with warmup)
  • Context length: 4096 tokens
  • DeepSpeed ZeRO-2 for memory efficiency
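
One way to reproduce these settings with the Hugging Face Trainer; note that the per-device batch / gradient-accumulation split and the warmup ratio are assumptions, since the card only states the effective batch size of 128:

from transformers import TrainingArguments

# Effective batch 128 = 2 GPUs x 8 per device x 8 accumulation steps (assumed split)
args = TrainingArguments(
    output_dir="qwen-ts-500m",
    num_train_epochs=2,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=8,
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,          # warmup length not stated; assumed
    bf16=True,
    deepspeed={                 # inline DeepSpeed ZeRO-2 config
        "zero_optimization": {"stage": 2},
        "bf16": {"enabled": True},
        "train_micro_batch_size_per_gpu": "auto",
        "gradient_accumulation_steps": "auto",
    },
)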

Usage

This is a base model: it continues text rather than following instructions. An instruction-tuned version will be released shortly.

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "nareshmodina/Qwen-TS-500M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Load weights in bfloat16 (use torch_dtype= on older transformers releases)
model = AutoModelForCausalLM.from_pretrained(
    model_id, dtype=torch.bfloat16, device_map="auto"
)

# Base model: give it the opening of a sentence and let it continue.
prompt = "The RRC Connection Establishment procedure in LTE is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=False)  # greedy decoding
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
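
A quick way to sanity-check domain fit is perplexity on an in-domain sentence, reusing the model and tokenizer loaded above (the snippet text is an arbitrary illustrative example; lower perplexity indicates better fit):

text = "The UE shall initiate the RRC connection establishment procedure upon request from upper layers."
enc = tokenizer(text, return_tensors="pt").to(model.device)
with torch.no_grad():
    # Passing labels = input_ids gives the causal LM loss; exp(loss) is perplexity.
    loss = model(**enc, labels=enc["input_ids"]).loss
print(f"perplexity: {torch.exp(loss).item():.2f}")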

Limitations

  • Base model only: does not follow instructions
  • Standards only: strong 3GPP/ETSI knowledge, limited general telecom knowledge
  • Not for production: intended for research purposes only

Citation

@misc{modina2025teleslms,
  author    = {Naresh Modina},
  title     = {tele-SLMs: Small Language Models for Telecommunications Standards},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/nareshmodina/Qwen-TS-500M}
}