Qwen-TS-500M-it

A 500M parameter instruction-tuned language model specialised in 3GPP and ETSI telecommunications standards. Trained via full fine-tuning on TeleSpec-Data followed by LoRA instruction fine-tuning on Alpaca.

Part of the tele-SLMs series: small language models adapted exclusively to telecommunications standards documents, with zero arXiv or web content in the training corpus.

Looking for the base pretrained version? See nareshmodina/Qwen-TS-500M


Model Details

  • Base model: Qwen/Qwen2.5-0.5B
  • Parameters: 494M
  • Training: full fine-tuning pretrain → LoRA SFT (Alpaca)
  • Pretraining data: TeleSpec-Data (1.87B tokens)
  • SFT data: Alpaca 52k (full dataset)
  • Context length: 4096 tokens
  • Hardware: 2× NVIDIA RTX 6000 Ada Generation (48 GB) with DeepSpeed ZeRO-2

Training

Stage 1: Full fine-tuning on TeleSpec-Data

All model weights were updated on 409,117 packed 4096-token blocks (1.67B tokens) drawn from 38,302 standards documents: 15,054 3GPP documents (Rel-8 to Rel-19) and 23,248 ETSI documents spanning 15 working groups (2000–2024). The corpus contains zero arXiv or web content: 100% standards text.

  • Epochs: 2; effective batch size: 128; LR: 5e-5 (cosine)
  • DeepSpeed ZeRO-2 for memory efficiency on 2×48 GB GPUs
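The packing step described above can be sketched as follows. This is a minimal illustration of the standard concatenate-and-chunk approach, not the exact training pipeline; the function name and EOS handling are assumptions.

```python
# Minimal sketch of packing tokenized documents into fixed-length blocks
# for full fine-tuning. Documents are concatenated (separated by an EOS
# token) and the stream is sliced into block_size chunks; the trailing
# remainder is dropped. Names here are illustrative, not from the card.

def pack_into_blocks(token_streams, block_size=4096, eos_id=0):
    buffer = []
    for tokens in token_streams:
        buffer.extend(tokens)
        buffer.append(eos_id)          # document separator
        while len(buffer) >= block_size:
            yield buffer[:block_size]  # emit one full block
            buffer = buffer[block_size:]

# Toy example with a tiny block size:
docs = [[1, 2, 3], [4, 5], [6, 7, 8, 9]]
blocks = list(pack_into_blocks(docs, block_size=4, eos_id=0))
# yields three blocks of exactly 4 tokens each
```

At the real scale, 409,117 blocks × 4096 tokens gives the 1.67B training tokens reported above.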

Stage 2: LoRA instruction fine-tuning

LoRA (r=16, α=32) on the full Alpaca 52k dataset. Base weights are frozen to preserve domain knowledge.

  • Epochs: 1; LR: 1e-5

Evaluation

Evaluated on Tele-Eval using the metrics defined in Maatouk et al. (2024), on standards-derived questions only (standard_* IDs; 10,000 examples; seed 42).

| Model | Ans-PPL ↓ | SemScore ↑ |
|---|---|---|
| Qwen2.5-0.5B-alpaca (base + Alpaca SFT) | 10.57 | 0.6123 |
| Qwen-TS-500M-it (ours) | 5.39 | 0.6901 |

49.0% Ans-PPL reduction vs. the base+SFT baseline, the strongest improvement in the series.
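Answer perplexity (Ans-PPL) can be sketched as perplexity computed over the answer tokens only, conditioned on the question prompt. This follows the general recipe in Maatouk et al. (2024); the exact evaluation code may differ in details such as special-token handling.

```python
import math
import torch

def answer_ppl(model, tokenizer, question, answer):
    """Perplexity over the answer tokens, conditioned on the question."""
    prompt_ids = tokenizer(question, return_tensors="pt").input_ids
    answer_ids = tokenizer(
        answer, add_special_tokens=False, return_tensors="pt"
    ).input_ids
    input_ids = torch.cat([prompt_ids, answer_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : prompt_ids.shape[1]] = -100  # mask the prompt: loss on answer only
    with torch.no_grad():
        loss = model(
            input_ids.to(model.device), labels=labels.to(model.device)
        ).loss  # mean cross-entropy over unmasked (answer) tokens
    return math.exp(loss.item())
```

A lower Ans-PPL means the model assigns higher probability to the reference answer given the question.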

Comparison across model sizes:

| Model | Ans-PPL ↓ | SemScore ↑ |
|---|---|---|
| SmolLM-TS-135M-it | 9.19 | 0.6504 |
| SmolLM-TS-360M-it | 8.62 | 0.6572 |
| Qwen-TS-500M-it | 5.39 | 0.6901 |
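The SemScore column can be sketched as the mean cosine similarity between sentence embeddings of generated and reference answers. The embedding backend is abstracted here as an `embed` callable because the specific embedding model is an assumption (the SemScore metric is typically computed with a sentence-transformers model).

```python
import torch
import torch.nn.functional as F

def semscore(embed, predictions, references):
    """Mean cosine similarity between prediction and reference embeddings.

    `embed` maps a list of strings to an (n, d) tensor; in practice this
    would be a sentence-embedding model's encode function (assumption).
    """
    pred_emb = F.normalize(embed(predictions), dim=-1)
    ref_emb = F.normalize(embed(references), dim=-1)
    return (pred_emb * ref_emb).sum(dim=-1).mean().item()
```

Identical prediction/reference pairs score 1.0; unrelated pairs score near 0, which matches the 0-to-1 scale of the tables above.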

Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "nareshmodina/Qwen-TS-500M-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # checkpoint is stored in BF16
    device_map="auto",
)

# Alpaca-style Question/Answer prompt
prompt = (
    "The following is a question about telecommunications and networking.\n"
    "Question: What is the purpose of the RRC Connection Establishment procedure in LTE?\n"
    "Answer:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=150,
    do_sample=False,          # greedy decoding
    repetition_penalty=1.3,
)
# Decode only the newly generated tokens, not the echoed prompt
answer = tokenizer.decode(
    outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True
)
print(answer)
```

Note: Use the Alpaca-style Question: ... Answer: prompt format for best results.


Limitations

  • Alpaca SFT: trained for Q&A-style responses, not multi-turn conversation
  • Standards only: strong 3GPP/ETSI knowledge, limited general telecom knowledge
  • Not for production: intended for research purposes only

Citation

@misc{modina2025teleslms,
  author    = {Naresh Modina},
  title     = {tele-SLMs: Small Language Models for Telecommunications Standards},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/nareshmodina/Qwen-TS-500M-it}
}

@misc{maatouk2024telellms,
  title         = {Tele-LLMs: A Series of Specialized Large Language Models for Telecommunications},
  author        = {Ali Maatouk and Kenny Chirino Ampudia and Rex Ying and Leandros Tassiulas},
  year          = {2024},
  eprint        = {2409.05314},
  archivePrefix = {arXiv},
  primaryClass  = {cs.IT}
}