SmolLM-TS-135M

A 135M parameter language model specialised in 3GPP and ETSI telecommunications standards, trained via full fine-tuning on TeleSpec-Data.

Part of the SmolLM-TS series — small language models adapted exclusively to telecommunications standards documents, with zero arXiv or web content in the training corpus.

Looking for the instruction-tuned version? See nareshmodina/SmolLM-TS-135M-it


Model Details

  • Base model: HuggingFaceTB/SmolLM2-135M
  • Parameters: 135M
  • Training: full fine-tuning on TeleSpec-Data
  • Pretraining data: TeleSpec-Data (1.87B tokens)
  • Context length: 4096 tokens
  • Hardware: 3× NVIDIA L40S (48GB)
  • Training steps: 7,054 (2 epochs)
  • Final eval loss: 0.9798
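Assuming the eval loss is the standard per-token cross-entropy, the final value of 0.9798 corresponds to a perplexity of roughly 2.66 on the held-out standards text:

```python
import math

# Perplexity is exp(mean per-token cross-entropy loss).
eval_loss = 0.9798
perplexity = math.exp(eval_loss)
print(f"{perplexity:.2f}")  # 2.66
```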

Training

Full fine-tuning of all model weights on 457,160 packed 4096-token blocks (1.87B tokens) from 38,302 standards documents — 15,054 3GPP (Rel-8 to Rel-19) and 23,248 ETSI documents spanning 15 working groups (2000–2024). Zero arXiv or web content — 100% standards text.
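The packing described above can be sketched roughly as follows (the function name and toy token IDs are illustrative, not from the actual training code): documents are tokenized, concatenated into one stream, and cut into fixed-size blocks, dropping the trailing remainder.

```python
def pack_into_blocks(token_streams, block_size=4096):
    """Concatenate tokenized documents and slice into fixed-size blocks.

    A rough sketch of sequence packing: documents are joined into a
    single token stream and cut into block_size chunks; a trailing
    remainder that does not fill a whole block is dropped.
    """
    stream = [tok for doc in token_streams for tok in doc]
    n_blocks = len(stream) // block_size
    return [stream[i * block_size:(i + 1) * block_size] for i in range(n_blocks)]

# Toy example with tiny "documents" and a small block size.
docs = [[1, 2, 3, 4, 5], [6, 7, 8], [9, 10]]
blocks = pack_into_blocks(docs, block_size=4)
print(blocks)  # [[1, 2, 3, 4], [5, 6, 7, 8]]
```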

  • Epochs: 2 — Loss: 1.326 → 1.040
  • Effective batch size: 128 — LR: 1e-4 (cosine with warmup)
  • Context length: 4096 tokens
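The "cosine with warmup" schedule above can be sketched as below. The warmup length is an assumption for illustration (the card only states that warmup was used); peak LR and total steps come from the figures above.

```python
import math

def lr_at_step(step, total_steps=7054, warmup_steps=200, peak_lr=1e-4):
    """Linear warmup followed by cosine decay to zero.

    warmup_steps=200 is an illustrative assumption; total_steps and
    peak_lr match the training configuration stated in the card.
    """
    if step < warmup_steps:
        # Linear ramp from 0 to peak_lr over the warmup phase.
        return peak_lr * step / warmup_steps
    # Cosine decay from peak_lr down to 0 over the remaining steps.
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return 0.5 * peak_lr * (1 + math.cos(math.pi * progress))

print(lr_at_step(100))   # halfway through warmup: 5e-05
print(lr_at_step(200))   # peak: 1e-04
print(lr_at_step(7054))  # end of training: 0.0
```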

Usage

This is a base model — it continues text rather than following instructions. For instruction following, use SmolLM-TS-135M-it.

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id  = "nareshmodina/SmolLM-TS-135M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model     = AutoModelForCausalLM.from_pretrained(
    model_id, dtype=torch.bfloat16, device_map="auto"
)

prompt  = "The RRC Connection Establishment procedure in LTE is"
inputs  = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Limitations

  • Base model only — does not follow instructions; use SmolLM-TS-135M-it for Q&A
  • Small capacity — 135M parameters limit complex multi-step reasoning
  • Standards only — strong 3GPP/ETSI knowledge, limited general telecom knowledge
  • Not for production — intended for research purposes only

Citation

@misc{modina2025smollmts,
  author    = {Naresh Modina},
  title     = {SmolLM-TS: Small Language Models for Telecommunications Standards},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/nareshmodina/SmolLM-TS-135M}
}