# Model Card for Romanized Bengali Transliterator

## Model Summary
This model is a Marian-based Seq2Seq transliterator trained on the Romanized Bengali dataset (975k pairs).
It maps Bengali written in the Roman alphabet (Banglish) into native Bengali script.
- Architecture: MarianMT (Seq2Seq Transformer)
- Parameters: ~60M
- Training Data: romanized_bengali dataset (rule-based transliteration from Bengali Wikipedia)
- Languages: Bengali (Romanized ↔ Bangla script)
## Intended Use
- Transliteration of Banglish into Bengali for:
  - Social media analytics
  - Search and information retrieval
  - Chatbots and assistants
  - Hate speech detection tasks
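Banglish scraped from social media is typically noisy (inconsistent casing, elongated characters, irregular spacing). A minimal normalization sketch that one might run before feeding text to the transliterator — the function name and cleanup rules below are illustrative assumptions, not part of this model or its training pipeline:

```python
import re
import unicodedata

def normalize_banglish(text: str) -> str:
    """Light normalization for noisy Romanized Bengali input
    (a hypothetical preprocessing step, not part of the model)."""
    text = unicodedata.normalize("NFKC", text)   # fold compatibility characters
    text = text.lower()                          # Banglish has no meaningful casing
    text = re.sub(r"(.)\1{2,}", r"\1\1", text)   # "achoooo" -> "achoo"
    text = re.sub(r"\s+", " ", text).strip()     # collapse whitespace
    return text

print(normalize_banglish("Tumi   KEMON    achoooo??"))
# Output: "tumi kemon achoo??"
```

Capping character elongation at two repeats (rather than one) is a deliberate choice here, since doubled letters are legitimate in many Banglish spellings.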
## Example Usage

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "sk-community/romanized_bengali_transliterator"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Transliterate a Romanized Bengali (Banglish) sentence into Bengali script
input_text = "tumi kemon acho"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# Output: "তুমি কেমন আছো"
```
## Performance

- BLEU (Bengali): 77.82
- Character error rate (CER): 0.16
- Word error rate (WER): 0.25
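For reference, CER is the character-level edit distance between a hypothesis and its reference, normalized by reference length (WER is the same computation over words). A minimal sketch of how it could be computed — the helper names and example strings are illustrative, not taken from this model's evaluation code:

```python
def edit_distance(ref: str, hyp: str) -> int:
    """Levenshtein distance between two strings (dynamic programming)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1]

def cer(ref: str, hyp: str) -> float:
    """Character error rate: edit distance normalized by reference length."""
    return edit_distance(ref, hyp) / len(ref)

# One substituted character (ছ -> চ) out of 13 reference characters
print(round(cer("তুমি কেমন আছো", "তুমি কেমন আচো"), 2))
# Output: 0.08
```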
## Citation

```bibtex
@article{gharami2025indotranslit,
  title={Modeling Romanized Hindi and Bengali: Dataset Creation and Multilingual LLM Integration},
  author={Kanchon Gharami and Quazi Sarwar Muhtaseem and Deepti Gupta and Lavanya Elluri and Shafika Showkat Moni},
  year={2025}
}
```