# Monolingual Tokenizer - Punjabi (Vocab 128000)

This is a monolingual tokenizer trained on Punjabi text with a vocabulary size of 128000.
## Usage

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("monolingual-tokenizer-native-pan-vocab-128000")
```
## Files

- `pan.model`: SentencePiece model file
- `pan.vocab`: Vocabulary file
- `config.json`: Tokenizer configuration
## Training Details

- Language: Punjabi (pan)
- Vocabulary Size: 128000
- Model Type: SentencePiece Unigram
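A SentencePiece Unigram tokenizer segments each input into the sequence of vocabulary pieces with the highest total log-probability, found with a Viterbi search. The sketch below illustrates that search in pure Python over a tiny hand-made vocabulary; the pieces, probabilities, and the `unigram_segment` helper are illustrative assumptions, not part of this model's actual 128000-piece vocabulary.

```python
import math

# Toy unigram vocabulary: piece -> probability. A real SentencePiece model
# stores ~128000 pieces with log-probabilities learned by EM; these entries
# are made up for illustration. "▁" is SentencePiece's word-boundary marker.
vocab = {
    "▁": 0.05, "h": 0.05, "e": 0.1, "l": 0.1, "o": 0.1,
    "he": 0.05, "llo": 0.05, "hello": 0.2, "▁hello": 0.2,
}

def unigram_segment(text):
    """Viterbi search for the piece sequence with maximal total log-prob."""
    n = len(text)
    # best[i] = (score, backpointer) for the best segmentation of text[:i]
    best = [(-math.inf, 0)] * (n + 1)
    best[0] = (0.0, 0)
    for end in range(1, n + 1):
        for start in range(end):
            piece = text[start:end]
            if piece in vocab:
                score = best[start][0] + math.log(vocab[piece])
                if score > best[end][0]:
                    best[end] = (score, start)
    # Walk the backpointers to recover the winning segmentation.
    pieces, i = [], n
    while i > 0:
        start = best[i][1]
        pieces.append(text[start:i])
        i = start
    return pieces[::-1]

# The single piece "▁hello" (p=0.2) beats any multi-piece split,
# e.g. "▁" + "hello" with log(0.05) + log(0.2).
print(unigram_segment("▁hello"))
```

The real tokenizer performs the same maximization, just over a much larger vocabulary whose piece probabilities were fit on Punjabi text.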