A byte-level BPE tokenizer trained on the fra_Latn (French, Latin script) subset of FineWeb2-HQ.
| Parameter | Value |
|---|---|
| Algorithm | Byte-Level BPE |
| Language | fra_Latn |
| Target Vocab Size | 16,000 |
| Final Vocab Size | 16,937 |
| Pre-tokenizer | custom:fra_Latn |
| Number handling | ltr_3digit (see the sketch below) |
| Contraction handling | True |
| Normalizer | NFC |
| Special Tokens | <s>, </s>, <pad>, <unk> |
| Training Shards | 2 |
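The `ltr_3digit` rule appears to split digit runs left-to-right into groups of at most three digits, which matches the sample row at the bottom of this card ("12345" → `123`, `45`). Below is a minimal sketch of that behavior; the `ltr_3digit` function here is a hypothetical helper for illustration, not part of the tokenizer's API:

```python
import re

def ltr_3digit(digits: str) -> list[str]:
    # Group a digit run left-to-right into chunks of at most 3 digits,
    # e.g. "12345" -> ["123", "45"], matching the sample table below.
    return re.findall(r"\d{1,3}", digits)

print(ltr_3digit("12345"))  # ['123', '45']
```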
Load the tokenizer with Transformers:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("flexitok/bpe_ltr_fra_Latn_16000_v2")
tokens = tokenizer.encode("Hello, world!")  # -> list of token IDs
```
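A quick way to check that the loaded tokenizer matches the table above; this continues from the snippet and uses only standard `transformers` accessors:

```python
# Round-trip the IDs back to tokens and confirm the configuration.
print(tokenizer.convert_ids_to_tokens(tokens))
print(len(tokenizer))                # expected final vocab size per the table: 16937
print(tokenizer.special_tokens_map)  # <s>, </s>, <pad>, <unk>
```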
The repository contains:

- `tokenizer.json` — Full HuggingFace tokenizer
- `vocab.json` — Vocabulary mapping
- `merges.txt` — BPE merge rules

Sample tokenization:

| Text | Tokens | Token IDs |
|---|---|---|
| Hello, world! 12345 This is a test. こんにちは | `H`, `ello`, `,`, `Ġw`, `orld`, `!`, `Ġ`, `123`, `45`, `ĠThis`, `Ġis`, `Ġa`, `Ġtest`, `.`, `Ġ`, `ãģ`, `ĵ`, `ãĤ`, `ĵ`, `ãģ` | 42, 14563, 14, 922, 6416, 3, 223, 16802, 3878, 12436, 1528, 272, 1935, 16, 223, 5466, 244, 9600, 244, 5466 |
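Because `tokenizer.json` is a complete standalone definition, it can also be loaded with the `tokenizers` library directly, bypassing `transformers`. A minimal sketch, assuming you first fetch the file from this repo with `huggingface_hub`:

```python
from huggingface_hub import hf_hub_download
from tokenizers import Tokenizer

# Download the standalone tokenizer.json from the Hub and load it.
path = hf_hub_download("flexitok/bpe_ltr_fra_Latn_16000_v2", "tokenizer.json")
tok = Tokenizer.from_file(path)

enc = tok.encode("Bonjour le monde !")
print(enc.tokens)  # byte-level BPE tokens
print(enc.ids)     # corresponding token IDs
```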