Byte-Level BPE Tokenizer: fw_edu (4K)

A Byte-Level BPE tokenizer trained on fw_edu data from Fineweb-2-HQ.

Training Details

Parameter             Value
--------------------  --------------------------
Algorithm             Byte-Level BPE
Language              fw_edu
Target Vocab Size     4,000
Final Vocab Size      5,048
Pre-tokenizer         custom:fw_edu
Number handling       ltr_3digit
Contraction handling  True
Normalizer            NFC
Special Tokens        <s>, </s>, <pad>, <unk>
Training Shards       2
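
The exact training pipeline behind these settings is not published. The sketch below shows how a tokenizer with this configuration could be trained with the Hugging Face tokenizers library; the shard file names are hypothetical, and the custom fw_edu pre-tokenizer (with its ltr_3digit number handling and contraction handling) is approximated by the stock byte-level pre-tokenizer.

from tokenizers import Tokenizer, decoders, models, normalizers, pre_tokenizers, trainers

tokenizer = Tokenizer(models.BPE())
tokenizer.normalizer = normalizers.NFC()
# Approximation: the custom fw_edu pre-tokenizer is not public, so the
# stock byte-level pre-tokenizer stands in for it here.
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=False)
tokenizer.decoder = decoders.ByteLevel()

trainer = trainers.BpeTrainer(
    vocab_size=4000,  # target size from the table above
    special_tokens=["<s>", "</s>", "<pad>", "<unk>"],
    initial_alphabet=pre_tokenizers.ByteLevel.alphabet(),  # all 256 byte symbols
)
tokenizer.train(["shard_0.txt", "shard_1.txt"], trainer)  # two shards (hypothetical paths)
tokenizer.save("tokenizer.json")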

Usage

from transformers import AutoTokenizer

# Load the tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("flexitok/bpe_ltr_fw_edu_4000_v2")
tokens = tokenizer.encode("Hello, world!")  # returns a list of token IDs
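
Because the vocabulary is byte-level, a leading space is folded into the following token (rendered as Ġ) and any UTF-8 input can be encoded without falling back to <unk>. A quick round-trip check, continuing from the snippet above with an arbitrary example sentence:

ids = tokenizer.encode("This is a test.", add_special_tokens=False)
print(tokenizer.convert_ids_to_tokens(ids))  # e.g. ['This', 'Ġis', 'Ġa', 'Ġtest', '.']
print(tokenizer.decode(ids))                 # "This is a test."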

Files

  • tokenizer.json — Full HuggingFace tokenizer
  • vocab.json — Vocabulary mapping
  • merges.txt — BPE merge rules
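
tokenizer.json is self-contained, so it can also be loaded without transformers; a sketch using the huggingface_hub and tokenizers libraries:

from huggingface_hub import hf_hub_download
from tokenizers import Tokenizer

path = hf_hub_download("flexitok/bpe_ltr_fw_edu_4000_v2", "tokenizer.json")
tok = Tokenizer.from_file(path)
enc = tok.encode("Hello, world!")
print(enc.tokens)  # surface tokens
print(enc.ids)     # corresponding vocabulary IDs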

Sample Encoding

Text:      Hello, world! 12345 This is a test. こんにちは
Tokens:    H, ell, o, ,, Ġworld, !, Ġ, 123, 45, ĠThis, Ġis, Ġa, Ġtest, ., Ġ, ã, ģ, ĵ, ã, Ĥ
Token IDs: 42, 477, 81, 14, 904, 3, 223, 4983, 3405, 655, 313, 260, 1074, 16, 223, 162, 226, 244, 162, 227

Note the ltr_3digit number handling at work (12345 is split left-to-right into 123 and 45) and the byte-level fallback for こんにちは, whose UTF-8 bytes surface as individual byte tokens such as ã and ģ.
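
Assuming ltr_3digit means digit runs are grouped left-to-right in blocks of three, which matches the 12345 → 123, 45 split above, the rule can be probed on other inputs (the numbers below are illustrative):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("flexitok/bpe_ltr_fw_edu_4000_v2")
print(tokenizer.tokenize(" 12345"))    # ['Ġ', '123', '45'], as in the sample above
print(tokenizer.tokenize(" 1234567"))  # e.g. ['Ġ', '123', '456', '7'] if the rule holds
                                       # (exact tokens depend on the learned merges)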