# Patent Boundary Notice

## Scope

The retokenization pipeline (`retokenize_scylla.py`) included in this repository converts
SentencePiece-tokenized FineWeb shards to alternative tokenizer vocabularies. The algorithm
is straightforward: decode tokens to bytes, re-encode with the target tokenizer, and validate
byte-level roundtrip fidelity.
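The three-step pattern above (decode, re-encode, validate the roundtrip) can be sketched as follows. This is a toy illustration, not the actual pipeline: the source and target tokenizers here are trivial stand-ins defined inline, whereas `retokenize_scylla.py` operates on real SentencePiece shards.

```python
# Toy sketch of the decode -> re-encode -> validate pattern.
# SRC_VOCAB and the lambda tokenizers are illustrative stand-ins only.

def retokenize(token_ids, source_decode, target_encode, target_decode):
    text = source_decode(token_ids)        # step 1: decode source ids to text
    new_ids = target_encode(text)          # step 2: re-encode with target vocab
    assert target_decode(new_ids) == text  # step 3: roundtrip fidelity check
    return new_ids

# Stand-in tokenizers: source maps ids through a small subword vocab,
# target is plain UTF-8 byte-level.
SRC_VOCAB = ["he", "llo", " ", "world"]
src_decode = lambda ids: "".join(SRC_VOCAB[i] for i in ids)
tgt_encode = lambda text: list(text.encode("utf-8"))
tgt_decode = lambda ids: bytes(ids).decode("utf-8")

new_ids = retokenize([0, 1, 2, 3], src_decode, tgt_encode, tgt_decode)
print(bytes(new_ids).decode("utf-8"))  # -> hello world
```

If the roundtrip check fails for any shard, the conversion is rejected rather than silently emitting lossy output.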
## What is NOT covered

The pre-tokenized data files (`.bin` shards) in this dataset are standard binary token
sequences. They do not embody any patentable method; they are simply the output of running a
tokenizer on public data.
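For readers who want to inspect such a shard, a minimal loading sketch follows. It assumes the shards are headerless flat arrays of `uint16` token ids, a common layout for pre-tokenized corpora; verify the actual dtype and any header of these shards before relying on it.

```python
# Hedged sketch: assumes headerless flat uint16 token-id shards.
import os
import tempfile

import numpy as np


def load_shard(path, dtype=np.uint16):
    # Memory-map so large shards are not pulled into RAM all at once.
    return np.memmap(path, dtype=dtype, mode="r")


# Demo: write a tiny synthetic shard, then read it back.
with tempfile.NamedTemporaryFile(suffix=".bin", delete=False) as f:
    np.array([101, 7592, 2088, 102], dtype=np.uint16).tofile(f)
    shard_path = f.name

tokens = list(load_shard(shard_path))
os.remove(shard_path)
print(tokens)  # -> [101, 7592, 2088, 102]
```

Memory-mapping keeps inspection cheap even for multi-gigabyte shards, since only the pages actually touched are read from disk.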
## What MAY be covered

Techniques described in patent applications by Light Speed Up LLC related to:

- Sensitivity-guided mixed-precision quantization
- Adaptive tokenizer selection for language model compression

These techniques are implemented in the training code, not in this dataset or the
retokenization tool. The Apache 2.0 license on the code grants patent rights for the
code as shipped.
## Contact

For patent-related inquiries: mato@lightspeedup.com