---
license: apache-2.0
library_name: transformers
tags:
- causal-lm
- text-generation
- transformer
- decoder-only
- table-free-input
- binary-token-codes
- affine-recoding
- research
language:
- en
---
# Affine-Recoded Minimal Code Table-Free Model

Research checkpoint for the paper:

**Language Models Without a Trainable Input Embedding Table: Learning from Fixed Minimal Binary Token Codes**

## Model variant

This repository contains the **fully table-free affine-recoded minimal binary-code model**.

The model does not use an input embedding table. Instead, token codes are computed directly from token IDs.
For each token ID `t`, the model computes:

```text
c(t) = bin_16(t)
```

and then applies a fixed invertible affine recoding over GF(2):

```text
c_tilde(t) = A c(t) xor b
```

where:

- `A` is an invertible binary matrix in `GL(16, 2)`
- `b` is a fixed binary shift vector

The resulting 16-dimensional binary code is tiled to the model width of 1024 (see the sketch below).
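As a rough illustration, the minimal NumPy sketch below computes this table-free input. The `A` and `b` here are random stand-ins for the fixed pair shipped with the checkpoint, and the LSB-first bit order and raw {0, 1} bit values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
D_CODE, D_MODEL = 16, 1024

# Stand-ins for the checkpoint's fixed recoding: sample A from GL(16, 2)
# by rejection (det over the integers is odd iff A is invertible over GF(2)).
while True:
    A = rng.integers(0, 2, size=(D_CODE, D_CODE), dtype=np.uint8)
    if round(np.linalg.det(A.astype(np.float64))) % 2 == 1:
        break
b = rng.integers(0, 2, size=D_CODE, dtype=np.uint8)

def code(t: int) -> np.ndarray:
    """c(t) = bin_16(t): the 16-bit binary expansion of the token ID."""
    return np.array([(t >> i) & 1 for i in range(D_CODE)], dtype=np.uint8)

def table_free_input(t: int) -> np.ndarray:
    """c_tilde(t) = A c(t) xor b, tiled from 16 dims to the model width."""
    c_tilde = (A @ code(t) + b) % 2  # affine map over GF(2)
    return np.tile(c_tilde, D_MODEL // D_CODE).astype(np.float32)

print(table_free_input(42).shape)  # (1024,)
```

Because `A` is invertible and `xor b` is its own inverse, the recoding is a bijection on the 2^16 possible codes, so no two token IDs receive the same input vector.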
The model uses:

```text
0 trainable input-embedding parameters
no input embedding table
```

The output projection remains standard and trainable.
## Architecture

- decoder-only Transformer
- vocabulary size: 65,536 (= 2^16, so every token ID fits a 16-bit code)
- model width: 1024
- number of layers: 32
- number of attention heads: 32
- context length: 1024
- rotary positional embeddings
- GELU activations
- untied trainable output projection
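For scale, a back-of-the-envelope parameter count under standard decoder-block assumptions (4·d² for attention, 8·d² for a 4x-expansion MLP; biases and norms ignored, not confirmed against the checkpoint):

```python
d, n_layers, vocab = 1024, 32, 65_536

blocks = n_layers * 12 * d * d  # ~403M weights in the transformer blocks
lm_head = vocab * d             # ~67M in the untied output projection
input_table = 0                 # the table-free input contributes nothing

print(f"~{(blocks + lm_head + input_table) / 1e6:.0f}M parameters")  # ~470M
```

A conventional input table of the same shape would add another ~67M trainable parameters, all of which this variant avoids.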
## Loading example

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

repo_id = "Bochkov/llm-fix-min-affine-recoded-minimal-code-table-free"

# trust_remote_code=True is required: the table-free input path ships as custom code.
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)
model.eval()

prompt = "Question: What is the capital of the UK?\nAnswer:"
input_ids = torch.tensor([tokenizer.encode(prompt)], dtype=torch.long)

with torch.no_grad():
    output_ids = model.generate(input_ids, max_new_tokens=3, do_sample=False)  # greedy decoding

print(tokenizer.decode(output_ids[0].tolist()))
```
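To spot-check the table-free claim after loading, one can scan the parameter names; the string filter below is a heuristic, since the actual module names are defined by the repository's custom code:

```python
# Count parameters in anything that looks like an input embedding.
n_input_embed = sum(
    p.numel()
    for name, p in model.named_parameters()
    if "embed" in name.lower() and "lm_head" not in name.lower()
)
print(n_input_embed)  # expected: 0 for the table-free input path
```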
## Intended use

This checkpoint is provided for reproducibility. It demonstrates that the fixed minimal-code input interface remains viable even when the canonical token-ID binary code is randomly recoded by an invertible affine transform.

## Limitations

This model is a research checkpoint. It is not intended for deployment. It may produce incorrect, biased, unsafe, or nonsensical outputs.

## Training data

The model was trained on the same FineWeb-Edu + Cosmopedia mixture used for the matched comparisons in the paper. Dataset terms and licenses are those of the original datasets.
---

## 🧑‍🔬 Citation & Concept

If you use this model or the underlying concepts in your research, please cite our work:

```bibtex
@misc{bochkov2026languagemodelstrainableinput,
  title={Language Models Without a Trainable Input Embedding Table: Learning from Fixed Minimal Binary Token Codes},
  author={A. Bochkov},
  year={2026},
  eprint={2605.09751},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2605.09751},
}
```