# GPT-OSS-120B: Original MXFP4 Reference Weights (Mirror)
## ⚠️ IMPORTANT
This is an archived mirror and NOT the original upstream repository maintained by OpenAI.
It is not affiliated with, endorsed by, or officially supported by OpenAI.
It contains only the original reference checkpoints of GPT-OSS-120B in MXFP4 format,
preserved for archival, research, and conversion purposes. These weights are NOT inference-ready and cannot be used directly for generation without conversion to an appropriate runtime format.
See the upstream repositories for inference-ready usage.
## Repository Scope
This repository intentionally stores a minimal and authoritative subset of files required to preserve the GPT-OSS-120B model.
The repository does not include inference binaries, converted weights, or example usage code.
## Contents of This Repository
### 📦 Model Weights (Original Checkpoint)
Located in the `original/` directory:

- `*.safetensors` – Original MXFP4 reference weights (sharded)
- `model.safetensors.index.json` – Tensor-to-shard mapping
These files represent the source-of-truth model parameters released by OpenAI.
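The index file maps each tensor name to the shard that stores it, so a converter can load only the shards it needs. As a minimal sketch (assuming the standard Hugging Face sharded-safetensors index layout with a top-level `weight_map` key; the tensor name in the usage comment is hypothetical):

```python
# Sketch: look up which .safetensors shard holds a given tensor,
# using the standard sharded-checkpoint index layout, where
# index["weight_map"] maps tensor names to shard filenames.
import json

def shard_for_tensor(index_path: str, tensor_name: str) -> str:
    """Return the shard filename that contains `tensor_name`."""
    with open(index_path) as f:
        index = json.load(f)
    return index["weight_map"][tensor_name]

# Hypothetical usage:
# shard_for_tensor("original/model.safetensors.index.json",
#                  "model.embed_tokens.weight")
```

This only consults the JSON index; actually reading the tensor then requires opening the returned shard with a safetensors-aware loader.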
### 📄 Model Configuration & Metadata
- `config.json` – Model architecture definition (layers, dimensions, MoE structure) and dtype/quantization metadata documenting the MXFP4 numeric format used by the weights
- `LICENSE` – Apache License, Version 2.0
### ❌ Not Included
To avoid ambiguity, this repository does NOT include:
- Tokenizer files (`tokenizer.json`, `tokenizer.model`, etc.)
- Inference-ready weights (Transformers / vLLM / Metal)
- Runtime binaries or compiled artifacts
- Example inference code
Tokenizer and inference artifacts are available from the upstream OpenAI repositories.
## About MXFP4
The GPT-OSS models were post-trained using MXFP4 quantization of the MoE weights.
- MXFP4 is required to fit GPT-OSS-120B within 80 GB of GPU memory
- Specialized kernels and tooling are required for inference
- This repository preserves the original MXFP4 semantics for future compatibility
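For readers unfamiliar with the format, the sketch below decodes a single MXFP4 block using the parameters from the OCP Microscaling (MX) specification: 32 elements per block, one shared E8M0 power-of-two scale, and 4-bit E2M1 element codes. The nibble packing order is an assumption for illustration; this is not the tooling used to produce or consume these checkpoints.

```python
# Illustrative MXFP4 block decoder (OCP Microscaling FP4).
# A block is 32 elements: 16 packed bytes (two 4-bit E2M1 codes each)
# plus one shared E8M0 scale, a biased power-of-two exponent.

# The 16 representable E2M1 values: 1 sign bit, 2 exponent bits, 1 mantissa bit.
E2M1_VALUES = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0,
               -0.0, -0.5, -1.0, -1.5, -2.0, -3.0, -4.0, -6.0]

def decode_mxfp4_block(packed: bytes, scale_e8m0: int) -> list[float]:
    """Decode one 32-element MXFP4 block from 16 packed bytes + shared scale."""
    scale = 2.0 ** (scale_e8m0 - 127)  # E8M0: power-of-two scale, bias 127
    out = []
    for byte in packed:
        lo, hi = byte & 0x0F, byte >> 4  # low nibble first (packing order assumed)
        out.append(E2M1_VALUES[lo] * scale)
        out.append(E2M1_VALUES[hi] * scale)
    return out

# Example: every nibble is code 0x3 (value 1.5), shared scale 2**1 = 2
values = decode_mxfp4_block(bytes([0x33] * 16), scale_e8m0=128)
```

The key property this illustrates is that precision lives in the shared per-block scale plus a tiny per-element value set, which is why dedicated kernels are needed to use the weights efficiently at inference time.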
## Intended Use Cases
This repository is suitable for:
- Long-term archival and backup
- Research into model parameters
- Converting weights into runtime-specific formats
- Fine-tuning workflows that support MXFP4
- Audit and reproducibility work
It is not intended for direct inference or deployment.
## Upstream References
- Official model repository: https://huggingface.co/openai/gpt-oss-120b
- Source code and tooling: https://github.com/openai/gpt-oss
- Model card (arXiv): https://arxiv.org/abs/2508.10925
## License
This repository is distributed under the Apache License 2.0.
All rights to the model architecture, training, and original release remain with OpenAI.