gpt-oss-20b-Opus-4.5-distill-GGUF

This model was finetuned and converted to GGUF format using Unsloth.

It was trained on the TeichAI Opus 4.5 dataset: https://huggingface.co/datasets/TeichAI/claude-4.5-opus-high-reasoning-250x

This model is trained to sound like Opus; do not expect it to perform like Opus does. He's still a silly lil guy, so expect the usual dumbness you get from a model this size.

Since the dataset is only 250 examples, it's meant for a style distill; there is simply not enough juicy data to create a "Mini Opus".

To do that you would likely need hundreds of thousands of examples and billions of tokens. Even then, it's only 20B parameters; there is only so much that a small model can learn.

It still fails at tool calls in LM Studio, just like the stock GPT-OSS 20B does.

Trained using a rank-8 LoRA and a 1e-4 learning rate.
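For reference, a training run like the one described above can be sketched with Unsloth. Only the rank (8) and learning rate (1e-4) come from this card; the base checkpoint name, LoRA alpha, target modules, sequence length, batch size, and epoch count below are all assumptions, not the author's actual settings:

```python
# Hypothetical sketch of a rank-8 LoRA finetune with Unsloth.
# Only r=8 and learning_rate=1e-4 are stated in the card above;
# everything else (base checkpoint, alpha, seq length, epochs) is assumed.
from unsloth import FastLanguageModel
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gpt-oss-20b",  # assumed base checkpoint
    max_seq_length=4096,               # assumption
    load_in_4bit=True,
)

# Attach the rank-8 LoRA adapter (rank from this card; rest assumed)
model = FastLanguageModel.get_peft_model(
    model,
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset(
    "TeichAI/claude-4.5-opus-high-reasoning-250x", split="train"
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        learning_rate=1e-4,             # from this card
        per_device_train_batch_size=1,  # assumption
        num_train_epochs=1,             # assumption
        output_dir="outputs",
    ),
)
trainer.train()

# Export the merged model to GGUF with Unsloth's helper
model.save_pretrained_gguf("gguf_out", tokenizer)
```

This is a config fragment for illustration only; it needs a GPU and will download the base model and dataset.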

Model size: 21B params
Architecture: gpt-oss
