Heavily overfitted, but the GGUF is still up for grabs.

Trained on 200 example one-shot HTML pages generated by gpt-oss-120b.

Use Qwen3-4b-semi-distill-Step3.5-Flash instead if you want a less overfitted alternative.

Unsloth stuff

This model is a fine-tuned version of unsloth/qwen3-4b-unsloth-bnb-4bit, trained with SFT using TRL.

Framework versions:

  • TRL: 0.24.0
  • Transformers: 4.57.3
  • Pytorch: 2.10.0
  • Datasets: 4.3.0
  • Tokenizers: 0.22.1
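
A minimal sketch of how one of the ~200 distilled examples might be shaped into TRL's conversational SFT format before being handed to `SFTTrainer`. The prompt text, function name, and field contents here are hypothetical illustrations; only the base model and the SFT/TRL training choice come from this card.

```python
# Hypothetical helper: wrap a (prompt, completion) pair from the distilled
# dataset as a chat-style record. TRL's SFTTrainer accepts datasets whose
# rows carry a "messages" list in this role/content shape.

def to_sft_example(prompt: str, html_page: str) -> dict:
    """Return one chat-formatted SFT record for a prompt -> HTML-page pair."""
    return {
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": html_page},
        ]
    }

# Example record (contents invented for illustration):
example = to_sft_example(
    "Make a landing page for a coffee shop in a single HTML file.",
    "<!DOCTYPE html><html>...</html>",  # a one-shot page distilled from gpt-oss-120b
)
```

With ~200 such records collected into a `datasets.Dataset`, training would follow the standard `trl.SFTTrainer` flow against the 4-bit Unsloth base model.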
Model details:

  • Format: GGUF (4-bit quantization available)
  • Model size: 4B params
  • Architecture: qwen3
Model: NotHereNorThere/qwen3-4b-200html-distill-gpt-oss-120b