# Qwen3.5-0.8B-SFT-Unsloth — Fine-tuned on Claude-Opus-Reasoning
Fine-tuned adapter-merged checkpoint of Qwen3.5-0.8B on the
ermiaazarkhalili/Claude-Opus-4.7-Reasoning dataset, produced via
Unsloth + Hugging Face TRL's SFTTrainer.
| Field | Value |
|---|---|
| Base model | unsloth/Qwen3.5-0.8B |
| Architecture | Qwen3ForCausalLM |
| Parameters | 0.8B |
| Precision | bfloat16 (merged 16-bit) |
| Fine-tuning method | QLoRA (4-bit base, LoRA r=16, α=16) |
| Dataset | ermiaazarkhalili/Claude-Opus-4.7-Reasoning (distillation corpus examples) |
| Training | 1 epoch (effective batch size 8) |
| Learning rate | 2e-4 (linear decay, warmup 5 steps) |
| Unsloth version | 2026.4.6 |
| Trained on | DRAC Fir cluster, NVIDIA H100 80GB HBM3 MIG 3g.40gb |
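As a rough sanity check on the precision row above (a back-of-envelope estimate from the stated 0.8B parameter count, not a measured figure), the merged bfloat16 weights occupy on the order of 1.5 GiB:

```python
# Back-of-envelope size estimate for the merged bfloat16 checkpoint.
# Assumes exactly 0.8e9 parameters; the true count differs slightly.
params = 0.8e9
bytes_per_param = 2  # bfloat16 stores each weight in 2 bytes
weight_gib = params * bytes_per_param / 1024**3
print(f"~{weight_gib:.1f} GiB of weights")
```

Activations, KV cache, and framework overhead add to this at inference time, so plan for somewhat more GPU memory than the raw weight size.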
## Usage

### Python (transformers)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "ermiaazarkhalili/Qwen3.5-0.8B-SFT-Claude-Opus-Reasoning-Unsloth"

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

messages = [{"role": "user", "content": "Explain step-by-step: if a train travels 60 mph for 2.5 hours, how far does it go?"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### Unsloth (2× faster inference)
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    "ermiaazarkhalili/Qwen3.5-0.8B-SFT-Claude-Opus-Reasoning-Unsloth",
    max_seq_length=2048,
    load_in_4bit=False,
)
FastLanguageModel.for_inference(model)
```
### GGUF (llama.cpp / Ollama)

Quantized GGUF versions are available at `ermiaazarkhalili/Qwen3.5-0.8B-SFT-Claude-Opus-Reasoning-Unsloth-GGUF`:
```bash
# llama-cli
llama-cli -hf ermiaazarkhalili/Qwen3.5-0.8B-SFT-Claude-Opus-Reasoning-Unsloth-GGUF --jinja -p "Explain step-by-step: if a train travels 60 mph for 2.5 hours, how far does it go?" -n 256

# Ollama
ollama run hf.co/ermiaazarkhalili/Qwen3.5-0.8B-SFT-Claude-Opus-Reasoning-Unsloth-GGUF:Q4_K_M
```
## Training details

Reasoning SFT fine-tuning on distillation corpus examples from ermiaazarkhalili/Claude-Opus-4.7-Reasoning.

- Trainer: TRL `SFTTrainer` (trl >= 0.14) via Unsloth's `FastLanguageModel`
- LoRA config: r=16, α=16, dropout=0, targeting q_proj/k_proj/v_proj/o_proj/gate_proj/up_proj/down_proj
- Effective batch size: 8 (per_device=2 × grad_accum=4)
- Max sequence length: 2048
- Optimizer: adamw_8bit with linear LR scheduler
- Seed: 3407
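The effective batch size above is simply the per-device batch multiplied by the gradient-accumulation steps (single GPU, so no data-parallel factor). A minimal sketch of that arithmetic, where the tokens-per-step figure is an upper bound assuming every sequence is padded or packed to the full 2048-token context:

```python
# Effective batch size for gradient accumulation on a single GPU:
# gradients from `grad_accum` micro-batches are summed before each
# optimizer step, so the update sees per_device * grad_accum examples.
per_device_batch = 2
grad_accum = 4
max_seq_length = 2048

effective_batch = per_device_batch * grad_accum
max_tokens_per_step = effective_batch * max_seq_length

print(effective_batch)       # 8
print(max_tokens_per_step)   # 16384
```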
## Intended use
For research and non-commercial experimentation only. Outputs should be independently verified before any downstream use.
## Limitations

- Trained for a single epoch; further training may yield additional gains.
- Fine-tuned from Qwen3.5-0.8B and inherits its limitations and biases.
- Evaluated only on training-data perplexity; no external benchmarks were run on this checkpoint.
- Distilled reasoning traces reflect patterns from Claude Opus 4.7 and may not generalize to domains outside the distillation corpus.
## Citation

```bibtex
@misc{qwen35_08b_sft_claude_opus_2026,
  author       = {Ermia Azarkhalili},
  title        = {Qwen3.5-0.8B-SFT-Unsloth — Reasoning SFT fine-tune of Qwen3.5-0.8B},
  year         = {2026},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/ermiaazarkhalili/Qwen3.5-0.8B-SFT-Claude-Opus-Reasoning-Unsloth}}
}
```
This qwen3 model was trained 2× faster with Unsloth and Hugging Face's TRL library.