# Qwen3.5 0.8B (Phase 1: Supervised Fine-Tuning)
This model is the Phase 1 Supervised Fine-Tuning (SFT) checkpoint of a two-phase knowledge distillation pipeline. It is built on `Qwen/Qwen3.5-0.8B-Base` and fine-tuned to learn the structural formatting and reasoning chains (`<think>` tags) generated by a larger teacher model (`Qwen/Qwen3.5-9B`).
🚨 **Note:** This is an intermediate model. For the final, fully distilled model with aligned output probabilities (logits distillation), please use `Phonsiri/Qwen3.5-0.8B-Distillation-Phase2`.
## 📋 Model Details
- **Base Model:** Qwen/Qwen3.5-0.8B-Base
- **Teacher Model:** Qwen/Qwen3.5-9B (used for dataset generation)
- **Language(s):** English (primary), Thai
- **Architecture:** Causal Language Modeling (decoder-only)
- **License:** Apache 2.0
## 🧪 Training Methodology
This model was trained entirely offline (off-policy) with a standard cross-entropy (CE) loss.
A custom dataset of 7,500 mixed-domain prompts (math, general instruction-following, and coding tasks) was fed to the 9B teacher model to generate high-quality ground-truth responses. Generation was run with "Thinking Mode" explicitly enabled, forcing the teacher to output step-by-step reasoning wrapped in `<think> ... </think>` tags before its final answer.
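A minimal sketch of what that generation step might look like with `transformers` is shown below. The helper name, sampling budget, and single-prompt flow are illustrative assumptions, not the exact pipeline code; `enable_thinking` is the same chat-template switch used in the inference example further down.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

teacher_id = "Qwen/Qwen3.5-9B"
tokenizer = AutoTokenizer.from_pretrained(teacher_id)
teacher = AutoModelForCausalLM.from_pretrained(
    teacher_id, torch_dtype=torch.float16, device_map="auto"
)

def generate_ground_truth(prompt: str) -> str:
    """Hypothetical helper: query the teacher with Thinking Mode enabled."""
    text = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        tokenize=False,
        add_generation_prompt=True,
        enable_thinking=True,  # teacher emits <think> ... </think> before the answer
    )
    inputs = tokenizer(text, return_tensors="pt").to(teacher.device)
    outputs = teacher.generate(**inputs, max_new_tokens=2048)
    # Keep only the newly generated tokens (drop the prompt)
    return tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```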
The 0.8B-Base student model was then fine-tuned on this dataset to adopt the instruction format and reproduce the teacher's reasoning-chain structure. This warm-up phase is critical: it ensures the teacher and student output formats are synchronized before the more intensive on-policy KL-divergence distillation of Phase 2.
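One plausible way to pack each (prompt, teacher response) pair into an SFT sample is to render a complete single-turn chat transcript with the tokenizer's template, so the student sees the `<think>` block exactly where the teacher placed it. `build_sft_example` is a hypothetical helper for illustration, not the actual training code:

```python
def build_sft_example(prompt: str, teacher_response: str) -> str:
    """Render one single-turn transcript; the assistant content already
    contains the teacher's <think> ... </think> reasoning chain."""
    messages = [
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": teacher_response},
    ]
    # Single-turn pairs are the safe case here: some chat templates strip
    # reasoning content from earlier turns in multi-turn histories.
    return tokenizer.apply_chat_template(messages, tokenize=False)
```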
## 📊 Hyperparameters (SFT)
- **Epochs:** 2 (stopped early deliberately to prevent severe overfitting)
- **Optimizer:** AdamW
- **Learning Rate:** 2e-5 with cosine scheduling
- **Effective Batch Size:** 32
- **Precision:** fp16
- **Max Sequence Length:** 7000
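As a rough mapping onto Hugging Face `TrainingArguments`, these settings might look like the sketch below. The per-device batch size / gradient accumulation split is an assumption (only the effective size of 32 is stated above), and the 7000-token limit is applied at tokenization rather than here:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="qwen3.5-0.8b-sft",   # assumed output path
    num_train_epochs=2,
    optim="adamw_torch",
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    per_device_train_batch_size=4,   # 4 x 8 accumulation steps = effective 32 (assumed split)
    gradient_accumulation_steps=8,
    fp16=True,
)
```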
## 💻 How to Use (Inference)
You can test this intermediate model to observe how it structures its reasoning process.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Phonsiri/Qwen3.5-0.8B-Base-Distillation-Qwen3.5-9B"
subfolder = "step_100"  # Use the desired checkpoint step

tokenizer = AutoTokenizer.from_pretrained(model_id, subfolder=subfolder)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    subfolder=subfolder,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Example prompt
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the square root of 256? Please explain your thinking."},
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,  # 💡 Command the model to use <think> reasoning
)

inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
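The decoded output keeps the reasoning chain inline. Continuing from the snippet above, here is a minimal way to split the `<think>` block from the final answer (this assumes a single well-formed block, which SFT alone does not guarantee):

```python
# Decode only the newly generated tokens, then separate reasoning from answer.
raw = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
if "</think>" in raw:
    reasoning, answer = raw.split("</think>", 1)
    print("Reasoning:", reasoning.replace("<think>", "").strip())
    print("Answer:", answer.strip())
else:
    print(raw)  # model skipped the <think> block
```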