# Harmonic-2B
A reasoning-focused fine-tune of Qwen 3.5 2B trained on the same structurally validated data as Harmonic-9B. Every row passes automated quality gates. No junk, no filler, no shallow traces.
Built primarily as a draft model for speculative decoding with the upcoming Harmonic-27B. Fast enough to propose tokens, trained on the same reasoning patterns so the acceptance rate stays high.
## Support This Work
I'm a PhD student in visual neuroscience at the University of Toronto who also happens to spend way too much time fine-tuning, merging, and quantizing open-weight models on rented H100s and a local DGX Spark. All training compute is self-funded — balancing GPU costs against a student budget. If my uploads have been useful to you, consider buying a PhD student a coffee. It goes a long way toward keeping these experiments running.
## Training Approach
Same pipeline as Harmonic-9B: just 799 carefully curated rows rather than tens of thousands of unfiltered examples.
Every training row contains explicit self-correction, verification, and multi-path exploration. The data was generated from multiple frontier models and filtered through a custom structural quality pipeline. 100% of rows pass all quality gates simultaneously.
## Training Data Quality
The same reasoning data as Harmonic-9B, curated using a custom structural process supervision pipeline:
| Metric | Value |
|---|---|
| Signal quality score | 78.7 mean (61.5 min, 90.0 max) |
| Thinking trace depth | 1,667 words average |
| Self-correction | 100% of rows (17.2 per row avg) |
| Verification | 100% of rows (10.3 per row avg) |
| Exploration | 100% of rows (6.3 per row avg) |
| Quality gate pass rate | 100% |
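The actual quality pipeline is not published, so as a purely illustrative sketch, a structural gate like the ones above can be approximated by counting marker phrases in each thinking trace. The regex patterns and thresholds here are hypothetical stand-ins, not the real gates:

```python
import re

# Hypothetical marker phrases; the real pipeline's signals and thresholds
# are not published, so these are illustrative only.
SELF_CORRECTION = re.compile(r"\b(wait|actually|hold on|let me reconsider)\b", re.IGNORECASE)
VERIFICATION = re.compile(r"\b(verify|check|confirm|double-check)\b", re.IGNORECASE)
EXPLORATION = re.compile(r"\bapproach \d+\b", re.IGNORECASE)

def passes_gates(trace: str,
                 min_corrections: int = 1,
                 min_verifications: int = 1,
                 min_explorations: int = 1) -> bool:
    """Return True only if a trace shows all three structural signals at once,
    mirroring the "pass all quality gates simultaneously" requirement."""
    return (len(SELF_CORRECTION.findall(trace)) >= min_corrections
            and len(VERIFICATION.findall(trace)) >= min_verifications
            and len(EXPLORATION.findall(trace)) >= min_explorations)
```

A row would only be kept when every gate passes, which is how a small dataset ends up with a 100% pass rate.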
## Draft Model Design
Harmonic-2B is designed to pair with larger Harmonic models for speculative decoding:
- Same training data as the 9B and upcoming 27B - the models share reasoning patterns, which improves draft token acceptance rates
- Same reasoning format - uses identical `<think>` block structure
- 2.3B parameters - small enough to run alongside a 27B on a single node
- Same architecture family (Qwen 3.5) - compatible tokenizer and vocab for seamless speculative decoding
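Token-level speculative decoding assumes the draft and target agree on token-to-id mappings. A minimal sketch of that sanity check, operating on the dicts returned by `tokenizer.get_vocab()` (the function name is illustrative, not part of any library):

```python
def vocabs_compatible(draft_vocab: dict[str, int], target_vocab: dict[str, int]) -> bool:
    """Every token in the draft vocab must map to the same id in the target
    vocab; the target may have extra tokens the draft never proposes."""
    return all(target_vocab.get(tok) == idx for tok, idx in draft_vocab.items())
```

Because Harmonic-2B, -9B, and -27B all come from the Qwen 3.5 family, this check passes by construction; it mainly matters if you pair the draft with a target from a different family.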
## Training Configuration

```yaml
base_model: unsloth/Qwen3.5-2B
dataset: 799 curated reasoning rows
epochs: 1
learning_rate: 1e-4
lr_scheduler: cosine
warmup_ratio: 0.1
max_seq_length: 8192
lora_rank: 32
lora_alpha: 32
dropout: 0.05
micro_batch_size: 1
gradient_accumulation_steps: 4
weight_decay: 0.01
```
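For intuition, the schedule above works out to an effective batch size of 4 (micro batch 1 × 4 accumulation steps), so one epoch over 799 rows is roughly 200 optimizer steps. A hedged reimplementation of the cosine-with-warmup curve (not the trainer's actual code):

```python
import math

def lr_at(step: int, total_steps: int,
          peak_lr: float = 1e-4, warmup_ratio: float = 0.1) -> float:
    """Linear warmup over the first 10% of steps, then cosine decay to zero,
    matching learning_rate, lr_scheduler, and warmup_ratio in the config."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return peak_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

With ~200 total steps, the learning rate ramps to 1e-4 over the first ~20 steps and decays to near zero by the end of the epoch.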
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("DJLougen/Harmonic-2B")
tokenizer = AutoTokenizer.from_pretrained("DJLougen/Harmonic-2B")
```
### As a draft model (speculative decoding)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

target = AutoModelForCausalLM.from_pretrained("DJLougen/Harmonic-9B")  # or Harmonic-27B
draft = AutoModelForCausalLM.from_pretrained("DJLougen/Harmonic-2B")
tokenizer = AutoTokenizer.from_pretrained("DJLougen/Harmonic-9B")

inputs = tokenizer("Explain speculative decoding.", return_tensors="pt")

# Use with assisted generation: the draft proposes tokens, the target verifies
outputs = target.generate(
    **inputs,
    assistant_model=draft,
    max_new_tokens=512,
)
```
### Reasoning format

The model uses think blocks for reasoning:

```
<|thinking|>
The user is asking about X. Let me consider two approaches...
Approach 1: ...
Approach 2: ...
I will go with Approach 1 because...
Wait, I need to be careful here - this assumes Y, which may not hold.
Let me verify by checking a special case...
Yes, that confirms the result.
<|/thinking|>
[Final answer here]
```
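Downstream code usually needs to separate the trace from the final answer. A minimal parser, assuming the `<|thinking|>` / `<|/thinking|>` delimiters shown above (the function name is illustrative):

```python
import re

THINK_RE = re.compile(r"<\|thinking\|>(.*?)<\|/thinking\|>", re.DOTALL)

def split_response(text: str) -> tuple[str, str]:
    """Return (thinking_trace, final_answer). If no think block is present,
    the trace is empty and the whole text is treated as the answer."""
    m = THINK_RE.search(text)
    if not m:
        return "", text.strip()
    return m.group(1).strip(), text[m.end():].strip()
```

This keeps the trace available for inspection while exposing only the answer to end users.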
## Intended Use
- Draft model for speculative decoding with Harmonic-9B / Harmonic-27B
- Lightweight reasoning on resource-constrained hardware
- Edge deployment where reasoning quality matters but compute is limited
- Base model for Stage 2 agentic fine-tuning at small scale
## Limitations
- 2B parameter model - limited world knowledge compared to 9B/27B
- Reasoning traces may be less reliable on complex multi-step problems
- Designed as a draft model first, standalone use second
- Not optimized for tool calling - see Harmonic-Hermes-9B for agentic use
## Architecture
- Base: Qwen 3.5 2B (2.27B parameters)
- Training: LoRA fine-tuning, merged into base weights
- Precision: BF16
- Context: 8192 tokens
## License
Apache 2.0 - same as the base model. All training data comes from Apache 2.0 or MIT licensed sources. Commercial use is fully permitted.
## Links
- 9B variant: DJLougen/Harmonic-9B
- 9B GGUF: DJLougen/Harmonic-9B-GGUF
- Agentic variant: DJLougen/Harmonic-Hermes-9B
- 27B variant: DJLougen/Harmonic-27B