Auroic Router 0.6B
A compact, edge-deployable group-chat routing model fine-tuned from Qwen3-0.6B. Given a 5-message history window and up to 3 unprocessed candidate messages, it outputs a single structured routing decision in under 4 seconds on CPU: no response generation, pure intent classification.
Part of the Auroic project — a personal AI assistant for Indian group and personal chats.
What it does
The router sits at the front of a conversational AI pipeline. It reads the recent chat context and decides what action to take next — so downstream models only activate when actually needed.
H1: bhai sun
H2: kya hua bata
H3: okay
H4: arre
H5: haan bol
C1: ...
C2: ...
C3: bhai parents divorce ho raha hai adjust karna mushkil hai kya karoon
→ R: TYPE=text | TARGET=C3 | EFFORT=high
Output Format
R: TYPE=text | TARGET=C2 | EFFORT=low|medium|high
R: TYPE=react | TARGET=C1 | TITLE=🔥
R: TYPE=media | TARGET=C3 | TITLE=rain cozy vibes
R: TYPE=ignore
| Type | When | Fields |
|---|---|---|
| text | Help request, advice, emotional support, technical question | EFFORT=low/medium/high |
| react | Notable moment deserving an emoji reaction | TITLE=&lt;emoji&gt; |
| media | Situation implying a gif, meme, or sticker | TITLE=&lt;2-3 word search query&gt; |
| ignore | Pure noise, spam, keyboard smash, laughter chains | none |
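Since every decision is a single pipe-delimited line, downstream code can parse it generically rather than per type. A minimal sketch; the `parse_decision` helper name is an assumption, not part of the released project:

```python
def parse_decision(line: str) -> dict:
    """Parse one router output line, e.g.
    'R: TYPE=text | TARGET=C2 | EFFORT=medium', into a dict of fields.
    Fields absent from the line (e.g. TARGET for TYPE=ignore) are simply
    missing from the result."""
    body = line.strip()
    if body.startswith("R:"):
        body = body[2:].strip()
    decision = {}
    for field in body.split("|"):
        key, _, value = field.strip().partition("=")
        if key:
            decision[key.strip().upper()] = value.strip()
    return decision

print(parse_decision("R: TYPE=media | TARGET=C3 | TITLE=rain cozy vibes"))
```

The same function handles all four decision types, so the caller only needs to branch on `decision["TYPE"]`.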
Input Format
H1: <message> ← oldest processed message
H2: <message>
H3: <message>
H4: <message>
H5: <message> ← most recent processed message
C1: <message> ← unprocessed candidate (use ... for filler)
C2: <message>
C3: <message> ← newest unprocessed candidate
Always provide exactly 5 history messages and 3 candidates. Use ... for empty slots — never omit them.
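A small helper can enforce the fixed 5+3 window shape. This is a sketch: the `format_window` name is an assumption, and so is the padding placement (history padded at the oldest slots, candidates padded after the newest message; the released dataset places `...` slots by actual timeline position):

```python
def format_window(history: list[str], candidates: list[str]) -> str:
    """Build the H1-H5 / C1-C3 input block, padding empty slots
    with '...' as the format requires (slots are never omitted)."""
    h = (["..."] * (5 - len(history)) + list(history))[-5:]   # keep 5 newest
    c = (list(candidates) + ["..."] * (3 - len(candidates)))[:3]
    lines = [f"H{i + 1}: {m}" for i, m in enumerate(h)]
    lines += [f"C{i + 1}: {m}" for i, m in enumerate(c)]
    return "\n".join(lines)

print(format_window(["haan bol"], ["bhai exam ka stress ho raha hai"]))
```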
Quickstart — Ollama
ollama create auroic-router-0.6b -f Modelfile
ollama run auroic-router-0.6b
Important — the router is stateless. Every call must be a fresh context with no conversation history. In the Ollama CLI use /clear between calls. In production call the API directly:
from openai import OpenAI

# Point the OpenAI-compatible client at the local Ollama server.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

response = client.chat.completions.create(
    model="auroic-router-0.6b",
    messages=[
        {"role": "system", "content": "You are the Auroic Router. Given history messages H1-H5 and candidate messages C1-C3, output exactly one routing decision."},
        {"role": "user", "content": formatted_window},  # fresh window every call
    ],
)
decision = response.choices[0].message.content
Real Inference Examples
INPUT:
H1: lol
H2: 😂😂
H3: haha
H4: 💀
H5: ded
C1: hahaha
C2: 😭😭
C3: ...
OUTPUT: R: TYPE=ignore
INPUT:
H1: sahi hai
H2: haan bhai
H3: okay
H4: chal
H5: theek hai
C1: ...
C2: bhai salary negotiate kaise karoon first time offer mein help
C3: ...
OUTPUT: R: TYPE=text | TARGET=C2 | EFFORT=medium
INPUT:
H1: bhai hungry hun
H2: same yaar
H3: kuch khate hain
H4: haan
H5: chal
C1: ...
C2: ...
C3: bhai biryani ki yaad aa rahi hai mummy wali ghar ki bahut miss kar raha
OUTPUT: R: TYPE=media | TARGET=C3 | TITLE=biryani craving
INPUT:
H1: haha
H2: bhai sach mein
H3: lol
H4: no way
H5: 💀
C1: ...
C2: teacher ne galti se apna tiktok projector pe chala diya class mein
C3: ...
OUTPUT: R: TYPE=media | TARGET=C2 | TITLE=tiktok projector fail
INPUT:
H1: bhai sun
H2: haan
H3: bata
H4: okay
H5: hmm
C1: ...
C2: @BOT yaar dost ne 3 baje uthke help kiya exam ke liye true friendship hai
C3: ...
OUTPUT: R: TYPE=react | TARGET=C2 | TITLE=❤
Thinking Mode
The model uses Qwen3's native thinking capability. For ambiguous inputs it reasons through the context before deciding:
<think>
H1-H5 are casual affirmations. C2 asks 'salary negotiate kaise karoon first time' —
clear, actionable life advice needed. Text with medium effort because it requires
step-by-step guidance, not just emojis or media.
</think>
R: TYPE=text | TARGET=C2 | EFFORT=medium
For obvious cases (clear ignore, clear media) it skips thinking entirely and outputs the decision directly — keeping latency low where reasoning isn't needed.
- Thinking enabled (default): better accuracy on ambiguous cases, ~3-4 s on CPU
- Thinking disabled: faster, with a minor accuracy drop on edge cases
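When thinking is enabled, callers must discard the `<think>...</think>` block before parsing the decision line. A minimal sketch; the `strip_think` helper name is an assumption:

```python
import re

def strip_think(output: str) -> str:
    """Remove an optional <think>...</think> reasoning block from the
    model output, leaving only the R: decision line. Outputs without a
    think block pass through unchanged."""
    cleaned = re.sub(r"<think>.*?</think>", "", output, flags=re.DOTALL)
    return cleaned.strip()

print(strip_think("<think>\nC2 asks for advice.\n</think>\nR: TYPE=text | TARGET=C2 | EFFORT=medium"))
```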
To disable thinking add this to your Modelfile template:
<|im_start|>assistant
<think>
</think>
Recommended Inference Settings
| Parameter | Value |
|---|---|
| temperature | 0.3 (deterministic routing) |
| top_p | 0.95 |
| top_k | 20 |
| repeat_penalty | 1.1 |
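These settings can also be passed per request through Ollama's native REST API instead of baking them into the Modelfile. A standard-library-only sketch; the `route` helper name is an assumption, and it assumes the system prompt is already set in the Modelfile:

```python
import json
import urllib.request

# Recommended sampling options, packaged for Ollama's /api/chat endpoint.
OPTIONS = {
    "temperature": 0.3,   # near-deterministic routing
    "top_p": 0.95,
    "top_k": 20,
    "repeat_penalty": 1.1,
}

def route(window: str, host: str = "http://localhost:11434") -> str:
    """Send one fresh H1-H5/C1-C3 window to the router via Ollama's
    REST API. Each call is a new context, keeping the router stateless."""
    payload = {
        "model": "auroic-router-0.6b",
        "messages": [{"role": "user", "content": window}],
        "stream": False,
        "options": OPTIONS,
    }
    req = urllib.request.Request(
        f"{host}/api/chat",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]
```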
Model Details
| Property | Value |
|---|---|
| Base model | unsloth/Qwen3-0.6B |
| Total parameters | 616M |
| Trainable parameters | 20.2M (3.28%) |
| Fine-tuning framework | Unsloth + TRL SFTTrainer |
| LoRA rank | r=32, alpha=32 |
| Target modules | q/k/v/o/gate/up/down proj |
| Epochs | 2 |
| Effective batch size | 16 (batch=2, grad_accum=8) |
| Learning rate | 2e-4 cosine |
| Max sequence length | 2048 |
| Training samples | 9,300 |
| Final training loss | 0.667 |
| Quantization | Q8_0 GGUF |
| Hardware trained on | NVIDIA T4 16GB (Google Colab) |
| Training time | ~44 minutes |
Dataset — v4
9,300 samples of Indian group chat routing scenarios:
| Split | Count |
|---|---|
| Normal windows | 7,000 |
| @BOT mention windows | 1,500 |
| Filler/sparse windows | 800 |
| Type | Count | % |
|---|---|---|
| text | 4,042 | 43.5% |
| ignore | 1,942 | 20.9% |
| react | 1,688 | 18.2% |
| media | 1,628 | 17.5% |
Language distribution: 58.8% Hinglish, 23% English, 18.2% mixed
Think blocks annotated by qwen/qwen3-next-80b-a3b-instruct on NVIDIA NIM — hard tier (39.3%) gets full reasoning, medium tier (32.8%) gets short reasoning, easy tier (27.8%) gets no think block.
Dataset and generation pipeline: github.com/kaushalkrishnax/auroic-router
Performance
| Metric | Value |
|---|---|
| Warm inference (CPU) | ~2-3 seconds |
| Cold start | ~5 seconds |
| Hardware tested | Intel i3-6100U 2.3GHz, 12GB DDR4 |
| Format compliance | 100% on test set |
Known Limitations
- Optimized for Hinglish and Indian English group chat patterns
- Stateless by design — do not pass conversation history between calls
- Not designed for factual QA, coding assistance, or long-form generation
License
Apache-2.0 — inherits from Qwen3 base model.
Author
Kaushal Krishna
- GitHub: kaushalkrishnax
- Project: github.com/kaushalkrishnax/auroic
- Router research: github.com/kaushalkrishnax/auroic-router