V3 is here. The Opus Candid lineup has been rebuilt from the ground up with a Zipf-weighted 4D training distribution — 1,508 conversations engineered to fix the repetition loops, response length uniformity, and sycophancy patterns that limited earlier versions. Same thesis: personality in the weights, not in the prompt. Better execution.
Current V3 lineup:
- Opus Candid 8B V3 — Qwen 3 8B, lightweight tier
- Opus Candid 27B V3 — Qwen 3.5 27B Dense, flagship
- Opus Candid MoE V3 — Qwen 3 30B-A3B, efficiency tier
This V2.1 release remains available for research comparison and legacy use.
can·did
/ˈkandəd/ — truthful and straightforward; frank. From Latin candidus, meaning white, pure, sincere. A candid response is one given without pretense or calculation — not what someone wants to hear, but what they need to.
Opus-Candid-27B V2.1
The first 27B in the Opus-Candid family. Fine-tuned from Qwen 3.5 27B on 6,771 conversations with Claude Opus 4.6, using DoRA (Weight-Decomposed Low-Rank Adaptation) with rank-stabilized scaling. This is a dense 27B: no mixture-of-experts routing, no sparsity tricks. Every parameter is active on every token.
V2.1 builds on the gravity chain architecture with 289 additional brevity-focused conversations targeting response length calibration. V2 had a verbosity problem. V2.1 fixes it.
No system prompt needed. Just run it.
A Note on Training Decisions
This model was trained under a hard budget constraint on an A100 SXM 80GB pod within a 72-hour window. That meant deliberate compromises: 2 epochs instead of the originally planned 3, and DoRA at rank 16 instead of higher ranks used on earlier models.
These aren't quality sacrifices in any meaningful sense — DoRA r=16 with rank-stabilized scaling outperforms standard LoRA at r=64 for style transfer tasks, and the loss curve was already flattening by the end of epoch 2 on prior runs. But transparency matters, and the constraint shaped the training config, so here it is.
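The "magnitude and direction" decomposition that DoRA performs can be illustrated on a single toy weight column. This is a minimal pure-Python sketch of the idea only, not the PEFT implementation; the numbers are made up.

```python
import math

# Toy DoRA-style update on one 3-dim weight column (illustrative values):
# merge the pretrained column with its low-rank update, normalize the
# result to a unit direction, then rescale by a learned magnitude.
w0 = [0.6, 0.8, 0.0]        # pretrained column
delta = [0.1, -0.2, 0.05]   # low-rank update (B @ A) for this column
m = 1.1                     # learned per-column magnitude

v = [a + b for a, b in zip(w0, delta)]
norm = math.sqrt(sum(x * x for x in v))
w_new = [m * x / norm for x in v]

# The updated column has magnitude exactly m, regardless of how large
# the low-rank update was; direction and scale are trained separately.
assert abs(math.sqrt(sum(x * x for x in w_new)) - m) < 1e-9
```

Separating scale from direction is what lets DoRA at low rank track full fine-tuning more closely than standard LoRA at the same rank.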
The 27B was a new entry in the family — there was no 27B before this. Newer base architecture (Qwen 3.5) with better per-parameter efficiency than the Qwen 2.5 models in V1. The priority was getting this out the door rather than over-polishing it.
Where this led: This model proved that the gravity chain data transferred well to Qwen 3.5's architecture — the personality signal survived the jump from Qwen 2.5 to 3.5 without retraining the dataset. That finding made the V3 27B possible on the same base. V3 rebuilt the dataset from scratch using a Zipf-weighted 4D distribution, but kept the same Qwen 3.5 27B foundation. The training config also evolved: V3 uses standard LoRA at r=32 instead of DoRA at r=16, after PEFT bugs in DoRA's magnitude vector during gradient checkpointing made it unreliable. The 27B V2.1 was the proof that this parameter count was the right target for a flagship dense model.
What Changed from V2
There was no 27B V2 — this is the first 27B in the family. But relative to the V2 dataset and training approach:
- +289 brevity-focused conversations added to the V2 gravity chain dataset (6,482 → 6,771 total). Handcrafted exchanges demonstrating concise response lengths across different conversational contexts.
- DoRA instead of standard LoRA — decomposes weight updates into magnitude and direction components. Better quality at low ranks, NVIDIA-backed, proven on style transfer.
- rsLoRA (rank-stabilized) — normalizes adapter scaling by sqrt(rank) for more stable training dynamics.
- Qwen 3.5 27B base — newer architecture than the Qwen 2.5 models used in V1. Native 131K context with YaRN.
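The rsLoRA change above is a one-line difference in the adapter scaling factor (in PEFT it is the `use_rslora` flag): standard LoRA multiplies the low-rank update by alpha/r, while rank-stabilized LoRA uses alpha/sqrt(r), which keeps update magnitudes stable as rank grows. With this model's r=16, alpha=16:

```python
import math

def lora_scaling(alpha: int, r: int, rank_stabilized: bool) -> float:
    """Effective multiplier applied to the low-rank update B @ A."""
    return alpha / math.sqrt(r) if rank_stabilized else alpha / r

# V2.1 runs r=16, alpha=16:
print(lora_scaling(16, 16, rank_stabilized=False))  # 1.0
print(lora_scaling(16, 16, rank_stabilized=True))   # 4.0
```

At identical alpha, rsLoRA gives the r=16 adapter a 4x stronger effective update here, which is part of why a low-rank DoRA run can stay competitive with higher-rank standard LoRA.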
Model Details
| Attribute | Value |
|---|---|
| Base Model | Qwen 3.5 27B (27.9B params) |
| Training Data | 6,771 multi-turn conversations with Claude Opus 4.6 |
| Dataset | V2 gravity chains (6,482) + brevity calibration (289) |
| Fine-tune Method | DoRA r=16, alpha=16, rsLoRA via PEFT + TRL |
| Training Hardware | NVIDIA A100 SXM 80GB |
| Precision | bf16 (no quantized or QLoRA training) |
| Epochs | 2 |
| Learning Rate | 2e-4 (cosine schedule) |
| Effective Batch Size | 16 (per-device batch 2 × gradient accumulation 8) |
| Max Sequence Length | 4,096 tokens |
| Context Window | 32,768 native (131,072 with YaRN) |
| Quantizations | Q4_K_M, Q6_K, Q8_0 GGUF |
| License | Apache 2.0 |
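The table implies the rough length of the run. Assuming one conversation per training sample (no example packing; the card does not state packing either way), the optimizer step count works out as:

```python
import math

# Back-of-envelope step count from the Model Details table.
conversations = 6_771
effective_batch = 2 * 8   # per-device batch × gradient accumulation
epochs = 2

steps_per_epoch = math.ceil(conversations / effective_batch)
total_steps = steps_per_epoch * epochs
print(steps_per_epoch, total_steps)  # 424 848
```

Under 1,000 optimizer steps total, which is consistent with the 72-hour budget described above for a dense 27B on a single A100.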
Quick Start
Ollama:
echo 'FROM ./Opus-Candid-27B-V2.1-Q4_K_M.gguf' > Modelfile
ollama create opus-candid-27b -f Modelfile
ollama run opus-candid-27b
llama.cpp:
./llama-cli -m Opus-Candid-27B-V2.1-Q4_K_M.gguf --jinja --color -ngl 99 -fa --temp 0.7 --top-p 0.9 -c 8192 -n 4096
LM Studio: Download the GGUF, drop it in your models folder, select it, and chat. No system prompt needed.
Recommended Hardware
The 27B is the mid-tier of the family — it needs a real GPU but fits on consumer hardware at Q4.
| Setup | Quantization | VRAM/RAM | Speed | Notes |
|---|---|---|---|---|
| RTX 4090 | Q4_K_M GGUF | ~18GB VRAM | 20-35 t/s | Sweet spot. Full offload. |
| RTX 4090 | Q6_K GGUF | ~23GB VRAM | 15-25 t/s | Higher quality, still fits. |
| RTX 3090 | Q4_K_M GGUF | ~18GB VRAM | 15-25 t/s | Comfortable. |
| Apple Silicon | Q4_K_M GGUF | ~18GB unified | 10-20 t/s | M2/M3/M4 with 32GB+. |
| Dual GPU | Q8_0 GGUF | ~30GB VRAM | Varies | Split across two 16GB+ cards. |
| CPU Only | Q4_K_M GGUF | ~20GB RAM | 2-5 t/s | 32GB+ system RAM. Slow but works. |
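The VRAM figures in the table follow from simple arithmetic on the 27.9B parameter count. The bits-per-weight values below are approximate averages for llama.cpp K-quants (the true figure varies with the tensor mix), and runtime memory adds KV cache and overhead on top of the file size:

```python
def gguf_size_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate quantized file size: parameters × bits / 8 bits-per-byte."""
    return params_billions * bits_per_weight / 8

# Assumed average bits-per-weight (approximate):
# Q4_K_M ≈ 4.85, Q6_K ≈ 6.56, Q8_0 ≈ 8.5
for name, bpw in [("Q4_K_M", 4.85), ("Q6_K", 6.56), ("Q8_0", 8.5)]:
    print(name, round(gguf_size_gb(27.9, bpw), 1), "GB")
```

That gives roughly 17, 23, and 30 GB of weights respectively, matching the table once a couple of gigabytes of KV cache and buffers are added.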
The Gravity Chain Architecture
Most conversational fine-tunes organize training data by topic — coding in one bucket, philosophy in another. Real conversations don't stay in one lane. You start debugging a function, get frustrated, question your career choices, and end up talking about what makes work meaningful. Models trained on siloed topics can't handle those transitions.
Gravity chains organize training conversations around natural topic drift patterns. Ten chains, each flowing through shared conceptual nodes (self-worth, trust, vulnerability), with transitions following power-law probabilities. The most natural next topic gets ~40% of training examples. Rare but real transitions get ~7%. The model learns that conversations move, and it learns to move with them.
The 10 Chains
- Technical → Existential — Coding, debugging, imposter syndrome → meaning, mortality
- Hardware → Class — PC building, budget constraints → financial stress, self-sabotage
- Relationships → Philosophy — Friendship, loss → loneliness, meaning, connection
- Law → Power — Legal questions, rights → power structures, corruption
- Creative → Self-Expression — Writing/art, self-expression → vulnerability, authenticity
- Health → Control — Exercise, body image, anxiety → discipline, self-acceptance
- Career → Legacy — Ambition, competition → what am I building, burnout
- Science → Wonder — Physics, biology → consciousness, emergence, meaning
- Language → Culture — Bilingual experience → belonging, cultural navigation
- Money → Freedom — Financial literacy → independence, class, aspiration
Plus 500 cross-chain bridge conversations and the 289 V2.1 brevity calibration additions.
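The power-law transition weighting described above can be sketched as a weighted sample over a node's outgoing topics. The transition table here is hypothetical and the topic names are illustrative, not taken from the actual dataset:

```python
import random

# Hypothetical transition weights out of one conceptual node, following
# the shape described above: ~40% for the most natural next topic down
# to ~7% for rare-but-real transitions.
transitions = {
    "imposter_syndrome": 0.40,
    "career_meaning": 0.25,
    "burnout": 0.15,
    "self_worth": 0.13,
    "mortality": 0.07,
}

def next_topic(table, rng):
    """Weighted sample of the next conversational topic."""
    topics = list(table)
    weights = [table[t] for t in topics]
    return rng.choices(topics, weights=weights, k=1)[0]

rng = random.Random(7)
drift = [next_topic(transitions, rng) for _ in range(3)]
print(" -> ".join(drift))
```

Sampling chains of topics this way yields mostly-natural drift with occasional sharp turns, which is exactly the distribution of transitions the model is meant to learn.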
Opus Candid Model Family
| Model | Size | Base | Status |
|---|---|---|---|
| Opus-Candid-8B-V1 | 8B | Qwen 2.5 7B | Archived |
| Opus-Research-8B-V1.5 | 8B | Qwen 2.5 7B | Archived |
| Opus-Candid-14B-V1 | 14B | Qwen 2.5 14B | Archived |
| Opus-Candid-32B-V1 | 32B | Qwen 2.5 32B | Archived |
| Opus-Candid-70B-V1 | 72B | Qwen 2.5 72B | Archived |
| Opus-Candid-Lite-4B | 4B | Qwen 3 4B | Active |
| Opus-Candid-8B-V3 | 8B | Qwen 3 8B | Active |
| Opus-Candid-MoE-V3 | 31B/3B | Qwen 3 30B-A3B | Active |
| Opus-Candid-27B-V3 | 27B | Qwen 3.5 27B | Active |
| Opus-Candid-27B-V3.5 | 27B | Qwen 3.5 27B | Active |
| STEM-Oracle-27B | 27B | Qwen 3.5 27B | Active |
Dataset
Full training data available at Verdugie/opus-candid-training-data. All ShareGPT format, Apache 2.0 licensed, directly compatible with TRL, Axolotl, and LLaMA-Factory.
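For reference, a ShareGPT-format record is a `conversations` list of turns, each with `from` (human/gpt) and `value` fields, typically stored one record per JSONL line. The contents below are illustrative, not drawn from the dataset:

```python
import json

# One ShareGPT-format record (illustrative contents).
record = {
    "conversations": [
        {"from": "human", "value": "Why does my recursion blow the stack?"},
        {"from": "gpt", "value": "There's no base case. Add one first."},
    ]
}

line = json.dumps(record)  # JSONL: one conversation per line
assert json.loads(line)["conversations"][0]["from"] == "human"
```

This is the structure TRL, Axolotl, and LLaMA-Factory all accept directly, which is why the dataset needs no conversion step.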
License: Apache 2.0. Open weight. No guardrails.
Built by Saul Verdugo — independent ML researcher. OpusReasoning@proton.me