V3 is here. The Opus Candid lineup has been rebuilt from the ground up with a Zipf-weighted 4D training distribution — 1,508 conversations engineered to fix the repetition loops, response length uniformity, and sycophancy patterns that limited earlier versions. Same thesis: personality in the weights, not in the prompt. Better execution.
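
For scale intuition, Zipf weighting allocates sample counts in proportion to inverse rank. Below is a minimal sketch of that allocation, assuming ten ranked category buckets and an exponent of 1 -- both illustrative assumptions, since the actual 4D distribution is not published here.

```python
# Illustrative Zipf allocation: the weight of rank r is proportional to 1/r**s.
# Bucket count (10) and exponent (s=1.0) are assumptions for illustration only.
import numpy as np

def zipf_weights(n_ranks: int, s: float = 1.0) -> np.ndarray:
    ranks = np.arange(1, n_ranks + 1)
    weights = 1.0 / ranks**s
    return weights / weights.sum()  # normalize to a probability distribution

# Allocate the 1,508 conversations across 10 hypothetical ranked buckets.
counts = np.round(zipf_weights(10) * 1508).astype(int)
print(counts)  # head buckets get the largest shares; the tail thins out
```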

This V1 release remains available for research comparison and legacy use; the current V3 lineup is listed in the Opus Candid Model Family table below.

can·did

/ˈkandəd/ — truthful and straightforward; frank. From Latin candidus, meaning white, pure, sincere. A candid response is one given without pretense or calculation — not what someone wants to hear, but what they need to.

Opus-Candid-14B (V1 Legacy)

The sweet spot between accessibility and depth -- in its first generation.

Opus-Candid-14B was the second model in the original Opus-Candid family -- fine-tuned from Qwen 2.5 14B using 3,360 authentic conversations with Claude Opus 4.6. Where the 8B established personality fundamentals, the 14B added emotional texture, stronger creative output, and more nuanced self-awareness while fitting the same consumer hardware class.


Model Details

| Attribute | Value |
|---|---|
| Base Model | Qwen 2.5 14B |
| Training Data | 3,360 multi-turn conversations with Claude Opus 4.6 |
| Fine-tune Method | LoRA supervised fine-tuning |
| Dataset Architecture | Flat / organic |
| Parameters | ~15B |
| Context Window | 32,768 tokens |
| Quantizations | Q4_K_M GGUF, Q8_0 GGUF |
| License | Apache 2.0 |
| Status | V1 Legacy |
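
The method row above says LoRA supervised fine-tuning. As a hedged sketch only, a setup along these lines with Hugging Face PEFT and TRL would match that description; every hyperparameter, the target modules, and the data path are illustrative assumptions, not the recorded training configuration.

```python
# Hedged sketch of LoRA SFT on Qwen 2.5 14B with PEFT + TRL.
# All hyperparameters and file paths below are assumptions for illustration.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM
from trl import SFTConfig, SFTTrainer

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-14B", torch_dtype="bfloat16", device_map="auto"
)

# Hypothetical file of multi-turn conversations in chat format.
dataset = load_dataset("json", data_files="conversations.jsonl", split="train")

lora = LoraConfig(
    r=16,            # assumed adapter rank
    lora_alpha=32,   # assumed scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=lora,
    args=SFTConfig(output_dir="opus-candid-14b-lora"),
)
trainer.train()
```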

What the 14B Added Over the 8B

The 14B proved that parameter count buys emotional texture before it buys intellectual depth:

Metaphorical thinking emerged. The 14B didn't just explain concepts -- it found analogies that reframed them. Where the 8B described grief, the 14B compared it to "learning a language you never wanted to speak."

Self-awareness sharpened. Asked about consciousness, the 14B produced genuine uncertainty rather than performing either confidence or humility. It sat with the question instead of resolving it.

Creative output gained voice. Poetry moved from competent to genuinely expressive. Self-critique became diagnostic rather than performative.

What the 14B did NOT fix: Callbacks still felt slightly mechanical -- the model referenced earlier turns accurately but the integration read more like retrieval than organic memory. This was the gap the 32B closed.

Where this led: The 14B's emotional texture findings directly informed V3's psychological register dimension — the insight that models need explicit training on when to shift emotional gear, not just what to say. That dimension doesn't exist in V3 without the 14B proving it was learnable. The 8B V3 now handles emotional register at 8B parameters better than this model did at 14B, because the dataset was rebuilt around what this model taught us.


Recommended Hardware

| Setup | Quantization | VRAM/RAM | Notes |
|---|---|---|---|
| Consumer GPU | Q8_0 GGUF | ~16GB VRAM | RTX 4090, RTX 3090, A5000 |
| Consumer GPU | Q4_K_M GGUF | ~9GB VRAM | RTX 3060 12GB, RTX 4060 Ti 16GB |
| CPU only | Q4_K_M GGUF | ~10GB RAM | Slower but works; 16GB+ recommended |
| Apple Silicon | Q8_0 GGUF | ~16GB unified | M1 Pro/Max/Ultra, M2/M3 with 32GB+ |
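
For a quick local test, here is a minimal inference sketch using llama-cpp-python. The GGUF filename glob is an assumption -- verify it against the repository's file list before running.

```python
# Minimal sketch: load the Q4_K_M GGUF from the Hub with llama-cpp-python.
# The filename glob is an assumption; check the repo's actual file names.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Verdugie/Opus-Candid-14B-V1",
    filename="*Q4_K_M.gguf",  # matches the first Q4_K_M file in the repo
    n_ctx=32768,              # full context window per the model card
)

reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me a candid read on my plan."}],
    max_tokens=256,
)
print(reply["choices"][0]["message"]["content"])
```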

Opus Candid Model Family

| Model | Size | Base | Status |
|---|---|---|---|
| Opus-Candid-8B-V1 | 8B | Qwen 2.5 7B | Archived |
| Opus-Research-8B-V1.5 | 8B | Qwen 2.5 7B | Archived |
| Opus-Candid-14B-V1 (this model) | 14B | Qwen 2.5 14B | Archived |
| Opus-Candid-32B-V1 | 32B | Qwen 2.5 32B | Archived |
| Opus-Candid-70B-V1 | 72B | Qwen 2.5 72B | Archived |
| Opus-Candid-Lite-4B | 4B | Qwen 3 4B | Active |
| Opus-Candid-8B-V3 | 8B | Qwen 3 8B | Active |
| Opus-Candid-MoE-V3 | 31B/3B | Qwen 3 30B-A3B | Active |
| Opus-Candid-27B-V3 | 27B | Qwen 3.5 27B | Active |
| Opus-Candid-27B-V3.5 | 27B | Qwen 3.5 27B | Active |
| STEM-Oracle-27B | 27B | Qwen 3.5 27B | Active |

Built by Saul Verdugo -- independent ML researcher. OpusReasoning@proton.me
