Darwin-27B-Opus

Quality: quantized (mxfp4, 4.449 bpw)

Darwin-27B-Opus is a 27-billion-parameter language model produced entirely through evolutionary crossbreeding of pretrained models: no additional training, no new data, and only a single GPU required. On GPQA Diamond, a graduate-level scientific reasoning benchmark of 198 expert-crafted questions in physics, chemistry, and biology, Darwin-27B-Opus scores 86.9%, surpassing its progenitor Qwen3.5-27B (85.5%) by 1.4 percentage points and placing 5th on the HuggingFace GPQA leaderboard.
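The card does not publish the search procedure, but training-free evolutionary crossbreeding of this kind is typically a loop of crossover, evaluation, and selection over layer-recombined candidates. A minimal toy sketch, with a hypothetical fitness function and floats standing in for layer tensors (not the actual Darwin pipeline):

```python
import random

# Toy "models": each is a list of per-layer weights (floats stand in for tensors).
N_LAYERS = 8
father = [1.0] * N_LAYERS   # stand-in for the Qwen3.5-27B parent's layers
mother = [2.0] * N_LAYERS   # stand-in for the Opus-distilled parent's layers

def crossover(a, b):
    """Per-layer recombination: each layer is inherited from one parent."""
    return [random.choice((la, lb)) for la, lb in zip(a, b)]

def fitness(model):
    """Hypothetical score; a real pipeline would run a benchmark like GPQA."""
    return sum(model)  # toy objective: prefer the mother's layers

def evolve(parents, generations=10, population=8):
    best = max(parents, key=fitness)
    for _ in range(generations):
        candidates = [crossover(best, random.choice(parents))
                      for _ in range(population)]
        challenger = max(candidates, key=fitness)
        if fitness(challenger) > fitness(best):
            best = challenger   # selection: keep only improvements
    return best

random.seed(0)
child = evolve([father, mother])
print(fitness(child) >= max(fitness(father), fitness(mother)))  # True
```

Because selection only ever accepts improvements, the child's fitness is monotonically at least that of the better parent, which is the property the GPQA numbers above illustrate.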

Model Specifications

| Specification | Value |
|---|---|
| Architecture | Qwen3.5 Dense (GatedDeltaNet) |
| Parameters | 27B |
| Hidden Size | 4096 |
| Intermediate Size | 17408 |
| Layers | 64 |
| Context Length | 262,144 (extensible to 1M via YaRN) |
| Precision | BF16 |
| Languages | 201 |
| Thinking Mode | Enabled |

Parent Models

| Role | Model | Contribution |
|---|---|---|
| Father (Structure) | Qwen/Qwen3.5-27B | Foundation architecture, native reasoning, 201-language support |
| Mother (Knowledge) | Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled | Claude 4.6 Opus structured reasoning patterns via SFT distillation |

Both parents share an identical architecture (`hidden_size=4096`, `intermediate_size=17408`, 64 layers), ensuring 100% structural compatibility for FFN crossbreeding.
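Identical FFN dimensions are what make direct layer exchange possible: matching parameter names and shapes mean a layer's weights can be copied across wholesale. A shape-level sketch of the compatibility check and a per-layer FFN swap, using hypothetical Qwen-style parameter names and tuples in place of real tensors:

```python
# State dicts modeled as {parameter name: tensor shape}.
HIDDEN, INTERMEDIATE, LAYERS = 4096, 17408, 64

def ffn_keys(layer):
    # Hypothetical Qwen-style FFN parameter names for one decoder layer.
    base = f"model.layers.{layer}.mlp"
    return {
        f"{base}.gate_proj.weight": (INTERMEDIATE, HIDDEN),
        f"{base}.up_proj.weight":   (INTERMEDIATE, HIDDEN),
        f"{base}.down_proj.weight": (HIDDEN, INTERMEDIATE),
    }

father_sd = {k: v for i in range(LAYERS) for k, v in ffn_keys(i).items()}
mother_sd = dict(father_sd)  # identical architecture -> identical shapes

def compatible(a, b):
    """FFN crossbreeding requires matching keys with matching shapes."""
    return a.keys() == b.keys() and all(a[k] == b[k] for k in a)

def crossbreed_ffn(a, b, layers_from_b):
    """Take the FFN weights of the listed layers from b, the rest from a."""
    child = dict(a)
    for i in layers_from_b:
        for k in ffn_keys(i):
            child[k] = b[k]
    return child

assert compatible(father_sd, mother_sd)   # 100% structural compatibility
child_sd = crossbreed_ffn(father_sd, mother_sd, layers_from_b=range(32, 64))
```

The same check fails immediately for parents with different hidden or intermediate sizes, which is why architectural identity is a hard prerequisite here.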


Source

This model was converted to MLX format from FINAL-Bench/Darwin-27B-Opus using mlx-vlm version 0.4.4.
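The mxfp4 quantization noted at the top is a microscaling 4-bit floating-point format: weights are stored as 4-bit values in small groups that share one 8-bit scale, so the effective rate lands a little above 4 bits per weight. A back-of-envelope sketch, assuming the 32-element group size of the OCP MX specification (the remaining overhead up to this card's 4.449 bpw would come from tensors kept in higher precision):

```python
# Effective bits-per-weight for group-quantized formats:
# each weight costs `bits`, plus a shared scale amortized over the group.
def effective_bpw(bits, group_size, scale_bits=8):
    return bits + scale_bits / group_size

mx4 = effective_bpw(bits=4, group_size=32)
print(mx4)  # 4.25, below the model's overall 4.449 bpw
```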
