🧬 Darwin Family: Zero Gradient Steps, GPQA Diamond 88.89%
How far can we push LLM reasoning *without* training?
Our team at VIDRAFT submitted this paper to Daily Papers yesterday, and it's currently #3. Huge thanks to everyone who upvoted. Sharing the core ideas below.
Darwin Family is a training-free evolutionary merging framework. It recombines the weight spaces of existing LLM checkpoints, with zero gradient-based training, and reaches frontier-level reasoning (a minimal merging sketch follows the highlights below).
- 🏆 Darwin-28B-Opus: GPQA Diamond 88.89%
- 💸 Zero gradient steps: not a single B200 or H200 hour needed
- 🧬 Consistent gains across the 4B–35B scale range
- 🔀 Cross-architecture breeding between Transformer and Mamba families
- 🔁 Stable recursive multi-generation evolution
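To ground the "zero gradient steps" claim, here is a minimal sketch of weight-space merging between two parent checkpoints. It assumes architecture-identical parents and a single interpolation coefficient; Darwin's actual recombination is evolutionary, per-component, and reaches across architectures, and the checkpoint ids below are placeholders, not the paper's parents.

```python
from transformers import AutoModelForCausalLM

def merge_state_dicts(sd_a, sd_b, alpha=0.5):
    """Linearly interpolate two state dicts: no gradients, no training."""
    return {name: (1.0 - alpha) * w_a + alpha * sd_b[name]
            for name, w_a in sd_a.items()}

# Placeholder checkpoint ids, not the actual Darwin parents.
model_a = AutoModelForCausalLM.from_pretrained("parent-checkpoint-a")
model_b = AutoModelForCausalLM.from_pretrained("parent-checkpoint-b")

# The "child" is produced purely by weight-space arithmetic:
# zero gradient steps, zero GPU training hours.
child_sd = merge_state_dicts(model_a.state_dict(), model_b.state_dict())
model_a.load_state_dict(child_sd)
```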
# Three Core Mechanisms
① 14-dim Adaptive Merge Genome: fine-grained recombination at both the component level (Attention / FFN / MLP / LayerNorm / Embedding) and the block level, expanding the search space of prior evolutionary merging.
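As a rough illustration of component-level genes (not the paper's actual 14-gene layout, which isn't spelled out here), the sketch below routes each parameter to one interpolation coefficient by component class:

```python
# Illustrative component classes keyed on common parameter-name substrings.
COMPONENT_KEYS = {
    "attn":  ("attn", "attention"),
    "ffn":   ("ffn", "feed_forward"),
    "mlp":   ("mlp",),
    "norm":  ("norm", "layernorm"),
    "embed": ("embed",),
}

def component_of(param_name):
    lowered = param_name.lower()
    for comp, keys in COMPONENT_KEYS.items():
        if any(k in lowered for k in keys):
            return comp
    return "other"

def genome_merge(sd_a, sd_b, genome):
    """genome: dict mapping component class -> interpolation weight in [0, 1]."""
    merged = {}
    for name, w_a in sd_a.items():
        alpha = genome.get(component_of(name), 0.5)
        merged[name] = (1.0 - alpha) * w_a + alpha * sd_b[name]
    return merged

# Example genome an evolutionary search might propose (values illustrative):
genome = {"attn": 0.8, "ffn": 0.3, "mlp": 0.3, "norm": 0.5, "embed": 0.1, "other": 0.5}
```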
② MRI-Trust Fusion: we diagnose each layer's reasoning contribution via an **MRI (Model Reasoning Importance)** signal and fuse it with evolutionary search through a **learnable trust parameter**. Trust the diagnostic too much and search collapses onto it; ignore it and search becomes inefficient. Darwin learns the balance from data.
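A hedged sketch of the trust-weighted objective this describes; the blending rule and the trust value are illustrative, since the paper learns trust from data rather than fixing it by hand:

```python
def fused_score(eval_fitness, mri_alignment, trust):
    """Blend task fitness with agreement to the MRI diagnostic.

    trust -> 1: search follows the diagnostic (risks collapsing onto it);
    trust -> 0: search ignores the diagnostic (risks inefficient search).
    """
    return (1.0 - trust) * eval_fitness + trust * mri_alignment

# Ranking two candidate merges under the fused objective
# (all numbers, including trust, are illustrative):
candidates = [
    {"fitness": 0.71, "mri": 0.60},
    {"fitness": 0.69, "mri": 0.90},
]
trust = 0.4
best = max(candidates, key=lambda c: fused_score(c["fitness"], c["mri"], trust))
```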
③ Native Entropy Gating (NEG): we're thrilled to release Darwin-9B-NEG, a 9B-parameter reasoning model that embeds an architecturally internalised sense of self-confidence directly into the transformer via our proprietary NEG technology.
With only 9 billion parameters and 1× inference cost, Pure NEG jumps +12.63 %p over the same model without NEG. Going all-in with ensemble refinement pushes it to 84.34 %, surpassing the published Qwen3.5-9B leaderboard score (81.7 %) by +2.64 %p.
🔬 What makes NEG different from Multi-Turn Iteration (MTI)?
Classical MTI needs 3–8× extra inference passes. NEG instead lives INSIDE the single decoding loop. Two tiny modules ride with the transformer: NEG-Head predicts per-token entropy from the last hidden state, and NEG-Gate conditionally restricts the top-k choice when confidence is low. The gate activates on only 4.36 % of tokens, so it is essentially free at inference time.
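A minimal sketch of one NEG-style decoding step, assuming NEG-Head is a small linear head and NEG-Gate is a threshold on its entropy prediction; the module size, threshold, and k values are all illustrative, not the released configuration:

```python
import torch
import torch.nn.functional as F

class NEGHead(torch.nn.Module):
    """Tiny head that predicts per-token entropy from the last hidden state."""
    def __init__(self, hidden_size):
        super().__init__()
        self.proj = torch.nn.Linear(hidden_size, 1)

    def forward(self, hidden):
        # softplus keeps the predicted entropy non-negative
        return F.softplus(self.proj(hidden)).squeeze(-1)

def neg_gate_step(logits, hidden, neg_head,
                  entropy_threshold=2.0, k_open=50, k_restricted=5):
    """One decoding step: shrink top-k only when predicted entropy is high
    (i.e., confidence is low); otherwise sample from the open top-k."""
    predicted_entropy = neg_head(hidden)                # (batch,)
    gate_on = bool(predicted_entropy.max() > entropy_threshold)
    k = k_restricted if gate_on else k_open
    top_vals, top_idx = logits.topk(k, dim=-1)          # (batch, k)
    probs = F.softmax(top_vals, dim=-1)
    sampled = torch.multinomial(probs, num_samples=1)   # (batch, 1)
    return top_idx.gather(-1, sampled)                  # next-token ids
```

Gating the whole batch on the max predicted entropy is a simplification for brevity; a per-sequence gate would check each row independently.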
✨ Key differentiators
• Architecturally internalised: the model file *is* the feature
• 1× inference cost (vs. 3–8× for MTI)
• Drop-in with vLLM / SGLang / TGI / transformers, no extra engine (usage sketch below)
• +12.63 %p reasoning gain at zero latency overhead
• Single-file deployment, Apache 2.0 licensed
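A usage sketch with plain transformers; the Hub repo id below is an assumption based on the names in this post, so verify it against the actual release:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "VIDRAFT/Darwin-9B-NEG"  # assumed repo id; check the actual release
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

inputs = tok("Question: ...", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tok.decode(out[0], skip_special_tokens=True))
```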