**Quantization:** 8-bit, group size 32 (8.643 bpw)
Darwin-31B-Opus is a reasoning-enhanced model created by merging google/gemma-4-31B-it (Father) and TeichAI/gemma-4-31B-it-Claude-Opus-Distill (Mother) using the Darwin V6 engine.
Darwin V6 diagnoses both parent models at the tensor level before merging, assigning an independent optimal ratio to each of the 1,188 tensors. This is fundamentally different from conventional merging tools that apply a single uniform ratio across all tensors.
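The per-tensor idea can be sketched as follows. This is a toy illustration, not the Darwin V6 algorithm: the diagnostic heuristic (`diagnose_ratio`) and all names here are hypothetical stand-ins for whatever analysis the engine actually performs; the only point is that each tensor gets its own interpolation ratio instead of one global value.

```python
import numpy as np

def diagnose_ratio(t_father: np.ndarray, t_mother: np.ndarray) -> float:
    """Hypothetical per-tensor diagnostic: weight the mother tensor by
    how far it has drifted from the father, normalized and clipped to
    [0, 1]. Darwin V6's real diagnostic is not documented here."""
    drift = np.linalg.norm(t_mother - t_father)
    scale = np.linalg.norm(t_father) + 1e-8
    return float(np.clip(drift / scale, 0.0, 1.0))

def merge(father: dict, mother: dict) -> dict:
    """Linearly interpolate each tensor with its own ratio, rather than
    applying one uniform ratio across the whole state dict."""
    merged = {}
    for name, t_f in father.items():
        t_m = mother[name]
        r = diagnose_ratio(t_f, t_m)  # independent ratio per tensor
        merged[name] = (1.0 - r) * t_f + r * t_m
    return merged
```

Tensors the two parents agree on get ratio 0 and pass through unchanged, while divergent tensors lean toward the mother model; a conventional merge would instead fix `r` once for all 1,188 tensors.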
| Attribute | Value |
|---|---|
| Architecture | Gemma 4 Dense (hybrid attention: sliding window + global) |
| Parameters | 31B |
| Precision | BF16 |
| Context | 256,072 tokens |
| Languages | 140+ |
| Thinking | Chain-of-thought via `enable_thinking=True` |
| License | Apache 2.0 |
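The hybrid attention listed above mixes full-attention (global) layers with sliding-window layers, where each token attends only to a fixed-size local window of preceding tokens. A minimal sketch of the two causal mask patterns (the window size below is illustrative, not this model's actual value):

```python
import numpy as np

def causal_mask(n: int) -> np.ndarray:
    """Global causal attention: token i sees every token j <= i."""
    return np.tril(np.ones((n, n), dtype=bool))

def sliding_window_mask(n: int, window: int) -> np.ndarray:
    """Sliding-window causal attention: token i sees only the last
    `window` tokens, i.e. positions j with i - window < j <= i."""
    i = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    return (j <= i) & (j > i - window)
```

Sliding-window layers keep attention cost linear in sequence length, which is what makes a 256K context tractable; the interleaved global layers preserve long-range information flow.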
| Role | Model | Characteristics |
|---|---|---|
| Father | google/gemma-4-31B-it | Gemma 4 Dense 31B, multimodal, 256K context, LMArena 1452 (open model #3) |
| Mother | TeichAI/gemma-4-31B-it-Claude-Opus-Distill | Claude 4.6 Opus high-effort reasoning distillation, code/science/analysis |
This model was converted to MLX format from FINAL-Bench/Darwin-31B-Opus using mlx-vlm version 0.4.4.
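To run the converted weights locally, the usual mlx-vlm workflow should apply. The flags below follow mlx-vlm's standard CLI and may differ between versions; `<this-repo-id>` stands in for this repository's Hub id, and the prompt is just an example:

```shell
pip install mlx-vlm

# Generate with the quantized MLX weights
python -m mlx_vlm.generate \
  --model <this-repo-id> \
  --prompt "Summarize the trade-offs of 8-bit quantization." \
  --max-tokens 256
```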
**Base model:** google/gemma-4-31B-it (quantized to 8-bit)