Hugging Face
ginipick
701 followers · 156 following
AI & ML interests
None yet
Recent Activity
- updated a Space 1 day ago: ginipick/ai-news-daily
- published a Space 1 day ago: ginipick/ai-news-daily
- reacted to SeaWolf-AI's post 1 day ago
Why This Matters: David Defeats Goliath
MODEL: https://huggingface.co/FINAL-Bench/Darwin-4B-David
SPACE: https://huggingface.co/spaces/FINAL-Bench/Darwin-4B-david

We're releasing Darwin-4B-David, the first second-generation model in the Darwin Opus family. By evolving an already-evolved model, it achieves 85.0% on GPQA Diamond, surpassing both its original ancestor (58.6%) and even gemma-4-31B (84.3%), with just 4.5B parameters.

Second-Generation Evolution
Most merges start from a base model and produce a single offspring. Darwin-4B-David breaks this pattern. The Father (Darwin-4B-Opus) was already evolved from gemma-4-E4B-it with Claude Opus reasoning distillation, making it a Gen-1 model. The Mother (DavidAU's DECKARD-Expresso-Universe) brings Unsloth deep tuning across 5 in-house datasets, with thinking mode enabled by default. Crossbreeding these two produced the first Gen-2 Darwin model.

Darwin V6's Model MRI scanned both parents across all 42 layers, assigning an independent optimal ratio to each layer. The Mother's creativity and Korean-language hotspot (Layers 22-25, weight 0.95) was absorbed almost entirely, while the Father's reasoning core (Layers 30-40, weight 0.48) was preserved. This is "Merge = Evolve" applied recursively: evolution of evolution.

Benchmarks
Darwin-4B-David scores 85.0% on GPQA Diamond (+26.4 percentage points over the original 58.6%), evaluated generatively with maj@8 (8 generations per question, majority vote), the Epoch AI prompt format, thinking mode enabled, and 50 sampled questions. On ARC-Challenge (25-shot, loglikelihood), both models score 64.93%; this is expected, since loglikelihood scoring doesn't capture thinking-mode reasoning differences.

Why This Matters
gemma-4-31B (30.7B parameters) scores 84.3%. Darwin-4B-David surpasses it at 1/7th the size with no training and no RL, just 45 minutes of MRI-guided DARE-TIES on one H100. The name "David" honors the Mother's creator, DavidAU, and evokes David vs. Goliath.
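The per-layer weighting the post describes can be sketched roughly as follows. This is a hypothetical illustration, not the Darwin V6 pipeline: real DARE-TIES additionally drops and rescales delta weights and resolves sign conflicts, and `mother_ratio` / `merge_layer` are invented names. Only the layer ranges and ratios (Layers 22-25 at 0.95 toward the Mother; Layers 30-40 at 0.48 toward the Father, i.e. 0.52 toward the Mother) come from the post; the 0.5 default is an assumption.

```python
def mother_ratio(layer: int, default: float = 0.5) -> float:
    """Blend ratio toward the Mother model for one layer (assumed values)."""
    if 22 <= layer <= 25:       # creativity / Korean-language hotspot
        return 0.95
    if 30 <= layer <= 40:       # Father's reasoning core kept at weight 0.48
        return 1.0 - 0.48
    return default              # assumed fallback for unlisted layers

def merge_layer(father_w, mother_w, layer: int):
    """Linearly interpolate one layer's weights (flat lists of floats here)."""
    r = mother_ratio(layer)
    return [(1.0 - r) * f + r * m for f, m in zip(father_w, mother_w)]

# Layer 23 sits in the Mother hotspot, so her weights dominate the blend.
merged = merge_layer([1.0, 0.0], [0.0, 1.0], layer=23)
```

In a real merge the same ratio would be applied tensor-by-tensor within each transformer block rather than to flat float lists.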
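The maj@8 scoring mentioned in the Benchmarks paragraph is plain majority voting over sampled generations: ask the model the same question 8 times and keep the most frequent answer. A minimal sketch, with the extracted answer letters standing in for full model generations:

```python
from collections import Counter

def maj_at_k(answers):
    """Majority vote over k sampled answers; ties go to the first-seen answer."""
    return Counter(answers).most_common(1)[0][0]

# 5 of 8 sampled generations agree on "B", so "B" is the scored answer.
voted = maj_at_k(["B", "B", "C", "B", "A", "B", "C", "B"])
```

A question counts as correct when the voted answer matches the reference, which is how a 58.6% pass@1-style score can rise under maj@8 when the model's modal answer is right more often than any single sample.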
Organizations
ginipick

ginipick's models (11)
Sort: Recently updated
ginipick/Qwen-Image-Edit-Rapid-AIO · Text-to-Image · Updated Nov 2, 2025 · 1
ginipick/GLM-4.6 · Text Generation · 357B · Updated Nov 2, 2025 · 11
ginipick/neutts-air · Text-to-Speech · 0.7B · Updated Nov 2, 2025 · 12 · 1
ginipick/MiniMax-M2 · Text Generation · 229B · Updated Nov 2, 2025 · 12
ginipick/PaddleOCR-VL · Image-Text-to-Text · 1.0B · Updated Nov 2, 2025 · 9
ginipick/DeepSeek-OCR · Image-Text-to-Text · 3B · Updated Nov 2, 2025 · 8
ginipick/Gemma-3-R1984-4B · Image-Text-to-Text · 4B · Updated Apr 22, 2025 · 8 · 8
ginipick/QwQ-32B-NF4 · Text Generation · 33B · Updated Mar 21, 2025 · 9 · 4
ginipick/wan-lora-cat · Text-to-Video · Updated Mar 16, 2025
ginipick/c-bag · Updated Mar 13, 2025
ginipick/flux-lora-eric-cat · Text-to-Image · Updated Dec 2, 2024 · 113 · 80