clemsail committed
Commit 21fede5 · verified · Parent(s): 7e22190

docs: add base vs tuned bench comparison

Files changed (1): README.md +19 -0
README.md CHANGED
@@ -126,3 +126,22 @@ LoRA weights: **apache-2.0** — see License chain table above for derivation ra
 ## Related
 
 See the full [Ailiance-fr LoRA collection](https://huggingface.co/Ailiance-fr).
+
+
+## Bench comparison (2026-05-11)
+
+### Base model (Devstral-Small-2-24B-MLX-4bit) capability
+
+| Task | Score | Notes |
+|---|---:|---|
+| GSM8K-CoT flex EM | **0.96** | W3 lm-eval-harness (--limit 100) |
+| ARC-Easy acc / acc_norm | **0.80 / 0.75** | |
+| MMLU-Pro Computer Science | **0.64** | |
+
+Source: <https://github.com/ailiance/ailiance/tree/main/output/lm-eval-base-2026-05-11>
+
+### This LoRA (tuned) — bench PENDING
+
+Will include kicad-sch / iact-bench validators + W3 lm-eval delta. See spec for
+methodology:
+<https://github.com/ailiance/ailiance-bench/blob/main/docs/superpowers/specs/2026-05-11-kicad-sch-gap-design.md>
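The base-model numbers in the table above come from lm-eval-harness. A minimal sketch of a comparable invocation follows; the model identifier, exact task names, and output path are assumptions for illustration, not taken from this commit:

```shell
# Hypothetical lm-eval-harness run approximating the benchmark above.
# The pretrained= model id, task list, and output path are assumptions.
lm_eval \
  --model hf \
  --model_args pretrained=mistralai/Devstral-Small-2505 \
  --tasks gsm8k_cot,arc_easy,mmlu_pro \
  --limit 100 \
  --output_path output/lm-eval-base-2026-05-11
```

`--limit 100` evaluates only the first 100 examples per task, matching the note in the table; scores at that limit carry more variance than full-set runs.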