Ailiance – Devstral-Small-2-24B-BF16 kicad-pcb (fullseq) LoRA
LoRA adapter fine-tuned on mistralai/Devstral-Small-2-24B-Instruct-2512 for kicad-pcb tasks.
Variant: trained with full-sequence loss for stronger schema adherence.
Maintained by Ailiance, a French AI org publishing EU AI Act-aligned LoRA adapters and datasets.
ATTRIBUTION AUDIT COMPLETED
The training dataset Ailiance-fr/mascarade-kicad-dataset went through a full Stack Exchange attribution audit (2026-05-11): 61 samples (2.3%) carry per-sample URL+author+post_id attribution; 169 samples are flagged `not_found_on_se` (likely synthetic); 2 413 samples (91%) are LLM-synthetic. Audit report: `docs/audit_mascarade_se_attribution.md` in electron-bench.
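If you want to inspect or filter the corpus by attribution status yourself, a minimal sketch with the `datasets` loader is below. The column names (`attribution_status` and its values) are illustrative assumptions based on the audit description above; check the dataset card for the actual schema.

```python
from datasets import load_dataset

# Load the training corpus named in this card.
ds = load_dataset("Ailiance-fr/mascarade-kicad-dataset", split="train")

# Hypothetical column name: the audit describes per-sample URL+author+post_id
# attribution and a not_found_on_se flag, but the exact field names are not
# confirmed here -- verify against the dataset card before relying on them.
attributed = ds.filter(lambda row: row.get("attribution_status") == "attributed")
flagged = ds.filter(lambda row: row.get("attribution_status") == "not_found_on_se")

print(len(attributed), "samples with per-sample SE attribution")
print(len(flagged), "samples flagged not_found_on_se")
```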
Quick start (MLX)
```python
from mlx_lm import load, generate

model, tokenizer = load(
    "mistralai/Devstral-Small-2-24B-Instruct-2512",
    adapter_path="Ailiance-fr/devstral-kicad-pcb-fullseq-lora",
)
print(generate(model, tokenizer, prompt="..."))
```
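For the instruct-tuned base, wrapping the request with the tokenizer's chat template usually improves adherence. A minimal sketch follows; the prompt content and `max_tokens` value are illustrative, and it assumes the mlx-lm tokenizer exposes the underlying Hugging Face chat template.

```python
from mlx_lm import load, generate

model, tokenizer = load(
    "mistralai/Devstral-Small-2-24B-Instruct-2512",
    adapter_path="Ailiance-fr/devstral-kicad-pcb-fullseq-lora",
)

# Illustrative kicad-pcb style request (the adapter was trained on kicad-pcb tasks).
messages = [
    {"role": "user", "content": "Generate a minimal kicad_pcb file with a 2-layer board outline."}
]

# apply_chat_template is delegated to the underlying Hugging Face tokenizer.
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

print(generate(model, tokenizer, prompt=prompt, max_tokens=512))
```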
Training
| Hyperparameter | Value |
|---|---|
| Base model | mistralai/Devstral-Small-2-24B-Instruct-2512 |
| Method | LoRA via mlx-lm |
| Rank | 16 |
| Scale | 2.0 |
| Alpha | 32 |
| Max seq length | 16384 |
| Iterations | 1000 |
| Optimizer | Adam, LR 1e-5 |
| Hardware | Apple M3 Ultra 512 GB |
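Under the common LoRA convention where the effective scaling factor is alpha divided by rank, the Rank, Alpha, and Scale rows above are mutually consistent. A quick arithmetic check (the convention itself is an assumption, not something confirmed by the mlx-lm trainer docs):

```python
# Consistency check of the hyperparameter table above, assuming the common
# LoRA convention scale = alpha / rank (assumption, not confirmed by mlx-lm docs).
rank = 16
alpha = 32
scale = alpha / rank
assert scale == 2.0  # matches the "Scale" row in the table
```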
Training data lineage
| Role | Dataset | License |
|---|---|---|
| Primary corpus | Ailiance-fr/mascarade-kicad-dataset | cc-by-sa-4.0 |
For per-sample provenance and attribution status, consult the dataset card.
Benchmark roadmap
This LoRA has not yet been evaluated through electron-bench (the current
pipeline supports only the gemma-4-E4B base). Training used the standard
mlx-lm LoRA trainer; the full hyperparameters are listed in the Training
table above.
Planned evaluations:
- Perplexity on the validation split of the training data
- Functional benchmark on devstral-specific tasks
- Comparison vs base mistralai/Devstral-Small-2-24B-Instruct-2512 (see the sketch below)
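A minimal sketch of the planned base-vs-LoRA comparison with mlx-lm, generating from the same prompt with and without the adapter. The prompt and decoding settings are illustrative; the real benchmark prompts will come from the validators referenced in the bench section below.

```python
from mlx_lm import load, generate

BASE = "mistralai/Devstral-Small-2-24B-Instruct-2512"
ADAPTER = "Ailiance-fr/devstral-kicad-pcb-fullseq-lora"

# Illustrative kicad-pcb evaluation prompt (not from the actual benchmark set).
prompt = "Write a kicad_pcb footprint definition for a 0603 resistor."

# Base model without the adapter. On memory-constrained machines, run the two
# loads in separate processes rather than keeping both 24B models resident.
base_model, tok = load(BASE)
base_out = generate(base_model, tok, prompt=prompt, max_tokens=512)

# Same base with the LoRA adapter applied.
tuned_model, tok = load(BASE, adapter_path=ADAPTER)
tuned_out = generate(tuned_model, tok, prompt=prompt, max_tokens=512)

print("--- base ---\n", base_out)
print("--- +LoRA ---\n", tuned_out)
```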
Track progress: ailiance-bench issues.
For reference benchmarks on the gemma-4-E4B base, see the
base-vs-LoRA matrix.
License chain
| Component | License |
|---|---|
| Base model (mistralai/Devstral-Small-2-24B-Instruct-2512) | apache-2.0 |
| Training data (Ailiance-fr/mascarade-kicad-dataset) | cc-by-sa-4.0 |
| LoRA adapter (this repo) | cc-by-sa-4.0 |
The most restrictive license in the chain (CC-BY-SA-4.0, share-alike) propagates to derivatives.
EU AI Act compliance
- Article 53(1)(c): training data licenses preserved (per-dataset cards declare upstream licenses).
- Article 53(1)(d): training data summary – see the upstream dataset cards on Ailiance-fr.
- GPAI Code of Practice (July 2025): base mistralai/Devstral-Small-2-24B-Instruct-2512 released under apache-2.0.
- No web scraping by Ailiance, no licensed data, no PII.
- Upstream Stack Exchange content (where applicable) is CC-BY-SA-4.0 and propagates to this adapter.
License
LoRA weights: cc-by-sa-4.0 – see the License chain table above for the derivation rationale.
Citation
```bibtex
@misc{ailiance_devstral_kicad_pcb_fullseq_2026,
  author    = {Ailiance},
  title     = {Ailiance -- Devstral-Small-2-24B-BF16 kicad-pcb (fullseq) LoRA},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/Ailiance-fr/devstral-kicad-pcb-fullseq-lora}
}
```
Related
See the full Ailiance-fr LoRA collection.
Bench comparison (2026-05-11)
Base model (Devstral-Small-2-24B-MLX-4bit) capability
| Task | Score | Notes |
|---|---|---|
| GSM8K-CoT flex EM | 0.96 | W3 lm-eval-harness (--limit 100) |
| ARC-Easy acc / acc_norm | 0.80 / 0.75 | |
| MMLU-Pro Computer Science | 0.64 | |
Source: https://github.com/ailiance/ailiance/tree/main/output/lm-eval-base-2026-05-11
This LoRA (tuned) – bench PENDING
The tuned run will include the kicad-sch / iact-bench validators plus a W3 lm-eval delta against the base. See the spec for methodology: https://github.com/ailiance/ailiance-bench/blob/main/docs/superpowers/specs/2026-05-11-kicad-sch-gap-design.md