---
license: apache-2.0
base_model: mistralai/Devstral-Small-2-24B-Instruct-2512
library_name: peft
tags:
- mlx
- lora
- peft
- ailiance
- devstral
- typescript
language:
- en
- fr
pipeline_tag: text-generation
---
# Ailiance — Devstral-Small-2-24B-Instruct TypeScript LoRA
LoRA adapter fine-tuned on `mistralai/Devstral-Small-2-24B-Instruct-2512` for TypeScript tasks.
Maintained by Ailiance, a French AI organization publishing EU AI Act-aligned LoRA adapters and datasets.
## Quick start (MLX)
```python
from mlx_lm import load, generate

# Load the base model with this adapter applied on top.
model, tokenizer = load(
    "mistralai/Devstral-Small-2-24B-Instruct-2512",
    adapter_path="Ailiance-fr/devstral-typescript-lora",
)
print(generate(model, tokenizer, prompt="..."))
```
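Devstral is an instruction-tuned model, so prompts usually work better wrapped in the chat template. A minimal sketch continuing from the `load` call above (the prompt content and `max_tokens` value are illustrative):

```python
# Wrap the request in the instruct chat template before generating.
messages = [
    {"role": "user", "content": "Write a TypeScript type guard for a `User` interface."},
]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
print(generate(model, tokenizer, prompt=prompt, max_tokens=256))
```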
## Training
| Hyperparameter | Value |
|---|---|
| Base model | mistralai/Devstral-Small-2-24B-Instruct-2512 |
| Method | LoRA via mlx-lm |
| Rank | 16 |
| Scale | 2.0 |
| Alpha | 32 |
| Max seq length | 2048 |
| Iterations | 500 |
| Optimizer | Adam, LR 1e-5 |
| Hardware | Apple M3 Ultra 512 GB |
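As a sanity check on the table, the usual LoRA convention scales the adapter update by alpha / rank, and 32 / 16 = 2.0 matches the Scale row. A minimal sketch of that parameterization (illustrative dimensions, not mlx-lm internals):

```python
import mlx.core as mx

rank, alpha = 16, 32
scale = alpha / rank            # 2.0, matching the Scale row

d_in, d_out = 4096, 4096        # illustrative layer width
W = mx.random.normal((d_out, d_in))         # frozen base weight
A = mx.random.normal((rank, d_in)) * 0.01   # trainable down-projection
B = mx.zeros((d_out, rank))                 # trainable up-projection, zero-init

x = mx.random.normal((d_in,))
y = W @ x + scale * (B @ (A @ x))           # LoRA forward: y = Wx + scale * BAx
```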
## Training data lineage
Derived from the internal eu-kiki / mascarade curation. All upstream samples are synthetic, permissively licensed, or generated from Apache-2.0 base resources. See the Ailiance-fr catalog for related cards.
## Benchmark roadmap
This LoRA has not yet been evaluated through electron-bench (the current pipeline supports only the gemma-4-E4B base). Training used the standard mlx-lm LoRA trainer (rank 16, alpha 32, scale 2.0, Adam, LR 1e-5, 500 iterations); full hyperparameters are in the Training table above.
Planned evaluations:

- Perplexity on the validation split of the training data (see the sketch after this list)
- Functional benchmark on Devstral-specific tasks
- Comparison against the base model `mistralai/Devstral-Small-2-24B-Instruct-2512`
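A minimal sketch of the planned perplexity check, assuming the repo ids from the quick start; the `val.txt` path and the token-level scoring below are illustrative, not the final ailiance-bench harness:

```python
import math
import mlx.core as mx
from mlx_lm import load

model, tokenizer = load(
    "mistralai/Devstral-Small-2-24B-Instruct-2512",
    adapter_path="Ailiance-fr/devstral-typescript-lora",
)

def perplexity(text: str) -> float:
    """exp of the mean next-token negative log-likelihood."""
    ids = mx.expand_dims(mx.array(tokenizer.encode(text)), 0)   # (1, T)
    logits = model(ids[:, :-1])                                 # predicts tokens 1..T-1
    logprobs = logits - mx.logsumexp(logits, axis=-1, keepdims=True)
    targets = mx.expand_dims(ids[:, 1:], -1)                    # (1, T-1, 1)
    nll = -mx.take_along_axis(logprobs, targets, axis=-1)
    return math.exp(nll.mean().item())

print(perplexity(open("val.txt").read()))  # hypothetical validation file
```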
Track progress in the ailiance-bench issue tracker.
For reference benchmarks on the gemma-4-E4B base, see the base-vs-LoRA matrix.
## License chain
| Component | License |
|---|---|
| Base model (`mistralai/Devstral-Small-2-24B-Instruct-2512`) | apache-2.0 |
| Training data (internal Ailiance curation: synthetic + permissive sources) | apache-2.0 |
| LoRA adapter (this repo) | apache-2.0 |
All upstream components are Apache-2.0 or MIT licensed, so the LoRA adapter inherits permissive terms (see the Stack Exchange caveat in the EU AI Act section below).
## EU AI Act compliance
- Article 53(1)(c): training data licenses preserved (per-dataset cards declare upstream licenses).
- Article 53(1)(d): training data summary — see upstream dataset cards on Ailiance-fr.
- GPAI Code of Practice (July 2025): base `mistralai/Devstral-Small-2-24B-Instruct-2512` is released under apache-2.0.
- No web scraping by Ailiance, no licensed data, no PII.
- Upstream Stack Exchange content (where applicable) is CC-BY-SA-4.0 and propagates to this adapter.
## License
LoRA weights: apache-2.0. See the License chain table above for the derivation rationale.
## Citation
```bibtex
@misc{ailiance_devstral_typescript_2026,
  author    = {Ailiance},
  title     = {Ailiance — Devstral-Small-2-24B-Instruct TypeScript LoRA},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/Ailiance-fr/devstral-typescript-lora}
}
```
## Related
See the full Ailiance-fr LoRA collection.
## Bench comparison (2026-05-11)
### Base model (Devstral-Small-2-24B-MLX-4bit) capability
| Task | Score | Notes |
|---|---|---|
| GSM8K-CoT flex EM | 0.96 | W3 lm-eval-harness (`--limit 100`) |
| ARC-Easy acc / acc_norm | 0.80 / 0.75 | |
| MMLU-Pro Computer Science | 0.64 | |
Source: https://github.com/ailiance/ailiance/tree/main/output/lm-eval-base-2026-05-11
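For rough reproduction, a hedged sketch using the public lm-eval-harness Python API; the card's numbers came from an internal W3 pipeline on the 4-bit MLX checkpoint, so the `hf` backend and task names below are assumptions:

```python
# pip install lm-eval
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",  # assumption: HF backend instead of the internal W3/MLX pipeline
    model_args="pretrained=mistralai/Devstral-Small-2-24B-Instruct-2512",
    tasks=["gsm8k_cot", "arc_easy", "mmlu_pro"],  # task names are assumptions
    limit=100,   # matches the --limit 100 noted in the table
)
print(results["results"])
```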
### This LoRA (tuned): benchmark pending
The tuned run will include kicad-sch / iact-bench validators plus a W3 lm-eval delta against the base. Methodology spec: https://github.com/ailiance/ailiance-bench/blob/main/docs/superpowers/specs/2026-05-11-kicad-sch-gap-design.md