---
license: apache-2.0
base_model: mistralai/Devstral-Small-2-24B-Instruct-2512
library_name: peft
tags:
  - mlx
  - lora
  - peft
  - ailiance
  - devstral
  - cpp
language:
  - en
  - fr
pipeline_tag: text-generation
---

# Ailiance — Devstral-Small-2-24B-Instruct cpp LoRA

LoRA adapter fine-tuned on mistralai/Devstral-Small-2-24B-Instruct-2512 for C++ (cpp) coding tasks.

Maintained by Ailiance, a French AI organization publishing EU AI Act-aligned LoRA adapters and datasets.

## Quick start (MLX)

```python
from mlx_lm import load, generate

# Load the base model and apply this LoRA adapter on top.
model, tokenizer = load(
    "mistralai/Devstral-Small-2-24B-Instruct-2512",
    adapter_path="Ailiance-fr/devstral-cpp-lora",
)

print(generate(model, tokenizer, prompt="..."))
```
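
For chat-style requests, apply the tokenizer's chat template before calling `generate`. A minimal sketch, assuming the default mlx-lm tokenizer wrapper; the example prompt and `max_tokens` value are illustrative, not from this card:

```python
# Sketch: format a single-turn C++ request with the chat template (illustrative prompt).
messages = [
    {"role": "user", "content": "Write a C++ function that reverses a std::string in place."}
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(generate(model, tokenizer, prompt=prompt, max_tokens=512))
```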

## Training

| Hyperparameter | Value |
|---|---|
| Base model | mistralai/Devstral-Small-2-24B-Instruct-2512 |
| Method | LoRA via mlx-lm |
| Rank | 16 |
| Scale | 2.0 |
| Alpha | 32 |
| Max seq length | 2048 |
| Iterations | 500 |
| Optimizer | Adam, LR 1e-5 |
| Hardware | Apple M3 Ultra, 512 GB |
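
A run with these hyperparameters would typically be launched through the mlx-lm LoRA trainer CLI. The sketch below is hedged and not the exact command used for this adapter: the data and adapter paths are placeholders, and rank 16 / scale 2.0 are normally set in a YAML config passed via `--config` rather than on the command line.

```bash
# Sketch of an mlx-lm LoRA run matching the table above (paths are placeholders).
python -m mlx_lm.lora \
  --model mistralai/Devstral-Small-2-24B-Instruct-2512 \
  --train \
  --data ./data \
  --iters 500 \
  --learning-rate 1e-5 \
  --max-seq-length 2048 \
  --val-batches 5 \
  --steps-per-eval 200 \
  --adapter-path ./adapters
```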

## Training data lineage

Derived from the internal eu-kiki / mascarade curation. All upstream samples are synthetic, permissively licensed, or generated from Apache-2.0 base resources. See the Ailiance-fr catalog for related cards.

## Training metrics

Extracted from the training log (`batch_eu_kiki_v2.log`):

| Metric | Value |
|---|---|
| Final train loss | 0.603 |
| Final validation loss | 0.401 |
| Val loss reduction | 1.779 (from 2.180 to 0.401) |
| Iterations completed | 500 |
| Trainable parameters | 0.224% (279.708M / 125025.989M) |

Validation loss is measured every 200 iterations on a held-out split of the training corpus (val_batches=5, mlx-lm LoRA trainer).

## Benchmark on production tasks

This LoRA has not yet been evaluated through the electron-bench functional benchmark pipeline. The current pipeline targets the gemma-4-E4B base only; support for the devstral base is on the roadmap (open issues).

For a comparable reference matrix on a related domain (electronics, embedded, KiCad), see the Gemma champions:

| Adapter | Highlights |
|---|---|
| Ailiance-fr/gemma-4-E4B-eukiki-lora | +55 P1-DSL, +42 P1-PCB, +25 SPICE, +38 P3 |
| Ailiance-fr/gemma-4-E4B-mascarade-lora | +48 P3 extraction |

Full base-vs-LoRA matrix: compare_base_vs_lora.md.

## License chain

| Component | License |
|---|---|
| Base model (mistralai/Devstral-Small-2-24B-Instruct-2512) | apache-2.0 |
| Training data (internal Ailiance curation: synthetic + permissive sources) | apache-2.0 |
| LoRA adapter (this repo) | apache-2.0 |

All upstream components are Apache 2.0 / MIT — LoRA inherits permissive terms.

## EU AI Act compliance

- Article 53(1)(c): training data licenses preserved (per-dataset cards declare upstream licenses).
- Article 53(1)(d): training data summary — see upstream dataset cards on Ailiance-fr.
- GPAI Code of Practice (July 2025): base mistralai/Devstral-Small-2-24B-Instruct-2512 released under apache-2.0.
- No web scraping by Ailiance, no licensed data, no PII.
- Upstream Stack Exchange content (where applicable) is CC-BY-SA-4.0 and propagates to this adapter.

## License

LoRA weights: apache-2.0 — see License chain table above for derivation rationale.

## Citation

```bibtex
@misc{ailiance_devstral_cpp_2026,
  author    = {Ailiance},
  title     = {Ailiance — Devstral-Small-2-24B-Instruct cpp LoRA},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/Ailiance-fr/devstral-cpp-lora}
}
```

## Related

See the full Ailiance-fr LoRA collection.