---
license: apache-2.0
base_model: mistralai/Devstral-Small-2-24B-Instruct-2512
tags:
  - lora
  - peft
  - mlx
  - eu-kiki
  - eu-ai-act
  - art-52
  - art-53
language:
  - fr
  - en
library_name: peft
---

# eu-kiki-devstral-cpp-lora

LoRA adapter for mistralai/Devstral-Small-2-24B-Instruct-2512, part of the eu-kiki project, a 100% EU-sovereign multi-model LLM serving pipeline. EU AI Act Article 52/53 compliant (limited risk, GPAI fine-tune).

## 1. Model identity

| Field | Value |
|---|---|
| Adapter name | eu-kiki-devstral-cpp-lora |
| Base model | mistralai/Devstral-Small-2-24B-Instruct-2512 |
| Adapter method | LoRA (rank 16, alpha 32, dropout 0.05) |
| Target modules | q_proj, k_proj, v_proj, o_proj (attention only) |
| Precision | BF16 |
| Domain | cpp |
| Training records | 2,850 (curated, deduplicated) |
| License | Apache-2.0 (matches base model) |
| Risk class | Limited risk (Art. 52). Not safety-critical. |
| System operator | L'Électron Rare (clemsail), Saillant Clément |
| Live demo | https://ml.saillant.cc |
| Source repo | https://github.com/L-electron-Rare/eu-kiki |

## 2. Performance evaluation (Art. 53(1)(d))

HumanEval (custom Studio scorer; EvalPlus extra tests were not run because the EvalPlus sandbox is Linux-only): base model 87.20 → base + cpp adapter 85.98, a −1.22 pt delta. A rigorous HumanEval+ Δ requires re-scoring the samples on Linux, as sketched below.

Full benchmark results, methodology, `env.json`, and `rerun.sh` for each measurement: `eval/results/SUMMARY.md` · `MODEL_CARD.md`.
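
A minimal re-scoring sketch using the public EvalPlus API follows; `model_completion` is a hypothetical stand-in for whatever generation harness you use, not a function from this repo.

```python
# Hedged sketch of HumanEval+ re-scoring with the public EvalPlus API
# (its execution sandbox is Linux-only).
from evalplus.data import get_human_eval_plus, write_jsonl

def model_completion(prompt: str) -> str:
    # Hypothetical placeholder: call the (fused) model here.
    raise NotImplementedError

problems = get_human_eval_plus()  # HumanEval tasks with EvalPlus extra tests
write_jsonl(
    "samples.jsonl",
    [
        {"task_id": task_id, "solution": model_completion(problem["prompt"])}
        for task_id, problem in problems.items()
    ],
)
# Then score inside the EvalPlus sandbox:
#   evalplus.evaluate --dataset humaneval --samples samples.jsonl
```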

## 3. Training data (Art. 53(1)(b)+(d))

The following sources were used to fine-tune this specific adapter. Per-record `_provenance` fields (source, SPDX license, record_idx, access_date) are present in the source dataset; see the system-level transparency record for the full audit trail.

| Source | HF / URL | SPDX License | Records used |
|---|---|---|---|
| CommitPackFT | bigcode/commitpackft | MIT | 1,500 |
| ESP-IDF examples | espressif/esp-idf | Apache-2.0 | 700 |
| STM32Cube examples | STMicroelectronics/STM32CubeF4 | BSD-3-Clause | 450 |
| Arduino examples | arduino/Arduino | CC0-1.0 | 200 |

Total records used for this LoRA: 2,850.

System-level inventory (all 35+ domains, full SPDX, scraping manifests, PDF-pipeline DSM Art. 4 TDM compliance): `docs/eu-ai-act-transparency.md`.
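
As a concrete illustration, here is a minimal audit sketch over the per-record `_provenance` fields named above; the one-record-per-line JSONL layout and the field names used (`license`, `source`) are assumptions, not the project's actual schema.

```python
# Minimal provenance-audit sketch. Assumes one JSON record per line with a
# `_provenance` dict as described above; the exact layout is an assumption.
import json

ALLOWED_SPDX = {"Apache-2.0", "MIT", "BSD-3-Clause", "CC0-1.0"}

def audit(dataset_path: str) -> None:
    with open(dataset_path, encoding="utf-8") as f:
        for line_no, line in enumerate(f, start=1):
            prov = json.loads(line)["_provenance"]
            assert prov["license"] in ALLOWED_SPDX, (
                f"record {line_no} ({prov['source']}): "
                f"unexpected license {prov['license']}"
            )
```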

### 3.1 Copyright policy (Art. 53(1)(c))

- All HF-traced datasets carry permissive licenses (Apache-2.0, MIT, CC-BY-*, BSD); copyleft compatibility was verified via an SPDX matrix.
- PDF datasheets (when used) are processed under the EU DSM Directive Article 4 TDM exception: robots.txt respected, SHA-256 manifests kept (a manifest-check sketch follows this list), dedicated audit at `docs/pdf-compliance-report.md`.
- Opt-out / removal requests: open an issue on the source repo or email the system operator (see §7).
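
A manifest check of the kind described in the second bullet might look like the following; the manifest schema (a JSON list of `{"path": ..., "sha256": ...}` entries) is assumed, not the project's actual format.

```python
# Hypothetical SHA-256 manifest check for PDF-derived corpora.
import hashlib
import json
from pathlib import Path

def stale_entries(manifest_path: str) -> list[str]:
    """Return paths whose current SHA-256 no longer matches the manifest."""
    manifest = json.loads(Path(manifest_path).read_text())
    return [
        entry["path"]
        for entry in manifest
        if hashlib.sha256(Path(entry["path"]).read_bytes()).hexdigest()
        != entry["sha256"]
    ]
```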

### 3.2 PII statement (Art. 10 + Art. 53(1)(d))

Training data was scanned with Microsoft Presidio + spaCy `en_core_web_lg` (2026-04-28) across all 35+ domain directories. One email address detected in the unrelated `traduction-tech` corpus was redacted before training. No high-signal PII (email, phone, credit card, SSN, IBAN) remains. Low-signal detections (PERSON, LOCATION, DATE_TIME) are common false positives in technical text and were left in place. Full report: `data/pii-scan-report.json` in the source repo.
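
A minimal reproduction sketch of that scan with the presidio-analyzer API; the entity list mirrors the high-signal types named above, while the exact entity set and thresholds used by the project are assumptions.

```python
# Minimal sketch of the scan described above with presidio-analyzer
# (the default NLP engine uses spaCy's en_core_web_lg).
from presidio_analyzer import AnalyzerEngine

HIGH_SIGNAL = ["EMAIL_ADDRESS", "PHONE_NUMBER", "CREDIT_CARD", "US_SSN", "IBAN_CODE"]
analyzer = AnalyzerEngine()

def high_signal_hits(text: str):
    # PERSON / LOCATION / DATE_TIME are deliberately excluded: they are
    # frequent false positives in technical text.
    return analyzer.analyze(text=text, language="en", entities=HIGH_SIGNAL)
```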

## 4. Training configuration

| Parameter | Value |
|---|---|
| Method | LoRA |
| Rank | 16 |
| Alpha | 32 |
| Dropout | 0.05 |
| Target modules | q_proj, k_proj, v_proj, o_proj |
| Precision | BF16 |
| Optimiser | AdamW |
| Learning rate | 1e-5 |
| Batch size × grad-accum | 1 × 4–8 |
| Framework | MLX (mlx_lm fork on Apple Silicon) |
| Hardware | Mac Studio M3 Ultra, 512 GB unified memory |
| Energy footprint | ≪ training a foundation model from scratch (LoRA is parameter-efficient by design) |
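
For readers working in the Hugging Face PEFT ecosystem rather than MLX, the table above maps onto a `LoraConfig` roughly as follows; this is an illustrative equivalent, not the actual training code (which used an mlx_lm fork).

```python
# Illustrative PEFT equivalent of the hyperparameters above; a mapping,
# not the real training configuration.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```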

## 5. Usage

```python
from mlx_lm import load
from mlx_lm.tuner.utils import linear_to_lora_layers
from huggingface_hub import snapshot_download

# Fetch the base weights and the adapter from the Hub.
base_path = snapshot_download("mistralai/Devstral-Small-2-24B-Instruct-2512")
adapter_path = snapshot_download("clemsail/eu-kiki-devstral-cpp-lora")

model, tokenizer = load(base_path)
# Upstream mlx_lm takes rank/scale/dropout (scale = alpha / rank = 32/16)
# plus the target-module keys for the four attention projections.
keys = [f"self_attn.{p}" for p in ("q_proj", "k_proj", "v_proj", "o_proj")]
linear_to_lora_layers(
    model, num_layers=32,
    config={"rank": 16, "scale": 2.0, "dropout": 0.0, "keys": keys},
)
# Overlay the LoRA weights; strict=False since the file holds adapter params only.
model.load_weights(f"{adapter_path}/adapters.safetensors", strict=False)
```
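
A quick smoke test after loading, assuming the weights applied cleanly; the prompt is arbitrary.

```python
# Uses mlx_lm's generate() on the adapter-overlaid model from the snippet above.
from mlx_lm import generate

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Write a C++ RAII wrapper for a POSIX file descriptor."}],
    add_generation_prompt=True,
    tokenize=False,
)
print(generate(model, tokenizer, prompt=prompt, max_tokens=256))
```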

Or fuse and serve as a self-contained checkpoint:

```bash
python -m mlx_lm fuse \
    --model mistralai/Devstral-Small-2-24B-Instruct-2512 \
    --adapter-path <adapter_path> \
    --save-path /tmp/eu-kiki-devstral-cpp-lora-fused \
    --dequantize
```
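
Once fused, the checkpoint is self-contained and loads without the adapter overlay step:

```python
# Load the fused checkpoint directly; no linear_to_lora_layers / adapter
# overlay is needed after fusing.
from mlx_lm import load

model, tokenizer = load("/tmp/eu-kiki-devstral-cpp-lora-fused")
```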

## 6. Limitations & out-of-scope use

- Not for safety-critical decisions (medical, legal, structural, life-safety, biometric).
- Not for high-stakes individual decisions (hiring, credit, law enforcement): such use would re-classify the system as high-risk under EU AI Act Art. 6 and trigger additional obligations.
- Hallucination occurs at typical instruction-tuned LLM levels; pair the model with a verifier or a human-in-the-loop for factual outputs.
- This LoRA is a fine-tune of the base model: it inherits all base-model limitations and biases (training-data cutoff, language coverage, refusal patterns).

## 7. Contact (Art. 53(1)(d))

| Subject | Contact |
|---|---|
| Operator | clemsail (L-electron-Rare on GitHub) |
| Issues / audit requests | https://github.com/L-electron-Rare/eu-kiki/issues |
| Base model PII / copyright | See the base model card on Hugging Face |
| Apertus PII / copyright | llm-privacy-requests@swiss-ai.org, llm-copyright-requests@swiss-ai.org |

## 8. EU AI Act compliance summary

| Article | Coverage |
|---|---|
| Art. 52 (transparency to users) | This card discloses the adapter's purpose, base model, fine-tune nature, and limitations |
| Art. 53(1)(a) (technical documentation) | This card + system-level MODEL_CARD.md |
| Art. 53(1)(b) (training data summary) | §3 above + system-level transparency.md §4 |
| Art. 53(1)(c) (copyright policy) | §3.1 above + DSM Art. 4 TDM compliance for PDF-derived corpora |
| Art. 53(1)(d) (evaluation summary) | §2 above + per-benchmark reproducible results in eval/results/SUMMARY.md |
| Art. 53(2) (open-source exemption) | All weights Apache-2.0, datasets traceable, no proprietary teacher used in deployed inference |
| Art. 55 (systemic risk) | Not applicable: no foundation model above 10²⁵ FLOPs was trained here; this is a LoRA fine-tune |

## 9. Citation

```bibtex
@misc{eu-kiki-2026,
  title  = {eu-kiki: EU-sovereign multi-model LLM serving with HF-traceable LoRA adapters},
  author = {Saillant, Clément},
  year   = {2026},
  url    = {https://github.com/L-electron-Rare/eu-kiki},
  note   = {Live demo: https://ml.saillant.cc}
}
```

## 10. Changelog

| Date | Change |
|---|---|
| 2026-05-06 | First HF release — Apache-2.0, EU AI Act self-contained model card v0.4.1 |