Refresh model card: license chain + DISCLOSURE bandeau v2
README.md (CHANGED)

@@ -1,227 +1,93 @@

**Previous version (removed):**

---
license: apache-2.0
base_model: mistralai/Devstral-Small-2-24B-Instruct-2512
tags:
- art-52
- art-53
- gpai-fine-tune
- pst-2025-07-24
language:
- en
- fr
---

# Ailiance-fr/devstral-python-lora

LoRA adapter for **mistralai/Devstral-Small-2-24B-Instruct-2512**, part of the [ailiance](https://github.com/ailiance/ailiance) project. Live demo: https://www.ailiance.fr.

> **EU AI Act compliance.** This card follows the **European Commission's
> *Template for the Public Summary of Training Content* for general-purpose
> AI models** (Art. 53(1)(d) of Regulation (EU) 2024/1689, published by the
> AI Office on 2025-07-24). Section numbering and field labels reproduce
> the official template. Where this card and the official template differ
> in wording, the **official template wins** — see the
> [AI Office page](https://digital-strategy.ec.europa.eu/en/library/explanatory-notice-and-template-public-summary-training-content-general-purpose-ai-models).

---

# 1. General information

## 1.1. Provider identification

| Field | Value |
|---|---|
| **Provider name and contact details** | Ailiance (Saillant Clément) — `clemsail` on Hugging Face — Issues: https://github.com/ailiance/ailiance/issues |
| **Authorised representative name and contact details** | Not applicable — provider is established within the European Union (France). |

## 1.2. Model identification

| Field | Value |
|---|---|
| **Versioned model name(s)** | `Ailiance-fr/devstral-python-lora` (this LoRA adapter, v0.4.2) |
| **Model dependencies** | This is a **fine-tune (LoRA, rank 16)** of the general-purpose AI model [`mistralai/Devstral-Small-2-24B-Instruct-2512`](https://huggingface.co/mistralai/Devstral-Small-2-24B-Instruct-2512). Refer to the base-model provider's PST for the underlying training summary. |
| **Date of placement of the model on the Union market** | 2026-05-06 |

## 1.3. Modalities, overall training data size and other characteristics

| Field | Value |
|---|---|
| **Modality** | ☒ Text ☐ Image ☐ Audio ☐ Video ☐ Other |
| **Training data size** (text bucket) | ☒ Less than 1 billion tokens ☐ 1 billion to 10 trillion tokens ☐ More than 10 trillion tokens |
| **Types of content** | Instruction-tuning pairs, technical text, source code, multilingual instruction templates (EU official languages where applicable). |
| **Approximate size in alternative units** | ≈ 0.6 M tokens (2 850 rows × ≈ 200 tokens/row, single pass). |
| **Latest date of data acquisition / collection for model training** | 11/2024 (StarCoder2 Self-Instruct release). The model is **not** continuously trained on new data after this date. |
| **Linguistic characteristics of the overall training data** | English (primary, instruction language); French (system-prompt context). No other natural languages in training rows. |
| **Other relevant characteristics / additional comments** | LoRA fine-tune (rank 16, alpha 32, dropout 0.05); only attention projections (`q_proj`, `k_proj`, `v_proj`, `o_proj`) are trained. Per-record `_provenance` (source, SPDX licence, `record_idx`, `access_date`) attached at the system level (see [`docs/eu-ai-act-transparency.md`](https://github.com/ailiance/ailiance/blob/main/docs/eu-ai-act-transparency.md) §4.4). Tokenizer: inherited from the base model. |
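
For readers who want the adapter shape in code, here is a minimal sketch of an equivalent Hugging Face PEFT configuration mirroring the last row. It is an illustration only: the project actually trains with `mlx-lm` (see the appendices), and `task_type` is an assumption.

```python
from peft import LoraConfig

# Illustrative only: mirrors the stated hyperparameters
# (rank 16, alpha 32, dropout 0.05, attention projections only).
# The project trains with mlx-lm, not PEFT; task_type is assumed.
config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```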

---

# 2. List of data sources

## 2.1. Publicly available datasets

**Have you used publicly available datasets to train the model?** ☒ Yes ☐ No

**Modality(ies) of the content covered:** ☒ Text ☐ Image ☐ Video ☐ Audio ☐ Other

**List of large publicly available datasets:**

| Dataset | URL | SPDX licence | Records | Notes |
|---|---|---|---:|---|
| StarCoder2 Self-Instruct (Python subset filtered by language keyword) | https://huggingface.co/datasets/bigcode/starcoder2-self-align | `Apache-2.0` | 2,850 | Public HF dataset; instruction-tuning pairs. |
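
A hedged sketch of the row-selection step ("Python subset filtered by language keyword"). The column name `instruction` and the exact keyword predicate are assumptions, not the provider's published filter:

```python
from datasets import load_dataset

# Assumed filtering logic; the provider's actual keyword predicate
# and the dataset's column names may differ.
ds = load_dataset("bigcode/starcoder2-self-align", split="train")
subset = ds.filter(lambda row: "python" in str(row.get("instruction", "")).lower())
print(f"{len(subset)} rows selected")  # the card reports 2,850
```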

## 2.2. Private non-publicly available datasets obtained from third parties

### 2.2.1. Datasets commercially licensed by rightsholders or their representatives

**Have you concluded transactional commercial licensing agreement(s) with rightsholder(s) or with their representatives?** ☐ Yes ☒ No

_(N/A — no commercial licensing agreements concluded.)_

### 2.2.2. Private datasets obtained from other third parties

**Have you obtained private datasets from third parties that are not licensed as described in Section 2.2.1?** ☐ Yes ☒ No

_(N/A — no private third-party datasets obtained.)_

## 2.3. Data crawled and scraped from online sources

**Were crawlers used by the provider or on the provider's behalf?** ☐ Yes ☒ No

_(N/A — no crawler used.)_

## 2.4. User data

**Was data from user interactions with the AI model (e.g. user input and prompts) used to train the model?** ☐ Yes ☒ No

_(N/A — no user interaction data used.)_

## 2.5. Synthetic data

_(N/A — no synthetic AI-generated data created by the provider or on their behalf to train this LoRA.)_

## 2.6. Other sources of data

**Have data sources other than those described in Sections 2.1 to 2.5 been used to train the model?** ☐ Yes ☒ No

_(N/A — no other data sources used.)_

---

# 3. Data processing aspects

## 3.1. Respect of reservation of rights from text and data mining exception or limitation

**Are you a Signatory to the Code of Practice for general-purpose AI models that includes commitments to respect reservations of rights from the TDM exception or limitation?** ☐ Yes ☒ No *(SME / individual provider; commitments equivalent in substance, see below.)*

**Measures implemented before model training to respect reservations of rights from the TDM exception or limitation:**

- **Public HF datasets (§2.1):** all carry permissive open licences (Apache-2.0, MIT, CC-BY-*, BSD); the SPDX matrix is verified per source. The licences explicitly authorise instructional / model-training use for the rows actually selected.
- **Web-scraped sources (§2.3):** prior to collection the provider verified `robots.txt`, `<meta name="robots" content="noai">`, `ai.txt`, and TDM-Reservation HTTP headers (a sketch of such a check follows this list). Any source returning a reservation under Article 4(3) of Directive (EU) 2019/790 was excluded from collection. Scraping was limited to authoritative vendor-controlled repositories (ESP-IDF, STM32Cube, Arduino, KiCad symbols/footprints) operating under permissive licences.
- **Vendor PDF datasheets (§2.2.2 where present):** processed under the EU DSM Directive Article 4 TDM exception. SHA-256 manifests and per-source legal-basis records are published in [`docs/pdf-compliance-report.md`](https://github.com/ailiance/ailiance/blob/main/docs/pdf-compliance-report.md).
- **Public copyright policy (Art. 53(1)(c)):** [`docs/eu-ai-act-transparency.md`](https://github.com/ailiance/ailiance/blob/main/docs/eu-ai-act-transparency.md). Removal requests are handled via the issue tracker on the source repository; the provider commits to remove disputed content within 30 days and re-train on the next release cycle.
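
A minimal sketch of the kind of pre-collection reservation check described in the second bullet. The `tdm-reservation` header follows the W3C TDM Reservation Protocol draft; the user-agent string is hypothetical, and a production check would also parse `ai.txt` and per-page `noai` meta tags as the card states:

```python
import urllib.robotparser
from urllib.parse import urlparse

import requests

def tdm_reserved(url: str) -> bool:
    """Rough pre-collection check for TDM rights reservations (Art. 4(3) DSM)."""
    resp = requests.head(url, timeout=10, allow_redirects=True)
    # TDM Reservation Protocol header ("tdm-reservation: 1" means reserved).
    if resp.headers.get("tdm-reservation") == "1":
        return True
    # An X-Robots-Tag carrying a "noai" directive.
    if "noai" in resp.headers.get("x-robots-tag", "").lower():
        return True
    # robots.txt disallow for a (hypothetical) crawler user agent.
    parts = urlparse(url)
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()
    return not rp.can_fetch("ailiance-crawler", url)
```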

## 3.2. Removal of illegal content

**General description of measures taken:**

- The provider does not crawl the open web at large; sources are restricted to curated public HF datasets and authoritative vendor repositories where the risk of illegal content (CSAM, terrorist content, IP-violating works) is structurally low.
- Personal data was screened with **Microsoft Presidio + en_core_web_lg** (2026-04-28) across all 35+ system-level domain directories; a sketch of the screening pass follows this list. **One** email address detected in the unrelated `traduction-tech` corpus was redacted before training. Full report: `data/pii-scan-report.json`.
- No special-category data (GDPR Art. 9: health, religion, sexual orientation, etc.) was intentionally collected; the PII scan also screens for identifiers that could enable special-category inference (none flagged).
- Licence compatibility is enforced via a per-source SPDX matrix; works under non-permissive licences are excluded.
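
A sketch of the Presidio screening pass named in the second bullet. The actual scan script and the schema of `data/pii-scan-report.json` live in the ailiance repo; the entity selection below is an assumption:

```python
from presidio_analyzer import AnalyzerEngine
from presidio_anonymizer import AnonymizerEngine

# The default AnalyzerEngine uses a spaCy NLP engine (en_core_web_lg),
# matching the tooling the card names. Entity list is assumed.
analyzer = AnalyzerEngine()
anonymizer = AnonymizerEngine()

def redact(text: str) -> str:
    findings = analyzer.analyze(
        text=text,
        language="en",
        entities=["EMAIL_ADDRESS", "PHONE_NUMBER", "PERSON"],
    )
    return anonymizer.anonymize(text=text, analyzer_results=findings).text
```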

## 3.3. Other information (optional)

- **Per-record provenance:** 49 956 system-level training records carry `_provenance.{source, license, record_idx, access_date}` fields, enabling per-record audit and removal (an illustrative record shape follows this list).
- **Compute footprint:** LoRA training updates ≈ 0.1–0.5 % of base-model parameters. **Estimated training compute for this LoRA ≪ 10²⁵ FLOPs** (a rough 6 × params × tokens estimate gives 6 × 24×10⁹ × 0.6×10⁶ ≈ 10¹⁷ FLOPs), well below the systemic-risk threshold of EU AI Act Art. 51. No proprietary teacher model is used in deployed inference.
- **Risk classification:** Limited risk (Art. 52). Not deployed in safety-critical contexts.
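
An illustrative record shape for the `_provenance` envelope named in the first bullet. Only the four `_provenance` field names come from the card; the values and the `messages` wrapper are placeholders (the normative schema is in `docs/eu-ai-act-transparency.md` §4.4):

```python
# Hypothetical example record; values are placeholders.
record = {
    "messages": [
        {"role": "system", "content": "..."},
        {"role": "user", "content": "..."},
        {"role": "assistant", "content": "..."},
    ],
    "_provenance": {
        "source": "bigcode/starcoder2-self-align",
        "license": "Apache-2.0",
        "record_idx": 1234,
        "access_date": "2024-11-01",
    },
}
```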

---
# Appendix A — Performance evaluation (Art. 53(1)(a))

See [`MODEL_CARD.md`](https://github.com/ailiance/ailiance/blob/main/MODEL_CARD.md).

# Appendix B — How to load (MLX)

```python
from mlx_lm import load
from mlx_lm.tuner.utils import linear_to_lora_layers
from huggingface_hub import snapshot_download

# Fetch the adapter weights from the Hub (repo id from §1.2).
adapter_path = snapshot_download("Ailiance-fr/devstral-python-lora")

# Load the base model, inject LoRA layers, then apply the adapter weights.
model, tokenizer = load("mistralai/Devstral-Small-2-24B-Instruct-2512")
linear_to_lora_layers(model, num_layers=32, config={"rank": 16, "alpha": 32})
model.load_weights(f"{adapter_path}/adapters.safetensors", strict=False)
```

To fuse the adapter into a standalone model:

```bash
python -m mlx_lm fuse \
    --model mistralai/Devstral-Small-2-24B-Instruct-2512 \
    --adapter-path <adapter_path> \
    --save-path /tmp/devstral-python-lora-fused \
    --dequantize
```

# Appendix C — Limitations

- Not for high-stakes individual decisions (hiring, credit, law enforcement) — that would re-classify the system as high-risk under EU AI Act Art. 6 and trigger additional obligations.
- Hallucination is present at typical instruction-tuned LLM levels; pair with a verifier or human-in-the-loop for factual outputs.
- The LoRA inherits all base-model limitations (training cutoff, language coverage, refusal patterns).

# Appendix D — Citation

```bibtex
@misc{ailiance_devstral_python_2026,
  author    = {Ailiance},
  title     = {Ailiance — Devstral-Small-2-24B-Instruct python LoRA},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/Ailiance-fr/devstral-python-lora}
}
```
# Appendix E — Changelog

| Date | Card version | Change |
|---|---|---|
| 2026-05-06 | v0.4.0 | Initial HF release |
| 2026-05-06 | v0.4.1 | Self-contained EU AI Act card (per-adapter dataset table, PII statement, contact) |
| 2026-05-06 | v0.4.2 | PST-aligned (Commission template structure, Sections §1–4) |
| 2026-05-06 | **v0.4.3** | **PST-verbatim** — section labels and field names reproduced from the official Commission template (PDF 2025-07-24, English version). |
## Validated in `ailiance/ailiance-bench` v0.2

This model is referenced in the [Ailiance benchmark suite](https://github.com/ailiance/ailiance-bench) (Phase 6 scoreboard, 7-task hardware-design evaluation). See the full [ailiance-bench README#scoreboard-lora-phase-6](https://github.com/ailiance/ailiance-bench#scoreboard-lora-phase-6--2026-05-11).

**New version (added):**

---
license: apache-2.0
base_model: mistralai/Devstral-Small-2-24B-Instruct-2512
library_name: peft
tags:
- mlx
- lora
- peft
- ailiance
- devstral
- python
language:
- en
- fr
pipeline_tag: text-generation
---
# Ailiance — Devstral-Small-2-24B-Instruct python LoRA
LoRA adapter fine-tuned on `mistralai/Devstral-Small-2-24B-Instruct-2512` for **python** tasks.

> Maintained by **Ailiance**, a French AI org publishing EU AI Act-aligned LoRA adapters and datasets.

## Quick start (MLX)

```python
from mlx_lm import load, generate
model, tokenizer = load(
    "mistralai/Devstral-Small-2-24B-Instruct-2512",
    adapter_path="Ailiance-fr/devstral-python-lora",
)
print(generate(model, tokenizer, prompt="..."))
```
## Training

| Hyperparameter | Value |
|---|---|
| Base model | `mistralai/Devstral-Small-2-24B-Instruct-2512` |
| Method | LoRA via `mlx-lm` |
| Rank | 16 |
| Scale | 2.0 (alpha / rank) |
| Alpha | 32 |
| Max seq length | 2048 |
| Iterations | 500 |
| Optimizer | Adam, LR 1e-5 |
| Hardware | Apple M3 Ultra 512 GB |
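
The rank/scale/alpha rows correspond to the `lora_parameters` block that `mlx-lm`'s LoRA tuner consumes. A hedged sketch follows; the project's actual config file is not published here, and the dropout value and `keys` list are carried over from the PST card in the previous version of this README:

```python
# Assumed mlx-lm LoRA config mirroring the table above.
lora_parameters = {
    "rank": 16,
    "scale": 2.0,  # alpha / rank = 32 / 16
    "dropout": 0.05,
    "keys": [
        "self_attn.q_proj",
        "self_attn.k_proj",
        "self_attn.v_proj",
        "self_attn.o_proj",
    ],
}
```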
## Training data lineage

Derived from the internal **eu-kiki / mascarade** curation. All upstream samples are synthetic, permissively licensed, or generated from Apache-2.0 base resources. See the [Ailiance-fr catalog](https://huggingface.co/Ailiance-fr) for related cards.
## License chain

| Component | License |
|---|---|
| Base model (`mistralai/Devstral-Small-2-24B-Instruct-2512`) | apache-2.0 |
| Training data: internal Ailiance curation (synthetic + permissive sources) | apache-2.0 |
| **LoRA adapter (this repo)** | **apache-2.0** |

_All upstream components are Apache-2.0 or MIT, so the LoRA inherits permissive terms (see the Stack Exchange caveat under EU AI Act compliance below)._
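
As a toy illustration of that inheritance argument (component names mirror the table; the real per-source SPDX matrix lives in the ailiance repo):

```python
# Toy licence gate: release only if every component in the chain is permissive.
PERMISSIVE = {"Apache-2.0", "MIT", "BSD-3-Clause"}

chain = {
    "base_model": "Apache-2.0",
    "training_data": "Apache-2.0",
    "lora_adapter": "Apache-2.0",
}

assert all(spdx in PERMISSIVE for spdx in chain.values()), "copyleft in chain"
```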
## EU AI Act compliance

- **Article 53(1)(c)**: training data licenses preserved (per-dataset cards declare upstream licenses).
- **Article 53(1)(d)**: training data summary — see upstream dataset cards on Ailiance-fr.
- **GPAI Code of Practice (July 2025)**: the base `mistralai/Devstral-Small-2-24B-Instruct-2512` is released under apache-2.0.
- **No web scraping by Ailiance**, **no commercially licensed third-party data**, **no PII**.
- Any upstream Stack Exchange content (where applicable) would be CC-BY-SA-4.0 and would propagate to this adapter.
## License
LoRA weights: **apache-2.0** — see License chain table above for derivation rationale.
## Citation
```bibtex
@misc{ailiance_devstral_python_2026,
  author    = {Ailiance},
  title     = {Ailiance — Devstral-Small-2-24B-Instruct python LoRA},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/Ailiance-fr/devstral-python-lora}
}
```
## Related
See the full [Ailiance-fr LoRA collection](https://huggingface.co/Ailiance-fr).