clemsail committed (verified) · Commit 969e9f4 · Parent: 05daa5c

docs: PST-aligned model card v0.4.2 (EU AI Act Art. 53(1)(d))

Files changed (1): README.md (+150 -77)
README.md CHANGED
@@ -9,79 +9,160 @@ tags:
  - eu-ai-act
  - art-52
  - art-53
  language:
- - fr
  - en
  library_name: peft
  ---

  # eu-kiki-devstral-python-lora

- LoRA adapter for **mistralai/Devstral-Small-2-24B-Instruct-2512**, part of the [eu-kiki](https://github.com/L-electron-Rare/eu-kiki) project, a 100 % EU-sovereign multi-model LLM serving pipeline. **EU AI Act Article 52 / 53 compliant** (limited risk, GPAI fine-tune).

- ## 1. Model identity

  | Field | Value |
  |---|---|
- | **Adapter name** | `eu-kiki-devstral-python-lora` |
  | **Base model** | [`mistralai/Devstral-Small-2-24B-Instruct-2512`](https://huggingface.co/mistralai/Devstral-Small-2-24B-Instruct-2512) |
- | **Adapter method** | LoRA (rank 16, alpha 32, dropout 0.05) |
- | **Target modules** | `q_proj`, `k_proj`, `v_proj`, `o_proj` (attention only) |
- | **Precision** | BF16 |
- | **Domain** | `python` |
- | **Training records** | 2,850 (curated, deduplicated) |
- | **License** | Apache-2.0 (matches base model) |
- | **Risk class** | **Limited risk** (Art. 52). Not safety-critical. |
- | **System operator** | L'Électron Rare (clemsail), Saillant Clément |
- | **Live demo** | https://ml.saillant.cc |
- | **Source repo** | https://github.com/L-electron-Rare/eu-kiki |

- ## 2. Performance evaluation (Art. 53(1)(d))

- **HumanEval+** (Linux EvalPlus, 164 problems, greedy, 1 sample): base 87.20 / 82.90 → fused +python 86.00 / 81.10. **Δ HE+ = −1.80 pts** vs base. Linux scoring on `kx6tm-23` (Proxmox PVE 6.17, official EvalPlus sandbox).

- Full bench results, methodology, env.json, and rerun.sh per measurement:
- [`eval/results/SUMMARY.md`](https://github.com/L-electron-Rare/eu-kiki/blob/main/eval/results/SUMMARY.md) · [`MODEL_CARD.md`](https://github.com/L-electron-Rare/eu-kiki/blob/main/MODEL_CARD.md).

- ## 3. Training data (Art. 53(1)(b)+(d))

- The following sources were used to fine-tune **this specific adapter**.
- Per-record `_provenance` fields (source, SPDX license, record_idx,
- access_date) are present in the source dataset; see the system-level
- transparency record for the full audit trail.

- | Source | HF / URL | SPDX License | Records used |
- |---|---|---|---:|
- | StarCoder2 Self-Instruct | `bigcode/starcoder2-self-align` | `Apache-2.0` | 2,850 |

- **Total records used for this LoRA:** 2,850.

- System-level inventory (all 35+ domains, full SPDX, scraping manifests,
- PDF-pipeline DSM Art. 4 TDM compliance):
- [`docs/eu-ai-act-transparency.md`](https://github.com/L-electron-Rare/eu-kiki/blob/main/docs/eu-ai-act-transparency.md).

- ### 3.1 Copyright policy (Art. 53(1)(c))

- - All HF-traced datasets carry permissive licenses (Apache-2.0, MIT,
-   CC-BY-*, BSD); copyleft compatibility verified via SPDX matrix.
- - PDF datasheets (when used) are processed under the EU DSM Directive
-   Article 4 TDM exception: robots.txt respected, SHA-256 manifests,
-   dedicated audit at
-   [`docs/pdf-compliance-report.md`](https://github.com/L-electron-Rare/eu-kiki/blob/main/docs/pdf-compliance-report.md).
- - Opt-out / removal requests: open an issue on the source repo or
-   email the system operator (see §7).

- ### 3.2 PII statement (Art. 10 + Art. 53(1)(d))

  Training data scanned with **Microsoft Presidio + en_core_web_lg**
  (2026-04-28) across all 35+ domain directories. **One** email address
  detected in the unrelated `traduction-tech` corpus was redacted before
- training. No high-signal PII (email, phone, credit card, SSN, IBAN)
- remains. Low-signal detections (PERSON, LOCATION, DATE_TIME) are
- common false positives in technical text and were left in place.
- Full report: `data/pii-scan-report.json` in the source repo.

- ## 4. Training configuration

  | Parameter | Value |
  |---|---|
@@ -89,16 +170,25 @@ Full report: `data/pii-scan-report.json` in the source repo.
  | Rank | 16 |
  | Alpha | 32 |
  | Dropout | 0.05 |
- | Target modules | `q_proj`, `k_proj`, `v_proj`, `o_proj` |
  | Precision | BF16 |
  | Optimiser | AdamW |
  | Learning rate | 1e-5 |
  | Batch size × grad-accum | 1 × 4–8 |
  | Framework | MLX (`mlx_lm` fork on Apple Silicon) |
  | Hardware | Mac Studio M3 Ultra 512 GB unified memory |
- | Energy footprint | ≪ training a foundation model from scratch (LoRA is parameter-efficient by design) |

- ## 5. Usage

  ```python
  from mlx_lm import load
@@ -123,7 +213,9 @@ python -m mlx_lm fuse \
  --dequantize
  ```

- ## 6. Limitations & out-of-scope use

  - **Not for safety-critical decisions** (medical, legal, structural,
    life-safety, biometric).
@@ -132,32 +224,12 @@ python -m mlx_lm fuse \
    high-risk and require additional obligations.
  - **Hallucination present** at typical instruction-tuned LLM levels;
    pair with a verifier or human-in-the-loop for factual outputs.
- - **LoRA is a fine-tune of the base model**: it inherits all base-model
-   limitations and biases (training-data cutoff, language coverage,
-   refusal patterns).
-
- ## 7. Contact (Art. 53(1)(d))

- | Subject | Contact |
- |---|---|
- | Operator | clemsail (`L-electron-Rare` on GitHub) |
- | Issues / audit requests | https://github.com/L-electron-Rare/eu-kiki/issues |
- | Base model PII / copyright | See the base model card on Hugging Face |
- | Apertus PII / copyright | `llm-privacy-requests@swiss-ai.org`, `llm-copyright-requests@swiss-ai.org` |
-
- ## 8. EU AI Act compliance summary
-
- | Article | Coverage |
- |---|---|
- | Art. 52 (transparency to users) | Adapter publishes its purpose, base, fine-tune nature, and limitations in this card |
- | Art. 53(1)(a) (technical doc) | This card + system-level [`MODEL_CARD.md`](https://github.com/L-electron-Rare/eu-kiki/blob/main/MODEL_CARD.md) |
- | Art. 53(1)(b) (training data summary) | §3 above + system-level [`transparency.md`](https://github.com/L-electron-Rare/eu-kiki/blob/main/docs/eu-ai-act-transparency.md) §4 |
- | Art. 53(1)(c) (copyright policy) | §3.1 above + DSM Art. 4 TDM compliance for PDF-derived corpora |
- | Art. 53(1)(d) (evaluation summary) | §2 above + per-bench reproducible results in [`eval/results/SUMMARY.md`](https://github.com/L-electron-Rare/eu-kiki/blob/main/eval/results/SUMMARY.md) |
- | Art. 53(2) (open-source exemption) | All weights Apache-2.0, datasets traceable, no proprietary teacher used in deployed inference |
- | Art. 55 (systemic risk) | **Not applicable** – no foundation model > 10²⁵ FLOPs trained here; this is a LoRA fine-tune |

- ## 9. Citation

  ```bibtex
  @misc{eu-kiki-2026,
@@ -169,8 +241,9 @@ python -m mlx_lm fuse \
  }
  ```

- ## 10. Changelog

- | Date | Change |
- |---|---|
- | 2026-05-06 | First HF release – Apache-2.0, EU AI Act self-contained model card v0.4.1 |
  - eu-ai-act
  - art-52
  - art-53
+ - gpai-fine-tune
+ - pst-aligned
  language:
  - en
+ - fr
  library_name: peft
  ---

  # eu-kiki-devstral-python-lora

+ LoRA adapter for **mistralai/Devstral-Small-2-24B-Instruct-2512**, part of the [eu-kiki](https://github.com/L-electron-Rare/eu-kiki) project, a 100 % EU-sovereign multi-model LLM serving pipeline.
+
+ > **EU AI Act compliance posture.** This model card is structured to follow the
+ > European Commission's *Public Summary Template* (PST) for the training content
+ > of general-purpose AI models, published by the AI Office under
+ > **Article 53(1)(d)** of Regulation (EU) 2024/1689. The structure below
+ > (Sections 1–4) maps directly to the PST. Where the official template wording
+ > differs from what is reproduced here, the **official template wins**;
+ > please consult the
+ > [AI Office page](https://digital-strategy.ec.europa.eu/en/policies/ai-office)
+ > for the canonical version. This card is **PST-aligned, not PST-verbatim**.
 
+ ---
+
+ ## Section 1 – General information about the model

  | Field | Value |
  |---|---|
+ | **Model name** | `eu-kiki-devstral-python-lora` |
+ | **Type** | LoRA adapter (parameter-efficient fine-tune) |
  | **Base model** | [`mistralai/Devstral-Small-2-24B-Instruct-2512`](https://huggingface.co/mistralai/Devstral-Small-2-24B-Instruct-2512) |
+ | **Provider of the fine-tune** | L'Électron Rare (Saillant Clément), `clemsail` |
+ | **Provider contact** | https://github.com/L-electron-Rare/eu-kiki/issues |
+ | **Date of first public release** | 2026-05-06 |
+ | **Latest version date** | 2026-05-06 |
+ | **Modalities** | Text in / text out (no image, audio, or video) |
+ | **Languages of intended use** | English, French |
+ | **Risk classification (EU AI Act)** | Limited risk (Art. 52) |
+ | **Systemic-risk class (Art. 51 / 55)** | **Not applicable** – a LoRA fine-tune, not a model trained above the 10²⁵-FLOP threshold |
+ | **Foundation-model provider responsibility** | The base-model provider remains the GPAI provider for the base; this card describes only the fine-tune delta |
 
+ ---
+
+ ## Section 2 – Description of training content
+
+ The four categories below follow the PST's four-way classification of
+ training-content sources. **Empty categories are listed explicitly** so
+ that absence is auditable.
+
+ ### 2.1 Publicly available datasets
+
+ | Source | URL / Hub ID | SPDX licence | Records | Notes |
+ |---|---|---|---:|---|
+ | StarCoder2 Self-Instruct (Python subset) | https://huggingface.co/datasets/bigcode/starcoder2-self-align | `Apache-2.0` | 2,850 | Public HF dataset, Python instruction-tuning pairs |
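+
+ For auditors who want to inspect the raw source, a minimal sketch of
+ pulling the public dataset from the Hub (the project-specific selection
+ of the 2,850-record Python subset is not reproduced here):
+
+ ```python
+ from datasets import load_dataset
+
+ # Public HF dataset named in the table above.
+ ds = load_dataset("bigcode/starcoder2-self-align", split="train")
+ print(len(ds), ds.column_names)  # inspect size and record fields
+ ```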
 
+ ### 2.2 Data obtained from third parties under licence
+
+ _No third-party-licensed data used._
+
+ ### 2.3 Data collected through web scraping
+
+ _No web-scraped data used._
+
+ ### 2.4 User-provided data and synthetic data
+
+ _No user-provided or synthetic data used._
+
+ ---
+
+ ## Section 3 – Aggregate description of training content
+
+ | Aggregate field | Value |
+ |---|---|
+ | **Total records used for this LoRA** | 2,850 |
+ | **Domain label in the eu-kiki router** | `python` |
+ | **Time period of source data** | Mixed; per-source download dates logged in `_provenance` fields |
+ | **Modalities in training data** | Text only |
+ | **Languages in training data** | English, French |
+ | **Estimated total tokens** | ≈ 570,000 (heuristic: 200 tokens/record average) |
+
+ The full system-level inventory (all 35+ domains across 7 base models /
+ candidates, ≈ 82 K records, with per-source SPDX license, download dates,
+ and `n_used` counts) is published in
+ [`docs/eu-ai-act-transparency.md`](https://github.com/L-electron-Rare/eu-kiki/blob/main/docs/eu-ai-act-transparency.md)
+ §4.4. This adapter consumes a strict subset of that inventory.
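+
+ The ≈ 570,000 figure is the 2,850 × 200 heuristic above, not a measured
+ count. A sketch of replacing the heuristic with a tokenizer-based
+ estimate, assuming the base tokenizer is downloadable from the Hub (the
+ `"text"` field name is a hypothetical placeholder; this card does not
+ specify the record schema):
+
+ ```python
+ from transformers import AutoTokenizer
+
+ # Assumption: tokenizer for the base model named in Section 1 is accessible.
+ tok = AutoTokenizer.from_pretrained("mistralai/Devstral-Small-2-24B-Instruct-2512")
+
+ def estimate_tokens(records, sample_size=200):
+     # Average token count over a sample, extrapolated to the full set.
+     sample = records[:sample_size]
+     counts = [len(tok(r["text"]).input_ids) for r in sample]
+     return round(sum(counts) / len(counts) * len(records))
+ ```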
+
+ ---
+
+ ## Section 4 – Other relevant elements
+
+ ### 4.1 Copyright compliance and TDM opt-out (Art. 53(1)(c))
+
+ - **Public datasets (§2.1):** all carry permissive open-source licenses
+   (Apache-2.0, MIT, CC-BY-*, BSD); SPDX compatibility matrix verified.
+ - **Third-party-licensed data (§2.2):** none used for this adapter. At
+   system level, vendor datasheets are processed under EU Directive
+   2019/790 (DSM Directive) **Article 4 (text-and-data-mining
+   exception)**: robots.txt is respected at collection time, and SHA-256
+   manifests are published in
+   [`docs/pdf-compliance-report.md`](https://github.com/L-electron-Rare/eu-kiki/blob/main/docs/pdf-compliance-report.md).
+ - **Scraped data (§2.3):** none used for this adapter. System-wide,
+   opt-out signals (robots.txt `Disallow`,
+   `<meta name="robots" content="noai">`, TDM Reservation headers,
+   ai.txt) are honoured at collection time; a minimal version of the
+   check is sketched after this list. Manifests live under
+   `data/scraped/<source>/manifest.json` in the source repo.
+ - **Removal requests:** open an issue at the source-repo URL above or
+   contact the operator listed in §1. We commit to removing disputed
+   content within 30 days and re-training the adapter on the next
+   release cycle.
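+
+ A minimal sketch of the robots.txt part of that opt-out check, using
+ only the standard library (the user-agent string is a hypothetical
+ placeholder; `noai` meta tags and TDM-Reservation headers additionally
+ need an HTTP client and an HTML parser):
+
+ ```python
+ from urllib import robotparser
+ from urllib.parse import urlparse
+
+ def robots_opt_out(url: str, user_agent: str = "eu-kiki-crawler") -> bool:
+     """True if robots.txt disallows fetching `url` for this agent."""
+     root = urlparse(url)
+     rp = robotparser.RobotFileParser()
+     rp.set_url(f"{root.scheme}://{root.netloc}/robots.txt")
+     rp.read()  # fetch and parse the site's robots.txt
+     return not rp.can_fetch(user_agent, url)
+ ```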
+
+ ### 4.2 Quality and curation
+
+ - Per-record `_provenance` fields (source URL, SPDX license,
+   `record_idx`, `access_date`) are attached to 49,956 records across
+   21 domains (system-level), enabling per-record audit and removal;
+   an illustrative record shape is sketched after this list.
+ - A per-domain cap of ≤ 3,000 records keeps classes balanced across
+   the routing surface.
+ - Synthetic data (when present) is explicitly marked `source: "synthetic"`
+   in the row provenance.
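+
+ An illustrative record shape based on the fields named above (the
+ values and the top-level field names are hypothetical; the
+ authoritative schema lives in the source repo):
+
+ ```python
+ record = {
+     "instruction": "...",
+     "response": "...",
+     "_provenance": {
+         "source": "https://huggingface.co/datasets/bigcode/starcoder2-self-align",
+         "license": "Apache-2.0",   # SPDX identifier
+         "record_idx": 1042,        # index into the source dataset
+         "access_date": "2026-04-12",
+     },
+ }
+ ```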
+
+ ### 4.3 Personal data and PII (Art. 10 + Art. 53(1)(d))

  Training data scanned with **Microsoft Presidio + en_core_web_lg**
  (2026-04-28) across all 35+ domain directories. **One** email address
  detected in the unrelated `traduction-tech` corpus was redacted before
+ training. **No high-signal PII** (email, phone, credit card, SSN, IBAN)
+ remains in the released adapters. Low-signal Presidio detections
+ (PERSON, LOCATION, DATE_TIME) are common false positives in technical
+ text and were left in place. Full report:
+ `data/pii-scan-report.json` in the source repo.
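+
+ A minimal sketch of the high-signal part of that scan (the published
+ report pipeline in the source repo is authoritative):
+
+ ```python
+ from presidio_analyzer import AnalyzerEngine
+
+ # Presidio's default NLP engine is spaCy; the scan above used
+ # en_core_web_lg. Entity list mirrors the "high-signal" set in the text.
+ HIGH_SIGNAL = ["EMAIL_ADDRESS", "PHONE_NUMBER", "CREDIT_CARD", "US_SSN", "IBAN_CODE"]
+
+ analyzer = AnalyzerEngine()
+
+ def scan(text: str):
+     # Returns detected spans (entity type, offsets, score) for redaction.
+     return analyzer.analyze(text=text, entities=HIGH_SIGNAL, language="en")
+ ```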
+
+ ### 4.4 Special categories of personal data (GDPR Art. 9)
+
+ No special-category data (health, religion, sexual orientation, etc.)
+ was intentionally collected. The PII scan above also screens for
+ identifiers that could lead to special-category inference; none were
+ flagged.
+
+ ### 4.5 Copyright opt-out registry
+
+ The provider tracks opt-outs via the Issues tracker on the source
+ repository. As of the release date, no removal requests have been
+ received.
+
+ ---
+
+ ## Section 5 – Performance evaluation (Art. 53(1)(a))
+
+ **HumanEval+** (Linux EvalPlus, 164 problems, greedy, 1 sample; scores reported as HumanEval / HumanEval+ pass@1): base 87.20 / 82.90 → fused +python 86.00 / 81.10. **Δ HE+ = −1.80 pts** vs base. Scoring on `kx6tm-23` (Proxmox PVE 6.17, official EvalPlus sandbox).
+
+ Full bench results, methodology, env.json, and rerun.sh per measurement:
+ [`eval/results/SUMMARY.md`](https://github.com/L-electron-Rare/eu-kiki/blob/main/eval/results/SUMMARY.md) ·
+ [`MODEL_CARD.md`](https://github.com/L-electron-Rare/eu-kiki/blob/main/MODEL_CARD.md).
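+
+ The per-measurement `rerun.sh` linked above is authoritative. For
+ orientation only, producing a samples file for the official EvalPlus
+ harness looks roughly like this (`generate` is a hypothetical stand-in
+ for whatever backend serves the fused model):
+
+ ```python
+ from evalplus.data import get_human_eval_plus, write_jsonl
+
+ def generate(prompt: str) -> str:
+     raise NotImplementedError  # call the fused model: greedy, 1 sample
+
+ # 164 HumanEval problems, extended with the EvalPlus test inputs.
+ samples = [
+     {"task_id": task_id, "solution": generate(problem["prompt"])}
+     for task_id, problem in get_human_eval_plus().items()
+ ]
+ write_jsonl("samples.jsonl", samples)
+ # Score inside the sandbox with:
+ #   evalplus.evaluate --dataset humaneval --samples samples.jsonl
+ ```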
+
+ ---
+
+ ## Section 6 – Training configuration

  | Parameter | Value |
  |---|---|
  …
  | Rank | 16 |
  | Alpha | 32 |
  | Dropout | 0.05 |
+ | Target modules | `q_proj`, `k_proj`, `v_proj`, `o_proj` (attention only) |
  | Precision | BF16 |
  | Optimiser | AdamW |
  | Learning rate | 1e-5 |
  | Batch size × grad-accum | 1 × 4–8 |
  | Framework | MLX (`mlx_lm` fork on Apple Silicon) |
  | Hardware | Mac Studio M3 Ultra 512 GB unified memory |
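+
+ The card's front matter declares `library_name: peft`; the table above
+ maps onto a Hugging Face PEFT `LoraConfig` as follows, for reference
+ only; the training run itself used the MLX fork, not PEFT.
+
+ ```python
+ from peft import LoraConfig
+
+ # Reference-only mirror of the training hyperparameters above.
+ lora_config = LoraConfig(
+     r=16,
+     lora_alpha=32,
+     lora_dropout=0.05,
+     target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
+     task_type="CAUSAL_LM",
+ )
+ ```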
 
+ ### 6.1 Compute resources (Art. 53(1)(d))
+
+ LoRA training is parameter-efficient: only ≈ 0.1–0.5 % of base-model
+ parameters are updated (see the back-of-envelope sketch below).
+ **Estimated training compute ≪ 10²⁵ FLOPs**, far below the
+ systemic-risk threshold of Art. 51. Single-machine training on a Mac
+ Studio M3 Ultra; no datacentre footprint. No proprietary teacher model
+ is used in deployed inference.
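+
+ A back-of-envelope for the "≈ 0.1–0.5 %" figure; the layer count and
+ projection width below are hypothetical placeholders (this card does
+ not state the base architecture):
+
+ ```python
+ n_layers, d_model = 40, 5120   # HYPOTHETICAL architecture numbers
+ rank = 16
+ # Each adapted projection W gains two LoRA factors: A (d x r) and B (r x d).
+ # Assumes square projections; grouped-query attention shrinks k/v further.
+ per_module = rank * (d_model + d_model)
+ trainable = n_layers * 4 * per_module   # q_proj, k_proj, v_proj, o_proj
+ print(f"{trainable / 24e9:.2%}")        # fraction of a 24 B-parameter base
+ ```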
+
+ ---
+
+ ## Section 7 – Usage

  ```python
  from mlx_lm import load
  …
  --dequantize
  ```
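+
+ The diff elides most of the usage block above. A self-contained sketch
+ of the same flow with `mlx_lm` (the adapter path and prompt are
+ illustrative):
+
+ ```python
+ from mlx_lm import load, generate
+
+ # Load the base model with this adapter applied on top.
+ model, tokenizer = load(
+     "mistralai/Devstral-Small-2-24B-Instruct-2512",
+     adapter_path="adapters/eu-kiki-devstral-python-lora",  # local adapter dir
+ )
+ print(generate(model, tokenizer,
+                prompt="Write a Python function that merges two sorted lists.",
+                max_tokens=256))
+ ```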
+
+ ---
+
+ ## Section 8 – Limitations and out-of-scope use

  - **Not for safety-critical decisions** (medical, legal, structural,
    life-safety, biometric).
  …
    high-risk and require additional obligations.
  - **Hallucination present** at typical instruction-tuned LLM levels;
    pair with a verifier or human-in-the-loop for factual outputs.
+ - **LoRA inherits all base-model limitations**: training cutoff,
+   language coverage, refusal patterns.
+
+ ---
+
+ ## Section 9 – Citation

  ```bibtex
  @misc{eu-kiki-2026,
  …
  }
  ```
+
+ ---
+
+ ## Section 10 – Changelog
+
+ | Date | Card version | Change |
+ |---|---|---|
+ | 2026-05-06 | v0.4.1 | First HF release – Apache-2.0, EU AI Act self-contained model card |
+ | 2026-05-06 | v0.4.2 | Restructured to align with the Commission's Public Summary Template (PST) §1–4; explicit empty-category disclosure; opt-out registry section added |