clemsail committed · verified
Commit e888b48 · 1 Parent(s): 92b7510

docs: PST-verbatim model card v0.4.3 (Commission template 2025-07-24)

Files changed (1): README.md (+117 -148)

README.md CHANGED
@@ -10,7 +10,7 @@ tags:
  - art-52
  - art-53
  - gpai-fine-tune
- - pst-aligned
+ - pst-2025-07-24
  language:
  - en
  library_name: peft
@@ -18,176 +18,146 @@ library_name: peft
 
  # eu-kiki-devstral-rust-lora
 
- LoRA adapter for **mistralai/Devstral-Small-2-24B-Instruct-2512**, part of the [eu-kiki](https://github.com/L-electron-Rare/eu-kiki) project — a 100 % EU-sovereign multi-model LLM serving pipeline.
+ LoRA adapter for **mistralai/Devstral-Small-2-24B-Instruct-2512**, part of the [eu-kiki](https://github.com/L-electron-Rare/eu-kiki) project. Live demo: https://ml.saillant.cc.
 
- > **EU AI Act compliance posture.** This model card is structured to follow the
- > European Commission's *Public Summary Template* (PST) for the training content
- > of general-purpose AI models, published by the AI Office under
- > **Article 53(1)(d)** of Regulation (EU) 2024/1689. The structure below
- > (Sections 1–4) maps directly to the PST. Where the official template wording
- > differs from what is reproduced here, the **official template wins**;
- > please consult the
- > [AI Office page](https://digital-strategy.ec.europa.eu/en/policies/ai-office)
- > for the canonical version. This card is **PST-aligned, not PST-verbatim**.
+ > **EU AI Act compliance.** This card follows the **European Commission's
+ > *Template for the Public Summary of Training Content* for general-purpose
+ > AI models** (Art. 53(1)(d) of Regulation (EU) 2024/1689, published by the
+ > AI Office on 2025-07-24). Section numbering and field labels reproduce
+ > the official template. Where this card and the official template differ
+ > in wording, the **official template wins** — see the
+ > [AI Office page](https://digital-strategy.ec.europa.eu/en/library/explanatory-notice-and-template-public-summary-training-content-general-purpose-ai-models).
 
  ---
 
- ## Section 1 — General information about the model
+ # 1. General information
+
+ ## 1.1. Provider identification
+
+ | Field | Value |
+ |---|---|
+ | **Provider name and contact details** | L'Électron Rare (Saillant Clément) — `clemsail` on Hugging Face — Issues: https://github.com/L-electron-Rare/eu-kiki/issues |
+ | **Authorised representative name and contact details** | Not applicable — provider is established within the European Union (France). |
+
+ ## 1.2. Model identification
 
  | Field | Value |
  |---|---|
- | **Model name** | `eu-kiki-devstral-rust-lora` |
- | **Type** | LoRA adapter (parameter-efficient fine-tune) |
- | **Base model** | [`mistralai/Devstral-Small-2-24B-Instruct-2512`](https://huggingface.co/mistralai/Devstral-Small-2-24B-Instruct-2512) |
- | **Provider of the fine-tune** | L'Électron Rare (Saillant Clément), `clemsail` |
- | **Provider contact** | https://github.com/L-electron-Rare/eu-kiki/issues |
- | **Date of first public release** | 2026-05-06 |
- | **Latest version date** | 2026-05-06 |
- | **Modalities** | Text in / text out (no image, audio, or video) |
- | **Languages of intended use** | English |
- | **Risk classification (EU AI Act)** | Limited risk (Art. 52) |
- | **Systemic-risk class (Art. 51 / 55)** | **Not applicable** — this is a LoRA fine-tune, not a foundation model > 10²⁵ FLOPs |
- | **Foundation-model provider responsibility** | The base-model provider remains the GPAI provider for the base; this card describes only the fine-tune delta |
+ | **Versioned model name(s)** | `clemsail/eu-kiki-devstral-rust-lora` (this LoRA adapter, v0.4.3) |
+ | **Model dependencies** | This is a **fine-tune (LoRA, rank 16)** of the general-purpose AI model [`mistralai/Devstral-Small-2-24B-Instruct-2512`](https://huggingface.co/mistralai/Devstral-Small-2-24B-Instruct-2512). Refer to the base-model provider's PST for the underlying training summary. |
+ | **Date of placement of the model on the Union market** | 2026-05-06 |
+
+ ## 1.3. Modalities, overall training data size and other characteristics
+
+ | Field | Value |
+ |---|---|
+ | **Modality** | ☒ Text ☐ Image ☐ Audio ☐ Video ☐ Other |
+ | **Training data size** (text bucket) | ☒ Less than 1 billion tokens ☐ 1 billion to 10 trillion tokens ☐ More than 10 trillion tokens |
+ | **Types of content** | Instruction-tuning pairs, technical text, source code, multilingual instruction templates (EU official languages where applicable). |
+ | **Approximate size in alternative units** | ≈ 0.6 M tokens (heuristic: 2,850 records × ≈ 200 tokens). |
+ | **Latest date of data acquisition / collection for model training** | 11/2024 (StarCoder2 Self-Instruct release). The model is **not** continuously trained on new data after this date. |
+ | **Linguistic characteristics of the overall training data** | English. No other natural languages. |
+ | **Other relevant characteristics / additional comments** | LoRA fine-tune (rank 16, alpha 32, dropout 0.05); only attention projections (`q_proj`, `k_proj`, `v_proj`, `o_proj`) are trained; a PEFT-equivalent configuration is sketched below. Per-record `_provenance` (source, SPDX licence, `record_idx`, `access_date`) attached at the system level (see [`docs/eu-ai-act-transparency.md`](https://github.com/L-electron-Rare/eu-kiki/blob/main/docs/eu-ai-act-transparency.md) §4.4). Tokenizer: inherited from the base model. |
 
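The adapter hyperparameters above map directly onto a Hugging Face PEFT `LoraConfig` (`peft` is the card's declared `library_name`). A minimal sketch for readers who want the same configuration outside MLX; this is an equivalent declaration, not the original training script:

```python
from peft import LoraConfig

# §1.3 hyperparameters as a PEFT LoraConfig (equivalent declaration only;
# the actual training ran on an MLX `mlx_lm` fork on Apple Silicon).
lora_config = LoraConfig(
    r=16,                     # LoRA rank
    lora_alpha=32,            # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention only
    task_type="CAUSAL_LM",
)
```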
  ---
 
- ## Section 2 — Description of training content
-
- The following four categories follow the PST four-way classification of
- training-content sources. **Empty categories are listed explicitly** so
- absence is auditable.
-
- ### 2.1 Publicly available datasets
-
- | Source | URL / Hub ID | SPDX licence | Records | Notes |
+ # 2. List of data sources
+
+ ## 2.1. Publicly available datasets
+
+ **Have you used publicly available datasets to train the model?** ☒ Yes ☐ No
+
+ **Modality(ies) of the content covered:** ☒ Text ☐ Image ☐ Video ☐ Audio ☐ Other
+
+ **List of large publicly available datasets:**
+
+ | Dataset | URL | SPDX licence | Records | Notes |
  |---|---|---|---:|---|
- | StarCoder2 Self-Instruct (Rust subset) | https://huggingface.co/datasets/bigcode/starcoder2-self-align | `Apache-2.0` | 2,850 | Public HF dataset, Rust instruction-tuning pairs |
+ | StarCoder2 Self-Instruct (Rust subset) | https://huggingface.co/datasets/bigcode/starcoder2-self-align | `Apache-2.0` | 2,850 | Public HF dataset; Rust instruction-tuning pairs. |
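A hedged sketch of reproducing the §2.1 selection with the `datasets` library. The split name and the column used to identify Rust rows are assumptions; the card reports the resulting count (2,850) but not the selection logic, so verify against the actual dataset schema:

```python
from datasets import load_dataset

# Load the public §2.1 source (Apache-2.0). Split name assumed.
ds = load_dataset("bigcode/starcoder2-self-align", split="train")

# Hypothetical subset selection: the card reports 2,850 Rust records but
# does not name the column used to identify them; adapt to the real schema.
rust = ds.filter(lambda row: "rust" in str(row).lower())
rust = rust.select(range(min(2_850, len(rust))))
print(f"{len(rust)} records selected for the `rust` domain")
```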
 
- ### 2.2 Data obtained from third parties under licence
+ ## 2.2. Private non-publicly available datasets obtained from third parties
 
- _No third-party-licensed data used._
+ ### 2.2.1. Datasets commercially licensed by rightsholders or their representatives
 
- ### 2.3 Data collected through web scraping
+ **Have you concluded transactional commercial licensing agreement(s) with rightsholder(s) or with their representatives?** ☐ Yes ☒ No
 
- _No web-scraped data used._
+ _(N/A — no commercial licensing agreements concluded.)_
 
- ### 2.4 User-provided data and synthetic data
+ ### 2.2.2. Private datasets obtained from other third parties
 
- _No user-provided or synthetic data used._
+ **Have you obtained private datasets from third parties that are not licensed as described in Section 2.2.1?** ☐ Yes ☒ No
 
- ---
+ _(N/A — no private third-party datasets obtained.)_
 
- ## Section 3 — Aggregate description of training content
+ ## 2.3. Data crawled and scraped from online sources
 
- | Aggregate field | Value |
- |---|---|
- | **Total records used for this LoRA** | 2,850 |
- | **Domain label in the eu-kiki router** | `rust` |
- | **Time period of source data** | Mixed; per-source download dates logged in `_provenance` fields |
- | **Modalities in training data** | Text only |
- | **Languages in training data** | English |
- | **Estimated total tokens** | ≈ 570,000 (heuristic: 200 tokens / record average) |
+ **Were crawlers used by the provider or on their behalf?** ☐ Yes ☒ No
 
- The full system-level inventory (all 35+ domains across 7 base models /
- candidates, ≈ 82 K records, with per-source SPDX licence, download dates,
- and `n_used` counts) is published at
- [`docs/eu-ai-act-transparency.md`](https://github.com/L-electron-Rare/eu-kiki/blob/main/docs/eu-ai-act-transparency.md)
- §4.4. This adapter consumes a strict subset of that inventory.
+ _(N/A — no crawler used.)_
 
- ---
+ ## 2.4. User data
 
- ## Section 4 — Other relevant elements
-
- ### 4.1 Copyright compliance and TDM opt-out (Art. 53(1)(c))
-
- - **Public datasets (§2.1):** all carry permissive open-source licences
-   (Apache-2.0, MIT, CC-BY-*, BSD); SPDX matrix verified.
- - **Third-party-licensed data (§2.2):** vendor datasheets used under EU
-   Directive 2019/790 (DSM Directive) **Article 4 — Text and Data Mining
-   exception**. Robots.txt respected at collection time. SHA-256 manifests
-   published at
-   [`docs/pdf-compliance-report.md`](https://github.com/L-electron-Rare/eu-kiki/blob/main/docs/pdf-compliance-report.md).
- - **Scraped data (§2.3):** opt-out signals (robots.txt `Disallow`,
-   `<meta name="robots" content="noai">`, TDM Reservation headers,
-   ai.txt) are honoured at collection time. Manifests under
-   `data/scraped/<source>/manifest.json` in the source repo.
- - **Removal requests:** open an issue at the source-repo URL above or
-   contact the operator listed in §1. We commit to remove disputed
-   content within 30 days and re-train the adapter on the next release
-   cycle.
+ **Was data from user interactions with the AI model (e.g. user input and prompts) used to train the model?** ☐ Yes ☒ No
+
+ **Was data collected from user interactions with the provider's other services or products used to train the model?** ☐ Yes ☒ No
+
+ _(N/A — no user data collected from any provider service or AI-model interaction is used to train this LoRA.)_
 
- ### 4.2 Quality and curation
-
- - Per-record `_provenance` fields (source URL, SPDX licence,
-   `record_idx`, `access_date`) attached to 49,956 records across
-   21 domains (system-level), enabling per-record audit and removal.
- - Per-domain cap of ≤ 3,000 records applied to keep classes balanced
-   across the routing surface.
- - Synthetic data (when present) is explicitly marked `source: "synthetic"`
-   in the row provenance.
+ ## 2.5. Synthetic data
+
+ **Was synthetic AI-generated data created by the provider or on their behalf to train the model?** ☐ Yes ☒ No
+
+ _(N/A — no synthetic AI-generated data created by the provider or on their behalf was used to train this LoRA.)_
 
- ### 4.3 Personal data and PII (Art. 10 + Art. 53(1)(d))
-
- Training data scanned with **Microsoft Presidio + en_core_web_lg**
- (2026-04-28) across all 35+ domain directories. **One** email address
- detected in the unrelated `traduction-tech` corpus was redacted before
- training. **No high-signal PII** (email, phone, credit card, SSN, IBAN)
- remains in the released adapters. Low-signal Presidio detections
- (PERSON, LOCATION, DATE_TIME) are common false positives in technical
- text and were left in place. Full report:
- `data/pii-scan-report.json` in the source repo.
+ ## 2.6. Other sources of data
+
+ **Have data sources other than those described in Sections 2.1 to 2.5 been used to train the model?** ☐ Yes ☒ No
+
+ _(N/A — no other data sources used.)_
 
- ### 4.4 Special categories of personal data (GDPR Art. 9)
-
- No special-category data (health, religion, sexual orientation, etc.)
- was intentionally collected. The PII scan above also screens for
- identifiers that could lead to special-category inference; none were
- flagged.
-
- ### 4.5 Copyright opt-out registry
-
- The provider tracks opt-outs via the Issues tracker on the source
- repository. As of the release date no removal requests have been received.
 
  ---
 
- ## Section 5 — Performance evaluation (Art. 53(1)(a))
+ # 3. Data processing aspects
 
- **HumanEval** (custom Studio scorer): base 87.20 → +rust 86.59 = **−0.61 pts**. Best Devstral adapter in this release.
+ ## 3.1. Respect of reservation of rights from text and data mining exception or limitation
 
- Full bench results, methodology, env.json, and rerun.sh per measurement:
- [`eval/results/SUMMARY.md`](https://github.com/L-electron-Rare/eu-kiki/blob/main/eval/results/SUMMARY.md) ·
- [`MODEL_CARD.md`](https://github.com/L-electron-Rare/eu-kiki/blob/main/MODEL_CARD.md).
+ **Are you a Signatory to the Code of Practice for general-purpose AI models that includes commitments to respect reservations of rights from the TDM exception or limitation?** ☐ Yes ☒ No *(SME / individual provider; commitments equivalent in substance, see below.)*
+
+ **Measures implemented before model training to respect reservations of rights from the TDM exception or limitation:**
+
+ - **Public HF datasets (§2.1):** all carry permissive open licences (Apache-2.0, MIT, CC-BY-*, BSD); SPDX matrix verified per source. The licences explicitly authorise instructional / model-training use for the rows actually selected.
+ - **Web-scraped sources (§2.3; none used for this adapter, applicable at the system level):** prior to collection the provider verified `robots.txt`, `<meta name="robots" content="noai">`, `ai.txt`, and TDM-Reservation HTTP headers (a sketch of this check follows the list). Any source returning a reservation under Article 4(3) of Directive (EU) 2019/790 was excluded from collection. Scraping was limited to authoritative vendor-controlled repositories (ESP-IDF, STM32Cube, Arduino, KiCad symbols/footprints) operating under permissive licences.
+ - **Vendor PDF datasheets (§2.2.2 where present):** processed under the EU DSM Directive Article 4 TDM exception. SHA-256 manifests and per-source legal-basis records are published in [`docs/pdf-compliance-report.md`](https://github.com/L-electron-Rare/eu-kiki/blob/main/docs/pdf-compliance-report.md).
+ - **Public copyright policy (Art. 53(1)(c)):** [`docs/eu-ai-act-transparency.md`](https://github.com/L-electron-Rare/eu-kiki/blob/main/docs/eu-ai-act-transparency.md). Removal requests are handled via the issue tracker on the source repository; the provider commits to remove disputed content within 30 days and re-train on the next release cycle.
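A minimal sketch of the pre-collection reservation check described in the second bullet, using only the Python standard library. The `eu-kiki-crawler` user-agent is hypothetical, and the `ai.txt` and HTML meta-tag lookups are left as comments:

```python
import urllib.request
import urllib.robotparser

def tdm_reserved(url: str, agent: str = "eu-kiki-crawler") -> bool:
    """Return True if the source signals an opt-out that must be honoured.

    Hypothetical helper: checks robots.txt and the TDM Reservation Protocol
    header; a full implementation would also parse ai.txt and
    <meta name="robots" content="noai"> in the fetched HTML.
    """
    origin = "/".join(url.split("/", 3)[:3])  # scheme://host
    robots = urllib.robotparser.RobotFileParser(origin + "/robots.txt")
    robots.read()
    if not robots.can_fetch(agent, url):      # robots.txt Disallow
        return True
    req = urllib.request.Request(url, method="HEAD", headers={"User-Agent": agent})
    with urllib.request.urlopen(req) as resp:
        return resp.headers.get("TDM-Reservation") == "1"
```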
+ ## 3.2. Removal of illegal content
+
+ **General description of measures taken:**
+
+ - The provider does not crawl the open web at large; sources are restricted to curated public HF datasets and authoritative vendor repositories, where the risk of illegal content (CSAM, terrorist content, IP-violating works) is structurally low.
+ - Personal data was screened with **Microsoft Presidio + en_core_web_lg** (2026-04-28) across all 35+ system-level domain directories (a redaction sketch follows the list). **One** email address detected in the unrelated `traduction-tech` corpus was redacted before training. Full report: `data/pii-scan-report.json`.
+ - No special-category data (GDPR Art. 9: health, religion, sexual orientation, etc.) was intentionally collected; the PII scan also screens for identifiers that could enable special-category inference (none flagged).
+ - Licence compatibility is enforced via a per-source SPDX matrix; works under non-permissive licences are excluded.
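A sketch of the Presidio pass described in the second bullet, redacting only the high-signal entity types the card names and leaving low-signal hits (PERSON, LOCATION, DATE_TIME) in place. The exact scanner configuration is not published in this card, so treat the entity list as illustrative:

```python
from presidio_analyzer import AnalyzerEngine
from presidio_anonymizer import AnonymizerEngine

analyzer = AnalyzerEngine()    # default spaCy NLP engine (en_core_web_lg)
anonymizer = AnonymizerEngine()

# High-signal entity types per the card; illustrative, not the exact config.
HIGH_SIGNAL = ["EMAIL_ADDRESS", "PHONE_NUMBER", "CREDIT_CARD", "US_SSN", "IBAN_CODE"]

def redact_high_signal_pii(text: str) -> str:
    # PERSON / LOCATION / DATE_TIME hits are deliberately left in place:
    # the card treats them as common false positives in technical text.
    results = analyzer.analyze(text=text, entities=HIGH_SIGNAL, language="en")
    return anonymizer.anonymize(text=text, analyzer_results=results).text
```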
+
+ ## 3.3. Other information (optional)
+
+ - **Per-record provenance:** 49,956 system-level training records carry `_provenance.{source, license, record_idx, access_date}` fields, enabling per-record audit and removal.
+ - **Compute footprint:** LoRA training updates ≈ 0.1–0.5 % of base-model parameters. **Estimated training compute for this LoRA ≪ 10²⁵ FLOPs**, well below the systemic-risk threshold of EU AI Act Art. 51 (see the estimate after this list). No proprietary teacher model is used in deployed inference.
+ - **Risk classification:** Limited risk (Art. 52). Not deployed in safety-critical contexts.
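The compute bound in the second bullet can be sanity-checked with the common 6 × parameters × tokens rule of thumb (forward and backward passes still flow through the frozen 24 B base during LoRA training); the token count is taken from the superseded Section 3:

```python
# Back-of-envelope bound for this LoRA's training compute.
base_params = 24e9    # Devstral-Small-2-24B
tokens = 570_000      # ≈ total training tokens (≈ 0.6 M per §1.3)
epochs = 1            # assumption; scale up for more passes
flops = 6 * base_params * tokens * epochs
print(f"≈ {flops:.1e} FLOPs")  # ≈ 8.2e+16, eight-plus orders of magnitude below 1e25
```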
 
  ---
 
- ## Section 6 — Training configuration
+ # Appendix A — Performance evaluation (Art. 53(1)(a))
 
- | Parameter | Value |
- |---|---|
- | Method | LoRA |
- | Rank | 16 |
- | Alpha | 32 |
- | Dropout | 0.05 |
- | Target modules | `q_proj`, `k_proj`, `v_proj`, `o_proj` (attention only) |
- | Precision | BF16 |
- | Optimiser | AdamW |
- | Learning rate | 1e-5 |
- | Batch size × grad-accum | 1 × 4–8 |
- | Framework | MLX (`mlx_lm` fork on Apple Silicon) |
- | Hardware | Mac Studio M3 Ultra, 512 GB unified memory |
-
- ### 6.1 Compute resources (Art. 53(1)(d))
-
- LoRA training is parameter-efficient: only ≈ 0.1–0.5 % of base-model
- parameters are updated. **Estimated training compute ≪ 10²⁵ FLOPs** —
- the systemic-risk threshold of Art. 51. Single-machine training on a
- Mac Studio M3 Ultra; no datacentre footprint. No proprietary teacher
- model is used in deployed inference.
+ **HumanEval** (custom Studio scorer): base 87.20 → +rust 86.59 = **−0.61 pts**. Best of the three Devstral adapters in this release.
+
+ Full bench results, methodology, env.json, and rerun.sh per measurement:
+ [`eval/results/SUMMARY.md`](https://github.com/L-electron-Rare/eu-kiki/blob/main/eval/results/SUMMARY.md) ·
+ [`MODEL_CARD.md`](https://github.com/L-electron-Rare/eu-kiki/blob/main/MODEL_CARD.md).
 
  ---
 
- ## Section 7 — Usage
+ # Appendix B — Usage
 
  ```python
  from mlx_lm import load
 
@@ -214,21 +184,16 @@ python -m mlx_lm fuse \
 
  ---
 
- ## Section 8 — Limitations and out-of-scope use
+ # Appendix C — Limitations and out-of-scope use
 
- - **Not for safety-critical decisions** (medical, legal, structural,
-   life-safety, biometric).
- - **Not for high-stakes individual decisions** (hiring, credit, law
-   enforcement) — that would re-classify under EU AI Act Art. 6
-   high-risk and require additional obligations.
- - **Hallucination present** at typical instruction-tuned LLM levels;
-   pair with a verifier or human-in-the-loop for factual outputs.
- - **LoRA inherits all base-model limitations**: training cutoff,
-   language coverage, refusal patterns.
+ - Not for safety-critical decisions (medical, legal, structural, life-safety, biometric).
+ - Not for high-stakes individual decisions (hiring, credit, law enforcement): these would re-classify the system as high-risk under EU AI Act Art. 6 and trigger additional obligations.
+ - Hallucination present at typical instruction-tuned LLM levels; pair with a verifier or human-in-the-loop for factual outputs.
+ - LoRA inherits all base-model limitations (training cutoff, language coverage, refusal patterns).
 
  ---
 
- ## Section 9 — Citation
+ # Appendix D — Citation
 
  ```bibtex
  @misc{eu-kiki-2026,
 
@@ -240,9 +205,13 @@ python -m mlx_lm fuse \
  }
  ```
 
- ## Section 10 — Changelog
+ ---
+
+ # Appendix E — Changelog
 
  | Date | Card version | Change |
  |---|---|---|
- | 2026-05-06 | v0.4.1 | First HF release — Apache-2.0, EU AI Act self-contained model card |
- | 2026-05-06 | v0.4.2 | Restructured to align with Commission Public Summary Template (PST) §1–4; explicit empty-category disclosure; opt-out registry section added |
+ | 2026-05-06 | v0.4.0 | Initial HF release |
+ | 2026-05-06 | v0.4.1 | Self-contained EU AI Act card (per-adapter dataset table, PII statement, contact) |
+ | 2026-05-06 | v0.4.2 | PST-aligned (Commission template structure, Sections §1–4) |
+ | 2026-05-06 | **v0.4.3** | **PST-verbatim** — section labels and field names reproduced from the official Commission template (PDF 2025-07-24, English version). |