chore: rebrand card to Ailiance

README.md (changed)

````diff
@@ -6,7 +6,7 @@ tags:
 - peft
 - mlx
 - ailiance
--
+- ailiance
 - eu-ai-act
 - art-52
 - art-53
@@ -20,7 +20,7 @@ library_name: peft
 
 # devstral-python-lora
 
-LoRA adapter for **mistralai/Devstral-Small-2-24B-Instruct-2512**, part of the [ailiance](https://github.com/
+LoRA adapter for **mistralai/Devstral-Small-2-24B-Instruct-2512**, part of the [ailiance](https://github.com/ailiance/ailiance) project. Live demo: https://www.ailiance.fr.
 
 > **EU AI Act compliance.** This card follows the **European Commission's
 > *Template for the Public Summary of Training Content* for general-purpose
@@ -38,7 +38,7 @@ LoRA adapter for **mistralai/Devstral-Small-2-24B-Instruct-2512**, part of the [
 
 | Field | Value |
 |---|---|
-| **Provider name and contact details** |
+| **Provider name and contact details** | Ailiance (Saillant Clément) — `clemsail` on Hugging Face — Issues: https://github.com/ailiance/ailiance/issues |
 | **Authorised representative name and contact details** | Not applicable — provider is established within the European Union (France). |
 
 ## 1.2. Model identification
@@ -59,7 +59,7 @@ LoRA adapter for **mistralai/Devstral-Small-2-24B-Instruct-2512**, part of the [
 | **Approximate size in alternative units** | ≈ 0.6 M tokens (2 850 rows × ≈ 200 tokens/row, single-pass). |
 | **Latest date of data acquisition / collection for model training** | 11/2024 (StarCoder2 Self-Instruct release). The model is **not** continuously trained on new data after this date. |
 | **Linguistic characteristics of the overall training data** | English (primary, instruction language); French (system-prompt context). No other natural languages in training rows. |
-| **Other relevant characteristics / additional comments** | LoRA fine-tune (rank 16, alpha 32, dropout 0.05); only attention projections (`q_proj`, `k_proj`, `v_proj`, `o_proj`) are trained. Per-record `_provenance` (source, SPDX licence, `record_idx`, `access_date`) attached at the system level (see [`docs/eu-ai-act-transparency.md`](https://github.com/
+| **Other relevant characteristics / additional comments** | LoRA fine-tune (rank 16, alpha 32, dropout 0.05); only attention projections (`q_proj`, `k_proj`, `v_proj`, `o_proj`) are trained. Per-record `_provenance` (source, SPDX licence, `record_idx`, `access_date`) attached at the system level (see [`docs/eu-ai-act-transparency.md`](https://github.com/ailiance/ailiance/blob/main/docs/eu-ai-act-transparency.md) §4.4). Tokenizer: inherited from the base model. |
 
 ---
 
@@ -129,8 +129,8 @@ _(N/A — no other data sources used.)_
 
 - **Public HF datasets (§2.1):** all carry permissive open licences (Apache-2.0, MIT, CC-BY-*, BSD); SPDX matrix verified per-source. The licences explicitly authorise instructional / model-training use for the rows actually selected.
 - **Web-scraped sources (§2.3):** prior to collection the provider verified `robots.txt`, `<meta name="robots" content="noai">`, `ai.txt`, and TDM-Reservation HTTP headers. Any source returning a reservation under Article 4(3) of Directive (EU) 2019/790 was excluded from collection. Scraping was limited to authoritative vendor-controlled repositories (ESP-IDF, STM32Cube, Arduino, KiCad symbols/footprints) operating under permissive licences.
-- **Vendor PDF datasheets (§2.2.2 where present):** processed under the EU DSM Directive Article 4 TDM exception. SHA-256 manifests and per-source legal-basis records are published in [`docs/pdf-compliance-report.md`](https://github.com/
-- **Public copyright policy (Art. 53(1)(c)):** [`docs/eu-ai-act-transparency.md`](https://github.com/
+- **Vendor PDF datasheets (§2.2.2 where present):** processed under the EU DSM Directive Article 4 TDM exception. SHA-256 manifests and per-source legal-basis records are published in [`docs/pdf-compliance-report.md`](https://github.com/ailiance/ailiance/blob/main/docs/pdf-compliance-report.md).
+- **Public copyright policy (Art. 53(1)(c)):** [`docs/eu-ai-act-transparency.md`](https://github.com/ailiance/ailiance/blob/main/docs/eu-ai-act-transparency.md). Removal requests are handled via the issue tracker on the source repository; the provider commits to remove disputed content within 30 days and re-train on the next release cycle.
 
 ## 3.2. Removal of illegal content
 
@@ -151,11 +151,11 @@ _(N/A — no other data sources used.)_
 
 # Appendix A — Performance evaluation (Art. 53(1)(a))
 
-**HumanEval+** (EvalPlus official Linux scorer, 164 problems, greedy, 1 sample): base 87.20 / 82.90 → +python 86.00 / 81.10. **Δ HE+ = −1.80 pts** vs base. Scoring on `kx6tm-23` (Proxmox PVE 6.17). Full reproducer in [`eval/results/2026-05-04/devstral-python-fused-humanevalplus/rerun.sh`](https://github.com/
+**HumanEval+** (EvalPlus official Linux scorer, 164 problems, greedy, 1 sample): base 87.20 / 82.90 → +python 86.00 / 81.10. **Δ HE+ = −1.80 pts** vs base. Scoring on `kx6tm-23` (Proxmox PVE 6.17). Full reproducer in [`eval/results/2026-05-04/devstral-python-fused-humanevalplus/rerun.sh`](https://github.com/ailiance/ailiance/blob/main/eval/results/2026-05-04/devstral-python-fused-humanevalplus/).
 
 Full bench results, methodology, env.json, and rerun.sh per measurement:
-[`eval/results/SUMMARY.md`](https://github.com/
-[`MODEL_CARD.md`](https://github.com/
+[`eval/results/SUMMARY.md`](https://github.com/ailiance/ailiance/blob/main/eval/results/SUMMARY.md) ·
+[`MODEL_CARD.md`](https://github.com/ailiance/ailiance/blob/main/MODEL_CARD.md).
 
 ---
 
@@ -198,11 +198,11 @@ python -m mlx_lm fuse \
 # Appendix D — Citation
 
 ```bibtex
-@misc{
-title = {
+@misc{ailiance-2026,
+title = {ailiance: EU-sovereign multi-model LLM serving with HF-traceable LoRA adapters},
 author = {Saillant, Clément},
 year = {2026},
-url = {https://github.com/
+url = {https://github.com/ailiance/ailiance},
 note = {Live demo: https://www.ailiance.fr}
 }
 ```
````
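For quick sanity-checking, the adapter hyperparameters and the token-count estimate quoted in the card can be reproduced in a few lines. This is a sketch built only from figures visible in the diff above (rank 16, alpha 32, dropout 0.05, four attention projections; 2 850 rows at ≈ 200 tokens each); the PEFT-style field names in the dict are illustrative assumptions, not read from the repository's actual training config.

```python
# Adapter hyperparameters as stated in the card.
# Field names mirror common PEFT LoRA configs (illustrative only).
lora_config = {
    "r": 16,
    "lora_alpha": 32,
    "lora_dropout": 0.05,
    "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj"],
}

# Back-of-envelope check of the "≈ 0.6 M tokens" figure:
rows = 2_850
tokens_per_row = 200          # approximate, single pass over the data
total_tokens = rows * tokens_per_row
print(f"≈ {total_tokens / 1e6:.2f} M tokens")  # 0.57 M, rounds to the quoted 0.6 M
```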