clemsail committed · Commit f6d1762 · verified · 1 Parent(s): f5cc53f

Refresh model card: license chain + DISCLOSURE banner v2

Files changed (1): README.md ADDED (+95 −0)
---
license: cc-by-sa-4.0
base_model: mistralai/Devstral-Small-2-24B-Instruct-2512
library_name: peft
tags:
- mlx
- lora
- peft
- ailiance
- devstral
- platformio
language:
- en
- fr
pipeline_tag: text-generation
---

# Ailiance — Devstral-Small-2-24B-BF16 platformio LoRA

LoRA adapter fine-tuned on `mistralai/Devstral-Small-2-24B-Instruct-2512` for **platformio** tasks.

> Maintained by **Ailiance**, a French AI organization publishing EU AI Act-aligned LoRA adapters and datasets.

## Quick start (MLX)

```python
from mlx_lm import load, generate

model, tokenizer = load(
    "mistralai/Devstral-Small-2-24B-Instruct-2512",
    adapter_path="Ailiance-fr/devstral-platformio-lora",
)

print(generate(model, tokenizer, prompt="..."))
```

## Training

| Hyperparameter | Value |
|----------------|-------|
| Base model | `mistralai/Devstral-Small-2-24B-Instruct-2512` |
| Method | LoRA via `mlx-lm` |
| Rank | 16 |
| Scale | 2.0 |
| Alpha | 32 |
| Max seq length | 2048 |
| Iterations | 500 |
| Optimizer | Adam, LR 1e-5 |
| Hardware | Apple M3 Ultra 512 GB |

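A minimal sketch of an `mlx_lm.lora` training config matching the table above. Exact key names vary across `mlx-lm` releases, so treat the field names below as assumptions rather than a verified recipe. Note that Scale follows the usual LoRA convention scale = alpha / rank (32 / 16 = 2.0).

```yaml
# Sketch only: keys assumed from mlx-lm's LoRA config format and
# may differ between mlx-lm releases.
model: "mistralai/Devstral-Small-2-24B-Instruct-2512"
train: true
fine_tune_type: lora
iters: 500
learning_rate: 1e-5
max_seq_length: 2048
lora_parameters:
  rank: 16
  scale: 2.0   # equals alpha / rank = 32 / 16
  dropout: 0.0
```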
## Training data lineage

| Role | Dataset | License |
|------|---------|---------|
| Primary corpus | [`Ailiance-fr/mascarade-platformio-dataset`](https://huggingface.co/datasets/Ailiance-fr/mascarade-platformio-dataset) | cc-by-sa-4.0 |

For per-sample provenance and attribution status, consult the dataset card.

## License chain

| Component | License |
|-----------|---------|
| Base model (`mistralai/Devstral-Small-2-24B-Instruct-2512`) | apache-2.0 |
| Training data ([`Ailiance-fr/mascarade-platformio-dataset`](https://huggingface.co/datasets/Ailiance-fr/mascarade-platformio-dataset)) | cc-by-sa-4.0 |
| **LoRA adapter (this repo)** | **cc-by-sa-4.0** |

_The most restrictive license in the chain (CC-BY-SA-4.0, share-alike) propagates to derivatives._

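The propagation rule above can be sketched as a toy comparison, assuming a simple restrictiveness ordering. This is illustrative only (not legal advice), and the helper names are hypothetical, not part of any published tooling:

```python
# Illustrative only: why the adapter inherits CC-BY-SA-4.0 from its
# license chain. The ordering is a simplification, not legal advice.
RESTRICTIVENESS = {
    "apache-2.0": 1,    # permissive
    "cc-by-4.0": 2,     # attribution required
    "cc-by-sa-4.0": 3,  # attribution + share-alike
}

def effective_license(chain):
    """Return the most restrictive license in a dependency chain."""
    return max(chain, key=lambda lic: RESTRICTIVENESS[lic])

# base model + training data, as in the table above
print(effective_license(["apache-2.0", "cc-by-sa-4.0"]))  # cc-by-sa-4.0
```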
## EU AI Act compliance

- **Article 53(1)(c)**: training data licenses preserved (per-dataset cards declare upstream licenses).
- **Article 53(1)(d)**: training data summary — see upstream dataset cards on Ailiance-fr.
- **GPAI Code of Practice (July 2025)**: base `mistralai/Devstral-Small-2-24B-Instruct-2512` released under apache-2.0.
- **No web scraping by Ailiance**, **no licensed data**, **no PII**.
- Upstream Stack Exchange content (where applicable) is CC-BY-SA-4.0 and propagates to this adapter.

## License

LoRA weights: **cc-by-sa-4.0** — see the License chain table above for the derivation rationale.

## Citation

```bibtex
@misc{ailiance_devstral_platformio_2026,
  author    = {Ailiance},
  title     = {Ailiance — Devstral-Small-2-24B-BF16 platformio LoRA},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/Ailiance-fr/devstral-platformio-lora}
}
```

## Related

See the full [Ailiance-fr LoRA collection](https://huggingface.co/Ailiance-fr).