clemsail committed
Commit 091383e · verified · Parent: 9470f7e

Refresh model card: license chain + DISCLOSURE bandeau v2

Files changed (1): README.md (+95, -0)

---
license: apache-2.0
base_model: mistralai/Devstral-Small-2-24B-Instruct-2512
library_name: peft
tags:
- mlx
- lora
- peft
- ailiance
- devstral
- typescript
language:
- en
- fr
pipeline_tag: text-generation
---

# Ailiance — Devstral-Small-2-24B-BF16 TypeScript (bf16) LoRA

A LoRA adapter fine-tuned from `mistralai/Devstral-Small-2-24B-Instruct-2512` for **TypeScript** coding tasks.

> **Variant**: trained on the BF16 base for higher numerical fidelity.

> Maintained by **Ailiance**, a French AI organization publishing EU AI Act-aligned LoRA adapters and datasets.

## Quick start (MLX)

```python
from mlx_lm import load, generate

# Load the base model with this LoRA adapter applied
model, tokenizer = load(
    "mistralai/Devstral-Small-2-24B-Instruct-2512",
    adapter_path="Ailiance-fr/devstral-typescript-bf16-lora",
)

print(generate(model, tokenizer, prompt="..."))  # replace "..." with your prompt
```

## Training

| Setting        | Value                                          |
|----------------|------------------------------------------------|
| Base model     | `mistralai/Devstral-Small-2-24B-Instruct-2512` |
| Method         | LoRA via `mlx-lm`                              |
| Rank           | 16                                             |
| Scale          | 2.0                                            |
| Alpha          | 32                                             |
| Max seq length | 2048                                           |
| Iterations     | 500                                            |
| Optimizer      | Adam, LR 1e-5                                  |
| Hardware       | Apple M3 Ultra 512 GB                          |

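The Rank, Scale, and Alpha rows are consistent with the standard LoRA parameterization, where the low-rank delta `B @ A` is scaled by `alpha / rank = 32 / 16 = 2.0`. A minimal NumPy sketch with toy dimensions illustrates the idea (the scaling convention is assumed from the LoRA formulation, not verified against `mlx-lm` internals; the real Devstral projection matrices are much larger):

```python
import numpy as np

# Toy dimensions; real attention/MLP projections are far larger.
d_out, d_in, rank, alpha = 64, 64, 16, 32
scale = alpha / rank  # 2.0, matching the Scale row above

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))  # frozen base weight
A = rng.standard_normal((rank, d_in))   # trainable low-rank factor
B = np.zeros((d_out, rank))             # B starts at zero, so the delta starts at zero

# Effective weight seen at inference: base plus scaled low-rank update
W_adapted = W + scale * (B @ A)
```

Because `B` is initialized to zero, the adapted weight equals the frozen base weight before training; fine-tuning then updates only `A` and `B`, i.e. `rank * (d_in + d_out)` parameters per adapted matrix instead of `d_in * d_out`.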
## Training data lineage

Derived from the internal **eu-kiki / mascarade** curation. All upstream samples are synthetic, permissively licensed, or generated from Apache-2.0 base resources. See the [Ailiance-fr catalog](https://huggingface.co/Ailiance-fr) for related cards.

## License chain

| Component                                                    | License        |
|--------------------------------------------------------------|----------------|
| Base model (`mistralai/Devstral-Small-2-24B-Instruct-2512`)  | apache-2.0     |
| Training data (internal Ailiance curation, synthetic and permissive sources) | apache-2.0 |
| **LoRA adapter (this repo)**                                 | **apache-2.0** |

_All upstream components are Apache 2.0 / MIT — the LoRA adapter inherits permissive terms._

## EU AI Act compliance

- **Article 53(1)(c)**: training data licenses are preserved; per-dataset cards declare upstream licenses.
- **Article 53(1)(d)**: for a training data summary, see the upstream dataset cards on Ailiance-fr.
- **GPAI Code of Practice (July 2025)**: the base model `mistralai/Devstral-Small-2-24B-Instruct-2512` is released under apache-2.0.
- **No web scraping by Ailiance**, **no restrictively licensed data**, **no PII**.
- Where upstream Stack Exchange content is used, it is CC-BY-SA-4.0 and its attribution and share-alike terms propagate to this adapter.

## License

LoRA weights: **apache-2.0**. See the license chain table above for the derivation rationale.

## Citation

```bibtex
@misc{ailiance_devstral_typescript_bf16_2026,
  author    = {Ailiance},
  title     = {Ailiance — Devstral-Small-2-24B-BF16 TypeScript (bf16) LoRA},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/Ailiance-fr/devstral-typescript-bf16-lora}
}
```

## Related

See the full [Ailiance-fr LoRA collection](https://huggingface.co/Ailiance-fr).