clemsail committed
Commit db344e0 · verified · 1 parent: ff49f72

docs: rebrand legacy clemsail/eu-kiki-devstral-python-lora -> Ailiance-fr/devstral-python-lora

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
````diff
@@ -18,7 +18,7 @@ language:
 library_name: peft
 ---
 
-# eu-kiki-devstral-python-lora
+# devstral-python-lora
 
 LoRA adapter for **mistralai/Devstral-Small-2-24B-Instruct-2512**, part of the [ailiance](https://github.com/L-electron-Rare/ailiance) project. Live demo: https://www.ailiance.fr.
 
@@ -45,7 +45,7 @@ LoRA adapter for **mistralai/Devstral-Small-2-24B-Instruct-2512**, part of the [
 
 | Field | Value |
 |---|---|
-| **Versioned model name(s)** | `clemsail/eu-kiki-devstral-python-lora` (this LoRA adapter, v0.4.2) |
+| **Versioned model name(s)** | `Ailiance-fr/devstral-python-lora` (this LoRA adapter, v0.4.2) |
 | **Model dependencies** | This is a **fine-tune (LoRA, rank 16)** of the general-purpose AI model [`mistralai/Devstral-Small-2-24B-Instruct-2512`](https://huggingface.co/mistralai/Devstral-Small-2-24B-Instruct-2512). Refer to the base-model provider's PST for the underlying training summary. |
 | **Date of placement of the model on the Union market** | 2026-05-06 |
 
@@ -167,7 +167,7 @@ from mlx_lm.tuner.utils import linear_to_lora_layers
 from huggingface_hub import snapshot_download
 
 base_path = snapshot_download("mistralai/Devstral-Small-2-24B-Instruct-2512")
-adapter_path = snapshot_download("clemsail/eu-kiki-devstral-python-lora")
+adapter_path = snapshot_download("Ailiance-fr/devstral-python-lora")
 
 model, tokenizer = load(base_path)
 linear_to_lora_layers(model, num_layers=32, config={"rank": 16, "alpha": 32})
@@ -180,7 +180,7 @@ Or fuse and serve as a self-contained checkpoint:
 python -m mlx_lm fuse \
   --model mistralai/Devstral-Small-2-24B-Instruct-2512 \
   --adapter-path <adapter_path> \
-  --save-path /tmp/eu-kiki-devstral-python-lora-fused \
+  --save-path /tmp/devstral-python-lora-fused \
   --dequantize
 ```
````
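The change this commit makes is purely mechanical: every reference to the legacy repo id `clemsail/eu-kiki-devstral-python-lora` becomes the new `Ailiance-fr/devstral-python-lora`. A minimal sketch of that substitution (the `rebrand` helper is hypothetical, for illustration only, not part of the repo):

```python
# Hypothetical helper illustrating the rebrand this commit performs:
# swap the legacy repo id for the new Ailiance-fr id wherever it occurs.
OLD_ID = "clemsail/eu-kiki-devstral-python-lora"
NEW_ID = "Ailiance-fr/devstral-python-lora"

def rebrand(text: str) -> str:
    """Replace each occurrence of the legacy repo id with the new one."""
    return text.replace(OLD_ID, NEW_ID)

line = 'adapter_path = snapshot_download("clemsail/eu-kiki-devstral-python-lora")'
print(rebrand(line))
# -> adapter_path = snapshot_download("Ailiance-fr/devstral-python-lora")
```

Anything that still pins the old id (scripts, notebooks, configs) can be updated the same way.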