clemsail committed on
Commit 7852897 · verified · 1 Parent(s): 1334d03

docs: rebrand legacy clemsail/eu-kiki-devstral-cpp-lora -> Ailiance-fr/devstral-cpp-lora

Files changed (1):
1. README.md +4 -4
README.md CHANGED

````diff
@@ -17,7 +17,7 @@ language:
 library_name: peft
 ---
 
-# eu-kiki-devstral-cpp-lora
+# devstral-cpp-lora
 
 LoRA adapter for **mistralai/Devstral-Small-2-24B-Instruct-2512**, part of the [ailiance](https://github.com/L-electron-Rare/ailiance) project. Live demo: https://www.ailiance.fr.
 
@@ -44,7 +44,7 @@ LoRA adapter for **mistralai/Devstral-Small-2-24B-Instruct-2512**, part of the [
 
 | Field | Value |
 |---|---|
-| **Versioned model name(s)** | `clemsail/eu-kiki-devstral-cpp-lora` (this LoRA adapter, v0.4.2) |
+| **Versioned model name(s)** | `Ailiance-fr/devstral-cpp-lora` (this LoRA adapter, v0.4.2) |
 | **Model dependencies** | This is a **fine-tune (LoRA, rank 16)** of the general-purpose AI model [`mistralai/Devstral-Small-2-24B-Instruct-2512`](https://huggingface.co/mistralai/Devstral-Small-2-24B-Instruct-2512). Refer to the base-model provider's PST for the underlying training summary. |
 | **Date of placement of the model on the Union market** | 2026-05-06 |
 
@@ -180,7 +180,7 @@ from mlx_lm.tuner.utils import linear_to_lora_layers
 from huggingface_hub import snapshot_download
 
 base_path = snapshot_download("mistralai/Devstral-Small-2-24B-Instruct-2512")
-adapter_path = snapshot_download("clemsail/eu-kiki-devstral-cpp-lora")
+adapter_path = snapshot_download("Ailiance-fr/devstral-cpp-lora")
 
 model, tokenizer = load(base_path)
 linear_to_lora_layers(model, num_layers=32, config={"rank": 16, "alpha": 32})
@@ -193,7 +193,7 @@ Or fuse and serve as a self-contained checkpoint:
 python -m mlx_lm fuse \
   --model mistralai/Devstral-Small-2-24B-Instruct-2512 \
   --adapter-path <adapter_path> \
-  --save-path /tmp/eu-kiki-devstral-cpp-lora-fused \
+  --save-path /tmp/devstral-cpp-lora-fused \
   --dequantize
 ```
 
````
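Since the rebrand changes the Hub repo id, any downstream code still pinned to the legacy `clemsail/eu-kiki-devstral-cpp-lora` id needs updating. A minimal sketch of a translation shim; both repo ids come from this commit, but the helper name and mapping table are illustrative assumptions, not part of the ailiance project:

```python
# Map legacy (pre-rebrand) repo ids to their current (post-rebrand) names.
# Hypothetical helper, not shipped by the project.
LEGACY_TO_CURRENT = {
    "clemsail/eu-kiki-devstral-cpp-lora": "Ailiance-fr/devstral-cpp-lora",
}

def resolve_repo_id(repo_id: str) -> str:
    """Return the current repo id, translating known legacy names;
    ids with no known rename pass through unchanged."""
    return LEGACY_TO_CURRENT.get(repo_id, repo_id)

print(resolve_repo_id("clemsail/eu-kiki-devstral-cpp-lora"))
# -> Ailiance-fr/devstral-cpp-lora
```

The result can then be passed to `huggingface_hub.snapshot_download` as in the README snippet above, so old pins keep working while new code uses the `Ailiance-fr/devstral-cpp-lora` id directly.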
199