---
license: apache-2.0
base_model: mistralai/Devstral-Small-2-24B-Instruct-2512
library_name: peft
tags:
- mlx
- lora
- peft
- ailiance
- devstral
- python
language:
- en
- fr
pipeline_tag: text-generation
---

# Ailiance · Devstral-Small-2-24B-Instruct Python LoRA

A LoRA adapter fine-tuned on `mistralai/Devstral-Small-2-24B-Instruct-2512` for **Python** coding tasks.

> Maintained by **Ailiance**, a French AI organization publishing EU AI Act-aligned LoRA adapters and datasets.

## Quick start (MLX)

```python
from mlx_lm import load, generate

# Load the base model and attach this LoRA adapter on top of it.
model, tokenizer = load(
    "mistralai/Devstral-Small-2-24B-Instruct-2512",
    adapter_path="Ailiance-fr/devstral-python-lora",
)

# Replace "..." with your prompt.
print(generate(model, tokenizer, prompt="..."))
```

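Devstral is an instruction-tuned model, so prompts generally work best when passed through the tokenizer's chat template. A minimal sketch of that flow, using the same repo ids as above (the example request itself is a hypothetical placeholder):

```python
from mlx_lm import load, generate

model, tokenizer = load(
    "mistralai/Devstral-Small-2-24B-Instruct-2512",
    adapter_path="Ailiance-fr/devstral-python-lora",
)

# Wrap the request in the model's chat template before generating;
# the request below is illustrative only.
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

print(generate(model, tokenizer, prompt=prompt, max_tokens=256))
```
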
## Training

| Hyperparameter | Value |
|----------------|-------|
| Base model     | `mistralai/Devstral-Small-2-24B-Instruct-2512` |
| Method         | LoRA via `mlx-lm` |
| Rank           | 16 |
| Scale          | 2.0 |
| Alpha          | 32 |
| Max seq length | 2048 |
| Iterations     | 500 |
| Optimizer      | Adam |
| Learning rate  | 1e-5 |
| Hardware       | Apple M3 Ultra, 512 GB |

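For reference, a comparable run can be launched through the `mlx_lm.lora` CLI. The sketch below is an approximation, not the exact training script: `data/` and `adapters/` are placeholder paths, and recent `mlx-lm` releases take the LoRA rank/scale through a YAML config file (passed via `--config`) rather than dedicated flags, so option names may vary across versions.

```python
import subprocess

# Approximate reproduction of the hyperparameters above. "data/" is a
# placeholder directory expected to hold train.jsonl / valid.jsonl;
# rank, scale, and alpha go in a YAML file passed with --config.
subprocess.run(
    [
        "python", "-m", "mlx_lm.lora",
        "--model", "mistralai/Devstral-Small-2-24B-Instruct-2512",
        "--train",
        "--data", "data/",
        "--iters", "500",
        "--learning-rate", "1e-5",
        "--max-seq-length", "2048",
        "--adapter-path", "adapters/",
    ],
    check=True,
)
```
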
## Training data lineage

Derived from the internal **eu-kiki / mascarade** curation. All upstream samples are synthetic, permissively licensed, or generated from Apache-2.0 base resources. See the [Ailiance-fr catalog](https://huggingface.co/Ailiance-fr) for related cards.

## License chain

| Component | License |
|-----------|---------|
| Base model (`mistralai/Devstral-Small-2-24B-Instruct-2512`) | apache-2.0 |
| Training data (internal Ailiance curation; synthetic + permissive sources) | apache-2.0 |
| **LoRA adapter (this repo)** | **apache-2.0** |

_All upstream components are Apache 2.0 / MIT, so the LoRA adapter inherits their permissive terms; where Stack Exchange content applies, see the CC-BY-SA-4.0 caveat under EU AI Act compliance below._

## EU AI Act compliance

- **Article 53(1)(c)**: training data licenses are preserved (per-dataset cards declare upstream licenses).
- **Article 53(1)(d)**: for the training data summary, see the upstream dataset cards on Ailiance-fr.
- **GPAI Code of Practice (July 2025)**: the base `mistralai/Devstral-Small-2-24B-Instruct-2512` is released under apache-2.0.
- **No web scraping by Ailiance**, **no restrictively licensed data**, **no PII**.
- Upstream Stack Exchange content (where applicable) is licensed CC-BY-SA-4.0, and that license propagates to this adapter.

## License

LoRA weights: **apache-2.0**. See the license chain table above for the derivation rationale.

## Citation

```bibtex
@misc{ailiance_devstral_python_2026,
  author = {Ailiance},
  title = {Ailiance · Devstral-Small-2-24B-Instruct Python LoRA},
  year = {2026},
  publisher = {Hugging Face},
  url = {https://huggingface.co/Ailiance-fr/devstral-python-lora}
}
```

## Related

See the full [Ailiance-fr LoRA collection](https://huggingface.co/Ailiance-fr).