---
license: cc-by-sa-4.0
base_model: swiss-ai/Apertus-70B-Instruct-2509
library_name: peft
tags:
- mlx
- lora
- peft
- ailiance
- apertus
- embedded
language:
- en
- fr
pipeline_tag: text-generation
---
# Ailiance – Apertus-70B-Instruct embedded LoRA
LoRA adapter fine-tuned on top of `swiss-ai/Apertus-70B-Instruct-2509` for **embedded** tasks.
> Maintained by **Ailiance**, a French AI organization publishing EU AI Act-aligned LoRA adapters and datasets.
## Quick start (MLX)
```python
from mlx_lm import load, generate

# Load the base model and apply this LoRA adapter on top of it
model, tokenizer = load(
    "swiss-ai/Apertus-70B-Instruct-2509",
    adapter_path="Ailiance-fr/apertus-embedded-lora",
)
print(generate(model, tokenizer, prompt="..."))
```
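Apertus-70B-Instruct is a chat-tuned model, so wrapping the prompt in its chat template usually gives better results. The sketch below assumes the tokenizer returned by `load()` exposes the Hugging Face `apply_chat_template` method and that your `mlx-lm` version accepts a `max_tokens` argument; the example prompt is purely illustrative.

```python
from mlx_lm import load, generate

model, tokenizer = load(
    "swiss-ai/Apertus-70B-Instruct-2509",
    adapter_path="Ailiance-fr/apertus-embedded-lora",
)

# Wrap the user turn in the model's chat template before generating.
# The tokenizer returned by load() wraps the Hugging Face tokenizer,
# so apply_chat_template is expected to be available.
messages = [{"role": "user", "content": "Explain what a watchdog timer does."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

print(generate(model, tokenizer, prompt=prompt, max_tokens=256))
```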
## Training
| Hyperparameter | Value |
|------------------|------------------------|
| Base model | `swiss-ai/Apertus-70B-Instruct-2509` |
| Method | LoRA via `mlx-lm` |
| Rank | 16 |
| Scale | 2.0 |
| Alpha | 32 |
| Max seq length | 1024 |
| Iterations | 500 |
| Optimizer | Adam |
| Learning rate | 1e-5 |
| Hardware | Apple M3 Ultra 512 GB |
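For anyone trying to reproduce the run, the table above maps roughly onto an `mlx-lm` LoRA configuration file. The sketch below writes such a config from Python; the key names follow the example LoRA config shipped with `mlx-lm` and may differ between versions, and the data path is a placeholder, not the actual training layout.

```python
import yaml

# Approximate reproduction config; key names follow the example LoRA config
# shipped with mlx-lm and may differ between mlx-lm versions.
config = {
    "model": "swiss-ai/Apertus-70B-Instruct-2509",
    "train": True,
    "data": "path/to/embedded_train_data",  # placeholder, not the real layout
    "iters": 500,
    "learning_rate": 1e-5,
    "max_seq_length": 1024,
    "lora_parameters": {
        "rank": 16,
        "scale": 2.0,  # consistent with alpha / rank = 32 / 16
        "dropout": 0.0,
    },
}

with open("apertus_embedded_lora.yaml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)

# Pass the file to the trainer, e.g.
#   python -m mlx_lm.lora --config apertus_embedded_lora.yaml
# (check the exact flag for your mlx-lm version).
```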
## Training data lineage
| Role | Dataset | License |
|-----------------|--------------------------------------------------------------------------------------------------|----------------|
| Primary corpus | [`Ailiance-fr/mascarade-embedded-dataset`](https://huggingface.co/datasets/Ailiance-fr/mascarade-embedded-dataset) | cc-by-sa-4.0 |
For per-sample provenance and attribution status, consult the dataset card.
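To inspect per-sample provenance directly, the corpus can be loaded with the `datasets` library. This is a minimal sketch; split names and column layout are whatever the dataset card declares, so the code prints them rather than assuming them.

```python
from datasets import load_dataset

# Load the primary corpus from the Hub and inspect its structure.
ds = load_dataset("Ailiance-fr/mascarade-embedded-dataset")

print(ds)  # available splits and row counts
first_split = next(iter(ds))
print(ds[first_split].features)  # column schema, including any provenance fields
```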
## Benchmark roadmap
This LoRA has **not yet been evaluated** through `electron-bench` (the current
pipeline supports the `gemma-4-E4B` base only). Training used the standard
`mlx-lm` LoRA trainer; the full hyperparameters are listed in the Training table above.
Planned evaluations:
- Perplexity on the validation split of the training data (a reproduction sketch follows at the end of this section)
- Functional benchmark on **apertus**-specific tasks
- Comparison vs base `swiss-ai/Apertus-70B-Instruct-2509`
Track progress: [ailiance-bench issues](https://github.com/ailiance/ailiance-bench/issues).
For reference benchmarks on the `gemma-4-E4B` base, see the
[base-vs-LoRA matrix](https://github.com/ailiance/ailiance-bench/blob/main/bench-results/compare_base_vs_lora.md).
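Until the `electron-bench` run lands, a rough validation-perplexity number can be computed directly with `mlx-lm`. The sketch below is a minimal illustration, assuming a plain list of validation strings; it is not the planned benchmark harness, and the placeholder texts must be replaced with the actual validation split.

```python
import math

import mlx.core as mx
import mlx.nn as nn
from mlx_lm import load

model, tokenizer = load(
    "swiss-ai/Apertus-70B-Instruct-2509",
    adapter_path="Ailiance-fr/apertus-embedded-lora",
)

# Placeholder validation texts; replace with the validation split of the
# training corpus before trusting the number.
valid_texts = ["..."]

total_nll, total_tokens = 0.0, 0
for text in valid_texts:
    tokens = tokenizer.encode(text)
    inputs = mx.array(tokens[:-1])[None]    # (1, T-1) input tokens
    targets = mx.array(tokens[1:])[None]    # next-token targets
    logits = model(inputs).astype(mx.float32)
    nll = nn.losses.cross_entropy(logits, targets, reduction="sum")
    total_nll += nll.item()
    total_tokens += targets.size

print("validation perplexity:", math.exp(total_nll / total_tokens))
```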
## License chain
| Component | License |
|-----------------------------------|-------------------|
| Base model (`swiss-ai/Apertus-70B-Instruct-2509`) | apache-2.0 |
| Training data ([`Ailiance-fr/mascarade-embedded-dataset`](https://huggingface.co/datasets/Ailiance-fr/mascarade-embedded-dataset)) | cc-by-sa-4.0 |
| **LoRA adapter (this repo)** | **cc-by-sa-4.0**|
_Most restrictive license in the chain (CC-BY-SA-4.0 share-alike) propagates to derivatives._
## EU AI Act compliance
- **Article 53(1)(c)**: training data licenses preserved (per-dataset cards declare upstream licenses).
- **Article 53(1)(d)**: training data summary – see upstream dataset cards on Ailiance-fr.
- **GPAI Code of Practice (July 2025)**: base `swiss-ai/Apertus-70B-Instruct-2509` released under apache-2.0.
- **No web scraping by Ailiance**, **no licensed data**, **no PII**.
- Upstream Stack Exchange content (where applicable) is CC-BY-SA-4.0 and propagates to this adapter.
## License
LoRA weights: **cc-by-sa-4.0** – see the License chain table above for the derivation rationale.
## Citation
```bibtex
@misc{ailiance_apertus_embedded_2026,
  author    = {Ailiance},
  title     = {Ailiance -- Apertus-70B-Instruct embedded LoRA},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/Ailiance-fr/apertus-embedded-lora}
}
```
## Related
See the full [Ailiance-fr LoRA collection](https://huggingface.co/Ailiance-fr).
## Bench comparison (2026-05-11)
### Base model (Apertus-70B-Instruct-2509) capability
| Task | Score | Notes |
|---|---:|---|
| ARC-Easy acc / acc_norm | **0.81 / 0.77** | W3 lm-eval-harness BF16 |
| GSM8K-CoT | TIMEOUT (1800s budget) | base 70B BF16 too slow for CoT |
| MMLU-Pro Computer Science | TIMEOUT | |
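The base-model rows above were produced with `lm-eval-harness`. A rough reproduction sketch for the ARC-Easy row is shown below; the model arguments, task name, and result keys reflect a typical Hugging Face BF16 harness setup and are assumptions, not the exact command used for the table.

```python
# Hypothetical reproduction of the ARC-Easy row with lm-eval-harness;
# model_args and task names reflect a typical HF BF16 setup, not the exact run.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=swiss-ai/Apertus-70B-Instruct-2509,dtype=bfloat16",
    tasks=["arc_easy"],
    batch_size=1,
)
print(results["results"]["arc_easy"])  # contains acc and acc_norm entries
```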
### This LoRA (tuned) – bench PENDING
Production usage: served via the gateway alias `ailiance-apertus-<domain>` on
<https://www.ailiance.fr>, through the Apertus multi-LoRA hot-swap server
(Studio :9322; one base model plus 10 dynamically swapped LoRA adapters, ~40 GB VRAM).