# Ailiance Apertus-70B-Instruct spice-sim LoRA

LoRA adapter fine-tuned on swiss-ai/Apertus-70B-Instruct-2509 for spice-sim tasks.

Maintained by Ailiance, a French AI org publishing EU AI Act-aligned LoRA adapters and datasets.
## Quick start (MLX)

```python
from mlx_lm import load, generate

model, tokenizer = load(
    "swiss-ai/Apertus-70B-Instruct-2509",
    adapter_path="Ailiance-fr/apertus-spice-sim-lora",
)
print(generate(model, tokenizer, prompt="..."))
```
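Since the base is instruction-tuned, routing the prompt through the tokenizer's chat template is usually preferable to raw text. A minimal sketch, assuming the standard Hugging Face apply_chat_template interface (the message content is a placeholder):

```python
from mlx_lm import load, generate

model, tokenizer = load(
    "swiss-ai/Apertus-70B-Instruct-2509",
    adapter_path="Ailiance-fr/apertus-spice-sim-lora",
)

# Build an instruct-formatted prompt from chat messages.
messages = [{"role": "user", "content": "..."}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False
)

print(generate(model, tokenizer, prompt=prompt, max_tokens=256))
```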
## Training
| Hyperparameter | Value |
|---|---|
| Base model | swiss-ai/Apertus-70B-Instruct-2509 |
| Method | LoRA via mlx-lm |
| Rank | 16 |
| Scale | 2.0 |
| Alpha | 32 |
| Max seq length | 2048 |
| Iterations | 475 |
| Optimizer | Adam, LR 1e-5 |
| Hardware | Apple M3 Ultra 512 GB |
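Note that the Scale and Alpha rows are consistent under the standard LoRA convention scale = alpha / rank = 32 / 16 = 2.0. An illustrative MLX sketch of the adapted forward pass (dimensions and inits are made up for the example, not the trainer's internals):

```python
import mlx.core as mx

rank, alpha = 16, 32
scale = alpha / rank  # 2.0, matching the Scale row above

def lora_linear(x, W, A, B):
    # W is the frozen base weight (d_out x d_in); A (rank x d_in) and
    # B (d_out x rank) are the trained low-rank factors, scaled by alpha/rank.
    return x @ W.T + scale * ((x @ A.T) @ B.T)

# With B zero-initialized, the adapter starts as a no-op over the base layer.
x = mx.random.normal((1, 64))
W = mx.random.normal((64, 64))
A = mx.random.normal((rank, 64))
B = mx.zeros((64, rank))
y = lora_linear(x, W, A, B)
```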
## Training data lineage

| Role | Dataset | License |
|---|---|---|
| Primary corpus | Ailiance-fr/mascarade-spice-dataset | cc-by-sa-4.0 |

For per-sample provenance and attribution status, consult the dataset card.
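To spot-check provenance fields locally, one option is loading the corpus with the datasets library; a sketch assuming a standard train split (the record fields are whatever the dataset card defines):

```python
from datasets import load_dataset

# Pull the primary corpus and inspect one record's provenance/attribution
# fields; the "train" split name is an assumption.
ds = load_dataset("Ailiance-fr/mascarade-spice-dataset", split="train")
print(ds[0])
```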
## Benchmark roadmap

This LoRA has not yet been evaluated through electron-bench (the current
pipeline supports the gemma-4-E4B base only). Training used the standard
mlx-lm LoRA trainer; the full hyperparameters are in the Training table above.

Planned evaluations:

- Perplexity on the validation split of the training data (see the sketch after this list)
- Functional benchmark on Apertus-specific tasks
- Comparison vs the base swiss-ai/Apertus-70B-Instruct-2509
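For the perplexity item, a hedged MLX sketch of the intended measurement, exp(mean next-token negative log-likelihood) over held-out text (the sample text is a placeholder; the eventual run will iterate over the validation split):

```python
import math
import mlx.core as mx
from mlx_lm import load

model, tokenizer = load(
    "swiss-ai/Apertus-70B-Instruct-2509",
    adapter_path="Ailiance-fr/apertus-spice-sim-lora",
)

tokens = mx.array(tokenizer.encode("..."))  # one held-out validation sample

# Logits at positions 0..T-2 predict tokens 1..T-1.
logits = model(tokens[None, :-1])
logprobs = logits - mx.logsumexp(logits, axis=-1, keepdims=True)
nll = -mx.take_along_axis(logprobs, tokens[None, 1:, None], axis=-1)
print("perplexity:", math.exp(mx.mean(nll).item()))
```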
Track progress: ailiance-bench issues.
For reference benchmarks on the gemma-4-E4B base, see the
base-vs-LoRA matrix.
## License chain

| Component | License |
|---|---|
| Base model (swiss-ai/Apertus-70B-Instruct-2509) | apache-2.0 |
| Training data (Ailiance-fr/mascarade-spice-dataset) | cc-by-sa-4.0 |
| LoRA adapter (this repo) | cc-by-sa-4.0 |

The most restrictive license in the chain (CC-BY-SA-4.0, share-alike) propagates to derivatives.
## EU AI Act compliance

- Article 53(1)(c): training data licenses preserved (per-dataset cards declare upstream licenses).
- Article 53(1)(d): training data summary; see the upstream dataset cards on Ailiance-fr.
- GPAI Code of Practice (July 2025): the base swiss-ai/Apertus-70B-Instruct-2509 is released under apache-2.0.
- No web scraping by Ailiance, no licensed data, no PII.
- Upstream Stack Exchange content (where applicable) is CC-BY-SA-4.0 and propagates to this adapter.
## License

LoRA weights: cc-by-sa-4.0. See the License chain table above for the derivation rationale.
Citation
@misc{ailiance_apertus_spice_sim_2026,
author = {Ailiance},
title = {Ailiance β Apertus-70B-Instruct spice-sim LoRA},
year = {2026},
publisher = {Hugging Face},
url = {https://huggingface.co/Ailiance-fr/apertus-spice-sim-lora}
}
## Related
See the full Ailiance-fr LoRA collection.
## Bench comparison (2026-05-11)

### Base model (Apertus-70B-Instruct-2509) capability

| Task | Score | Notes |
|---|---|---|
| ARC-Easy acc / acc_norm | 0.81 / 0.77 | W3 lm-eval-harness BF16 |
| GSM8K-CoT | TIMEOUT (1800s budget) | base 70B BF16 too slow for CoT |
| MMLU-Pro Computer Science | TIMEOUT | |
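A hedged sketch of reproducing the ARC-Easy row with lm-evaluation-harness from Python; the exact W3 invocation (batch size, device placement) is not recorded here, so model_args is an assumption:

```python
import lm_eval

# Score the BF16 base on ARC-Easy; acc and acc_norm appear in the results dict.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=swiss-ai/Apertus-70B-Instruct-2509,dtype=bfloat16",
    tasks=["arc_easy"],
)
print(results["results"]["arc_easy"])
```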
### This LoRA (tuned): bench PENDING
Production usage: served via the gateway alias ailiance-apertus-<domain> on
https://www.ailiance.fr through the Apertus multi-LoRA hot-swap server
(Studio :9322; 1 base + 10 dynamically swapped LoRAs, ~40 GB VRAM).
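Assuming the gateway speaks an OpenAI-compatible chat API (this card does not document the endpoint shape), a request against an alias might look like the sketch below; the path, payload schema, and `<domain>` placeholder are all assumptions:

```python
import requests

# Hypothetical gateway call: substitute a real alias for the <domain>
# placeholder. The /v1/chat/completions path and payload are assumed
# (OpenAI-compatible) and not confirmed by this card.
resp = requests.post(
    "https://www.ailiance.fr/v1/chat/completions",
    json={
        "model": "ailiance-apertus-<domain>",
        "messages": [{"role": "user", "content": "..."}],
    },
    timeout=60,
)
print(resp.json())
```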