Ailiance – Apertus-70B-Instruct emc-dsp-power LoRA

LoRA adapter fine-tuned on swiss-ai/Apertus-70B-Instruct-2509 for emc-dsp-power tasks.

Maintained by Ailiance, a French AI org publishing EU AI Act-aligned LoRA adapters and datasets.

✅ Training data attribution audited (2026-05-11)

Trained on Ailiance-fr/mascarade-emc-dataset (4.05% SE), Ailiance-fr/mascarade-dsp-dataset (5.35% SE), and Ailiance-fr/mascarade-power-dataset (4.87% SE). Per-sample Stack Exchange Electronics attribution was recovered via the SE API search: only ~4.05–5.35% of samples per dataset originate from Stack Exchange Electronics, and those samples are now fully attributed in metadata.stack_exchange_attribution (URL, author, post_id, creation_date).

The original heuristic estimate of "~30% SE" over-counted by roughly 6–7× (style ≠ source). See the audit trail for the methodology (/search/advanced + body match ≥ 0.60), sketched below.

Remaining content is either (a) SE-like in style but not findable via the API (marked attribution_recovery=not_found_on_se, probably synthetic) or (b) synthetic LLM-generated.
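
For reference, here is a minimal sketch of the recovery step under these assumptions: the public Stack Exchange API v2.3 /search/advanced endpoint, a difflib-based body match against the ≥ 0.60 threshold, and a hypothetical helper find_se_source. It is illustrative only, not the production audit script.

from difflib import SequenceMatcher

import requests

SE_SEARCH = "https://api.stackexchange.com/2.3/search/advanced"

def find_se_source(sample_body: str, title_hint: str, threshold: float = 0.60):
    """Return the best-matching electronics.SE post for a training sample, or None."""
    resp = requests.get(
        SE_SEARCH,
        params={
            "site": "electronics",   # Stack Exchange Electronics
            "q": title_hint,         # free-form query for /search/advanced
            "filter": "withbody",    # include post bodies in the response
            "pagesize": 10,
        },
        timeout=30,
    )
    resp.raise_for_status()

    best = None
    for item in resp.json().get("items", []):
        score = SequenceMatcher(None, sample_body, item.get("body", "")).ratio()
        if score >= threshold and (best is None or score > best[0]):
            best = (score, item)

    if best is None:
        return None  # recorded as attribution_recovery=not_found_on_se
    score, item = best
    return {  # mirrors metadata.stack_exchange_attribution
        "url": item["link"],
        "author": item["owner"].get("display_name"),
        "post_id": item["question_id"],
        "creation_date": item["creation_date"],
        "body_match": round(score, 2),
    }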

Quick start (MLX)

from mlx_lm import load, generate

model, tokenizer = load(
    "swiss-ai/Apertus-70B-Instruct-2509",
    adapter_path="Ailiance-fr/apertus-emc-dsp-power-lora",
)

print(generate(model, tokenizer, prompt="..."))
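
Because the base model is instruction-tuned, prompts are usually formatted with the chat template first. A minimal sketch, continuing from the quick start above and assuming the tokenizer returned by load exposes the standard Hugging Face apply_chat_template (the prompt text is illustrative only):

# Reuses model and tokenizer from the quick start above.
messages = [{"role": "user", "content": "Explain common-mode chokes for EMC filtering."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(generate(model, tokenizer, prompt=prompt, max_tokens=256))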

Training

| Hyperparameter | Value |
| --- | --- |
| Base model | swiss-ai/Apertus-70B-Instruct-2509 |
| Method | LoRA via mlx-lm |
| Rank | 16 |
| Scale | 2.0 |
| Alpha | 32 |
| Max seq length | 1024 |
| Iterations | 500 |
| Optimizer | Adam, LR 1e-5 |
| Hardware | Apple M3 Ultra, 512 GB |

Training data lineage

| Role | Dataset | License |
| --- | --- | --- |
| Primary corpus | Ailiance-fr/mascarade-emc-dataset | cc-by-sa-4.0 |
| Companion | Ailiance-fr/mascarade-dsp-dataset | cc-by-sa-4.0 |
| Companion | Ailiance-fr/mascarade-power-dataset | cc-by-sa-4.0 |

For per-sample provenance and attribution status, consult the corresponding dataset cards.

Benchmark roadmap

This LoRA has not yet been evaluated with electron-bench (the current pipeline supports only the gemma-4-E4B base). Training used the standard mlx-lm LoRA trainer (rank 16, alpha 32, scale 2.0, AdamW, LR 1e-5, 500 iterations); full hyperparameters are in the Training table above.

Planned evaluations:

  • Perplexity on the validation split of the training data
  • Functional benchmark on apertus-specific tasks
  • Comparison vs base swiss-ai/Apertus-70B-Instruct-2509

Track progress: ailiance-bench issues.

For reference benchmarks on the gemma-4-E4B base, see the base-vs-LoRA matrix.

License chain

| Component | License |
| --- | --- |
| Base model (swiss-ai/Apertus-70B-Instruct-2509) | apache-2.0 |
| Training data (Ailiance-fr/mascarade-emc-dataset) | cc-by-sa-4.0 |
| LoRA adapter (this repo) | cc-by-sa-4.0 |

The most restrictive license in the chain (CC-BY-SA-4.0, share-alike) propagates to derivatives.

EU AI Act compliance

  • Article 53(1)(c): training data licenses preserved (per-dataset cards declare upstream licenses).
  • Article 53(1)(d): training data summary β€” see upstream dataset cards on Ailiance-fr.
  • GPAI Code of Practice (July 2025): base swiss-ai/Apertus-70B-Instruct-2509 released under apache-2.0.
  • No web scraping by Ailiance, no licensed data, no PII.
  • Upstream Stack Exchange content (where applicable) is CC-BY-SA-4.0 and propagates to this adapter.

License

LoRA weights: cc-by-sa-4.0; see the License chain table above for the derivation rationale.

Citation

@misc{ailiance_apertus_emc_dsp_power_2026,
  author    = {Ailiance},
  title     = {Ailiance -- Apertus-70B-Instruct emc-dsp-power LoRA},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/Ailiance-fr/apertus-emc-dsp-power-lora}
}

Related

See the full Ailiance-fr LoRA collection.

Bench comparison (2026-05-11)

Base model (Apertus-70B-Instruct-2509) capability

| Task | Score | Notes |
| --- | --- | --- |
| ARC-Easy acc / acc_norm | 0.81 / 0.77 | W3 lm-eval-harness, BF16 |
| GSM8K-CoT | TIMEOUT (1800 s budget) | base 70B in BF16 too slow for CoT |
| MMLU-Pro Computer Science | TIMEOUT | |

This LoRA (tuned): bench PENDING

Production usage: served via the gateway alias ailiance-apertus-<domain> on https://www.ailiance.fr through the Apertus multi-LoRA hot-swap server (Studio :9322, 1 base + 10 LoRA dynamic swap, ~40 GB VRAM).
