Ailiance β€” EuroLLM-22B-Instruct traduction-tech LoRA

LoRA adapter fine-tuned on utter-project/EuroLLM-22B-Instruct-2512 for technical-translation (traduction-tech) tasks.

Maintained by Ailiance, a French AI organization publishing EU AI Act-aligned LoRA adapters and datasets.

Quick start (MLX)

from mlx_lm import load, generate

model, tokenizer = load(
    "utter-project/EuroLLM-22B-Instruct-2512",
    adapter_path="Ailiance-fr/eurollm-traduction-tech-lora",
)

print(generate(model, tokenizer, prompt="..."))

Training

Hyperparameter    Value
Base model        utter-project/EuroLLM-22B-Instruct-2512
Method            LoRA via mlx-lm
Rank              16
Scale             2.0
Alpha             32
Max seq length    2048
Iterations        500
Optimizer         Adam, LR 1e-5
Hardware          Apple M3 Ultra, 512 GB
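The rank, alpha, and scale values above are mutually consistent: the conventional LoRA scaling factor is alpha / rank = 32 / 16 = 2.0, matching the reported scale. A minimal numerical sketch of how such an adapter modifies a frozen weight (illustrative only, not the mlx-lm internals; matrix sizes are made up for the demo):

```python
import numpy as np

# For a frozen base weight W, a LoRA adapter adds scale * (B @ A),
# where A is (rank, d_in) and B is (d_out, rank).
# With this card's settings, scale = alpha / rank = 32 / 16 = 2.0.
rank, alpha = 16, 32
scale = alpha / rank

d_in, d_out = 64, 64                 # toy dimensions, not the model's
rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))   # frozen base weight
A = rng.normal(size=(rank, d_in)) * 0.01
B = np.zeros((d_out, rank))          # B starts at zero: training begins at the base model

x = rng.normal(size=(d_in,))
y = W @ x + scale * (B @ (A @ x))    # adapted forward pass

assert np.allclose(y, W @ x)         # with B = 0, output equals the base model's
print(scale)                         # 2.0
```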

Training data lineage

Derived from the internal eu-kiki / mascarade curation. All upstream samples are synthetic, permissively-licensed, or generated from Apache-2.0 base resources. See the Ailiance-fr catalog for related cards.

Benchmark roadmap

This LoRA has not yet been evaluated with electron-bench (the current pipeline supports only the gemma-4-E4B base). Training used the standard mlx-lm LoRA trainer (rank 16, alpha 32, scale 2.0, Adam, LR 1e-5, 500 iterations); full hyperparameters are in the Training table above.

Planned evaluations:

  • Perplexity on the validation split of the training data
  • Functional benchmark on eurollm-specific tasks
  • Comparison against the base model, utter-project/EuroLLM-22B-Instruct-2512
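The perplexity evaluation planned above reduces to the exponential of the mean negative log-likelihood over held-out tokens. A minimal sketch, assuming per-token log-probabilities are already available from the model (the helper name and the toy values are hypothetical):

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the mean negative log-likelihood per token."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# Toy values standing in for real model outputs: if every token gets
# probability 0.5, perplexity is exactly 2.
print(perplexity([math.log(0.5)] * 4))   # 2.0
```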

Track progress: ailiance-bench issues.

For reference benchmarks on the gemma-4-E4B base, see the base-vs-LoRA matrix.

License chain

Component                                                                  License
Base model: utter-project/EuroLLM-22B-Instruct-2512                        apache-2.0
Training data: internal Ailiance curation (synthetic + permissive sources) apache-2.0
LoRA adapter: this repo                                                    apache-2.0

All upstream components are Apache 2.0 or MIT, so the LoRA inherits permissive terms; the one caveat is any Stack Exchange-derived content, which is CC-BY-SA-4.0 (see EU AI Act compliance below).

EU AI Act compliance

  • Article 53(1)(c): training data licenses preserved (per-dataset cards declare upstream licenses).
  • Article 53(1)(d): training data summary β€” see upstream dataset cards on Ailiance-fr.
  • GPAI Code of Practice (July 2025): base utter-project/EuroLLM-22B-Instruct-2512 released under apache-2.0.
  • No web scraping by Ailiance, no proprietarily licensed data, no PII.
  • Upstream Stack Exchange content (where applicable) is CC-BY-SA-4.0 and propagates to this adapter.

License

LoRA weights: apache-2.0 β€” see License chain table above for derivation rationale.

Citation

@misc{ailiance_eurollm_traduction_tech_2026,
  author    = {Ailiance},
  title     = {Ailiance β€” EuroLLM-22B-Instruct traduction-tech LoRA},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/Ailiance-fr/eurollm-traduction-tech-lora}
}

Related

See the full Ailiance-fr LoRA collection.
