Apertus-8B-Instruct-2509-NVFP4A16

NVFP4 quantization of swiss-ai/Apertus-8B-Instruct-2509 — part of the Swiss AI Apertus model family. 8B dense transformer supporting 1,811 languages with 65K context.

W4A16 — weights in FP4, activations in FP16 (weight-only quantization). See also Apertus-8B-Instruct-2509-NVFP4 for the full W4A4 variant.

Key Specs

|                | Original (BF16)                     | NVFP4 (this)  |
|----------------|-------------------------------------|---------------|
| Size on disk   | ~16 GB                              | ~5 GB         |
| Compression    |                                     | ~3.0x         |
| Parameters     | 8B                                  | 8B            |
| Architecture   | Dense transformer, xIELU activation | same          |
| Context window | 65,536 tokens                       | 65,536 tokens |
| Languages      | 1,811                               | 1,811         |
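The ~5 GB figure follows from the format arithmetic. A back-of-the-envelope sketch (the exact parameter count and which layers stay in BF16 are assumptions, not taken from the checkpoint):

```python
# Rough NVFP4 W4A16 size estimate for an ~8B-parameter model.
# Assumptions: ~8e9 quantized weights, one FP8 scale per group of 16.
params = 8.0e9
fp4_bits = 4            # E2M1 weight code
scale_bits = 8 / 16     # amortized FP8 scale per weight (group size 16)
gib = params * (fp4_bits + scale_bits) / 8 / 2**30
print(round(gib, 2))    # ~4.19 GiB for the quantized weights alone
# Embeddings, norms, and similar layers typically remain in BF16,
# which brings the on-disk total to roughly 5 GB.
```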

Serving with vLLM

vllm serve bg-digitalservices/Apertus-8B-Instruct-2509-NVFP4A16 \
  --quantization modelopt \
  --dtype auto \
  --kv-cache-dtype fp8 \
  --gpu-memory-utilization 0.85 \
  --max-model-len 65536 \
  --trust-remote-code
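The FP8 KV cache is what keeps the full 65,536-token window affordable alongside `--gpu-memory-utilization 0.85`. A rough cache-size sketch, assuming a Llama-style layout (32 layers, 8 KV heads, head dim 128 — these numbers are illustrative assumptions, not read from the Apertus config):

```python
# FP8 KV cache footprint per sequence at max context (1 byte/element).
layers, kv_heads, head_dim, seq_len = 32, 8, 128, 65536
kv_bytes = 2 * layers * kv_heads * head_dim * seq_len  # 2 = K and V
print(round(kv_bytes / 2**30, 1))  # ~4.0 GiB per full-length sequence
```

In BF16 the same cache would be twice that, so `--kv-cache-dtype fp8` roughly doubles how many full-context sequences fit in the leftover memory.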

DGX Spark

VLLM_NVFP4_GEMM_BACKEND=marlin vllm serve bg-digitalservices/Apertus-8B-Instruct-2509-NVFP4A16 \
  --quantization modelopt \
  --dtype auto \
  --kv-cache-dtype fp8 \
  --max-model-len 65536 \
  --trust-remote-code

Testing

This is an instruct model with tool use support — use the chat completions endpoint.
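A minimal sketch of a tool-calling request against vLLM's OpenAI-compatible `/v1/chat/completions` endpoint (the `get_weather` tool schema and the localhost URL are illustrative assumptions, not part of the model):

```python
import json

# Request body for POST http://localhost:8000/v1/chat/completions.
# The weather tool below is a hypothetical example tool definition.
request = {
    "model": "bg-digitalservices/Apertus-8B-Instruct-2509-NVFP4A16",
    "messages": [
        {"role": "user", "content": "What's the weather in Bern right now?"}
    ],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    "tool_choice": "auto",
}
payload = json.dumps(request)
```

Send the payload with any HTTP client; when the model decides to invoke the tool, the response's message carries a `tool_calls` entry instead of plain text.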

Quantization Details

  • Method: NVIDIA Model Optimizer (modelopt) v0.43
  • Format: NVFP4 — E2M1 weights with per-group FP8 scales (group size 16)
  • Calibration: 4096 samples from CNN/DailyMail, batch size 32, seq_len 1024
  • Hardware: NVIDIA H200 GPU
  • Quantization script: included as quantize.py
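The E2M1-with-group-scales format above can be illustrated with a small quantize/dequantize round trip. This is a simulation only: real NVFP4 packs two 4-bit codes per byte and stores the scales as FP8 E4M3, whereas this sketch keeps scales in float:

```python
import numpy as np

# Representable magnitudes of FP4 E2M1 (1 sign, 2 exponent, 1 mantissa bit).
E2M1_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def nvfp4_quant_dequant(w, group_size=16):
    """Map each group's max magnitude onto the E2M1 grid via a
    per-group scale, then round every value to the nearest grid point."""
    w = np.asarray(w, dtype=np.float64)
    out = np.empty_like(w)
    for start in range(0, w.size, group_size):
        g = w[start:start + group_size]
        # 6.0 is the largest E2M1 magnitude; guard against all-zero groups.
        scale = max(np.abs(g).max() / 6.0, 1e-12)
        idx = np.abs(np.abs(g / scale)[:, None] - E2M1_GRID).argmin(axis=1)
        out[start:start + group_size] = np.sign(g) * E2M1_GRID[idx] * scale
    return out
```

For example, `nvfp4_quant_dequant([0.0, 0.6, 6.0, -1.4])` keeps the group maximum exact and snaps the rest to the nearest representable point.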

About Apertus

Apertus is a fully open, privacy-first model family built by Swiss AI and trained on 4,096 GH200 GPUs. Key features:

  • 1,811 native languages
  • Novel xIELU activation + AdEMAMix optimizer
  • EU AI Act compliant, respects opt-out consent
  • Full training transparency (weights, data, scripts all public)

License

Apache 2.0 — inherited from the base model.

Citation

If you use this model, please cite the original Apertus work:

@misc{swisstransformer2025apertus,
  title   = {Apertus},
  author  = {Swiss Transformer},
  year    = {2025},
  url     = {https://huggingface.co/swiss-ai}
}

Credits

Quantized by Mario Iseli on an NVIDIA H200. Built and validated with AI-engineering assistance from Anthropic.

📬 mario@marioiseli.com · Buy me a coffee if this makes your inference go brrrrrr! 🚀
