Apertus-70B-2509-NVFP4

NVFP4 quantization of swiss-ai/Apertus-70B-2509 — part of the Swiss AI Apertus model family. 70B dense transformer supporting 1,811 languages with 65K context.

W4A4 — both weights and activations in FP4, for maximum speed on Blackwell GPUs. See also Apertus-70B-2509-NVFP4A16 for the weight-only W4A16 variant.

Key Specs

Spec             Original (BF16)                    NVFP4 (this model)
Size on disk     ~140 GB                            ~35 GB
Compression      —                                  ~3.0x
Parameters       70B                                70B
Architecture     Dense transformer, xIELU           same
Context window   65,536 tokens                      65,536 tokens
Languages        1,811                              1,811

Serving with vLLM

vllm serve bg-digitalservices/Apertus-70B-2509-NVFP4 \
  --quantization modelopt \
  --dtype auto \
  --kv-cache-dtype fp8 \
  --gpu-memory-utilization 0.85 \
  --max-model-len 65536 \
  --trust-remote-code
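Once the server is up, it exposes vLLM's OpenAI-compatible API. A minimal client sketch, assuming the server runs on the default port 8000 (adjust host/port to your deployment):

```python
# Query the vLLM OpenAI-compatible /v1/completions endpoint.
# Apertus is a base model, so plain-text completions are used
# rather than the chat endpoint.
import json
import urllib.request

MODEL = "bg-digitalservices/Apertus-70B-2509-NVFP4"

def build_completion_request(prompt, max_tokens=64, temperature=0.7):
    """Build a request payload for /v1/completions."""
    return {
        "model": MODEL,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

if __name__ == "__main__":
    payload = build_completion_request("The capital of Switzerland is")
    req = urllib.request.Request(
        "http://localhost:8000/v1/completions",  # assumed local endpoint
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["choices"][0]["text"])
```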

DGX Spark

VLLM_NVFP4_GEMM_BACKEND=marlin vllm serve bg-digitalservices/Apertus-70B-2509-NVFP4 \
  --quantization modelopt \
  --dtype auto \
  --kv-cache-dtype fp8 \
  --max-model-len 65536 \
  --trust-remote-code

Testing

This is a base (pretrained) model: it has no instruction tuning or chat template, so evaluate it with plain-text completions rather than chat-style requests.
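A quick smoke test against a running server (assuming one of the vLLM commands above, serving on the default port 8000):

```shell
# Raw completions request; expects a JSON response with a "choices" array.
curl -s http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "bg-digitalservices/Apertus-70B-2509-NVFP4",
       "prompt": "The Matterhorn is",
       "max_tokens": 32}'
```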

Quantization Details

  • Method: NVIDIA Model Optimizer (modelopt) v0.43
  • Format: NVFP4 — E2M1 weights with per-group FP8 scales (group size 16)
  • Calibration: 4096 samples from CNN/DailyMail, batch size 32, seq_len 1024
  • Hardware: NVIDIA H200 GPU
  • Quantization script: included as quantize.py
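To make the format concrete, here is a toy, pure-Python sketch of NVFP4-style grouped quantization (not the modelopt implementation; the FP8-E4M3 encoding of each scale is skipped for clarity):

```python
# E2M1 has 8 representable positive magnitudes; one scale per group of 16
# elements maps the group's max magnitude onto the largest FP4 value (6.0).
FP4_E2M1_VALUES = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_group(values, group_size=16):
    """Quantize one group to E2M1 codes and dequantize back (round trip)."""
    assert len(values) <= group_size
    amax = max(abs(v) for v in values) or 1.0
    scale = amax / 6.0  # group max maps to the largest E2M1 magnitude
    out = []
    for v in values:
        mag = abs(v) / scale
        # round to the nearest representable E2M1 magnitude
        q = min(FP4_E2M1_VALUES, key=lambda c: abs(c - mag))
        out.append(q * scale if v >= 0 else -(q * scale))
    return out, scale

weights = [0.12, -0.30, 0.05, 0.60, -0.02, 0.18, 0.44, -0.51]
deq, scale = quantize_group(weights)
print(f"scale = {scale:.4f}")       # ~0.1 here (group max 0.60 / 6.0)
print([round(v, 3) for v in deq])   # round-tripped weights
```

Values that already sit on a scaled E2M1 grid point (e.g. 0.60 at the group max) survive exactly; everything else snaps to the nearest representable level.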

About Apertus

Apertus is built by Swiss AI — a fully open, privacy-first model family trained on 4,096 GH200 GPUs. Key features:

  • 1,811 native languages
  • Novel xIELU activation + AdEMAMix optimizer
  • EU AI Act compliant, respects opt-out consent
  • Full training transparency (weights, data, scripts all public)

License

Apache 2.0 — inherited from the base model.

Citation

If you use this model, please cite the original Apertus work:

@misc{swisstransformer2025apertus,
  title   = {Apertus},
  author  = {Swiss Transformer},
  year    = {2025},
  url     = {https://huggingface.co/swiss-ai}
}

Credits

Quantized by Mario Iseli on an NVIDIA H200. Built and validated with AI-engineering assistance from Anthropic.

📬 mario@marioiseli.com ☕ Buy me a coffee if this makes your inference go brrrrrr! 🚀
