aquif-3.5-Max-42B-A3B-nvfp4
Note: This model (NVFP4 quantization) was tested on an NVIDIA RTX PRO 6000 Blackwell (sm_120, CUDA 13.0, Driver 580.95.05) using the vLLM 0.11.2, 0.11.0, and 0.10.1 Docker containers. It fails to run on all three versions, each for a different reason:
- vLLM 0.11.2: All attention backends (FLASH_ATTN, FLASHINFER, TRITON_ATTN) fail with `TypeError: [Backend]Impl.__init__() got an unexpected keyword argument 'layer_idx'` during Qwen3MoeAttention initialization. The TORCH_SDPA backend is not registered in the V1 engine.
- vLLM 0.11.0: `ops.shuffle_rows` fails with "no kernel image is available for execution on the device".
- vLLM 0.10.1: `cutlass_fp4_moe_mm` fails with "Failed to initialize GEMM" / "no cutlass_scaled_mm kernel for CUDA device capability: 120".
Root cause: the NVFP4 MoE CUTLASS kernels are not fully compiled/supported for SM120 (RTX Blackwell) in prebuilt vLLM containers. SM120 (GeForce/RTX Blackwell) differs from SM100 (datacenter Blackwell B100/B200) and requires separate kernel compilation.

Potential solutions:
- Build vLLM from source with `TORCH_CUDA_ARCH_LIST="12.0"` using an NVIDIA NGC container (e.g., nvcr.io/nvidia/pytorch:25.09-py3)
- Wait for a future vLLM release with SM120 NVFP4 MoE support (tracking PRs: #24968, #21309)
See related vLLM GitHub issues: #24921, #23826, #18153.

This model should work on SM100 (B100/B200) and SM90 (H100/H200) GPUs with vLLM 0.10.1+.
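If you want to attempt the build-from-source route, a rough sketch follows. This is untested on my side; the NGC image tag and the vLLM checkout are assumptions you should adjust to current releases:

```shell
# Untested sketch: build vLLM with SM120 kernels inside an NGC PyTorch container.
# Image tag and vLLM revision are assumptions; pick current versions.
docker run --rm -it --gpus all nvcr.io/nvidia/pytorch:25.09-py3 bash

# Inside the container:
git clone https://github.com/vllm-project/vllm.git
cd vllm
export TORCH_CUDA_ARCH_LIST="12.0"   # compile CUDA kernels for SM120 only
pip install -e . --no-build-isolation
```

Restricting `TORCH_CUDA_ARCH_LIST` to a single architecture keeps the (already long) kernel compile time down.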
Format: NVFP4 — weights & activations quantized to FP4 with dual scaling.
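As a rough illustration of what "FP4 with dual scaling" means, here is a toy Python sketch. It is my own simplification, not the NVFP4 spec or any kernel code: real NVFP4 packs 4-bit E2M1 values in 16-element blocks with an FP8 (E4M3) per-block scale plus a second, tensor-level FP32 scale, while this toy uses one float scale per block and skips bit packing:

```python
# Toy sketch of block-scaled FP4 quantization (illustrative only).
# Non-negative values representable in FP4 E2M1:
FP4_GRID = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_block(block):
    """Quantize one block: pick a scale so max |x| maps to 6.0 (the E2M1
    maximum), snap each value to the grid, and return dequantized values."""
    amax = max(abs(x) for x in block)
    scale = amax / 6.0 if amax > 0 else 1.0  # per-block scale
    out = []
    for x in block:
        mag = min(FP4_GRID, key=lambda g: abs(abs(x) / scale - g))
        out.append(mag * scale * (1 if x >= 0 else -1))
    return out, scale

vals = [0.1, -0.45, 2.9, 6.0, -1.6, 0.0, 0.74, 3.2]
deq, s = quantize_block(vals)
# deq → [0.0, -0.5, 3.0, 6.0, -1.5, 0.0, 0.5, 3.0]
```

Note how coarse the grid is: small values collapse to 0.0 or 0.5, which is why good calibration data and per-block scaling matter so much at 4 bits.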
Base model: aquif-ai/aquif-3.5-Max-42B-A3B
How it was made: One-shot quantization with LLM Compressor (NVFP4 recipe), calibrated on long sequences from Rombo-Org/Optimized_Reasoning.
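For the curious, a one-shot NVFP4 run with LLM Compressor generally looks something like the sketch below. This is my reconstruction, not the exact script used for this checkpoint; the sequence length and sample count are illustrative, and running it requires enough GPU memory for the base model:

```python
# Sketch of a one-shot NVFP4 quantization run with LLM Compressor.
# Parameter values are illustrative, not the exact ones used here.
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

recipe = QuantizationModifier(
    targets="Linear",
    scheme="NVFP4",
    ignore=["lm_head"],  # keep lm_head in high precision
)

oneshot(
    model="aquif-ai/aquif-3.5-Max-42B-A3B",
    recipe=recipe,
    dataset="Rombo-Org/Optimized_Reasoning",
    max_seq_length=8192,            # long-sequence calibration
    num_calibration_samples=512,
)
```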
Notes: Keep `lm_head` in high precision; calibrate on long, domain-relevant sequences.
Check the original model card for information about this model.
Running the model with vLLM in Docker
```shell
sudo docker run --runtime nvidia --gpus all -p 8000:8000 --ipc=host \
  vllm/vllm-openai:nightly \
  --model Firworks/aquif-3.5-Max-42B-A3B-nvfp4 --dtype auto --max-model-len 32768
```
This was tested on an RTX Pro 6000 Blackwell cloud instance.
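Once the container is up, the server speaks the OpenAI-compatible API on port 8000. A minimal stdlib-only Python sketch for querying it (the base URL and `max_tokens` value are just examples):

```python
import json
from urllib import request

def build_chat_request(prompt, base_url="http://localhost:8000"):
    """Build an OpenAI-compatible /v1/chat/completions request for the vLLM server."""
    body = {
        "model": "Firworks/aquif-3.5-Max-42B-A3B-nvfp4",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )

# With the container running, send the request and print the reply:
# resp = json.load(request.urlopen(build_chat_request("Hello!")))
# print(resp["choices"][0]["message"]["content"])
```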
If there are other models you'd like to see quantized to NVFP4 for use on the DGX Spark or other modern Blackwell (or newer) cards, let me know. I'm trying to make more NVFP4 models available so more people can try them out.