Qwen3.5-9B-NVFP4 by IG1

Quantization

This model has been quantized using llm-compressor v0.10.1.dev31+geb49917e (just after Qwen3.5 support was merged) and transformers v5.3.0. It is based on the official example with a few modifications (see next section).

Quantization particularities

The sequence length has been increased from 4096 to 8192 and the number of samples from 256 to 1024. The 1024 samples come from four different datasets:

  • 256 general conversation samples (UltraChat)
  • 256 math reasoning samples (GSM8K)
  • 256 code samples (CodeAlpaca)
  • 256 multilingual samples (Aya)

You can find the quantization script here.
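
For reference, the calibration mix can be assembled along the following lines. This is a minimal sketch, not the actual script: the dataset identifiers, column names, and preprocessing are illustrative assumptions; only the per-domain sample counts and the sequence length match the description above.

# Sketch of the mixed-dataset NVFP4 calibration using llm-compressor's
# oneshot API. Dataset IDs, configs, and column mappings are assumptions.
from datasets import concatenate_datasets, load_dataset
from transformers import AutoTokenizer
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "Qwen/Qwen3.5-9B"   # assumed base model
N, MAX_SEQ_LEN = 256, 8192     # samples per domain / sequence length

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

def to_text(messages):
    # Render a chat into a single calibration string via the chat template.
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

# (dataset, config, split, row -> chat messages); all four are assumptions.
sources = [
    ("HuggingFaceH4/ultrachat_200k", None, "train_sft",
     lambda r: r["messages"]),                                    # general chat
    ("openai/gsm8k", "main", "train",
     lambda r: [{"role": "user", "content": r["question"]},
                {"role": "assistant", "content": r["answer"]}]),  # math
    ("sahil2801/CodeAlpaca-20k", None, "train",
     lambda r: [{"role": "user", "content": r["instruction"]},
                {"role": "assistant", "content": r["output"]}]),  # code
    ("CohereForAI/aya_dataset", None, "train",
     lambda r: [{"role": "user", "content": r["inputs"]},
                {"role": "assistant", "content": r["targets"]}]), # multilingual
]

parts = []
for name, config, split, fmt in sources:
    ds = load_dataset(name, config, split=f"{split}[:{N}]")
    parts.append(ds.map(lambda r, f=fmt: to_text(f(r)),
                        remove_columns=ds.column_names))
calibration = concatenate_datasets(parts).shuffle(seed=42)

def tokenize(sample):
    return tokenizer(sample["text"], max_length=MAX_SEQ_LEN,
                     truncation=True, add_special_tokens=False)
calibration = calibration.map(tokenize, remove_columns=["text"])

recipe = QuantizationModifier(targets="Linear", scheme="NVFP4",
                              ignore=["lm_head"])

oneshot(model=MODEL_ID, dataset=calibration, recipe=recipe,
        max_seq_length=MAX_SEQ_LEN, num_calibration_samples=4 * N)

Mixing domains like this keeps the NVFP4 scales representative of conversational, mathematical, code, and multilingual activations rather than overfitting to a single distribution.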

While the quantization required transformers v5, the original (transformers v4) tokenizer files have been put back for straightforward execution on current vLLM versions. The transformers v5 tokenizer files produced by llm-compressor can be found in the transformers_v5 folder.
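
If you do want the transformers v5 tokenizer, it can be loaded straight from that subfolder using the standard subfolder argument:

from transformers import AutoTokenizer

# Default: the transformers v4 tokenizer files at the repository root.
tok = AutoTokenizer.from_pretrained("ig1/Qwen3.5-9B-NVFP4")

# The transformers v5 files produced by llm-compressor live in a subfolder.
tok_v5 = AutoTokenizer.from_pretrained(
    "ig1/Qwen3.5-9B-NVFP4", subfolder="transformers_v5"
)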

About FP8 KV cache

In our testing, the Qwen3.5 Mamba hybrid architecture did not play well with FP8 KV cache:

  • vLLM dynamic FP8 KV cache (--kv-cache-dtype fp8_e4m3 --calculate-kv-scales) appeared to work initially but quality degraded rapidly into gibberish.

  • Static FP8 scales via llm-compressor (kv_cache_scheme in the recipe; see the sketch below) corrupted the NVFP4 weight quantization during calibration. Because FP8 is injected into the forward pass during scale computation, layers with mismatched head dimensions (256 for attention vs. 128 for linear attention) produced corrupted activations that propagated through the network, poisoning the weight quantization scales. The resulting model produced gibberish even with FP8 KV cache disabled at inference: the weights themselves were permanently damaged. Note that static FP8 KV scales stored in a checkpoint are passive metadata and still require explicit activation via --kv-cache-dtype fp8_e4m3 at vLLM startup to be used; the corruption here, however, occurred during quantization, not at inference time.
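
For reference, a static-scale recipe of the kind that triggered this looks roughly as follows. This is a sketch only, meant to identify the pattern to avoid; the kv_cache_scheme field names follow llm-compressor's FP8 KV cache examples and should be treated as assumptions.

from llmcompressor.modifiers.quantization import QuantizationModifier

# Sketch of the problematic combination: NVFP4 weight calibration plus
# static FP8 KV cache scales in the same recipe. Per the findings above,
# computing the static scales injects FP8 into the calibration forward
# pass and poisons the NVFP4 weight scales on this architecture.
recipe = QuantizationModifier(
    targets="Linear",
    scheme="NVFP4",
    ignore=["lm_head"],
    kv_cache_scheme={
        "num_bits": 8,        # FP8 (e4m3) KV cache
        "type": "float",
        "strategy": "tensor",
        "dynamic": False,     # static scales, computed during calibration
        "symmetric": True,
    },
)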

Qwen3.5 Profiles

Alongside support for dynamic thinking and non-thinking modes, the Qwen team offers four sampling parameter profiles:

  • Thinking General
  • Thinking Coding
  • Instruct General
  • Instruct Reasoning (we prefer to call it Instruct Creative internally)

Manually configuring these parameters for every AI client can be difficult. To solve this, we built a lightweight reverse proxy that exposes the 4 profiles as virtual model names. It handles request transformation on the fly using a single inference server as backend. View the project on our GitHub.

You can find a full docker compose example in this repository.
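
Once the proxy is running, selecting a profile is just a matter of naming it as the model. Here is a minimal sketch using the openai Python client; the proxy port and the virtual model name are hypothetical placeholders, so use whatever names your proxy configuration exposes.

from openai import OpenAI

# Point the client at the reverse proxy instead of vLLM directly.
# Port 8080 is a hypothetical placeholder for the proxy's listen address.
client = OpenAI(base_url="http://127.0.0.1:8080/v1", api_key="none")

# "Qwen3.5-9B-thinking-coding" is a hypothetical virtual model name; the
# proxy maps it to the backend model plus the Thinking Coding sampling
# profile before forwarding the request.
response = client.chat.completions.create(
    model="Qwen3.5-9B-thinking-coding",
    messages=[{"role": "user", "content": "Write a binary search in Python."}],
)
print(response.choices[0].message.content)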

Inference

We run this model with vLLM; here is a sample execution command:

docker run --rm --name 'Qwen3.5-9B-NVFP4' \
  --runtime=nvidia --gpus 'all' --ipc=host \
  -e 'HF_TOKEN' \
  -e 'VLLM_MEMORY_PROFILER_ESTIMATE_CUDAGRAPHS=1' \
  -v '/srv/cache:/root/.cache' \
  -p '127.0.0.1:8000:8000' \
  'vllm/vllm-openai:v0.18.0-cu130' \
  'ig1/Qwen3.5-9B-NVFP4' \
  --served-model-name 'Qwen3.5-9B' \
  --reasoning-parser 'qwen3' \
  --enable-auto-tool-choice \
  --tool-call-parser 'qwen3_coder' \
  --max-model-len 'auto' \
  --gpu-memory-utilization '0.9'

A few notes about some of the parameters:

  • Adapt the /srv/cache:/root/.cache mount point to your liking. It contains files you will want to keep between multiple runs (Dynamo bytecode and AOT torch.compile artifacts, but most importantly the Hugging Face folder holding the model).
  • VLLM_MEMORY_PROFILER_ESTIMATE_CUDAGRAPHS=1 allows for more precise CUDA graph VRAM estimation. It should become the default once vLLM reaches v0.19.0, at which point you can simply remove it.
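
Once the server is up, you can smoke-test the OpenAI-compatible endpoint with a few lines of Python; the model name matches --served-model-name from the command above:

import requests

# Quick smoke test against the vLLM OpenAI-compatible endpoint started above.
resp = requests.post(
    "http://127.0.0.1:8000/v1/chat/completions",
    json={
        "model": "Qwen3.5-9B",  # matches --served-model-name
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        "max_tokens": 64,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])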

RTX 5090 Optimized Deployment

As FP8 KV cache does not play well with this model and the RTX 5090 is limited to 32 GiB of VRAM, we recommend lowering --max-cudagraph-capture-size (and --max-num-seqs alongside it) to the theoretical maximum number of parallel requests you expect to fire against the model, in order to increase the space available to the KV cache.

Example for a headless Linux deployment:

docker run --rm --name 'Qwen3.5-9B-NVFP4' \
  --runtime=nvidia --gpus 'all' --ipc=host \
  -e 'HF_TOKEN' \
  -e 'VLLM_MEMORY_PROFILER_ESTIMATE_CUDAGRAPHS=1' \
  -v '/srv/cache:/root/.cache' \
  -p '127.0.0.1:8000:8000' \
  'vllm/vllm-openai:v0.18.0-cu130' \
  'ig1/Qwen3.5-9B-NVFP4' \
  --served-model-name 'Qwen3.5-9B' \
  --reasoning-parser 'qwen3' \
  --enable-auto-tool-choice \
  --tool-call-parser 'qwen3_coder' \
  --max-model-len 'auto' \
  --limit-mm-per-prompt.video 0 \
  --max-cudagraph-capture-size 64 \
  --max-num-seqs 64 \
  --gpu-memory-utilization '0.95'

This configuration should yield a max model len/KV cache size of 110,880 tokens.

Windows with Docker WSL

Because the host/graphical environment needs VRAM of its own, the GPU memory utilization must be lowered. Also, adapt the mount point to a Windows path and run the command from a Windows terminal: -v 'E:\cache:/root/.cache'

PowerShell start command:

docker run --rm --name 'Qwen3.5-9B-NVFP4' `
  --runtime=nvidia --gpus 'all' --ipc=host `
  -e 'HF_TOKEN' `
  -e 'VLLM_MEMORY_PROFILER_ESTIMATE_CUDAGRAPHS=1' `
  -v 'E:\cache:/root/.cache' `
  -p '127.0.0.1:8000:8000' `
  'vllm/vllm-openai:v0.18.0-cu130' `
  'ig1/Qwen3.5-9B-NVFP4' `
  --served-model-name 'Qwen3.5-9B' `
  --reasoning-parser 'qwen3' `
  --enable-auto-tool-choice `
  --tool-call-parser 'qwen3_coder' `
  --max-model-len 'auto' `
  --limit-mm-per-prompt.video 0 `
  --max-cudagraph-capture-size 64 `
  --max-num-seqs 64 `
  --gpu-memory-utilization '0.8'

This configuration yields a max model len/KV cache size of 77,088 tokens.
