---
license: apache-2.0
license_link: https://www.apache.org/licenses/LICENSE-2.0
base_model: Qwen/QwQ-32B
tags:
  - quantized
  - 4-bit
  - fp4
  - nvfp4
language:
  - en
library_name: transformers
pipeline_tag: text-generation
---

# QwQ-32B-NVFP4

NVFP4 (FP4 W4A4) quantization of Qwen/QwQ-32B.

Qwen QwQ-32B quantized to NVFP4 (4-bit floating-point weights and activations), exported in vLLM's native compressed-tensors format. Roughly 20.7 GB on disk.

| Property | Value |
| --- | --- |
| Base model | [Qwen/QwQ-32B](https://huggingface.co/Qwen/QwQ-32B) |
| Quantization | NVFP4 (W4A4) |
| Approx. on-disk size | ~20.7 GB |
| License | Apache License, Version 2.0 |
| Languages | English |
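
The quantization scheme is recorded in the checkpoint itself; a minimal sketch to inspect it, assuming the repo's `config.json` carries a compressed-tensors style `quantization_config` block (exact key names depend on the export tool):

```python
# Sketch: print the quantization config stored in the checkpoint.
import json
from huggingface_hub import hf_hub_download

cfg_path = hf_hub_download("drawais/QwQ-32B-NVFP4", "config.json")
with open(cfg_path) as f:
    cfg = json.load(f)

# "quantization_config" is assumed to be present for compressed-tensors exports.
print(json.dumps(cfg.get("quantization_config", {}), indent=2))
```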

## Load (vLLM)

```bash
vllm serve drawais/QwQ-32B-NVFP4 \
  --max-model-len 32768 \
  --gpu-memory-utilization 0.94
```
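
`vllm serve` exposes an OpenAI-compatible API (port 8000 by default), so a standard OpenAI client can talk to the endpoint. A minimal sketch, assuming the server above is running locally:

```python
# Sketch: query the locally served model through vLLM's OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="drawais/QwQ-32B-NVFP4",  # must match the served repo id
    messages=[{"role": "user", "content": "Briefly explain NVFP4."}],
    max_tokens=256,
)
print(resp.choices[0].message.content)
```

Or run offline inference directly: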
```python
from vllm import LLM, SamplingParams

llm = LLM(model="drawais/QwQ-32B-NVFP4", max_model_len=32768)
print(llm.generate(["Hello!"], SamplingParams(max_tokens=128))[0].outputs[0].text)
```

## Footprint

~20.7 GB on disk. Budget VRAM for the weights plus KV cache and runtime overhead; longer context lengths need proportionally more headroom.
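
A back-of-the-envelope KV-cache sketch, assuming QwQ-32B inherits the Qwen2.5-32B attention layout (64 layers, 8 KV heads, head dim 128) and an FP16 KV cache; check `config.json` for the authoritative values:

```python
# Rough KV-cache size estimate (assumed architecture values).
layers, kv_heads, head_dim = 64, 8, 128
bytes_per_elem = 2  # FP16 KV cache

def kv_cache_gib(tokens: int) -> float:
    # 2x for keys and values, one entry per layer per KV head per head dim.
    return 2 * layers * kv_heads * head_dim * bytes_per_elem * tokens / 1024**3

print(f"{kv_cache_gib(32_768):.1f} GiB at 32k context")  # ~8 GiB
```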

## License & attribution

This artifact is a derivative work of Qwen/QwQ-32B, released by its original authors under the Apache License, Version 2.0.

This artifact is distributed under the same license. The full license text is included in LICENSE, and required attribution is in NOTICE.

- License text: https://www.apache.org/licenses/LICENSE-2.0
- Source model: https://huggingface.co/Qwen/QwQ-32B