DeepSeek-R1-Distill-Qwen-32B-NVFP4

NVFP4 (W4A4) quantization of deepseek-ai/DeepSeek-R1-Distill-Qwen-32B, stored in vLLM's native compressed-tensors format. About 20.7 GB on disk.

Property              Value
Base model            deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
Quantization          NVFP4 (W4A4)
Approx. on-disk size  ~20.7 GB
License               MIT
Languages             English

Load (vLLM)

Serve an OpenAI-compatible API:

vllm serve drawais/DeepSeek-R1-Distill-Qwen-32B-NVFP4 \
  --max-model-len 32768 \
  --gpu-memory-utilization 0.94

Or run offline inference from Python:

from vllm import LLM, SamplingParams

llm = LLM(model="drawais/DeepSeek-R1-Distill-Qwen-32B-NVFP4", max_model_len=32768)
print(llm.generate(["Hello!"], SamplingParams(max_tokens=128))[0].outputs[0].text)
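
Once the server is up, vLLM exposes an OpenAI-compatible API, by default on port 8000. A minimal request sketch (host and port are assumptions; adjust to your deployment):

import requests

# vLLM's OpenAI-compatible chat endpoint (default localhost:8000 assumed).
resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "drawais/DeepSeek-R1-Distill-Qwen-32B-NVFP4",
        "messages": [{"role": "user", "content": "Hello!"}],
        "max_tokens": 128,
    },
)
print(resp.json()["choices"][0]["message"]["content"])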

Footprint

~20.7 GB on disk. Plan VRAM for the weights plus KV-cache headroom, which grows with context length and the number of concurrent requests.
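
As a rough sizing sketch (geometry assumed from the Qwen2.5-32B base: 64 layers, 8 KV heads, head dim 128; verify against the model's config.json):

layers, kv_heads, head_dim = 64, 8, 128  # assumed Qwen2.5-32B geometry
bytes_per_elem = 2                       # FP16/BF16 KV cache
tokens = 32768                           # matches --max-model-len above

per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem  # K and V
print(f"{per_token / 1024:.0f} KiB/token, "
      f"{per_token * tokens / 2**30:.1f} GiB at {tokens} tokens")
# ~256 KiB per token, ~8 GiB at a full 32768-token context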

License & attribution

This artifact is a derivative work of deepseek-ai/DeepSeek-R1-Distill-Qwen-32B, released by its original authors under the MIT License.

This artifact is distributed under the same license. The full license text is included in LICENSE, and required attribution is in NOTICE.

License text: https://opensource.org/license/mit
Source model: https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
