DeepSeek-R1-Distill-Qwen-32B-AWQ-INT4
INT4 weight-only (AWQ) quantization of deepseek-ai/DeepSeek-R1-Distill-Qwen-32B.
DeepSeek-R1 reasoning distilled into Qwen 32B, then quantized to INT4 with AWQ. About 19.2 GB on disk. Runs on a 24 GB consumer GPU.
| Property | Value |
|---|---|
| Base model | deepseek-ai/DeepSeek-R1-Distill-Qwen-32B |
| Quantization | INT4 weight-only (AWQ) |
| Approx. on-disk size | ~19.2 GB |
| License | MIT License |
| Languages | English |
Load (Transformers)
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("drawais/DeepSeek-R1-Distill-Qwen-32B-AWQ-INT4")
# device_map="auto" places the INT4 weights on the available GPU(s)
model = AutoModelForCausalLM.from_pretrained(
    "drawais/DeepSeek-R1-Distill-Qwen-32B-AWQ-INT4",
    device_map="auto",
)
messages = [
{"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
).to(model.device)
# R1-style models emit a long <think> block before answering, so give them room
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))

# Or use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="drawais/DeepSeek-R1-Distill-Qwen-32B-AWQ-INT4")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
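The distilled R1 models wrap their chain of thought in <think>...</think> tags ahead of the final answer. A minimal helper for separating the two (the helper name is ours, not part of any API):

import re

def split_reasoning(text):
    # Returns (reasoning, answer); assumes the model emitted a
    # <think>...</think> block, as DeepSeek-R1 distills do by default.
    m = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if m:
        return m.group(1).strip(), text[m.end():].strip()
    return "", text.strip()

text = tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
reasoning, answer = split_reasoning(text)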
Load (vLLM)
Serve an OpenAI-compatible API:
vllm serve drawais/DeepSeek-R1-Distill-Qwen-32B-AWQ-INT4 \
--max-model-len 32768 \
--gpu-memory-utilization 0.94
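Once the server is up (vLLM listens on http://localhost:8000 by default), any OpenAI-compatible client can talk to it. A minimal sketch with the openai Python package; the base_url, api_key placeholder, and prompt are illustrative:

from openai import OpenAI

# vLLM's server speaks the OpenAI protocol; the key is required by the client but unused
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="drawais/DeepSeek-R1-Distill-Qwen-32B-AWQ-INT4",
    messages=[{"role": "user", "content": "Who are you?"}],
    max_tokens=256,
)
print(resp.choices[0].message.content)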
Or run offline inference in-process:
from vllm import LLM, SamplingParams
llm = LLM(model="drawais/DeepSeek-R1-Distill-Qwen-32B-AWQ-INT4", max_model_len=32768)
print(llm.generate(["Hello!"], SamplingParams(max_tokens=128))[0].outputs[0].text)
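The upstream DeepSeek-R1 card recommends sampling the distilled models with temperature around 0.6 and top_p 0.95, and no system prompt. A sketch applying that here (the prompt is illustrative):

params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=1024)
print(llm.generate(["Prove that the square root of 2 is irrational."], params)[0].outputs[0].text)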
Footprint
~19.2 GB on disk. A 24 GB GPU fits the weights, but leave headroom for the KV cache; long contexts may need a lower --max-model-len or --gpu-memory-utilization.
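As a back-of-envelope check, a sketch assuming the Qwen2.5-32B architecture this model inherits (64 layers, 8 KV heads via GQA, head dim 128) and an fp16 cache; these config values are our assumption, not read from the checkpoint:

# Per-token KV-cache cost: 2 tensors (K and V) per layer
layers, kv_heads, head_dim, fp16_bytes = 64, 8, 128, 2  # assumed Qwen2.5-32B config
per_token = 2 * layers * kv_heads * head_dim * fp16_bytes
print(per_token / 1024, "KiB per token")                # 256.0
print(32768 * per_token / 2**30, "GiB at 32k context")  # 8.0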
License & attribution
This artifact is a derivative work of deepseek-ai/DeepSeek-R1-Distill-Qwen-32B,
released by its original authors under the MIT License.
This artifact is distributed under the same license. The full license text is
included in LICENSE, and required attribution is in NOTICE.
License text: https://opensource.org/license/mit
Source model: https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B