Phi-4-reasoning-AWQ-INT4

INT4 weight-only quantization of microsoft/Phi-4-reasoning.

Microsoft Phi-4-reasoning quantized to INT4 with AWQ. Roughly 9 GB on disk. A reasoning model that fits on a single 16 GB consumer GPU.

Property             | Value
---------------------|---------------------------
Base model           | microsoft/Phi-4-reasoning
Quantization         | INT4 weight-only (AWQ)
Parameters           | ~15B
Approx. on-disk size | ~9.1 GB
License              | MIT
Languages            | English
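
If you want to pull the checkpoint locally and verify the on-disk size above yourself, here is a minimal sketch using huggingface_hub (assuming the package is installed; the repo id is the one served below):

from pathlib import Path
from huggingface_hub import snapshot_download

# Download (or reuse the cached copy of) the quantized checkpoint.
local_dir = snapshot_download("drawais/Phi-4-reasoning-AWQ-INT4")

# Sum the downloaded file sizes to check the ~9.1 GB figure.
total_bytes = sum(p.stat().st_size for p in Path(local_dir).rglob("*") if p.is_file())
print(f"{total_bytes / 1024**3:.1f} GiB in {local_dir}")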

Load (vLLM)

Serve an OpenAI-compatible endpoint:

vllm serve drawais/Phi-4-reasoning-AWQ-INT4 \
  --max-model-len 32768 \
  --gpu-memory-utilization 0.94

Or run offline with the Python API:

from vllm import LLM, SamplingParams

llm = LLM(model="drawais/Phi-4-reasoning-AWQ-INT4", max_model_len=32768)
print(llm.generate(["Hello!"], SamplingParams(max_tokens=128))[0].outputs[0].text)
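
When using vllm serve, the server exposes an OpenAI-compatible API, by default on port 8000. A minimal client sketch, assuming the serve command above is running locally and the openai Python package is installed:

from openai import OpenAI

# Point the OpenAI client at the local vLLM server (default port 8000; no real key needed).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="drawais/Phi-4-reasoning-AWQ-INT4",
    messages=[{"role": "user", "content": "Explain why the sky is blue in two sentences."}],
    max_tokens=256,
)
print(response.choices[0].message.content)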

Footprint

~9.1 GB of weights on disk. For VRAM, budget the weight footprint plus headroom for the KV cache, which grows with context length and the number of concurrent requests; the serve command above leaves whatever memory the weights don't use (up to the 0.94 utilization cap) for the cache.
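
A rough back-of-the-envelope for KV-cache size: 2 (K and V) × layers × KV heads × head dim × bytes per element × tokens. The sketch below uses assumed Phi-4 config values (40 layers, 10 KV heads, head dim 128, BF16 cache); check the model's config.json before relying on the numbers.

# Back-of-the-envelope KV-cache estimate. The config values below are
# assumptions; verify them against the model's config.json.
num_layers = 40        # assumed num_hidden_layers
num_kv_heads = 10      # assumed num_key_value_heads (GQA)
head_dim = 128         # assumed head dimension
bytes_per_elem = 2     # BF16 cache
context_len = 32768    # matches --max-model-len above

kv_bytes_per_token = 2 * num_layers * num_kv_heads * head_dim * bytes_per_elem
kv_bytes_total = kv_bytes_per_token * context_len
print(f"KV cache: {kv_bytes_per_token / 1024:.0f} KiB/token, "
      f"{kv_bytes_total / 1024**3:.1f} GiB at {context_len} tokens")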

License & attribution

This artifact is a derivative work of microsoft/Phi-4-reasoning, released by its original authors under the MIT License.

This artifact is distributed under the same license. The full license text is included in LICENSE, and required attribution is in NOTICE.

License text: https://opensource.org/license/mit
Source model: https://huggingface.co/microsoft/Phi-4-reasoning
