# Granite-4.1-8B-AWQ-INT4

AWQ INT4 weight-only quantization of ibm-granite/granite-4.1-8b.

First community 4-bit AWQ of IBM Granite 4.1 8B.

| Property | Value |
| --- | --- |
| Base model | ibm-granite/granite-4.1-8b |
| Quantization | AWQ, INT4 weight-only |
| Approx. on-disk size | ~4.9 GB |
| Languages | English |
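
For context on what "INT4 weight-only" means: the weights are stored as 4-bit integers with per-group scales and dequantized on the fly, while activations stay in 16-bit. Below is a toy sketch of symmetric group-wise 4-bit quantization (illustrative only; the actual AWQ pipeline additionally rescales salient weight channels using activation statistics before quantizing):

```python
import numpy as np

# Toy symmetric group-wise INT4 quantization (illustrative only; real AWQ
# also protects salient channels using activation statistics).
def quantize_int4(w, group_size=128):
    groups = w.reshape(-1, group_size)
    scale = np.abs(groups).max(axis=1, keepdims=True) / 7.0  # one scale per group
    q = np.clip(np.round(groups / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return (q * scale).astype(np.float32)

w = np.random.randn(4096, 128).astype(np.float32)
q, s = quantize_int4(w)
err = np.abs(dequantize(q, s) - w.reshape(-1, 128)).max()
print(f"max abs reconstruction error: {err:.4f}")
```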

## Load (vLLM)

Serve an OpenAI-compatible endpoint:

```bash
vllm serve drawais/Granite-4.1-8B-AWQ-INT4 \
  --max-model-len 32768 \
  --gpu-memory-utilization 0.94
```

Or run offline inference directly from Python:

```python
from vllm import LLM, SamplingParams

llm = LLM(model="drawais/Granite-4.1-8B-AWQ-INT4", max_model_len=32768)
print(llm.generate(["Hello!"], SamplingParams(max_tokens=128))[0].outputs[0].text)
```
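
Once the server is up, vLLM exposes an OpenAI-compatible API, by default at http://localhost:8000/v1. A minimal client sketch using the openai package (the api_key is a dummy value, since an unconfigured vLLM server does not check it):

```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM server (default port 8000).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="drawais/Granite-4.1-8B-AWQ-INT4",
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(resp.choices[0].message.content)
```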

## Footprint

~4.9 GB on disk, and roughly the same in VRAM for the weights once loaded. Leave additional headroom for the KV cache, which grows with context length (`--max-model-len`) and the number of concurrent requests.
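
As a rough back-of-envelope, the attention KV cache costs about 2 × layers × kv_heads × head_dim × bytes-per-element per token. A sketch with placeholder hyperparameters (these are not the actual Granite 4.1 8B config, so treat the output as illustrative only):

```python
# Back-of-envelope KV-cache sizing. The layer/head counts below are
# placeholder values, NOT the actual Granite 4.1 8B configuration.
layers, kv_heads, head_dim = 32, 8, 128
bytes_per_elem = 2            # FP16/BF16 cache entries
tokens = 32768                # one sequence at full --max-model-len
per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem  # K and V
print(f"{per_token * tokens / 2**30:.1f} GiB")  # 4.0 GiB with these numbers
```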

## License & attribution

This artifact is a derivative work of ibm-granite/granite-4.1-8b, released by its original authors under the Apache License, Version 2.0.

This artifact is distributed under the same license. The full license text is included in LICENSE, and required attribution is in NOTICE.

License text: https://www.apache.org/licenses/LICENSE-2.0
Source model: https://huggingface.co/ibm-granite/granite-4.1-8b
