# EXAONE-4.0-1.2B GPTQ W3

Quantization recipe (3-bit): `mse=2.0`, `group_size=32`, SmoothMSE(64, 0.70)
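
As a rough illustration of how this recipe maps onto gptqmodel, the sketch below uses `QuantizeConfig` fields from current gptqmodel releases (`bits`, `group_size`, `mse`). The base model id, the calibration texts, and the handling of the SmoothMSE(64, 0.70) smoothing step are assumptions, not the author's exact script:

```python
# Hedged sketch of the quantization recipe above; not the author's exact script.
from gptqmodel import GPTQModel, QuantizeConfig

quant_config = QuantizeConfig(
    bits=3,         # W3: 3-bit weights
    group_size=32,  # one scale/zero-point per group of 32 weights
    mse=2.0,        # assumed mapping of "mse=2.0": MSE-based clipping during scale search
)

# Assumed base model id; the SmoothMSE(64, 0.70) smoothing step would be
# applied before quantization and is not shown here.
model = GPTQModel.load("LGAI-EXAONE/EXAONE-4.0-1.2B", quant_config)
model.quantize([
    "Example calibration sentence one.",
    "Example calibration sentence two.",
])  # use a real calibration set (hundreds of samples) in practice
model.save("EXAONE-4.0-1.2B-GPTQ-W3A16")
```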

Expected: 90-93% quality retention at a 3.0-4.0x inference speedup.

## Usage

### GPTQModel

```python
from gptqmodel import GPTQModel

model = GPTQModel.from_quantized("namgyu-youn/EXAONE-4.0-1.2B-GPTQ-W3A16", device="cuda:0")
```
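
For a quick end-to-end check, a minimal generation sketch, assuming the loaded model exposes the standard transformers `generate` API (gptqmodel models wrap a transformers model); the prompt and settings are illustrative:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("namgyu-youn/EXAONE-4.0-1.2B-GPTQ-W3A16")
inputs = tokenizer("Explain GPTQ quantization in one sentence.", return_tensors="pt").to("cuda:0")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```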

### vLLM

```python
from vllm import LLM

llm = LLM(model="namgyu-youn/EXAONE-4.0-1.2B-GPTQ-W3A16", dtype="float16")
```
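
Continuing from the snippet above, offline batch generation uses vLLM's `SamplingParams` (the prompt and sampling values are illustrative):

```python
from vllm import SamplingParams

sampling = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=64)
outputs = llm.generate(["Explain GPTQ quantization in one sentence."], sampling)
print(outputs[0].outputs[0].text)
```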