EXAONE-4.0-1.2B GPTQ W4 + EoRA

Quantization recipe (4-bit): mse=2.4, group_size=32, SmoothMSE(64, 0.75), and EoRA with rank=64.

Expected: 98-99.5% quality retention × 2.8-3.2× speedup ≈ 2.74-3.18 combined score.
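A quick check of where the 2.74-3.18 range comes from (quality-retention fraction multiplied by the speedup factor):

```python
# Combined score = quality retention (as a fraction) * speedup factor.
lo = 0.980 * 2.8   # worst case: 98% quality at 2.8x speedup
hi = 0.995 * 3.2   # best case: 99.5% quality at 3.2x speedup
print(f"{lo:.2f}-{hi:.2f}")  # -> 2.74-3.18
```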

Usage

GPTQModel

from gptqmodel import GPTQModel
model = GPTQModel.from_quantized("namgyu-youn/EXAONE-4.0-1.2B-GPTQ-W4A16-EoRA", device="cuda:0")
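A fuller generation sketch to go with the loader above. This assumes GPTQModel attaches a `tokenizer` to the loaded model and exposes HF-style `generate`; the prompt and `max_new_tokens` are illustrative, not part of the model card.

```python
from gptqmodel import GPTQModel

model = GPTQModel.from_quantized(
    "namgyu-youn/EXAONE-4.0-1.2B-GPTQ-W4A16-EoRA", device="cuda:0"
)

# Tokenize an illustrative prompt and run HF-style generation on the GPU.
inputs = model.tokenizer("What does EoRA add to GPTQ?", return_tensors="pt").to("cuda:0")
out = model.generate(**inputs, max_new_tokens=64)
print(model.tokenizer.decode(out[0], skip_special_tokens=True))
```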

vLLM

from vllm import LLM
llm = LLM(model="namgyu-youn/EXAONE-4.0-1.2B-GPTQ-W4A16-EoRA", dtype="float16")
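A fuller vLLM example using its standard `SamplingParams` API; the prompt and sampling settings are illustrative:

```python
from vllm import LLM, SamplingParams

# Load the quantized checkpoint; vLLM reads the GPTQ config from the repo.
llm = LLM(model="namgyu-youn/EXAONE-4.0-1.2B-GPTQ-W4A16-EoRA", dtype="float16")

# Illustrative sampling settings.
params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=128)

outputs = llm.generate(["Explain 4-bit weight quantization in one paragraph."], params)
print(outputs[0].outputs[0].text)
```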
Model size: 1B params (Safetensors) · Tensor types: F16, I32