# Huihui-Qwen3.5-9B-abliterated-gptq-pro-w4g128

A GPTQ-PRO W4A16 quantization (group size 128) of Huihui/Qwen3.5-9B-abliterated: weights are stored in 4-bit groups of 128 while activations remain 16-bit.

## Quantization Details

- **Quantization method:** GPTQ-PRO
- **Bits/Config:** W4A16 (group size 128)
- **Base model:** Huihui/Qwen3.5-9B-abliterated
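The W4A16 group-128 layout above means each run of 128 weights shares one scale and zero-point, with weights rounded onto a 16-level (4-bit) grid. GPTQ itself additionally compensates rounding error using second-order (Hessian) information; the sketch below shows only the per-group 4-bit quantization grid implied by this config, with NumPy as an assumed dependency and all function names hypothetical:

```python
import numpy as np

def quantize_w4_g128(w, group_size=128):
    """Asymmetric 4-bit group quantization of a flat weight vector.

    w must have a length divisible by group_size. Returns the packed
    integer codes (0..15) plus per-group scale and zero-point.
    """
    w = w.reshape(-1, group_size)
    wmin = w.min(axis=1, keepdims=True)
    wmax = w.max(axis=1, keepdims=True)
    # 4 bits -> 16 levels; epsilon guards constant groups
    scale = np.maximum((wmax - wmin) / 15.0, 1e-8)
    zero = np.round(-wmin / scale)
    q = np.clip(np.round(w / scale + zero), 0, 15).astype(np.int32)
    return q, scale, zero

def dequantize(q, scale, zero):
    """Reconstruct approximate weights from codes, scale, zero-point."""
    return (q - zero) * scale
```

Per group, the reconstruction error is bounded by half a quantization step (`scale / 2`), which is why the group size matters: smaller groups track local weight ranges more tightly at the cost of more scale/zero metadata.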