Qwen3.5-4B-abliterated-gptq-pro-w4g128

GPTQ-PRO W4A16 quantization (group size 128) of Qwen/Qwen3.5-4B-abliterated.

Quantization Details

  • Quantization method: GPTQ-PRO
  • Bits/Config: W4A16 (group size 128)
  • Base model: Qwen/Qwen3.5-4B-abliterated
  • Format: Safetensors
  • Model size: 4B params
  • Tensor types: BF16 · I32
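The BF16 and I32 tensor types follow directly from the W4A16 scheme: 4-bit weights are packed eight to a 32-bit integer, and one BF16 scale is stored per group of 128 input channels. A minimal sketch of the storage arithmetic, using a hypothetical 2048×2048 linear layer for illustration:

```python
# Storage math for W4A16 GPTQ quantization with group size 128.
# The layer dimensions below are hypothetical, chosen only to illustrate.
in_features, out_features = 2048, 2048
bits, group_size = 4, 128

# 4-bit weights are packed 8 per 32-bit integer (the I32 tensors).
packed_ints = in_features * out_features * bits // 32

# One BF16 scale per group of 128 input channels, per output channel.
n_groups = in_features // group_size
scales = n_groups * out_features

print(packed_ints, n_groups, scales)  # → 524288 16 32768
```

So the checkpoint's weight data lives almost entirely in packed int32 tensors, with a comparatively small number of BF16 scale tensors alongside them.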