Qwen3.5-27B-heretic-v2-gptq-pro-w8g128

GPTQ-PRO W8A16 quantization (8-bit weights, 16-bit activations, group size 128) of llmfan46/Qwen3.5-27B-heretic-v2.
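The W8 group-128 layout implies a rough storage budget. The sketch below is a back-of-envelope estimate, assuming one 16-bit scale per group of 128 weights and ignoring zero-points and non-quantized tensors (embeddings, norms); the 27B parameter count comes from this card.

```python
# Back-of-envelope storage estimate for W8 weights with group size 128.
# Assumption: one 16-bit scale per 128-weight group; zero-points and
# unquantized layers (embeddings, layernorms) are ignored.

PARAMS = 27e9        # "27B params" from the model card
BITS_WEIGHT = 8      # W8
GROUP_SIZE = 128     # g128
BITS_SCALE = 16      # one bf16/fp16 scale per group (assumption)

bits_per_weight = BITS_WEIGHT + BITS_SCALE / GROUP_SIZE  # 8.125 bits
quant_gb = PARAMS * bits_per_weight / 8 / 1e9            # quantized size
bf16_gb = PARAMS * 16 / 8 / 1e9                          # unquantized BF16 size

print(f"W8g128: ~{quant_gb:.1f} GB vs BF16: {bf16_gb:.0f} GB")
```

Under these assumptions the checkpoint is roughly half the size of the BF16 original, with about 1.6% overhead from the per-group scales.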

Quantization Details

  • Quantization method: GPTQ-PRO
  • Bits/Config: W8A16 (group size 128)
  • Base model: llmfan46/Qwen3.5-27B-heretic-v2
  • Format: Safetensors
  • Model size: 27B params
  • Tensor types: BF16, I32
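The W8 group-128 scheme stores each weight as an 8-bit integer plus one shared scale per group of 128 weights. The sketch below illustrates that layout with simple per-group round-to-nearest; GPTQ (and by extension GPTQ-PRO) additionally applies error-compensating updates to not-yet-quantized weights, which is omitted here.

```python
import numpy as np

# Illustrative sketch of group-wise 8-bit weight quantization (group size 128).
# This is plain round-to-nearest per group, NOT the full GPTQ algorithm:
# GPTQ also propagates quantization error into remaining weights.

GROUP_SIZE = 128

def quantize_w8_g128(w: np.ndarray):
    """Quantize a 1-D weight vector into int8 values, one scale per group."""
    groups = w.reshape(-1, GROUP_SIZE)
    # Symmetric 8-bit range [-127, 127]; one float scale per group.
    scales = np.abs(groups).max(axis=1, keepdims=True) / 127.0
    scales = np.maximum(scales, 1e-8)  # guard against all-zero groups
    q = np.clip(np.round(groups / scales), -127, 127).astype(np.int8)
    return q, scales

def dequantize(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """Reconstruct approximate weights from int8 values and group scales."""
    return (q.astype(np.float32) * scales).reshape(-1)

# Round-trip a random weight vector and measure the worst-case error.
w = np.random.default_rng(0).normal(size=1024).astype(np.float32)
q, s = quantize_w8_g128(w)
w_hat = dequantize(q, s)
max_err = float(np.abs(w - w_hat).max())
```

The worst-case reconstruction error per weight is about half a quantization step, i.e. roughly `group_max / 254`, which is why smaller groups (here 128) track the weight distribution more tightly than a single per-tensor scale.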

Model tree for groxaxo/Qwen3.5-27B-heretic-v2-gptq-pro-w8g128

  • Base model: Qwen/Qwen3.5-27B
  • Quantized from: llmfan46/Qwen3.5-27B-heretic-v2 (this model is one of 14 quantizations in the tree)