# Huihui-Qwen3.5-27B-Claude-4.6-Opus-abliterated-Q4_K_M

A 4-bit (Q4_K_M) GGUF quantization of huihui-ai/Huihui-Qwen3.5-27B-Claude-4.6-Opus-abliterated.

## Example

Run the model with llama.cpp's `llama-cli`:

```shell
llama-cli \
  -m path/to/Huihui-Qwen3.5-27B-Claude-4.6-Opus-abliterated-Q4_K_M.gguf \
  --n-gpu-layers all \
  --ctx-size 262144 \
  --flash-attn on
```
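To fetch the GGUF file before running the command above, one option is the Hugging Face CLI. This is a sketch: it assumes `huggingface-cli` (from the `huggingface_hub` package) is installed, and that the file name inside the repo matches the model card's title.

```shell
# Download the quantized model from the Hugging Face Hub.
# Repo id and file name are taken from this model card; adjust --local-dir as needed.
huggingface-cli download \
  Sepolian/Huihui-Qwen3.5-27B-Claude-4.6-Opus-abliterated-Q4_K_M \
  Huihui-Qwen3.5-27B-Claude-4.6-Opus-abliterated-Q4_K_M.gguf \
  --local-dir ./models
```

The downloaded path (`./models/Huihui-Qwen3.5-27B-Claude-4.6-Opus-abliterated-Q4_K_M.gguf`) can then be passed to `llama-cli -m`.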
## Model details

- Format: GGUF
- Quantization: Q4_K_M (4-bit)
- Model size: 27B params
- Architecture: qwen35
