Duplicated from Sepolian/Huihui-Qwen3.5-27B-Claude-4.6-Opus-abliterated-Q4_K_M
GGUF Q4_K_M (4-bit) conversion of huihui-ai/Huihui-Qwen3.5-27B-Claude-4.6-Opus-abliterated.
Example usage with llama.cpp:

```shell
llama-cli \
  -m path/to/Huihui-Qwen3.5-27B-Claude-4.6-Opus-abliterated-Q4_K_M.gguf \
  --n-gpu-layers all \
  --ctx-size 262144 \
  --flash-attn on
```
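If you prefer an OpenAI-compatible HTTP endpoint over the interactive CLI, llama.cpp's `llama-server` accepts the same model flags. A minimal sketch, assuming the same model path as above and a locally chosen port (adjust both to your setup):

```shell
# Serve the quantized model over HTTP; --port is arbitrary here.
llama-server \
  -m path/to/Huihui-Qwen3.5-27B-Claude-4.6-Opus-abliterated-Q4_K_M.gguf \
  --ctx-size 262144 \
  --port 8080
```

Clients can then send chat completions to `http://localhost:8080/v1/chat/completions`.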