
Saved from huihui-ai's repo before the Q3_K_M quant was deleted:

https://huggingface.co/huihui-ai/Huihui-GLM-4.5-Air-abliterated-GGUF/tree/main/Q3_K_M-GGUF

Useful for systems with 64 GB of RAM.

This is the setup I use with 8 GB of VRAM:

koboldcpp.exe --host 0.0.0.0 --port 5001 --model Q3_K_M-GGUF-00001-of-00006.gguf --flashattention --contextsize 57344 --gpulayers 22 --moecpu

or, trading context size for more layers offloaded to the GPU:

koboldcpp.exe --host 0.0.0.0 --port 5001 --model Q3_K_M-GGUF-00001-of-00006.gguf --flashattention --contextsize 16384 --gpulayers 48 --moecpu
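Once either command is running, KoboldCpp serves its generate API on the port given by `--port`. A minimal sketch of querying it from Python, assuming the server is up on localhost:5001 and using KoboldCpp's standard `/api/v1/generate` endpoint (the helper names here are my own):

```python
import json
import urllib.request

def build_payload(prompt, max_length=200):
    """Build the JSON body for KoboldCpp's generate endpoint.

    Only the basic fields are shown; the endpoint accepts more
    sampler settings (temperature, top_p, etc.) if you need them.
    """
    return {"prompt": prompt, "max_length": max_length}

def generate(prompt, host="http://localhost:5001"):
    """Send a prompt to the running KoboldCpp server and return the text."""
    data = json.dumps(build_payload(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{host}/api/v1/generate",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["results"][0]["text"]

if __name__ == "__main__":
    print(generate("The quick brown fox"))
```

The same server also exposes an OpenAI-compatible endpoint, so most chat frontends pointed at `http://localhost:5001` will work unchanged.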

This model is really good, but I recommend mlabonne's abliterated Gemma 3 27B for cases where GLM struggles with prompt adherence.

Model size: 110B params
Architecture: glm4moe
Quantization: 3-bit (GGUF)