A few select GGUF quants for ConicCat/GLM-4.5-Architect-106B-A12B

Format: GGUF
Model size: 110B params
Architecture: glm4moe

Available quants: 5-bit, 8-bit


Model tree for ApocalypseParty/GLM-4.5-Architect-106B-A12B-gguf
Quantized (3): this model