# kwonhan/GLM-4.7-Flash-mixed-4-6-g32-mlx

- **Downloads last month:** 62
- **Format:** Safetensors
- **Model size:** 30B params
- **Tensor types:** BF16, U32, F32
- **Framework:** MLX
- **Quantization:** 4-bit
## Model tree for kwonhan/GLM-4.7-Flash-mixed-4-6-g32-mlx

This model is one of 81 quantized variants of its base model.
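Since this repository is in MLX format, it can presumably be loaded with the `mlx-lm` package. A minimal sketch, assuming `mlx-lm` is installed (`pip install mlx-lm`) and Apple-silicon hardware is available; this usage is not confirmed by the model card itself:

```python
# Sketch: load and run this MLX model with mlx-lm.
# Assumes `pip install mlx-lm` on an Apple-silicon Mac; repo id taken
# from this card, but compatibility with mlx-lm is an assumption.
from mlx_lm import load, generate

model, tokenizer = load("kwonhan/GLM-4.7-Flash-mixed-4-6-g32-mlx")

prompt = "Hello, how are you?"
text = generate(model, tokenizer, prompt=prompt, max_tokens=64)
print(text)
```

The same package also ships a CLI (`mlx_lm.generate --model <repo> --prompt <text>`) if you prefer not to write Python.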