Various GGUF and MLX quants I want to test and benchmark
- unsloth/Qwen3.6-35B-A3B-GGUF (Image-Text-to-Text, 35B)
- unsloth/Qwen3.6-35B-A3B-MLX-8bit (Image-Text-to-Text, 10B)
- Jackrong/Qwopus3.6-35B-A3B-v1-GGUF (Image-Text-to-Text, 35B)
- Brooooooklyn/Qwen3.6-35B-A3B-UD-Q8_K_XL-mlx (Text Generation, 10B)
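For comparing these quants on the same prompt set, a minimal throughput harness could look like the sketch below. It times any streaming `generate()` callable and reports tokens/second; `fake_generate` is a placeholder for a real backend call (e.g. llama-cpp-python for the GGUF repos, mlx-lm for the MLX ones), and all names here are assumptions, not a fixed benchmarking setup.

```python
import time
from typing import Callable, Iterable

def bench_tokens_per_sec(generate: Callable[[str], Iterable[str]], prompt: str) -> float:
    """Time a streaming generate() call and return tokens per second."""
    start = time.perf_counter()
    n_tokens = sum(1 for _ in generate(prompt))  # drain the stream, counting tokens
    elapsed = time.perf_counter() - start
    return n_tokens / elapsed if elapsed > 0 else 0.0

# Stub generator standing in for a real GGUF/MLX backend; swap in the
# actual streaming call (llama-cpp-python, mlx-lm, etc.) when benchmarking.
def fake_generate(prompt: str):
    for tok in ["Hello", ",", " world"] * 10:
        yield tok

tps = bench_tokens_per_sec(fake_generate, "Why is the sky blue?")
print(f"{tps:.1f} tok/s")
```

Running the same harness against each quant with identical prompts gives a like-for-like tok/s comparison, independent of which backend loads the weights.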