Part of the **Claude Code in a Box** collection — how to replace Claude Code with a Mac Studio: https://spicyneuron.substack.com/p/a-mac-studio-for-local-ai-6-months
Qwen3.5-397B-A17B optimized for MLX!
Also available as a smaller 129GB version: https://huggingface.co/spicyneuron/Qwen3.5-397B-A17B-MLX-2.6bit
```shell
# Start an OpenAI-compatible server at http://localhost:8080/v1/chat/completions
uvx --from mlx-lm mlx_lm.server \
  --host 127.0.0.1 \
  --port 8080 \
  --model spicyneuron/Qwen3.5-397B-A17B-MLX-3.5bit
```
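Once the server is up, any OpenAI-compatible client can talk to it. A minimal sketch with `curl` (the prompt and sampling parameters are just placeholders):

```shell
# Ask the local server for a chat completion (assumes the server above is running).
PAYLOAD='{
  "model": "spicyneuron/Qwen3.5-397B-A17B-MLX-3.5bit",
  "messages": [{"role": "user", "content": "Write a haiku about quantization."}],
  "max_tokens": 128,
  "temperature": 0.7
}'

# POST the request; print a hint instead of failing hard if the server is down.
curl -sS --max-time 30 http://localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d "$PAYLOAD" || echo 'server not reachable (start it first)'
```

The same endpoint works with the official `openai` SDKs by pointing `base_url` at `http://localhost:8080/v1` with any dummy API key.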
Quantized with an mlx-lm fork, drawing inspiration from Unsloth/AesSedai/ubergarm-style mixed-precision GGUFs. MLX's quantization options differ from llama.cpp's, but the principles are the same:
| metric | lmstudio-community 4-bit | 2.6-bit | 3.5-bit |
|---|---|---|---|
| perplexity | 3.919 ± 0.019 | 3.852 ± 0.018 | 3.919 ± 0.019 |
| hellaswag | 0.594 ± 0.022 | 0.598 ± 0.022 | 0.622 ± 0.022 |
| piqa | 0.798 ± 0.018 | 0.802 ± 0.018 | 0.804 ± 0.018 |
| winogrande | 0.744 ± 0.02 | 0.718 ± 0.02 | 0.746 ± 0.019 |
| p1024/g512 prompt (tok/s) | 490.702 | 489.545 | 479.453 |
| p1024/g512 gen (tok/s) | 39.192 | 38.398 | 35.547 |
| p1024/g512 mem (GB) | 225.095 | 131.523 | 179.842 |
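The fork's exact recipe isn't reproduced here, but stock mlx-lm can produce a comparable mixed-precision conversion. A rough sketch (the `--quant-predicate` choice is an assumption about the current mlx-lm CLI; the fork substitutes its own layer-sensitivity recipe):

```shell
# Sketch: mixed-precision MLX quantization with stock mlx-lm.
# -q enables quantization; --q-bits/--q-group-size set the default precision,
# and --quant-predicate (assumed flag) keeps sensitive layers at higher bits.
uvx --from mlx-lm mlx_lm.convert \
  --hf-path Qwen/Qwen3.5-397B-A17B \
  --mlx-path ./Qwen3.5-397B-A17B-MLX-mixed \
  -q --q-bits 3 --q-group-size 64 \
  --quant-predicate mixed_3_6
```

The perplexity/HellaSwag deltas above come from varying exactly these knobs: which layers get the low-bit treatment and which stay closer to full precision.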
Base model: Qwen/Qwen3.5-397B-A17B