Qwen3-Coder-Next-oQ
Collection
5 items • Updated
oQ4 mixed-precision MLX quantization produced via oMLX.
Works with mlx-vlm and mlx-lm:

```shell
pip install mlx-vlm
python3 -m mlx_vlm generate --model bearzi/Qwen3-Coder-Next-oQ4 --prompt "Your prompt here" --max-tokens 512
```
oQ measures per-layer quantization sensitivity through calibration inference and allocates bits where they matter most: critical layers stay at higher precision, while tolerant layers are compressed aggressively. See the oMLX docs for details.
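To illustrate the idea of sensitivity-based bit allocation, here is a minimal toy sketch. All names, values, and the threshold scheme are hypothetical for illustration; this is not oMLX's actual API or algorithm, which you should consult the oMLX docs for.

```python
# Toy sketch: give the most quantization-sensitive layers more bits.
# Sensitivities and layer names below are made up for illustration.

def allocate_bits(sensitivities, low=4, high=8, budget_frac=0.25):
    """Assign `high` bits to the top `budget_frac` most sensitive layers,
    `low` bits to everything else."""
    ranked = sorted(sensitivities, key=sensitivities.get, reverse=True)
    n_high = max(1, int(len(ranked) * budget_frac))
    keep_high = set(ranked[:n_high])
    return {name: (high if name in keep_high else low) for name in ranked}

# Hypothetical per-layer sensitivity scores, e.g. the loss increase observed
# on calibration data when only that layer is quantized:
sens = {"embed": 0.9, "attn.0": 0.2, "mlp.0": 0.05, "attn.1": 0.15, "lm_head": 0.8}
bits = allocate_bits(sens)
# With these numbers, "embed" gets 8 bits and the rest get 4,
# yielding a mixed-precision layout like the oQ4 model above.
```

The real scheme measures sensitivity via calibration inference rather than a fixed table, but the allocation principle is the same: spend the bit budget where quantization error hurts most.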
Quantized from base model: Qwen/Qwen3-Coder-Next