This model was converted to MLX format from Qwen/Qwen3.6-27B using oMLX v0.3.6.

Settings

  • Level: oQ5
  • Sensitivity model: Qwen3.6-27B-MLX-Q8
  • Text Only: yes
  • Non-quant weight dtype: float16
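
If you want to try the converted weights, here is a minimal sketch using the standard mlx-lm loading API. This assumes the repo loads like any other MLX-format conversion; the prompt is illustrative only.

  # Minimal sketch: load the MLX-format weights and run a short generation.
  # Assumes the standard mlx-lm API; the prompt below is just an example.
  from mlx_lm import load, generate

  model, tokenizer = load("deepsweet/Qwen3.6-27B-MLX-oQ5-FP16")

  messages = [{"role": "user", "content": "Summarize what MLX is in two sentences."}]
  prompt = tokenizer.apply_chat_template(
      messages, tokenize=False, add_generation_prompt=True
  )

  # verbose=True also prints prompt-processing and generation tokens-per-second.
  text = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
  print(text)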

What is "oQ"?

See "oQ: oMLX Universal Dynamic Quantization" and "Q and oQ KL Divergence and RAM usage comparison" for details.

What is "FP16"?

"FP16" is M1/M2 Apple Silicon only optimization that leads to a very noticeable prompt processing boost. See jundot/omlx/issues/604 for details.

Feel free to request deepsweet/Qwen3.6-27B-MLX-oQ5 if you have M3+ Apple Silicon.
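
To see the effect on prompt processing yourself, a rough check (again assuming the standard mlx-lm API; absolute numbers will vary by machine) is to run the same long prompt against this FP16 build and the plain oQ5 build and compare the prompt tokens-per-second that verbose output reports.

  # Rough prompt-processing check (sketch, assuming the standard mlx-lm API).
  # Compare the "Prompt: ... tokens-per-sec" line printed by verbose=True
  # between this FP16 build and the plain oQ5 build.
  from mlx_lm import load, generate

  model, tokenizer = load("deepsweet/Qwen3.6-27B-MLX-oQ5-FP16")

  long_prompt = "word " * 2000  # filler text, just to exercise prefill
  generate(model, tokenizer, prompt=long_prompt, max_tokens=1, verbose=True)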

I need a multimodal (Vision-Language) variant

Feel free to request deepsweet/Qwen3.6-27B-MLX-VL-oQ5-FP16.

Changelog

  • 27.04.2026: re-quantized using an affine Q8 sensitivity model