This model was converted to MLX format from Qwen/Qwen3.6-27B using oMLX v0.3.6.

Settings

  • Level: oQ8
  • Sensitivity model: Qwen3.6-27B-MLX-Q8
  • Text Only: yes
  • Non-quant weight dtype: float16
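If you are new to MLX-converted models, one common way to run them is with the `mlx-lm` package (assuming it supports this model's architecture; the model id below is this repo's):

```shell
# Install the MLX LM runner (requires Apple Silicon).
pip install mlx-lm

# Download the quantized model from the Hub and generate a completion.
mlx_lm.generate --model deepsweet/Qwen3.6-27B-MLX-oQ8-FP16 \
  --prompt "Explain 8-bit quantization in one sentence."
```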

What is "oQ"?

See "oQ: oMLX Universal Dynamic Quantization" and "Q and oQ KL Divergence and RAM usage comparison" for details.
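For intuition only, the "8" in Q8/oQ8 refers to storing each weight as an 8-bit code plus per-group metadata. The sketch below shows generic 8-bit affine quantization, not oMLX's actual oQ algorithm (see the linked docs for that):

```python
# Illustrative 8-bit affine quantization: floats -> int codes in [0, 255]
# plus a scale and zero-point, so each value can be approximately restored.
def quantize_q8(values):
    """Map a list of floats to 8-bit integer codes with scale/zero-point."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255 or 1.0  # step size between adjacent codes
    codes = [round((v - lo) / scale) for v in values]
    return codes, scale, lo

def dequantize_q8(codes, scale, zero):
    """Restore approximate float values from 8-bit codes."""
    return [c * scale + zero for c in codes]

weights = [-0.51, 0.0, 0.25, 1.3]
codes, scale, zero = quantize_q8(weights)
restored = dequantize_q8(codes, scale, zero)
# Round-trip error is bounded by half a quantization step.
assert all(abs(a - b) <= scale / 2 + 1e-9 for a, b in zip(weights, restored))
```

Schemes like oQ improve on this baseline by choosing quantization parameters dynamically per tensor or group, which is what the KL-divergence comparison linked above measures.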

What is "FP16"?

"FP16" is an Apple Silicon optimization specific to M1/M2 chips that gives a very noticeable prompt-processing speedup. See jundot/omlx/issues/604 for details.

Feel free to request deepsweet/Qwen3.6-27B-MLX-oQ8 if you have M3+ Apple Silicon.

What if I need a multimodal (Vision-Language) variant?

Feel free to request deepsweet/Qwen3.6-27B-MLX-VL-oQ8-FP16.

Stats

  • Downloads last month: 3,042
  • Model size (Safetensors): 8B params
  • Tensor types: F16, U32
  • Format: MLX

Model tree for deepsweet/Qwen3.6-27B-MLX-oQ8-FP16

  • Base model: Qwen/Qwen3.6-27B
  • Quantized variants of the base model: 280, including this model
