Qwen3.5-9B-Claude-4.6-Opus-Reasoning-Distilled-v2-MLX-4bit

This is a 4-bit MLX quantization of Jackrong/Qwen3.5-9B-Claude-4.6-Opus-Reasoning-Distilled-v2, converted for use with Apple's MLX framework.

Conversion

The model was quantized with mlx_lm.convert using 4-bit quantization (q_bits=4, q_group_size=64). A rough sketch of how such a conversion can be reproduced with the mlx_lm Python API is shown below; the output path is illustrative, not the exact command used.
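
from mlx_lm import convert

# Quantize the original model to 4-bit, group size 64 (settings as listed above);
# mlx_path is an assumed local output directory
convert(
    hf_path="Jackrong/Qwen3.5-9B-Claude-4.6-Opus-Reasoning-Distilled-v2",
    mlx_path="Qwen3.5-9B-Claude-4.6-Opus-Reasoning-Distilled-v2-MLX-4bit",
    quantize=True,
    q_bits=4,
    q_group_size=64,
)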

Usage

from mlx_lm import load, generate

# Load the 4-bit quantized weights and tokenizer from the Hub
model, tokenizer = load("rafal-adamczyk/Qwen3.5-9B-Claude-4.6-Opus-Reasoning-Distilled-v2-MLX-4bit")
response = generate(model, tokenizer, prompt="Hello!", verbose=True)
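
Since this is a reasoning chat model, you will usually want to format the prompt with the tokenizer's chat template before generating. A minimal sketch follows; the prompt text and max_tokens value are illustrative.

from mlx_lm import load, generate

model, tokenizer = load("rafal-adamczyk/Qwen3.5-9B-Claude-4.6-Opus-Reasoning-Distilled-v2-MLX-4bit")

# Build a chat-formatted prompt so the model sees its expected template
messages = [{"role": "user", "content": "Explain the difference between a list and a tuple in Python."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, max_tokens=512, verbose=True)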

Original Model

See Jackrong/Qwen3.5-9B-Claude-4.6-Opus-Reasoning-Distilled-v2 for full details.

License

Apache 2.0 (inherited from the original model)
