Image-Text-to-Text · MLX · Safetensors · English · Chinese · qwen3_5 · unsloth · mxfp8 · 8-bit precision · conversational

Tags: fine-tune, all use cases, coder, creative writing, fiction writing, plot generation, sub-plot generation, story generation, scene continuation, storytelling, science fiction, romance, all genres, vivid prose, roleplaying
Qwen3.5-40B-Claude-4.5-Opus-High-Reasoning-Thinking
Quality: quantized (mxfp8, group size: 32, 8.341 bpw)
A 40-billion-parameter dense model (not MoE), expanded from Qwen3.5 27B and then trained on a Claude 4.5 Opus High Reasoning dataset via Unsloth on local hardware.
96 layers, 1,275 tensors (50% more layers than the 27B base model).
Features variable-length reasoning: shorter chains for simpler prompts, longer ones for more complex prompts.
Model performance has increased dramatically over the base model.
256K context.
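As a rough sanity check on the quantization figures above, the stated 8.341 bits per weight over 39B parameters can be turned into an approximate model size. The numbers come from this card; the calculation is an estimate that ignores metadata and any unquantized tensors.

```python
# Rough size estimate for a quantized model from its bits-per-weight figure.
# The inputs (39B params, 8.341 bpw) are taken from this model card; the
# result is an approximation that ignores metadata and unquantized tensors.

def quantized_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate model size in gigabytes (1 GB = 1e9 bytes)."""
    total_bits = n_params * bits_per_weight
    return total_bits / 8 / 1e9

size = quantized_size_gb(39e9, 8.341)
print(f"~{size:.1f} GB")  # roughly 40.7 GB
```

This is in line with what an mxfp8 quantization at group size 32 should occupy: close to one byte per weight, plus a small per-group overhead for scales.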
Source
This model was converted to MLX format from DavidAU/Qwen3.5-40B-Claude-4.5-Opus-High-Reasoning-Thinking using mlx-vlm version 0.4.
Downloads last month: 154
Model size: 39B params
Tensor types: U8, U32, BF16
Model tree for wbkou/Qwen3.5-40B-Claude-4.5-Opus-Distilled-MLX-mxfp8
Base model: Qwen/Qwen3.5-27B