Tags: Image-Text-to-Text · MLX · Safetensors · English · Chinese · qwen3_5 · unsloth · fine-tune · all use cases · coder · creative · creative writing · fiction writing · plot generation · sub-plot generation · story generation · scene continue · storytelling · fiction story · science fiction · romance · all genres · story · writing · vivid prose · vivid writing · fiction · roleplaying · mxfp4 · conversational · 4-bit precision
Qwen3.5-40B-Claude-4.5-Opus-High-Reasoning-Thinking
Quality: quantized (mxfp4, 4.388 bpw)
40 billion parameters (dense, not MoE), expanded from Qwen3.5 27B and then trained on the Claude 4.5 Opus High Reasoning dataset via Unsloth on local hardware.
96 layers and 1,275 tensors (50% more layers than the 27B base model).
Features variable-length reasoning: shorter reasoning chains for simpler prompts, longer chains for more complex ones.
Performance is substantially improved over the base model.
256K context.
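As a rough sanity check on the quoted quantization figures, the storage cost of mxfp4 can be worked out from its block layout. This is a back-of-the-envelope sketch, not a measurement: the block size of 32 and the 8-bit shared scale are assumptions based on the OCP Microscaling (MX) format, and the size estimate simply combines the card's "39B params" and "4.388 bpw" figures.

```python
# mxfp4 stores FP4 (E2M1) values in blocks that share one 8-bit E8M0
# scale -- block size and scale width assumed per the OCP MX spec.
BLOCK = 32          # elements per mxfp4 block (assumption)
ELEM_BITS = 4       # FP4 payload per element
SCALE_BITS = 8      # shared scale per block

block_bpw = (BLOCK * ELEM_BITS + SCALE_BITS) / BLOCK
print(f"mxfp4 per-block bpw: {block_bpw}")  # 4.25

# The card's 4.388 bpw is a whole-model average: some tensors
# (e.g. the BF16/U32 ones listed below) are not quantized,
# pulling the average above the 4.25 block rate.
params = 39e9       # "39B params" from the model card
avg_bpw = 4.388     # quoted average bits per weight
weights_gb = params * avg_bpw / 8 / 1e9
print(f"approx. weights-only size: {weights_gb:.1f} GB")  # ~21.4 GB
```

At roughly 21 GB for the quantized weights alone (before KV cache), the model should fit on a 32 GB Apple Silicon machine, which is presumably the point of the mxfp4 conversion.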
Source
This model was converted to MLX format from DavidAU/Qwen3.5-40B-Claude-4.5-Opus-High-Reasoning-Thinking using mlx-vlm version 0.4.
Downloads last month: 181
Model size: 39B params
Tensor types: U8, U32, BF16
Model tree for TheCluster/Qwen3.5-40B-Claude-4.5-Opus-High-Reasoning-Thinking-MLX-mxfp4
Base model: Qwen/Qwen3.5-27B