Qwen3.5-24B-A3B — Claude Opus + Gemini 3.1 Pro Reasoning Distill

A fine-tuned version of sandeshrajx/Qwen3.5-24B-A3B-REAP-0.32, itself based on Qwen3.5-35B-A3B. The goal of this project is simple: to produce the best reasoning model that can comfortably fit and run on a 16 GB GPU.
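The 16 GB target mostly constrains how the weights are stored: even though only ~3B parameters are active per token in this MoE, all 24B parameters must be resident in memory. A rough back-of-envelope sketch of the weight footprint at common precisions (weights only; this deliberately ignores KV cache, activations, and runtime overhead, so real usage will be higher):

```python
# Approximate VRAM needed for the 24B-parameter weights at common
# precision levels. Weights only -- KV cache, activations, and
# framework overhead are not counted, so treat these as lower bounds.

PARAMS = 24e9  # 24B parameters

BYTES_PER_PARAM = {
    "F16 (as shipped)": 2.0,
    "8-bit quantized": 1.0,
    "4-bit quantized": 0.5,
}

def weight_gb(params: float, bytes_per_param: float) -> float:
    """Approximate weight footprint in gigabytes (1 GB = 1e9 bytes)."""
    return params * bytes_per_param / 1e9

for name, bpp in BYTES_PER_PARAM.items():
    gb = weight_gb(PARAMS, bpp)
    verdict = "fits" if gb < 16 else "does not fit"
    print(f"{name:>17}: ~{gb:.0f} GB -> {verdict} in 16 GB")
```

By this estimate the F16 checkpoint (~48 GB) and an 8-bit quant (~24 GB) both exceed 16 GB, while a 4-bit quant (~12 GB of weights) leaves headroom for context; in practice a 4-bit GGUF or similar quantization is what "comfortably fits" implies here.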

Inspired By

Jackrong's distills

Uploaded model

  • Developed by: JackBinary
  • License: apache-2.0
  • Finetuned from model: sandeshrajx/Qwen3.5-24B-A3B-REAP-0.32

This qwen3_5_moe_text model was trained 2× faster with Unsloth.

  • Model size: 24B params
  • Tensor type: F16
  • Format: Safetensors