Qwen3-30B-A3B-Gemini-Pro-High-Reasoning-2507-ABLITERATED-UNCENSORED

The reasoning power of Gemini 3 Pro High Reasoning combined with the MoE speed and efficiency of Qwen 30B-A3B 2507 Thinking (256k context, 128 experts).

This version is both fully uncensored and fully functional.

Tuned via Unsloth (on local hardware) under Windows Subsystem for Linux (WSL).

Specialized tuning was applied to the abliterated model, post abliteration, both to bring in the new (Gemini-style) reasoning and to repair any issues introduced by the abliteration process.
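
The exact recipe is not published; purely as a hedged sketch of what an Unsloth LoRA pass over this model class might look like (the base model name, rank, sequence length, and target modules below are illustrative guesses, not the author's actual settings):

```python
# Hedged sketch only: the author's actual base model, dataset, and
# hyperparameters are not published. Values here are illustrative.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    # Assumed base: the abliterated model this card benchmarks against.
    model_name="Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated",
    max_seq_length=4096,   # illustrative, not the author's setting
    load_in_4bit=True,     # Unsloth's memory-saving QLoRA path
)

# Attach a LoRA adapter; rank and target modules are illustrative.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
# From here, training would proceed with a standard SFT loop
# (e.g. trl's SFTTrainer) over a reasoning-trace dataset.
```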

Compact, to-the-point, and powerful reasoning takes "Qwen 30B-A3B 2507 Thinking" to the next level.

Reasoning/thinking blocks are noticeably shorter and, in many cases, different from stock "Qwen" reasoning.

Average length is 4-10 paragraphs, distinctly "Gemini" in style.
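
If this model keeps the stock Qwen3 output format, the reasoning arrives wrapped in `<think>...</think>` tags; here is a minimal sketch for splitting it from the final answer (assuming that tag format carries over, which the card does not state explicitly):

```python
import re

def split_thinking(text: str) -> tuple[str, str]:
    """Split a completion into (reasoning, answer), assuming
    Qwen3-style <think>...</think> tags around the reasoning."""
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()
    return reasoning, answer

reasoning, answer = split_thinking("<think>Short, Gemini-style chain.</think>42")
print(reasoning)  # -> Short, Gemini-style chain.
print(answer)     # -> 42
```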

Note: all math, science, and other capabilities remain fully intact.

Model Specs:

  • 256k context
  • 128 experts (8 active by default)
  • 3B of 30B parameters active.
  • The model can run on GPU, CPU, or split across both at a reasonable tokens/second speed (see the loading sketch below).
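
As a starting point, here is a minimal loading-and-generation sketch with Hugging Face transformers; the repo id is taken from this page, `device_map="auto"` handles the GPU/CPU split, and the dtype matches the shipped BF16 weights (settings are illustrative, not the author's tested configuration):

```python
# Minimal sketch, assuming the model loads through standard
# transformers support for the Qwen3 MoE architecture.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "botp/Qwen3-30B-A3B-Gemini-Pro-High-Reasoning-2507-ABLITERATED-UNCENSORED"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are shipped in BF16
    device_map="auto",           # splits across GPU/CPU as needed
)

messages = [{"role": "user", "content": "Briefly explain mixture-of-experts routing."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```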

BENCHMARKS:

[ xxx ] - Exceeds the original ("Normal Qwen3 30B-A3B") model's score.

| Model | ARC-Challenge | ARC-Easy | BoolQ | Hellaswag | OpenBookQA | PIQA | Winogrande |
|---|---|---|---|---|---|---|---|
| This model | [0.422] | [0.474] | [0.761] | [0.687] | 0.382 | [0.783] | 0.647 |
| Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated | 0.387 | 0.436 | 0.628 | 0.616 | 0.400 | 0.763 | 0.639 |
| Normal Qwen3 30B-A3B | 0.410 | 0.444 | 0.691 | 0.635 | 0.390 | 0.769 | 0.650 |

[ more soon ... ]
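
The seven task names above match EleutherAI's lm-evaluation-harness. Assuming (this is not stated by the author) that the scores were produced with that harness, a comparable run might look like:

```python
# Sketch of reproducing the table with EleutherAI's lm-evaluation-harness.
# Assumes the scores above came from this harness (not stated by the author).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=botp/Qwen3-30B-A3B-Gemini-Pro-High-Reasoning-2507-ABLITERATED-UNCENSORED,dtype=bfloat16",
    tasks=[
        "arc_challenge", "arc_easy", "boolq",
        "hellaswag", "openbookqa", "piqa", "winogrande",
    ],
)
for task, metrics in results["results"].items():
    print(task, metrics)
```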

Safetensors · 31B params · BF16