Model Card for riomus/Qwen3-Coder-30B-A3B-Instruct-bnb-4b-fused

A fused and 4-bit (bitsandbytes) quantized version of unsloth/Qwen3-Coder-30B-A3B-Instruct, built with qwen3_moe_fused.

Because of unsloth issue #638, the model is not sharded.

For training, fine-tuning, LoRA, or QLoRA, it is best to use the unquantized fused model riomus/Qwen3-Coder-30B-A3B-Instruct-fused instead.
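Since the checkpoint is already 4-bit quantized, inference loading could look like the following minimal sketch. This is an untested assumption: it presumes the qwen3_moe_fused package is installed and that the fused checkpoint loads through the standard transformers `from_pretrained` API; consult the qwen3_moe_fused README for any required patching.

```python
# Hedged sketch: loading the pre-quantized fused checkpoint for inference.
# Assumes qwen3_moe_fused is installed and the checkpoint is compatible
# with the standard transformers loading path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "riomus/Qwen3-Coder-30B-A3B-Instruct-bnb-4b-fused"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    device_map="auto",          # spread layers across available GPUs
    torch_dtype=torch.bfloat16, # compute dtype for non-quantized tensors
)

messages = [{"role": "user", "content": "Write a hello-world in Python."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

No quantization config is passed here because the weights on the Hub are already stored in bitsandbytes 4-bit form; the download is large (tens of GB), so this snippet is illustrative rather than something to run casually.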

Format: Safetensors
Model size: 31B params
Tensor types: F32, F16, U8
