🌟 Qwopus3.6-35B-A3B-v1

💡 Base Model Overview

Qwen3.6-35B-A3B is an advanced hybrid sparse MoE (Mixture-of-Experts) model developed by Alibaba Cloud. It features 35B total parameters with only 3B active parameters per token, ensuring high inference efficiency. Architecturally, it combines Gated DeltaNet linear attention with standard gated attention layers, routing tokens across 256 experts. It natively supports a massive 262k context window and is specifically designed for high-performance agentic coding, deep reasoning, and multimodal tasks.
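
For orientation, here is a minimal sketch of loading the base model with 🤗 Transformers. The repository id, dtype, and generation settings are assumptions for illustration rather than values confirmed by the Qwen team, and loading a 35B MoE in BF16 requires substantial GPU memory.

```python
# Minimal sketch (assumed repo id and settings): loading the base MoE model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3.6-35B-A3B"  # assumed repository id; check the official card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="bfloat16",  # BF16 weights; "auto" also works on recent versions
    device_map="auto",       # shard across available GPUs
)

messages = [{"role": "user", "content": "Explain Mixture-of-Experts routing in one paragraph."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```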

Base Model Benchmark Placeholder


🚀 Model Refinement & Logic Tuning (Qwopus3.6-35B-A3B-v1)

🪐 Qwopus3.6-35B-A3B-v1 is a reasoning-enhanced MoE (Mixture of Experts) model fine-tuned on top of Qwen3.6-35B-A3B.

🛠 Training Strategy

The fine-tuning process is structured into three distinct stages of distributed SFT (Supervised Fine-Tuning) that progressively scale reasoning complexity and data diversity. This staged approach is intended to preserve the base MoE capabilities while deepening the model's reasoning.

Looking ahead, Reinforcement Learning (RL) training will be introduced in subsequent versions to further optimize the reasoning paths and alignment performance.

This version uses LoRA fine-tuning but uniquely scales up the trainable parameters: approximately 9% of the model's parameters participate in the update. This allows deeper adaptation of reasoning capabilities while retaining the efficiency of parameter-efficient fine-tuning. Note, however, that a 9% trainable fraction is a risky configuration for this MoE architecture, as it significantly increases the potential for training instability and weight-merging conflicts.
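
The exact LoRA rank and target modules used for this run are not published, so the following is only a rough sketch of how a high-trainable-fraction LoRA configuration might look with PEFT; the rank, alpha, and module names are assumptions, not the actual training recipe.

```python
# Rough sketch (assumed hyperparameters): a PEFT LoraConfig scaled up so that
# the trainable LoRA parameters become a sizable fraction of the base model.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3.6-35B-A3B")  # assumed repo id

lora_config = LoraConfig(
    r=256,                       # unusually high rank to push the trainable share up
    lora_alpha=512,
    lora_dropout=0.05,
    target_modules=[             # attention + expert projections (assumed module names)
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # verify the trainable fraction before training
```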

Vision & Tool Calling Support: This model supports visual capabilities and tool calling. To enable vision, please place the mmproj.gguf file from the GGUF repository into the same directory as the main .gguf file.

It is designed for:

  • 🧩 More structured reasoning
  • 🪶 More consistent answer style
  • 🔁 Better cross-source distillation alignment
  • ⚡ A stronger foundation for later larger-scale versions

Community Release Notice: Qwopus3.6-35B-A3B-v1 has not undergone complete performance evaluation or safety testing. It is released purely as an experimental community version for research and exploration.


🧪 Data Composition & Context Length Mix

The model was trained on a carefully curated dataset encompassing a wide range of domains, including mathematics, code, science, multilingual chat, and instruction following.

To balance different capabilities, the training data is divided into four main context-length buckets, incorporating a mix of:

  • Short format stable samples
  • Medium complexity reasoning samples
  • Long context high-quality samples
  • A small amount of replay samples

Context Length Distribution:

  • < 4096 tokens: Short-context data focused on establishing stable formatting and basic reasoning.
  • 4096 - 8192 tokens: Medium-context data introducing higher reasoning complexity.
  • 8192 - 16384 tokens: Long-context reasoning data, which also includes 10% short sample replay to prevent catastrophic forgetting of basic instruction-following.
  • 16384 - 32768 tokens: A small amount of multi-turn conversation data to maintain extended interaction capabilities.
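
A simple way to implement this kind of length-bucketed mix is sketched below. The bucket boundaries follow the list above and the 10% replay mirrors the 8192 - 16384 bucket note; the function names and sampling details are illustrative assumptions, not the exact training pipeline.

```python
# Sketch: assigning samples to the four context-length buckets described above.
from collections import defaultdict
import random

BUCKETS = [
    ("short",  0,     4096),   # stable formatting / basic reasoning
    ("medium", 4096,  8192),   # higher reasoning complexity
    ("long",   8192,  16384),  # long-context reasoning (+ short-sample replay)
    ("xlong",  16384, 32768),  # multi-turn conversations
]

def bucket_for(token_count: int) -> str:
    for name, lo, hi in BUCKETS:
        if lo <= token_count < hi:
            return name
    return "xlong"  # anything longer still falls into the largest bucket

def build_mix(samples, replay_ratio: float = 0.10, seed: int = 0):
    """samples: iterable of (token_count, example) pairs."""
    random.seed(seed)
    buckets = defaultdict(list)
    for token_count, example in samples:
        buckets[bucket_for(token_count)].append(example)
    # Replay: inject a fraction of short samples into the long bucket to guard
    # against catastrophic forgetting of basic instruction following.
    n_replay = min(int(len(buckets["long"]) * replay_ratio), len(buckets["short"]))
    buckets["long"].extend(random.sample(buckets["short"], n_replay))
    return buckets
```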

🎯 Three-Stage Curriculum Learning

Qwopus3.6-35B-A3B-v1 employs a curriculum learning-style phased reasoning data mix, progressively increasing the difficulty and complexity of the training signals:

  1. Early Stage (Format Establishment): Focuses on short-to-medium length, format-stable reasoning samples. The primary goal here is to establish a reliable, structured new reasoning format without overwhelming the model with extreme complexity.

  2. Middle Stage (Complexity Scaling & Multi-Teacher Distillation): Gradually increases the proportion of complex reasoning samples from multiple teacher models.

    • Distillation data is sourced from a 27B model that closely matches the base model's stylistic distribution, keeping the capability gap small enough for the student to learn efficiently.

  3. Final Stage (Long-Context Reinforcement & Anti-Drift): Strengthens long-context reasoning capabilities. Crucially, this stage retains short sample replay so the model maintains its short-context instruction-following ability and minimizes capability drift.
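
The staged mix above can be expressed as a simple schedule, as in the sketch below. Only the stage ordering and the use of short-sample replay in the final stage come from the description; the per-stage proportions are illustrative assumptions.

```python
# Sketch of the three-stage curriculum as a data-mix schedule over the buckets
# defined earlier. Stage proportions are assumed values for illustration only.
CURRICULUM = [
    {"name": "early",   # Stage 1: format establishment
     "mix": {"short": 0.6, "medium": 0.4, "long": 0.0, "xlong": 0.0}},
    {"name": "middle",  # Stage 2: complexity scaling + multi-teacher distillation
     "mix": {"short": 0.2, "medium": 0.5, "long": 0.3, "xlong": 0.0}},
    {"name": "final",   # Stage 3: long-context reinforcement with short replay
     "mix": {"short": 0.1, "medium": 0.2, "long": 0.5, "xlong": 0.2}},
]

def stage_plan(tokens_per_stage: int):
    """Yield (stage_name, bucket_name, token_budget) triples for each SFT stage."""
    for stage in CURRICULUM:
        for bucket_name, fraction in stage["mix"].items():
            yield stage["name"], bucket_name, int(tokens_per_stage * fraction)

for name, bucket, budget in stage_plan(tokens_per_stage=1_000_000):
    print(f"{name:>6} | {bucket:>6} | {budget:,} tokens")
```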


🚀 Quick Evaluation Summary: Qwopus3.6-35B-A3B-v1

This model represents a significant leap in inference efficiency and one-shot generation quality compared to previous dense architectures. By leveraging a Hybrid MoE structure (35B total / 3B active parameters) and Gated DeltaNet linear attention, it balances high throughput with deep reasoning capabilities.

  • Unmatched Speed: Achieves an average of 161.9 tok/s on an RTX 5090—a 2.6× speedup over the 27B dense predecessor—making it one of the fastest high-parameter models available for single-GPU consumer hardware.
  • Production-Grade Frontend Design: Evaluated as one of the strongest open models for one-shot HTML/CSS generation. Unlike models that provide surface-level scaffolding, this model delivers complete, functional pages with complex micro-interactions, animated components, and production-ready logic.
  • Starvation-Free Reasoning: Successfully resolves the "thinking starvation" issues seen in earlier versions. It maintains robust performance in long-context JSON extraction and multi-step agentic planning, outputting valid structured data even after extensive internal reasoning traces.
  • Architectural Efficiency: The integration of Gated DeltaNet allows for a massive 262K native context window with optimized VRAM usage, keeping memory requirements nearly flat even as sequence lengths increase.

Verdict: A premier choice for developers requiring a high-throughput, agentic model that excels at UI/UX generation and complex logical deduction on a single-GPU setup.

Here is a summary for the model card, based on the 🔗 Qwopus3.6-35B-A3B-v1 comprehensive evaluation report by Kyle Hessling.

(Screenshots from the evaluation report.)


⚠️ Known Training & Deployment Issues (IMPORTANT)

Due to the architectural complexities of the Qwen3.6 MoE models, several technical challenges were encountered during training and weight merging. Users should be aware of the following potential instabilities:

MoE Architecture Compatibility Issues

  • The weight structure of MoE expert layers differs significantly from standard dense models.
  • There are known, easily triggered incompatibilities between PEFT/LoRA, Transformers 5.x's fused expert pattern, and Unsloth patches.
  • Even when using the absolute latest environment and dependencies, merging the LoRA weights into the base model after training may fail or encounter severe compatibility bugs.
  • Common Error: You may encounter ModuleNotFoundError: Could not import module 'Qwen3_5MoeForConditionalGeneration' or similar structural mismatch errors during the weight merging phase.

If you are attempting to fine-tune or merge weights for this MoE architecture locally, proceed with caution and be prepared to manually patch model definition files or downgrade specific library versions.
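
If you do attempt a merge, a defensive sketch (assuming a standard PEFT adapter layout) is shown below. It does not work around the fused-expert incompatibilities described above; it only fails loudly and points back to the mitigation options.

```python
# Defensive merge sketch. Assumes a standard PEFT adapter layout; it will NOT
# fix the fused-expert / structural-mismatch issues described above.
from peft import PeftModel
from transformers import AutoModelForCausalLM

BASE = "Qwen/Qwen3.6-35B-A3B"              # assumed base repo id
ADAPTER = "Jackrong/Qwopus3.6-35B-A3B-v1"  # this adapter
OUT = "./qwopus3.6-35b-a3b-v1-merged"

try:
    base = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype="bfloat16")
    model = PeftModel.from_pretrained(base, ADAPTER)
    merged = model.merge_and_unload()      # may fail on fused expert layers
    merged.save_pretrained(OUT, safe_serialization=True)
except (ImportError, KeyError, RuntimeError) as err:
    # Typical failure mode: mismatch between the adapter's expert weight names
    # and the fused expert layout the library expects.
    print(f"Merge failed: {err}\n"
          "Consider pinning older transformers/peft versions or manually "
          "patching the model definition files, as noted above.")
```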


📚 Resources & Guides

👉 GitHub Repository: Jackrong-llm-finetuning-guide. Visit the repo to dive into the codebase and reproduce the results locally or on Colab.


🙏 Acknowledgements

Special thanks to:

  • The Qwen team for the strong Qwen3.6 MoE base model.
  • Unsloth for efficient fine-tuning frameworks.
  • Open-source datasets and community contributors.
  • Kyle Hessling for his generous hardware and equipment support. You can follow him for more updates on X / Twitter: @KyleHessling1.

📖 Citation

```bibtex
@misc{jackrong_qwopus36_35b_a3b_v1,
  title        = {Qwopus3.6-35B-A3B-v1},
  author       = {Jackrong},
  year         = {2026},
  publisher    = {Hugging Face}
}
```