Qwen3.5-35B-A3B Uncensored YaRN 1M - Extended Context Uncensored Model

Base Model: Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive-BF16 with YaRN
Architecture: Qwen3.5-35B-A3B MoE
Context: 1M tokens (1,048,576)
Purpose: Long-form uncensored content, extended creative writing, unrestricted analysis

YaRN Configuration

--ctx-size 1048576 --rope-scaling yarn --yarn-orig-ctx 262144
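These flags imply a 4x context extension; a minimal sketch of the ratio llama.cpp's YaRN scaling works from, using only the numbers in the flags above:

```python
# YaRN extension ratio implied by the flags above:
# target context (--ctx-size) over original context (--yarn-orig-ctx).
target_ctx = 1_048_576  # --ctx-size
orig_ctx = 262_144      # --yarn-orig-ctx

scale = target_ctx / orig_ctx
print(scale)  # 4.0
```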

Recommended Settings

Standard Mode

--temp 0.7 --top-p 0.8 --top-k 20 --min-p 0

High Creativity

--temp 0.8 --top-p 0.9 --top-k 40 --min-p 0
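If you drive the model from Python instead of the CLI, the two presets above map onto sampler keyword arguments; a small sketch (the argument names follow llama-cpp-python's `create_completion`, which is an assumption here, as the card itself only gives CLI flags):

```python
# Sampler presets from the card, expressed as keyword arguments.
# Names assume llama-cpp-python's create_completion(); the card only
# documents the equivalent llama.cpp CLI flags.
PRESETS = {
    "standard":        {"temperature": 0.7, "top_p": 0.8, "top_k": 20, "min_p": 0.0},
    "high_creativity": {"temperature": 0.8, "top_p": 0.9, "top_k": 40, "min_p": 0.0},
}

def sampler_args(mode: str) -> dict:
    """Return a copy of the sampler kwargs for the given preset name."""
    return dict(PRESETS[mode])

print(sampler_args("standard")["top_k"])  # 20
```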

Quick Start

llama-server -m Qwen3.5-35B-Uncensored-YaRN-1M-Q4_K_M.gguf \
  --ctx-size 1048576 --temp 0.7 --top-p 0.8 --top-k 20 \
  --rope-scaling yarn --yarn-orig-ctx 262144
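Once the server is up, llama-server exposes an OpenAI-compatible HTTP API. A sketch of a chat request mirroring the Standard Mode sampler values (the endpoint path and port assume llama-server's defaults):

```python
import json

# Chat request for llama-server's OpenAI-compatible endpoint
# (default: http://localhost:8080/v1/chat/completions).
# Sampler values mirror the Standard Mode settings above.
payload = {
    "messages": [{"role": "user", "content": "Summarize this document."}],
    "temperature": 0.7,
    "top_p": 0.8,
    "top_k": 20,
}
body = json.dumps(payload)
# Send with e.g.:
#   requests.post("http://localhost:8080/v1/chat/completions",
#                 data=body, headers={"Content-Type": "application/json"})
```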

Note

For standard 262K context, use the regular Uncensored repo.

Model Details

Downloads last month: 1,917
Format: GGUF
Model size: 35B params
Architecture: qwen35moe

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit
