Exil01/Qwen3.5-27B-Uncensored-YaRN-1M

An uncensored build of Qwen3.5 27B with aggressive HauhauCS modifications and YaRN context extension, supporting up to 1M tokens of context.

Model Details

  • Base: HauhauCS/Qwen3.5-27B-Uncensored-HauhauCS-Aggressive
  • Context: 262K active / 1M max (YaRN)
  • Quantization: BF16
  • VRAM: ~60GB (262K) / ~90GB (1M)
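The 262K-to-1M jump comes from YaRN RoPE scaling. A minimal sketch of the two quantities YaRN derives from that context ratio, assuming the figures on this card (262,144 native, 1,048,576 extended) and the attention-scaling rule from the YaRN paper cited below (arXiv:2309.00071); see the paper for the full frequency interpolation:

```python
import math

# Sketch only: the native 262,144-token context is taken from this card,
# not verified against the base model's config.
def yarn_params(target_ctx: int, orig_ctx: int) -> tuple[float, float]:
    s = target_ctx / orig_ctx           # RoPE scale factor
    mscale = 0.1 * math.log(s) + 1.0    # attention-logit multiplier (YaRN paper)
    return s, mscale

s, mscale = yarn_params(1_048_576, 262_144)
print(s)                 # 4.0
print(round(mscale, 4))  # 1.1386
```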

Usage

262K Context (80GB GPU)
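A minimal llama.cpp sketch for this configuration, assuming the standard llama-cli flags; the GGUF filename and prompt are placeholders, not files confirmed by this card:

```shell
# Hypothetical invocation; substitute the actual GGUF filename.
./llama-cli -m Qwen3.5-27B-Uncensored-YaRN-1M-BF16.gguf \
  --ctx-size 262144 \
  --temp 0.7 \
  -p "Hello"
```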

1M Context (100GB+ GPU or Q8_0 model)
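For the full 1M window, a hedged sketch using llama.cpp's YaRN flags (`--rope-scaling yarn`, `--yarn-orig-ctx`); the filename is a placeholder and the 262,144 original context is assumed from this card:

```shell
# Hypothetical invocation; substitute the actual GGUF filename.
./llama-cli -m Qwen3.5-27B-Uncensored-YaRN-1M-BF16.gguf \
  --ctx-size 1048576 \
  --rope-scaling yarn --yarn-orig-ctx 262144 \
  --temp 0.7 \
  -p "Hello"
```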

With Vision (mmproj)
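A sketch of a multimodal run with the mmproj file listed below, assuming a recent llama.cpp build that ships llama-mtmd-cli; all filenames are placeholders:

```shell
# Hypothetical invocation; substitute the actual model, mmproj, and image paths.
./llama-mtmd-cli -m Qwen3.5-27B-Uncensored-YaRN-1M-BF16.gguf \
  --mmproj mmproj-F16.gguf \
  --image photo.jpg \
  -p "Describe this image."
```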

Files

  • GGUF: 53.8GB (BF16)
  • mmproj: 885MB (F16, vision)
  • Tokenizer: 13MB

Performance

  • 262K context: ~60GB VRAM (BF16 cache)
  • 1M context: ~90GB VRAM (BF16 cache)
  • Max output: 81,920 tokens
  • Recommended temperature: 0.7 (tuned for uncensored output)
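The VRAM figures above are dominated by the BF16 KV cache, which grows linearly with context. A generic estimator as a sketch; the layer, head, and dimension defaults below are placeholders, not confirmed Qwen3.5-27B specs, so read the real values from the GGUF metadata before trusting the result:

```python
# Sketch only: KV-cache size for a GQA transformer.
# Defaults are hypothetical architecture values, NOT verified for this model.
def kv_cache_gib(ctx_len: int, n_layers: int = 48, n_kv_heads: int = 8,
                 head_dim: int = 128, bytes_per_elem: int = 2) -> float:
    """Bytes for the K and V caches across all layers, in GiB."""
    total = 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem
    return total / 2**30

print(kv_cache_gib(262_144))  # 48.0 (with these placeholder values)
```

Add the model weights (53.8GB here) on top of the cache figure to estimate total VRAM.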

Credits

  • Base model: HauhauCS (aggressive uncensoring)
  • Original: Qwen Team
  • YaRN: arxiv.org/abs/2309.00071