Qwen3-VL-8B-Thinking-abliterated-v1-GGUF

Qwen3-VL-8B-Thinking-abliterated-v1 from prithivMLmods is an 8B-parameter vision-language model: an abliterated (v1.0) variant of Alibaba's Qwen3-VL-8B-Thinking, tuned for uncensored reasoning and captioning across complex, sensitive, nuanced, artistic, technical, or abstract visual and multimodal content. It supports diverse aspect ratios, resolutions, videos, and layouts, with Interleaved-MRoPE, OCR in 32 languages, and a 256K+ context length. The abliteration bypasses standard content filters to produce factual, descriptive, reasoning-rich outputs with variable detail control, from high-level summaries to intricate chain-of-thought analyses, while retaining the base model's visual-agent capabilities, spatial perception, long-context video understanding, and STEM reasoning. It works primarily in English, with multilingual prompt adaptability. The model is suited to research in content moderation and red-teaming, creative storytelling, and visual datasets excluded from mainstream models. For non-GGUF use it loads via Transformers' Qwen3VLForConditionalGeneration for GPU inference (16-24 GB VRAM), but it may produce explicit content unsuitable for moderated production settings.
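Since this repository ships GGUF files, the usual local route is llama.cpp rather than Transformers. A minimal sketch, assuming a recent llama.cpp build with multimodal (mtmd) support; the vision projector (`mmproj`) file from the table below must be passed alongside the main model, and `photo.jpg` is a placeholder input:

```shell
# Minimal sketch, not tested here: run one of the quants with llama.cpp's
# multimodal CLI. Assumes llama-mtmd-cli from a recent llama.cpp build.
llama-mtmd-cli \
  -m Qwen3-VL-8B-Thinking-abliterated-v1.Q4_K_M.gguf \
  --mmproj Qwen3-VL-8B-Thinking-abliterated-v1.mmproj-f16.gguf \
  --image photo.jpg \
  -p "Describe this image in detail."
```

Any of the quants below can be substituted for the Q4_K_M file; the mmproj file is shared across quants and should match the model family, not the quant level.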

Qwen3-VL-8B-Thinking-abliterated-v1

| File Name | Quant Type | File Size |
| --- | --- | --- |
| Qwen3-VL-8B-Thinking-abliterated-v1.IQ4_XS.gguf | IQ4_XS | 4.59 GB |
| Qwen3-VL-8B-Thinking-abliterated-v1.Q2_K.gguf | Q2_K | 3.28 GB |
| Qwen3-VL-8B-Thinking-abliterated-v1.Q3_K_L.gguf | Q3_K_L | 4.43 GB |
| Qwen3-VL-8B-Thinking-abliterated-v1.Q3_K_M.gguf | Q3_K_M | 4.12 GB |
| Qwen3-VL-8B-Thinking-abliterated-v1.Q3_K_S.gguf | Q3_K_S | 3.77 GB |
| Qwen3-VL-8B-Thinking-abliterated-v1.Q4_K_M.gguf | Q4_K_M | 5.03 GB |
| Qwen3-VL-8B-Thinking-abliterated-v1.Q4_K_S.gguf | Q4_K_S | 4.80 GB |
| Qwen3-VL-8B-Thinking-abliterated-v1.Q5_K_M.gguf | Q5_K_M | 5.85 GB |
| Qwen3-VL-8B-Thinking-abliterated-v1.Q5_K_S.gguf | Q5_K_S | 5.72 GB |
| Qwen3-VL-8B-Thinking-abliterated-v1.Q6_K.gguf | Q6_K | 6.73 GB |
| Qwen3-VL-8B-Thinking-abliterated-v1.Q8_0.gguf | Q8_0 | 8.71 GB |
| Qwen3-VL-8B-Thinking-abliterated-v1.f16.gguf | F16 | 16.4 GB |
| Qwen3-VL-8B-Thinking-abliterated-v1.mmproj-Q8_0.gguf | mmproj-Q8_0 | 752 MB |
| Qwen3-VL-8B-Thinking-abliterated-v1.mmproj-f16.gguf | mmproj-f16 | 1.16 GB |
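The table can also guide quant selection programmatically. A small sketch that picks the largest main-model quant fitting a VRAM budget; the 1.3x overhead factor (KV cache, mmproj, activations) is a rough assumption, not a measured figure:

```python
# Sketch: choose the largest quant that fits a given VRAM budget, using the
# file sizes from the table above. The 1.3x overhead multiplier is an
# assumption to leave room for KV cache, mmproj, and activations.
QUANT_SIZES_GB = {
    "Q2_K": 3.28, "Q3_K_S": 3.77, "Q3_K_M": 4.12, "Q3_K_L": 4.43,
    "IQ4_XS": 4.59, "Q4_K_S": 4.80, "Q4_K_M": 5.03,
    "Q5_K_S": 5.72, "Q5_K_M": 5.85, "Q6_K": 6.73,
    "Q8_0": 8.71, "F16": 16.4,
}

def pick_quant(vram_gb: float, overhead: float = 1.3):
    """Return the largest quant whose size * overhead fits in vram_gb, else None."""
    fitting = [(size, name) for name, size in QUANT_SIZES_GB.items()
               if size * overhead <= vram_gb]
    return max(fitting)[1] if fitting else None
```

For example, with this margin an 8 GB card lands on Q5_K_M, while a 24 GB card can hold the full F16 file.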

Quants Usage

(Sorted by size, not necessarily quality. IQ-quants are often preferable to similar-sized non-IQ quants.)

ikawrakow published a handy graph comparing some lower-quality quant types (lower is better).

Format: GGUF · Model size: 8B params · Architecture: qwen3vl


