
UnstableLlama/Qwen3.5-27B-ultra-uncensored-heretic-v2-exl3-2.06bpw

Quantized by: UnstableLlama
Information

A 2.06 bits-per-weight (bpw) exl3 quantization of llmfan46/Qwen3.5-27B-ultra-uncensored-heretic-v2, produced with exllamav3.
This repo was generated automatically with ezexl3.
Repo Data

Quantization graph (values tabulated below):

| Revision | Size (GiB) | KL div | PPL    |
|----------|-----------:|-------:|-------:|
| 2.06bpw  | 9.8498     | 0.1342 | 7.6050 |
| 3.00bpw  | 12.67      | 0.0353 | 7.2376 |
| 4.00bpw  | 15.50      | 0.0107 | 6.9668 |
| 6.00bpw  | 21.17      | 0.0011 | 6.9606 |
| 8.00bpw  | 27.13      | 0.0004 | 6.9698 |
| bf16     | 50.96      | 0.0000 | 6.9692 |
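The bf16 row (50.96 GiB at 16 bits per weight) lets you back out an approximate parameter count, and a naive weights-only estimate shows why the quantized revisions come out somewhat larger than the headline bpw alone predicts (some tensors, such as embeddings, are typically not stored at the quantized bit rate). This is a rough sketch of the arithmetic, not a description of exllamav3's actual packing:

```python
GIB = 2**30  # bytes per GiB

def est_size_gib(n_params: float, bpw: float) -> float:
    """Naive size estimate: every weight stored at `bpw` bits, no overhead."""
    return n_params * bpw / 8 / GIB

# Infer the parameter count from the bf16 row (16 bits = 2 bytes per weight).
n_params = 50.96 * GIB / 2
print(round(n_params / 1e9, 2))  # ≈ 27.36e9 parameters, consistent with a 27B model

for bpw in (2.06, 3.00, 4.00, 6.00, 8.00):
    # Each naive estimate falls a few GiB below the table's measured size.
    print(f"{bpw:.2f}bpw ≈ {est_size_gib(n_params, bpw):.2f} GiB (weights only)")
```

The gap between these estimates and the measured sizes (e.g. about 6.6 GiB naive vs. 9.85 GiB actual at 2.06bpw) is the space taken by tensors kept at higher precision plus format overhead.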
CLI Download

```shell
hf download UnstableLlama/Qwen3.5-27B-ultra-uncensored-heretic-v2-exl3-2.06bpw --local-dir ./Qwen3.5-27B-ultra-uncensored-heretic-v2-exl3-2.06bpw
```
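If you prefer to script the download from Python, the same repo snapshot can be fetched with `huggingface_hub.snapshot_download`. A minimal sketch (the `fetch` helper and `LOCAL_DIR` default are illustrative, not part of this repo):

```python
REPO_ID = "UnstableLlama/Qwen3.5-27B-ultra-uncensored-heretic-v2-exl3-2.06bpw"
LOCAL_DIR = "./Qwen3.5-27B-ultra-uncensored-heretic-v2-exl3-2.06bpw"

def fetch(repo_id: str = REPO_ID, local_dir: str = LOCAL_DIR) -> str:
    """Download the full repo snapshot and return the local directory path."""
    # Imported lazily so the module loads even without the package installed;
    # requires `pip install huggingface_hub`.
    from huggingface_hub import snapshot_download
    return snapshot_download(repo_id=repo_id, local_dir=local_dir)

if __name__ == "__main__":
    print(fetch())
```

This mirrors the CLI command above: `repo_id` is the repo to pull and `local_dir` is where the files land.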

Model tree for UnstableLlama/Qwen3.5-27B-ultra-uncensored-heretic-v2-exl3-2.06bpw

Base model: Qwen/Qwen3.5-27B
Quantized variants: 8 (including this model)