# tripolskypetr/Qwen3.5-Uncensored-Aggressive-GGUF
Tags: GGUF · imatrix · conversational

License: apache-2.0
README.md exists but content is empty.
- Downloads last month: 6,976
- Format: GGUF
- Model size: 122B params
- Architecture: qwen35moe
## Quantized files

| Bits | Quant | File size |
|------|--------|-----------|
| 2-bit | IQ2_M | 39.9 GB |
| 2-bit | IQ2_M | 9.4 GB |
| 3-bit | IQ3_XXS | 47.1 GB |
| 3-bit | IQ3_M | 53.8 GB |
| 3-bit | Q3_K_M | 58.6 GB |
| 3-bit | IQ3_M | 12.6 GB |
| 3-bit | Q3_K_M | 13.3 GB |
| 3-bit | IQ3_M | 15.4 GB |
| 4-bit | IQ4_XS | 65.2 GB |
| 4-bit | IQ4_XS | 14.7 GB |
| 4-bit | IQ4_XS | 18.7 GB |
| 4-bit | Q4_K_M | 74.2 GB |
| 4-bit | Q4_K_M | 16.5 GB |
| 4-bit | Q4_K_M | 1.27 GB |
| 4-bit | Q4_K_M | 21.2 GB |
| 4-bit | Q4_K_M | 2.71 GB |
| 4-bit | Q4_K_M | 5.63 GB |
| 5-bit | Q5_K_M | 87 GB |
| 5-bit | Q5_K_M | 19.4 GB |
| 6-bit | Q6_K | 100 GB |
| 6-bit | Q6_K | 22.1 GB |
| 6-bit | Q6_K | 1.56 GB |
| 6-bit | Q6_K | 3.46 GB |
| 6-bit | Q6_K | 7.36 GB |
| 6-bit | Q6_K_P | 105 GB |
| 8-bit | Q8_0 | 2.01 GB |
| 8-bit | Q8_0 | 4.48 GB |
| 8-bit | Q8_0 | 9.53 GB |
| 16-bit | BF16 | 3.78 GB |
| 16-bit | BF16 | 8.42 GB |
| 16-bit | BF16 | 17.9 GB |
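As a rough sanity check on the sizes above, a GGUF file weighs in at approximately parameter count × bits-per-weight ÷ 8. A minimal sketch, assuming approximate llama.cpp bits-per-weight figures (the `APPROX_BPW` values are assumptions, not numbers taken from this repository):

```python
# Rough GGUF file-size estimate: params * bits-per-weight / 8.
# The bpw values below are approximate llama.cpp figures (an assumption);
# real files also carry metadata overhead, so expect a few percent of drift.

APPROX_BPW = {
    "IQ2_M": 2.7,
    "IQ3_M": 3.66,
    "Q4_K_M": 4.85,
    "Q6_K": 6.56,
    "Q8_0": 8.5,
}

def estimate_size_gb(params_billion: float, quant: str) -> float:
    """Estimated file size in GB for a given quant type."""
    return params_billion * APPROX_BPW[quant] / 8

# 122B model at Q4_K_M: close to the 74.2 GB file listed above.
print(round(estimate_size_gb(122, "Q4_K_M"), 1))  # → 74.0
```

The same arithmetic explains why the 2-bit IQ2_M file of the 122B model lands near 40 GB while the 8-bit and BF16 files in the table are clearly smaller companion models rather than quantizations of the full 122B weights.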
## Inference Providers

This model isn't deployed by any Inference Provider.