---
license: mit
tags:
  - uncensored
  - glm4
  - moe
language:
  - en
  - zh
---

# GLM-4.7-Flash-Uncensored-HauhauCS-Aggressive

> **[Join the Discord](https://discord.gg/SZ5vacTXYf)** for updates, roadmaps, projects, or just to chat.

GLM-4.7 Flash uncensored by HauhauCS.

## About

No changes to datasets or capabilities. The model remains fully functional, with everything the original authors intended intact, just without the refusals.

These releases aim to be lossless: refusal behavior is removed while the original model's capabilities are preserved.

## Aggressive vs Balanced

The Aggressive variant removes more refusal behavior. Use this if the Balanced variant still refuses too much.

For agentic coding or tasks requiring higher reliability, use the [Balanced variant](https://huggingface.co/HauhauCS/GLM-4.7-Flash-Uncensored-HauhauCS-Balanced) instead.

## Downloads

| File | Quant | Size |
|------|-------|------|
| GLM-4.7-Flash-Uncensored-HauhauCS-Aggressive-FP16.gguf | FP16 | 56 GB |
| GLM-4.7-Flash-Uncensored-HauhauCS-Aggressive-Q8_0.gguf | Q8_0 | 30 GB |
| GLM-4.7-Flash-Uncensored-HauhauCS-Aggressive-Q6_K.gguf | Q6_K | 23 GB |
| GLM-4.7-Flash-Uncensored-HauhauCS-Aggressive-Q4_K_M.gguf | Q4_K_M | 17 GB |

## Specs

- 30B-A3B MoE (31B total, ~3B active per forward pass)
- 202K context
- Based on [zai-org/GLM-4.7-Flash](https://huggingface.co/zai-org/GLM-4.7-Flash)

## Recommended Settings

From the official Z.ai authors:

**General use:**
- `--temp 1.0 --top-p 0.95`

**Tool-calling / agentic:**
- `--temp 0.7 --top-p 1.0`

**Important:**
- Disable repeat penalty (or `--repeat-penalty 1.0`)
- For llama.cpp: use `--min-p 0.01` (default 0.05 is too high)
- Use `--jinja` flag for llama.cpp
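
Putting the settings above together, a llama.cpp server launch might look like the sketch below. The GGUF filename matches the Q4_K_M download in the table; the context size is illustrative, raise it toward 202K if you have the memory.

```shell
# Sketch of a llama-server launch with the recommended general-use settings.
# --jinja              : use the model's embedded chat template
# --temp / --top-p     : general-use sampling recommended by the Z.ai authors
# --min-p 0.01         : llama.cpp's default of 0.05 is too high for this model
# --repeat-penalty 1.0 : disables the repeat penalty
llama-server \
  -m GLM-4.7-Flash-Uncensored-HauhauCS-Aggressive-Q4_K_M.gguf \
  --jinja \
  --temp 1.0 --top-p 0.95 --min-p 0.01 \
  --repeat-penalty 1.0 \
  -c 32768
```

For tool-calling or agentic workloads, swap the sampling flags for `--temp 0.7 --top-p 1.0` as noted above.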

**Note:** Not recommended for Ollama due to chat template issues. Works well with llama.cpp, LM Studio, Jan.

## Usage

Works with llama.cpp, LM Studio, Jan, koboldcpp, etc.