---
title: Qwen API
emoji: 🤖
colorFrom: blue
colorTo: purple
sdk: docker
app_file: app.py
pinned: false
license: apache-2.0
tags:
  - qwen
  - uncensored
  - llama-cpp
  - gguf
suggested_hardware: a10g-small
---

# Qwen3.5-9B Uncensored API Interface

API interface for [HauhauCS/Qwen3.5-9B-Uncensored-HauhauCS-Aggressive](https://huggingface.co/HauhauCS/Qwen3.5-9B-Uncensored-HauhauCS-Aggressive).

## Features

- 9B parameters with a 262K-token context window
- Fully uncensored (0/465 refusals)
- Multimodal-capable (text, image, video)
- Supports 201 languages
- Q4_K_M quantization via llama.cpp

## API Usage

### Python

```python
from gradio_client import Client

client = Client("Ngixdev/qwen-api")

result = client.predict(
    prompt="Your question here",
    system_prompt="You are a helpful assistant",
    temperature=0.7,
    top_p=0.8,
    max_tokens=1024,
    api_name="/api_generate"
)
print(result)
```

### cURL

```bash
curl -X POST https://ngixdev-qwen-api.hf.space/api/api_generate \
    -H "Content-Type: application/json" \
    -d '{
        "data": [
            "Your question here",
            "You are a helpful assistant",
            0.7,
            0.8,
            1024
        ]
    }'
```

## Parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| prompt | string | required | User prompt/question |
| system_prompt | string | "" | System instruction |
| temperature | float | 0.7 | Sampling temperature (0.0-2.0) |
| top_p | float | 0.8 | Nucleus sampling (0.0-1.0) |
| max_tokens | int | 1024 | Maximum tokens to generate |
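The positional `data` array in the cURL call maps to the parameters above, in table order. A minimal sketch of a helper that builds that JSON body (it assumes the payload shape shown in the cURL example and uses only the standard library; it does not perform the HTTP request itself):

```python
import json

def build_payload(prompt, system_prompt="", temperature=0.7, top_p=0.8, max_tokens=1024):
    """Build the positional `data` array expected by the /api/api_generate endpoint.

    Order matters: prompt, system_prompt, temperature, top_p, max_tokens,
    matching the parameter table above.
    """
    return {"data": [prompt, system_prompt, temperature, top_p, max_tokens]}

# Serialize to the JSON body you would POST with requests, curl, etc.
body = json.dumps(build_payload("Your question here", "You are a helpful assistant"))
```

Pair this with any HTTP client of your choice; the defaults mirror the table, so only `prompt` is required.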