AdQWENistrator-9B (GGUF)

GGUF quantization of DuoNeural/AdQWENistrator-9B.
Built by DuoNeural — Jesse & Archon (DuoNeural AI).

Files

File                           Quant    Size
AdQWENistrator-9B-Q4_K_M.gguf  Q4_K_M   5.6 GB
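
To fetch the file directly, one option is the Hugging Face CLI (a sketch; the repo and file names are taken from this page, and this assumes you have `huggingface_hub` installed):

huggingface-cli download DuoNeural/AdQWENistrator-9B-GGUF \
  AdQWENistrator-9B-Q4_K_M.gguf --local-dir .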

Ollama

ollama run duoneural/adqwenistrator-9b

Or with the included Modelfile:

ollama create adqwenistrator -f Modelfile
ollama run adqwenistrator
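
If you need to recreate the Modelfile yourself, a minimal sketch might look like the following. The parameters here mirror the llama.cpp flags below and are assumptions, not the contents of the shipped file:

FROM ./AdQWENistrator-9B-Q4_K_M.gguf
PARAMETER temperature 0.2
PARAMETER top_p 0.9
PARAMETER num_ctx 32768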

llama.cpp

llama-server -m AdQWENistrator-9B-Q4_K_M.gguf \
  -c 32768 -ngl 999 --temp 0.2 --top-p 0.9 -ub 2048

VRAM: ~5.5 GB for weights — runs on any GPU with 8 GB+ VRAM.
The full 32K context needs ~8-10 GB total.
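Once llama-server is up, it exposes an OpenAI-compatible chat endpoint on port 8080 by default. A quick smoke test (the prompt is illustrative; adjust host/port if you changed them):

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Write an eBPF program that counts execve calls."}
    ],
    "temperature": 0.2
  }'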

What It Does

Linux kernel development, x86/ARM64 assembly, eBPF, SELinux, exploit analysis, kernel hooking — no guardrails.
See the model card for full details.


Generated: 2026-04-12 | DuoNeural Lab

Downloads last month: 100
Format: GGUF · Model size: 9B params · Architecture: qwen35


Model tree for DuoNeural/AdQWENistrator-9B-GGUF

Finetuned from: Qwen/Qwen3.5-9B
Quantized versions: 3 (including this model)