How to use from Hermes Agent
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp
# Start a local OpenAI-compatible server:
llama-server -hf VECTORVV1/DeepSeek-R1-Distill-Qwen-14B
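Once the server is up, it exposes an OpenAI-compatible API on port 8080 by default. A minimal sketch of building a request for its chat endpoint, assuming the default host and port (`build_chat_request` is a hypothetical helper, not part of llama.cpp):

```python
import json

# Hypothetical helper: build the URL and JSON body for llama-server's
# OpenAI-compatible chat completions endpoint (default port 8080).
def build_chat_request(model, prompt, base_url="http://127.0.0.1:8080/v1"):
    url = f"{base_url}/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, json.dumps(payload)

url, body = build_chat_request(
    "VECTORVV1/DeepSeek-R1-Distill-Qwen-14B", "Hello!"
)
print(url)  # http://127.0.0.1:8080/v1/chat/completions
```

You can POST `body` to `url` with curl or any HTTP client to smoke-test the server before pointing Hermes at it.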
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup
# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default VECTORVV1/DeepSeek-R1-Distill-Qwen-14B
Run Hermes
hermes
GPT-OSS-20B-Uncensored-HauhauCS-Aggressive

Quick Links

Join the Discord for updates, roadmaps, projects, or just to chat.

Uncensored version of GPT-OSS 20B. This is the aggressive variant, tuned harder for fewer refusals.

If you want something more conservative, check out the Balanced variant.

Format

MXFP4 GGUF. Works with llama.cpp, Ollama, LM Studio, and anything else that loads GGUFs.
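If a download seems corrupted, a quick sanity check is possible because every GGUF file begins with the 4-byte ASCII magic `GGUF`. A minimal sketch:

```python
# Sanity-check a downloaded file: per the GGUF format, the file
# must start with the 4-byte ASCII magic b"GGUF".
def looks_like_gguf(path):
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"
```

Usage: `looks_like_gguf("gpt-oss-20b.gguf")` returns `True` only if the header magic is present.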

LM Studio

Compatible with Reasoning Effort custom buttons. To use them, put the model in:

LM Models\lmstudio-community\gpt-oss-20b-GGUF\
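To avoid typos in the nested folder names, the path above can be assembled programmatically. A sketch assuming the `LM Models` directory sits under your home folder (LM Studio lets you relocate it, so adjust `base` to your install):

```python
from pathlib import Path

# Assumption: "LM Models" is under the home directory; change `base`
# if your LM Studio models folder lives elsewhere.
base = Path.home() / "LM Models"
target = base / "lmstudio-community" / "gpt-oss-20b-GGUF"
print(target)  # the folder the GGUF file should go in
```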

About

  • Base model: GPT-OSS 20B
  • Abliterated with custom tooling
  • Pick this one if you want maximum uncensoring
Downloads last month: 374
Format: GGUF
Model size: 21B params
Architecture: gpt-oss