GGUF
conversational
How to use from Hermes Agent
# Gated model: log in with an HF token that has gated-access permission
hf auth login
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp
# Start a local OpenAI-compatible server:
llama-server -hf Blazed-Forge/Testing-Grounds-GGUF:Q5_K_S
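The `-hf` argument takes an `owner/repo:quantization` reference. As a quick sketch of how that reference decomposes (plain shell parameter expansion, nothing llama.cpp-specific):

```shell
# Split an "-hf" model reference into its repo and quantization parts.
MODEL_REF="Blazed-Forge/Testing-Grounds-GGUF:Q5_K_S"
REPO="${MODEL_REF%%:*}"    # everything before the first ":" -> the HF repo
QUANT="${MODEL_REF##*:}"   # everything after the last ":"  -> the quant tag
echo "repo=$REPO quant=$QUANT"
```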
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup
# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default Blazed-Forge/Testing-Grounds-GGUF:Q5_K_S
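Before wiring Hermes to the endpoint, it can help to confirm the server is actually answering on that base URL. A minimal smoke test, assuming the default llama-server port 8080:

```shell
# Probe the OpenAI-compatible endpoint Hermes will use.
BASE_URL="http://127.0.0.1:8080/v1"
curl -s "$BASE_URL/models" || echo "server not reachable at $BASE_URL"
```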
Run Hermes
hermes
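Hermes is just an OpenAI-compatible client here, so the same endpoint can be exercised directly. A sketch of a raw chat-completion request, assuming the llama-server started above is running on port 8080:

```shell
# Build a chat-completion payload, validate it locally, then send it.
PAYLOAD=$(cat <<'EOF'
{
  "model": "Blazed-Forge/Testing-Grounds-GGUF:Q5_K_S",
  "messages": [{"role": "user", "content": "Say hello."}]
}
EOF
)
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload ok"
curl -s http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" || echo "server not reachable"
```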
Note: this is a gated repository. It is publicly listed, but you must agree to share your contact information and accept the access conditions on Hugging Face before you can download its files.
Downloads last month: 27
Format: GGUF
Model size: 31B params
Architecture: gemma4
Quantization: 5-bit (Q5_K_S)
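For planning the download, a back-of-the-envelope size estimate: at roughly 5.5 bits per weight (an approximate figure for Q5_K_S quantization, not taken from this page), a 31B-parameter GGUF comes out near 20 GiB:

```shell
# Rough GGUF size estimate: parameters * bits-per-weight / 8 bytes, in GiB.
PARAMS=31000000000   # "31B params" from the model card
BITS=5.5             # approximate Q5_K_S bits/weight (assumption)
python3 -c "print(round($PARAMS * $BITS / 8 / 2**30, 1), 'GiB')"
# -> 19.8 GiB
```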

Inference Providers: this model isn't deployed by any Inference Provider.