Instructions for using OusiaResearch/AurethV2-4B-GGUF with libraries, notebooks, and local apps. Follow the links below to get started.
- Libraries
- llama-cpp-python
How to use OusiaResearch/AurethV2-4B-GGUF with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

# Download a quantized checkpoint from the Hub; filename accepts a glob,
# so pick whichever quant suits your hardware.
llm = Llama.from_pretrained(
    repo_id="OusiaResearch/AurethV2-4B-GGUF",
    filename="*Q4_K_M.gguf",
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
print(response["choices"][0]["message"]["content"])
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use OusiaResearch/AurethV2-4B-GGUF with llama.cpp:
Install from brew
brew install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf OusiaResearch/AurethV2-4B-GGUF:Q4_K_M
# Run inference directly in the terminal:
llama-cli -hf OusiaResearch/AurethV2-4B-GGUF:Q4_K_M
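Once llama-server is up, any OpenAI-compatible client can talk to it. A minimal sketch with the openai Python package, assuming the server's default port 8080 and no API key configured:
# pip install openai
from openai import OpenAI

# llama-server exposes an OpenAI-compatible API under /v1 (default port 8080).
client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")
resp = client.chat.completions.create(
    model="OusiaResearch/AurethV2-4B-GGUF:Q4_K_M",  # ignored when only one model is loaded
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(resp.choices[0].message.content)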
Install from WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf OusiaResearch/AurethV2-4B-GGUF:Q4_K_M
# Run inference directly in the terminal:
llama-cli -hf OusiaResearch/AurethV2-4B-GGUF:Q4_K_M
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf OusiaResearch/AurethV2-4B-GGUF:Q4_K_M
# Run inference directly in the terminal:
./llama-cli -hf OusiaResearch/AurethV2-4B-GGUF:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf OusiaResearch/AurethV2-4B-GGUF:Q4_K_M
# Run inference directly in the terminal:
./build/bin/llama-cli -hf OusiaResearch/AurethV2-4B-GGUF:Q4_K_M
Use Docker
docker model run hf.co/OusiaResearch/AurethV2-4B-GGUF:Q4_K_M
- LM Studio
- Jan
- vLLM
How to use OusiaResearch/AurethV2-4B-GGUF with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "OusiaResearch/AurethV2-4B-GGUF"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OusiaResearch/AurethV2-4B-GGUF",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
Use Docker
docker model run hf.co/OusiaResearch/AurethV2-4B-GGUF:Q4_K_M
- Ollama
How to use OusiaResearch/AurethV2-4B-GGUF with Ollama:
ollama run hf.co/OusiaResearch/AurethV2-4B-GGUF:Q4_K_M
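Ollama also serves a local API once the model is pulled; a minimal sketch using the ollama Python package (an assumption: any OpenAI-compatible client pointed at http://localhost:11434/v1 also works):
# pip install ollama
import ollama

resp = ollama.chat(
    model="hf.co/OusiaResearch/AurethV2-4B-GGUF:Q4_K_M",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(resp["message"]["content"])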
- Unsloth Studio
How to use OusiaResearch/AurethV2-4B-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh
# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for OusiaResearch/AurethV2-4B-GGUF to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex
# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for OusiaResearch/AurethV2-4B-GGUF to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for OusiaResearch/AurethV2-4B-GGUF to start chatting
- Pi
How to use OusiaResearch/AurethV2-4B-GGUF with Pi:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp
# Start a local OpenAI-compatible server:
llama-server -hf OusiaResearch/AurethV2-4B-GGUF:Q4_K_M
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "OusiaResearch/AurethV2-4B-GGUF:Q4_K_M" }
      ]
    }
  }
}
Run Pi
# Start Pi in your project directory:
pi
- Hermes Agent
How to use OusiaResearch/AurethV2-4B-GGUF with Hermes Agent:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp
# Start a local OpenAI-compatible server:
llama-server -hf OusiaResearch/AurethV2-4B-GGUF:Q4_K_M
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup
# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default OusiaResearch/AurethV2-4B-GGUF:Q4_K_M
Run Hermes
hermes
- Docker Model Runner
How to use OusiaResearch/AurethV2-4B-GGUF with Docker Model Runner:
docker model run hf.co/OusiaResearch/AurethV2-4B-GGUF:Q4_K_M
- Lemonade
How to use OusiaResearch/AurethV2-4B-GGUF with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull OusiaResearch/AurethV2-4B-GGUF:Q4_K_M
Run and chat with the model
lemonade run user.AurethV2-4B-GGUF-Q4_K_M
List all available models
lemonade list
Aureth V2 — 4B GGUF
Aureth V2 is a finetuned instance of Qwen 3.5-4B-Instruct, trained on the Aureth Agent SFT Robust curriculum and exported to GGUF format for local inference.
◆ Model Overview
| Property | Value |
|---|---|
| Base Architecture | Qwen 3.5-4B-Instruct (dense, 40 layers) |
| Finetuning | Supervised fine-tuning on Aureth-Agent-SFT-Robust (243k examples) |
| Export | GGUF via Unsloth |
| Format | Standard GGUF — compatible with llama.cpp, Ollama, and other backends |
| Chat Template | Qwen-instruct (Jinja-compatible) |
| Quantizations | Q2_K_L · Q3_K_M · Q4_K_M · Q5_K_M · Q6_K · Q8_0 · BF16 |
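The chat template is the Qwen-instruct ChatML layout. A minimal formatting sketch, assuming the standard ChatML special tokens; the Jinja template bundled in the GGUF is the authoritative version:
def format_chatml(messages):
    # Qwen-instruct templates wrap each turn as
    # <|im_start|>{role}\n{content}<|im_end|>, then open an assistant turn.
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = format_chatml([
    {"role": "system", "content": "You are Aureth by Ousia Research."},
    {"role": "user", "content": "What is the capital of France?"},
])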
◆ Quantizations
| Quantization | File Size | VRAM (est.) | Use Case |
|---|---|---|---|
| Q4_K_M | 2.71 GB | ~3.5 GB | Recommended daily driver |
| Q3_K_M | 2.26 GB | ~3.0 GB | Memory-constrained setups |
| Q5_K_M | 3.07 GB | ~4.0 GB | Higher quality when headroom allows |
| Q2_K_L | 2.07 GB | ~2.8 GB | Smallest quant; significant quality trade-off |
| Q6_K | 3.46 GB | ~4.5 GB | Near-FP16 quality, tighter fit |
| Q8_0 | 4.48 GB | ~6 GB | Near-lossless; use when memory is not a constraint |
| BF16 | 8.42 GB | ~10 GB | Full precision; Metal GPU or high-VRAM GPU only |
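The VRAM column roughly tracks file size plus a fixed slice of runtime overhead (KV cache and compute buffers on top of the memory-mapped weights). A toy sanity check of that rule of thumb; the ~1 GB overhead figure is an assumption for a 2K context and grows with longer contexts:
# Rough VRAM estimate: model file size + assumed runtime overhead.
quants = {"Q4_K_M": 2.71, "Q5_K_M": 3.07, "Q6_K": 3.46, "Q8_0": 4.48}
OVERHEAD_GB = 1.0  # assumption: KV cache + buffers at 2K context
for name, file_gb in quants.items():
    print(f"{name}: ~{file_gb + OVERHEAD_GB:.1f} GB")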
◆ Hardware Guidance
Apple Silicon (M-series, Metal)
Recommended: Q4_K_M — stable 16–22 tok/s on M3 8GB with full GPU offload (-ngl 99 in llama.cpp, num_gpu 99 in Ollama).
NVIDIA GPU (llama.cpp cuBLAS)
- RTX 3060 12GB: Q4_K_M or Q5_K_M recommended
- RTX 4060 Ti 16GB: Q6_K or BF16 viable
- T4 (Kaggle): Q3_K_M or Q4_K_M — fits comfortably in 16GB
CPU-only (llama-cli)
- Q4_K_M: ~4–6 tok/s on modern 8-core CPUs
- Q2/Q3: ~6–9 tok/s (smaller weights to read, at some dequantization overhead)
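Throughput varies widely with hardware; a quick way to measure your own machine, sketched with llama-cpp-python (the prompt and token count are arbitrary):
import time
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="OusiaResearch/AurethV2-4B-GGUF",
    filename="*Q4_K_M.gguf",
    n_gpu_layers=0,  # CPU-only, to match the figures above
)
start = time.time()
out = llm("Explain GGUF quantization in one paragraph.", max_tokens=128)
tokens = out["usage"]["completion_tokens"]
print(f"{tokens / (time.time() - start):.1f} tok/s")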
◆ Quick Start
llama-cli (local GGUF)
# Q4_K_M example
llama-cli -hf OusiaResearch/AurethV2-4B-GGUF:Q4_K_M \
  -p "You are Aureth by Ousia Research. Report uncertainty honestly. Be direct." \
  -i -r "User:" -c 2048 -ngl 99 -fa
# The repo also ships a BF16 multimodal projector
# (Qwen_Qwen3.5-4B_1777947324.BF16-mmproj.gguf) for frontends that support --mmproj.
Ollama (pull & run)
# Create Modelfile
echo 'FROM hf.co/OusiaResearch/AurethV2-4B-GGUF:Q4_K_M
PARAMETER num_gpu 99
PARAMETER num_ctx 2048' > Modelfile
ollama create aureth-v2 -f Modelfile
ollama run aureth-v2
Homebrew llama.cpp (macOS)
brew install llama.cpp
llama-cli -hf OusiaResearch/AurethV2-4B-GGUF \
-p "You are Aureth by Ousia Research." -i -r "User:"
◆ Training Details
- Base model: Qwen/Qwen3.5-4B-Instruct
- Finetuning framework: Unsloth (LoRA + SFT pipeline)
- Training data: OusiaResearch/Aureth-Agent-SFT-Robust
- Curriculum categories: core · func_call · agentic · anti_sycophancy
- Data sources: NousResearch · teknium · lambda · DJLougen · interstellarninja · camilablank
- System prompt: "You are Aureth by Ousia Research. Report uncertainty honestly. Disagree when wrong. Correct errors cleanly. Show reasoning in blocks when complex. Be direct."
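To reproduce the training-time persona at inference, send that system prompt as the first message. A sketch against any of the OpenAI-compatible servers shown earlier (the base URL assumes llama-server's default port):
from openai import OpenAI

SYSTEM_PROMPT = (
    "You are Aureth by Ousia Research. Report uncertainty honestly. "
    "Disagree when wrong. Correct errors cleanly. Show reasoning in "
    "blocks when complex. Be direct."
)

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")
resp = client.chat.completions.create(
    model="OusiaResearch/AurethV2-4B-GGUF:Q4_K_M",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "What is the capital of France?"},
    ],
)
print(resp.choices[0].message.content)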
◆ Model Card
- Organization: Ousia Research
- License: Apache 2.0
- Base license: Qwen 3.5-4B-Instruct (Alibaba, Apache 2.0)
- Version: V2 (May 2026)
◆ Related Models
| Model | Arch | Size | Notes |
|---|---|---|---|
| Aureth Compiler | Qwen 3.5 | 4B | Primary — this release |
| Aureth Architect | Qwen 2.5 | 9B | Larger variant |
| Aureth-Agent-SFT-Robust | Dataset | 243k rows | Training curriculum |
Ousia Research — autonomous reasoning, forged open.