How to use from llama.cpp
Install from brew
brew install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf morikomorizz/GRM-2.6-Plus-GGUF
# Run inference directly in the terminal:
llama-cli -hf morikomorizz/GRM-2.6-Plus-GGUF
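Once the server is up, any OpenAI-compatible client can talk to it. A minimal curl sketch, assuming llama-server's default port of 8080:
# Send a chat completion request to the local server:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Explain GGUF in one sentence."}]}'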
Install from WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf morikomorizz/GRM-2.6-Plus-GGUF
# Run inference directly in the terminal:
llama-cli -hf morikomorizz/GRM-2.6-Plus-GGUF
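With no tag after the repository name, -hf pulls the repository's default quantization. To pin a specific variant, append its tag after a colon; the sketch below uses Q8_0, matching the 8-bit quantization referenced later in this card:
# Pin a specific quantization by appending its tag:
llama-server -hf morikomorizz/GRM-2.6-Plus-GGUF:Q8_0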
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf morikomorizz/GRM-2.6-Plus-GGUF
# Run inference directly in the terminal:
./llama-cli -hf morikomorizz/GRM-2.6-Plus-GGUF
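Before opening the web UI, you can verify that the server has finished loading the model via llama-server's /health endpoint (default port 8080 assumed):
# Returns an error while the model is still loading, OK once ready:
curl http://localhost:8080/health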
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf morikomorizz/GRM-2.6-Plus-GGUF
# Run inference directly in the terminal:
./build/bin/llama-cli -hf morikomorizz/GRM-2.6-Plus-GGUF
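The commands above produce a CPU-only build. To target a supported GPU, enable the corresponding backend at configure time; a sketch for NVIDIA GPUs using the GGML_CUDA flag (other backends such as Metal or Vulkan have analogous flags):
# Reconfigure with the CUDA backend enabled, then rebuild:
cmake -B build -DGGML_CUDA=ON
cmake --build build -j --target llama-server llama-cli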
Use Docker
docker model run hf.co/morikomorizz/GRM-2.6-Plus-GGUF
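Run as above for an interactive chat session; Docker Model Runner also accepts a one-shot prompt as a trailing argument, sketched here with a placeholder question:
# One-shot prompt instead of an interactive session:
docker model run hf.co/morikomorizz/GRM-2.6-Plus-GGUF "Explain GGUF in one sentence."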

GRM-2.6-Plus (27B) - GGUF

Overview

This repository contains the GGUF quantized files for OrionLLM/GRM-2.6-Plus.

GRM-2.6-Plus is a highly capable 27B-parameter reasoning model built on the Qwen3.6 architecture. It is engineered for general-purpose use and optimized for difficult, high-complexity tasks.

Key Capabilities

  • Elite-Level Reasoning for Hard Tasks: GRM-2.6-Plus is optimized to handle difficult reasoning workloads with clarity, consistency, and strong step-by-step problem-solving ability.
  • High Performance for Its Size: With 27B parameters, the model is designed to deliver excellent capability relative to its scale, balancing strong intelligence with practical deployment.
  • Advanced Coding and Agentic Use: GRM-2.6-Plus is well suited for code generation, structured problem-solving, tool-style workflows, and local agentic applications.
  • Optimized for Practical Deployment: The model aims to remain efficient and usable across capable consumer and workstation hardware while offering strong performance for advanced tasks.

How to Use

These GGUF files are fully compatible with llama.cpp and with popular front ends such as LM Studio and Ollama.

Example using llama.cpp CLI:

./llama-cli -m GRM-2.6-Plus-Q8_0.gguf \
  -p "System: You are a helpful assistant.\nUser: Create a calculator in a single HTML file.\nAssistant:" \
  -n 2048 -c 8192
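
In this example, -n caps the number of generated tokens and -c sets the context window. The same local file can also back a persistent server; a minimal sketch, reusing the 8192-token context from above (the port choice is an assumption):

# Serve the local GGUF file over the OpenAI-compatible API:
./llama-server -m GRM-2.6-Plus-Q8_0.gguf -c 8192 --port 8080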
Model Details

  • Format: GGUF
  • Model size: 27B params
  • Architecture: qwen35
  • Available quantizations: 3-bit, 4-bit, 8-bit, 16-bit

Model tree for morikomorizz/GRM-2.6-Plus-GGUF

  • Base model: Qwen/Qwen3.6-27B
  • This repository is one of 5 quantized versions of the base model.
