Carnice-V2-27B GGUF

GGUF exports of kai-os/carnice-v2-27b, a merged BF16 SFT of Qwen/Qwen3.6-27B fine-tuned on Hermes-style agent traces.

Recommended Files

| File | Size | Use |
|------|------|-----|
| carnice-v2-27b-IQ2_M.gguf | 9.4 GB | Best 16 GB-GPU target. Built with a Carnice/Hermes imatrix calibration pass. |
| carnice-v2-27b-Q2_K.gguf | 10 GB | Safest 16 GB-GPU fallback. More widely compatible than IQ quants, lower quality than the imatrix IQ2_M. |
| carnice-v2-27b-Q4_K_M.gguf | 16 GB | Balanced local quality tier. May require shorter context or partial CPU offload on a 16 GB GPU. |
| carnice-v2-27b-Q5_K_M.gguf | 18 GB | Better quality tier for 24 GB+ or split/offload setups. |
| carnice-v2-27b-Q8_0.gguf | 27 GB | Near-lossless quant tier for high-memory systems. |
| carnice-v2-27b-bf16.gguf | 51 GB | Full BF16 GGUF export. |

For a 16GB GPU, start with IQ2_M if your runtime supports IQ quants and the qwen35 architecture this model uses. If the runtime is older or fails to load IQ quants, fall back to Q2_K.
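
To pull a single quant without cloning the whole repo, the standard Hugging Face CLI is enough. A minimal sketch, assuming the repo id shown on this page (kai-os/Carnice-V2-27b-GGUF) and a recent huggingface_hub install:

huggingface-cli download kai-os/Carnice-V2-27b-GGUF \
  carnice-v2-27b-IQ2_M.gguf \
  --local-dir .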

Benchmarks From The Source SFT

| Metric | Qwen3.6-27B base | Carnice SFT |
|--------|------------------|-------------|
| IFEval prompt strict (limit 20) | 85.0% | 90.0% |
| IFEval prompt loose (limit 20) | 85.0% | 90.0% |
| IFEval instruction strict (limit 20) | 90.0% | 93.3% |
| IFEval instruction loose (limit 20) | 90.0% | 93.3% |
| Held-out assistant-token eval loss | 0.607 | 0.414 |
| Held-out assistant-token eval perplexity | 1.835 | 1.513 |

Scope note: these numbers are checks of the source SFT checkpoint, not benchmark scores for the individual GGUF quants in this repo. The full benchmark artifact bundle is in the merged model repo: kai-os/carnice-v2-27b.
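
As a consistency check, the perplexity rows are just the exponentiated loss rows: perplexity = exp(loss), so exp(0.607) ≈ 1.835 for the base and exp(0.414) ≈ 1.513 for the SFT, matching the table.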

Runtime Note

This model converts as qwen35 GGUF with hybrid attention/SSM layers. Use a recent llama.cpp build; older GGUF runtimes may not know this architecture yet.
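
One way to get a current build, sketched for a CUDA machine (the CMake flags are standard llama.cpp options; swap -DGGML_CUDA=ON for your backend):

git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j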

Example (-ngl takes an integer layer count, not "all"; 99 is a common "offload everything that fits" value):

llama-cli \
  -m carnice-v2-27b-Q2_K.gguf \
  -ngl 99 \
  -c 8192 \
  -p "Write a short plan for a Hermes agent debugging a failing tool call."

For long context on 16GB, keep the weight quant small and tune the KV cache aggressively. A file that fits in VRAM does not mean a 128K context will also fit.
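
A sketch of that tuning with stock llama.cpp flags (exact flag spellings drift between versions). Quantized V-cache generally requires flash attention, and since this model mixes attention with SSM layers, how much cache memory these flags actually save is worth verifying on your build:

llama-cli \
  -m carnice-v2-27b-IQ2_M.gguf \
  -ngl 99 \
  -c 32768 \
  -fa \
  --cache-type-k q8_0 \
  --cache-type-v q8_0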
