Carnice-27b-MLX-Q6

This repo is a straight MLX Q6 quant of kai-os/Carnice-27b for local Apple Silicon inference.

No other edits, additions, merges, or behavioral changes have been made to the model beyond the quantization/export step.

M1 Ultra Mac Studio Throughput

Measured on a Mac Studio with Apple M1 Ultra and 128 GB unified memory.

  • Carnice-27b full weights: 10.776 tokens/sec average generation, 53.939 GB max peak memory
  • Carnice-27b-MLX-Q6: 19.124 tokens/sec average generation, 22.023 GB max peak memory

In other words, the Q6 quant delivers roughly 1.8× the full-precision generation throughput while peaking at about 41% of the memory.

Q6 Quant

This is a straight 6-bit MLX quant.

  • quantization: Q6
  • final exported model size works out to about 6.501 bits per weight
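For reference, a quant like this is typically produced with mlx-lm's convert entry point. The invocation below is a sketch, not the exact command used for this release; the output path is illustrative:

```shell
# Sketch: quantize kai-os/Carnice-27b to 6-bit MLX weights.
# -q enables quantization; --q-bits 6 selects the 6-bit scheme
# (group size is left at the mlx-lm default).
python -m mlx_lm.convert \
  --hf-path kai-os/Carnice-27b \
  --mlx-path ./Carnice-27b-MLX-Q6 \
  -q --q-bits 6
```

The effective 6.501 bits per weight comes out slightly above 6 because quantization scales and biases are stored alongside the packed weights.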

Original Model

The text below is carried over from kai-os/Carnice-27b, with the quant-specific notes above added for this MLX release.

Carnice-27b is the merged full-model release of the Trinity Hermes-Agent training run on top of Qwen/Qwen3.5-27B.

This repo contains the quantized MLX export of that model.

Acknowledgements

This work would not have been possible without Zachary Mueller, Lambda, Teknium, and Nous Research.

Trained using traces from lambda/hermes-agent-reasoning-traces.

Trinity Process

Stage A: Premium Reasoning Backbone

  • 3300 train rows
  • 193 validation rows
  • 12288 max length
  • final eval loss 0.5316
  • final eval perplexity 1.7016

Stage B: Hermes Alignment

  • widened Carnice + DJ + Lambda alignment mix
  • 2269 train rows
  • 80 validation rows
  • final eval loss 0.2336
  • final eval perplexity 1.2632

Stage C: Carnice Polish

  • 600 train rows
  • 60 validation rows
  • final eval loss 0.2310
  • final eval perplexity 1.2599
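The eval perplexities above are simply the exponential of the eval losses, which is a quick way to sanity-check the reported numbers:

```python
import math

# Perplexity = exp(cross-entropy loss); check the three Trinity stages.
stages = {"A": 0.5316, "B": 0.2336, "C": 0.2310}
for name, loss in stages.items():
    print(f"Stage {name}: perplexity = {math.exp(loss):.4f}")
```

Each computed value matches the reported perplexity to rounding.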

Intended Use

Carnice-27b is tuned for Hermes-Agent style terminal, file, browser, repo, debugging, and multi-step tool workflows.

Benchmark Status

Reproducible benchmark runs are not attached yet. They will be added only after the dedicated benchmark box run is complete.

Loading with mlx-lm

python -m mlx_lm.generate \
  --model /path/to/Carnice-27b-MLX-Q6 \
  --prompt "Write a bash command to list large files recursively."
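The same model can be driven from Python via mlx-lm's load/generate API; this is a minimal sketch, assuming a current mlx-lm release and a local copy of the quantized weights at a path of your choosing:

```python
from mlx_lm import load, generate

# Load the quantized MLX export (path is illustrative).
model, tokenizer = load("/path/to/Carnice-27b-MLX-Q6")

prompt = "Write a bash command to list large files recursively."
text = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(text)
```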