# Carnice-27b-MLX-Q6
This repo is a straight MLX Q6 quant of kai-os/Carnice-27b for local Apple Silicon inference.
No other edits, additions, merges, or behavioral changes have been made to the model beyond the quantization/export step.
## M1 Ultra Mac Studio Throughput
Measured on a Mac Studio with Apple M1 Ultra and 128 GB unified memory.
- Carnice-27b full weights: 10.776 tokens/sec average generation, 53.939 GB peak memory
- Carnice-27b-MLX-Q6: 19.124 tokens/sec average generation, 22.023 GB peak memory
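The relative gains implied by the measurements above can be checked with quick arithmetic (a sketch; the values are the ones listed above):

```python
# Measured on the M1 Ultra Mac Studio (numbers from the list above).
full_tps, q6_tps = 10.776, 19.124    # tokens/sec average generation
full_mem, q6_mem = 53.939, 22.023    # GB peak memory

speedup = q6_tps / full_tps          # generation speedup from the Q6 quant
mem_ratio = q6_mem / full_mem        # fraction of full-precision peak memory

print(f"~{speedup:.2f}x faster, ~{mem_ratio:.0%} of the memory")
```

In round numbers, the Q6 export generates about 1.8x faster while using roughly 40% of the full-precision peak memory.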
## Q6 Quant

This is a straight 6-bit MLX quant.

- quantization: Q6
- final exported model size works out to about 6.501 bits per weight
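The ~6.501 bits-per-weight figure is consistent with MLX's group-wise affine quantization, which stores a fp16 scale and fp16 bias for each group of weights on top of the 6-bit values. A sketch of the arithmetic, assuming the MLX default group size of 64 (the small remainder above 6.5 comes from tensors left unquantized, such as embeddings and norms):

```python
# Effective bits per weight for group-wise affine quantization:
# each group of `group_size` weights shares one fp16 scale and one
# fp16 bias, i.e. 32 extra bits amortized over the group.
def effective_bits(q_bits: int, group_size: int = 64, overhead_bits: int = 32) -> float:
    return q_bits + overhead_bits / group_size

print(effective_bits(6))  # 6.5, close to the reported 6.501
```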
## Original Model
The text below is carried over from kai-os/Carnice-27b, with the quant-specific notes above added for this MLX release.
Carnice-27b is the merged full-model release of the Trinity Hermes-Agent training run on top of Qwen/Qwen3.5-27B.
This repo contains the quantized MLX export of that model.
## Acknowledgements
This work would not have been possible without Zachary Mueller, Lambda, Teknium, and Nous Research.
Trained using traces from lambda/hermes-agent-reasoning-traces.
## Trinity Process

### Stage A: Premium Reasoning Backbone

- 3300 train rows, 193 validation rows, 12288 max length
- final eval loss: 0.5316
- final eval perplexity: 1.7016
### Stage B: Hermes Alignment

- widened Carnice + DJ + Lambda alignment mix
- 2269 train rows, 80 validation rows
- final eval loss: 0.2336
- final eval perplexity: 1.2632
### Stage C: Carnice Polish

- 600 train rows, 60 validation rows
- final eval loss: 0.2310
- final eval perplexity: 1.2599
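For each stage, the reported eval perplexity is just the exponential of the eval loss, so the figures can be sanity-checked directly (tiny gaps are rounding from the unrounded loss values):

```python
import math

# (final eval loss, reported eval perplexity) per Trinity stage
stages = {
    "A": (0.5316, 1.7016),
    "B": (0.2336, 1.2632),
    "C": (0.2310, 1.2599),
}

for name, (loss, reported_ppl) in stages.items():
    # perplexity = exp(cross-entropy loss)
    print(name, round(math.exp(loss), 4), "reported:", reported_ppl)
```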
## Intended Use
Carnice-27b is tuned for Hermes-Agent style terminal, file, browser, repo, debugging, and multi-step tool workflows.
## Benchmark Status
Reproducible benchmark runs are not attached yet. They will be added only after the dedicated benchmark box run is complete.
## Loading with mlx-lm

```shell
python -m mlx_lm.generate \
  --model /path/to/Carnice-27b-MLX-Q6 \
  --prompt "Write a bash command to list large files recursively."
```
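The same model can also be driven from Python through the mlx-lm API (a minimal sketch; the model path is a placeholder, and running it requires Apple Silicon with mlx-lm installed):

```python
from mlx_lm import load, generate

# Load the quantized weights and tokenizer from a local path (placeholder).
model, tokenizer = load("/path/to/Carnice-27b-MLX-Q6")

prompt = "Write a bash command to list large files recursively."
text = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(text)
```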
