How to use from MLX LM
Generate or start a chat session
# Install MLX LM
uv tool install mlx-lm
# Interactive chat REPL
mlx_lm.chat --model "bearzi/Trinity-Mini-oQ6"
Run an OpenAI-compatible server
# Install MLX LM
uv tool install mlx-lm
# Start the server
mlx_lm.server --model "bearzi/Trinity-Mini-oQ6"
# Calling the OpenAI-compatible server with curl
curl -X POST "http://localhost:8080/v1/chat/completions" \
   -H "Content-Type: application/json" \
   --data '{
     "model": "bearzi/Trinity-Mini-oQ6",
     "messages": [
       {"role": "user", "content": "Hello"}
     ]
   }'
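
You can also call the server from Python; a minimal sketch, assuming the openai client package (pip install openai) is installed and the server above is running on its default port:

# Query the local OpenAI-compatible endpoint started by mlx_lm.server
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # mlx_lm.server's default port
    api_key="not-needed",                 # the local server does not check the key
)

response = client.chat.completions.create(
    model="bearzi/Trinity-Mini-oQ6",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)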

Trinity-Mini-oQ6

An oQ6 mixed-precision MLX quantization of Trinity-Mini, produced via oMLX.

  • Quantization: oQ6 (sensitivity-driven mixed precision, group_size=64)
  • Format: MLX safetensors
  • Compatible with: mlx-lm, mlx-vlm, oMLX on Apple Silicon

Usage

from mlx_lm import load, generate
model, tokenizer = load("bearzi/Trinity-Mini-oQ6")
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello"}],
    add_generation_prompt=True,
)
response = generate(model, tokenizer, prompt=prompt, max_tokens=512, verbose=True)
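
For token-by-token output, mlx-lm also exposes stream_generate; a minimal sketch, assuming a recent mlx-lm version in which the generator yields response objects with a .text field:

# Stream the response token by token instead of waiting for the full completion
from mlx_lm import load, stream_generate

model, tokenizer = load("bearzi/Trinity-Mini-oQ6")
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello"}],
    add_generation_prompt=True,
)
for chunk in stream_generate(model, tokenizer, prompt, max_tokens=512):
    print(chunk.text, end="", flush=True)
print()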

About oQ

oQ measures per-layer quantization sensitivity through calibration and allocates bits where they matter most — critical layers stay at higher precision, tolerant layers compress aggressively. Target averages of 2/3/4/6/8 bits are provided; actual per-layer bits vary by measured sensitivity.
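
As a rough illustration, the resulting per-layer allocation can be inspected from the model's quantization config; this sketch assumes the usual MLX convention of storing per-layer overrides inside the "quantization" section of config.json and uses huggingface_hub to fetch it:

# Inspect how many bits each layer received in the mixed-precision quantization
import json
from huggingface_hub import hf_hub_download

config_path = hf_hub_download("bearzi/Trinity-Mini-oQ6", "config.json")
with open(config_path) as f:
    quant = json.load(f).get("quantization", {})

for name, params in quant.items():
    # Global defaults are plain values; per-layer overrides are dicts
    if isinstance(params, dict):
        print(f"{name}: bits={params.get('bits')}, group_size={params.get('group_size')}")
    else:
        print(f"{name}: {params}")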

See the oQ documentation for details.

Comparative benchmarks and feedback are welcome — please open a discussion.
