Voxtral-4B-TTS-2603-TurboQuant-MLX-8bit

8-bit MLX weight-quantized build of mistralai/Voxtral-4B-TTS-2603 with a TurboQuant KV-cache profile. This is the highest-fidelity MLX TTS variant in the series on Apple Silicon; in subjective listening tests it is nearly indistinguishable from the reference model.

Hardware compatibility

| Device | VRAM / RAM | Est. usage | Recommendation |
| --- | --- | --- | --- |
| Apple M4 Max | 128 GB | ~5.2 GB | recommended (headroom for long context) |
| Apple M3 Max | 64 GB | ~5.2 GB | comfortable |
| Apple M2 Max | 32 GB | ~4.8 GB | fits |

Overview

  • Base: mistralai/Voxtral-4B-TTS-2603, a 4B multilingual TTS model with zero-shot voice cloning
  • Weight precision: 8-bit (group-wise)
  • KV-cache profile: TurboQuant
  • Approx. on-disk size: ~4 GB
  • Runtime: MLX on Apple Silicon

Quickstart

pip install mlx-lm

from mlx_lm import load, generate

model, tokenizer = load("majentik/Voxtral-4B-TTS-2603-TurboQuant-MLX-8bit")

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": [
        {"type": "audio", "path": "reference_voice.wav"},
        {"type": "text", "text": "Hello, this is a cloned voice."},
    ]}],
    add_generation_prompt=True,
)
audio_tokens = generate(model, tokenizer, prompt=prompt, max_tokens=2048)
# Decode acoustic tokens to waveform via the Voxtral audio decoder
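The decoder step is left abstract above. As a self-contained sketch of the final stage, here is how a float waveform can be written out as 16-bit PCM WAV using only the standard library; the sine generator stands in for real decoder output, and the sample rate is an illustrative assumption, not a documented Voxtral value.

```python
import math
import struct
import wave

# Stand-in for decoded audio: in a real pipeline, `samples` would come from
# decoding `audio_tokens` with the Voxtral audio decoder. Here we synthesize
# one second of a 440 Hz sine so the example runs on its own.
SAMPLE_RATE = 24_000  # assumed rate for illustration
samples = [0.3 * math.sin(2 * math.pi * 440 * t / SAMPLE_RATE)
           for t in range(SAMPLE_RATE)]

# Clamp floats to [-1, 1], scale to 16-bit PCM, and write a mono WAV file.
with wave.open("output.wav", "wb") as wav:
    wav.setnchannels(1)           # mono
    wav.setsampwidth(2)           # 16-bit samples
    wav.setframerate(SAMPLE_RATE)
    pcm = struct.pack("<" + "h" * len(samples),
                      *(int(max(-1.0, min(1.0, s)) * 32767) for s in samples))
    wav.writeframes(pcm)
```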

Model specs

| Field | Value |
| --- | --- |
| Parameters | 4B |
| Weight bits | 8 |
| Group size | 64 |
| Cache profile | TurboQuant |
| Languages | 9 |
| Voice cloning | Zero-shot |
| Size on disk | ~4 GB |
| Target hardware | Apple Silicon (M1/M2/M3/M4) |
| License | Apache 2.0 |
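The ~4 GB figure is consistent with a back-of-envelope estimate from the specs above: 4B parameters at 8 bits each, plus per-group metadata. The metadata scheme below (one fp16 scale and one fp16 bias per 64-weight group) is an assumption for illustration; only the parameter count, bit width, and group size come from the table.

```python
params = 4e9                      # 4B parameters (from specs table)
weight_bytes = params * 8 / 8     # 8-bit weights: one byte each
group_size = 64                   # from specs table
# Assumed overhead: one fp16 scale + one fp16 bias per group = 4 bytes / 64 weights.
overhead_bytes = (params / group_size) * 4
total_gb = (weight_bytes + overhead_bytes) / 1e9
print(round(total_gb, 2))  # 4.25, in line with the "~4 GB" on-disk size
```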

TurboQuant vs RotorQuant

| | TurboQuant | RotorQuant |
| --- | --- | --- |
| Strategy | Per-head static calibration | Rotational online re-basis |
| Memory reduction | ~3.5x on KV-cache | ~4x on KV-cache |
| Best for | Single-voice, single-language sessions | Multi-voice / multi-language batches |
