# MERaLiON-2-10B-TurboQuant

TurboQuant KV cache compression for aisingapore/MERaLiON-AudioLLM-Whisper-SEA-LION-V3-10B.

This is a documentation repository that explains how to combine MERaLiON-2-10B's weights with TurboQuant inference-time KV cache compression. No weights are stored here; use the base model directly and apply TurboQuant via the Python package or the llama.cpp fork.

## What is this?

KV cache compression reduces the memory used by the attention cache during inference. Unlike weight quantization (which is baked into the GGUF/MLX file), KV cache compression is applied at runtime, so the same base weights can be used with or without compression.

| Technique | Where it's applied | Savings |
|---|---|---|
| Weight quantization (GGUF/MLX/AWQ) | Baked into the model file | Reduces disk and weight memory |
| TurboQuant KV cache | At inference time | Reduces attention memory (critical for long context) |

Both can be combined for maximum efficiency.
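To see why KV cache compression matters for long context, it helps to work the arithmetic. The sketch below uses illustrative layer/head counts for a Gemma-2-9B-style decoder with grouped-query attention (check the base model's config for the exact values); the formula itself is general.

```python
# Back-of-envelope KV cache sizing. The default n_layers/n_kv_heads/head_dim
# below are illustrative for a Gemma-2-9B-style decoder, not authoritative.
def kv_cache_bytes(seq_len, n_layers=42, n_kv_heads=8, head_dim=256,
                   bits_per_value=16):
    # 2 tensors (K and V) per layer, one head_dim vector per KV head per position
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bits_per_value // 8

full = kv_cache_bytes(8192, bits_per_value=16)   # bf16 baseline
quant = kv_cache_bytes(8192, bits_per_value=4)   # 4-bit compressed
print(f"bf16 : {full / 2**30:.2f} GiB")
print(f"4-bit: {quant / 2**30:.2f} GiB ({full // quant}x smaller)")
```

At an 8K context the cache shrinks by 4× at 4 bits; the weights themselves are untouched, which is why this composes with weight quantization.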

## Quickstart

### Option A: Python / transformers

Install the `turboquant` package:

```bash
pip install turboquant
```

Then use it with the base model:

```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoTokenizer
from turboquant import TurboQuantCache

tokenizer = AutoTokenizer.from_pretrained(
    "aisingapore/MERaLiON-AudioLLM-Whisper-SEA-LION-V3-10B",
    trust_remote_code=True,
)
model = AutoModelForSpeechSeq2Seq.from_pretrained(
    "aisingapore/MERaLiON-AudioLLM-Whisper-SEA-LION-V3-10B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

# Apply TurboQuant to the KV cache
cache = TurboQuantCache(bits=4)  # or bits=2 for more aggressive compression

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    past_key_values=cache,  # generate() fills this compressed cache
    use_cache=True,
)
# Strip the prompt tokens before decoding
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```
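The `bits` argument trades memory for fidelity. A toy uniform scalar quantizer (not TurboQuant's actual quantizer, just an illustration of the tradeoff) shows why `bits=2` is "more aggressive":

```python
import math
import random

random.seed(0)

def quant_rms_error(values, bits):
    # Symmetric uniform quantization round-trip; returns RMS reconstruction error.
    levels = 2 ** (bits - 1) - 1          # e.g. 7 levels per sign at 4 bits
    scale = max(abs(v) for v in values) / levels
    restored = [round(v / scale) * scale for v in values]
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(values, restored)) / len(values))

vals = [random.gauss(0, 1) for _ in range(1024)]  # stand-in cache values
for bits in (2, 4, 8):
    print(f"{bits}-bit RMS error: {quant_rms_error(vals, bits):.4f}")
```

Halving the bit width roughly halves cache memory again but increases reconstruction error, which is why 4-bit is the default and 2-bit is reserved for memory-constrained setups.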

## Model Specifications

| Property | Value |
|---|---|
| Base Model | aisingapore/MERaLiON-AudioLLM-Whisper-SEA-LION-V3-10B |
| Architecture | Whisper encoder + Gemma-2-9B-IT decoder (audio-text) |
| Parameters | ~10B (audio encoder + text decoder) |
| Context Length | 8K tokens |
| BF16 Size | ~20 GB |
| Modalities | Audio + Text |
| License | other |

## What is TurboQuant?

TurboQuant (ICLR 2026) applies random orthogonal rotations followed by optimal scalar quantization to the KV cache. At 4 bits it reports bit-identical prefill logits and up to 4-8× KV memory savings for long sequences.
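The rotate-then-quantize idea can be sketched in a few lines. This is a toy illustration under simplifying assumptions, not TurboQuant's algorithm: a random orthogonal matrix via Gram-Schmidt and a plain uniform quantizer stand in for the paper's constructions.

```python
import math
import random

random.seed(0)

def random_orthogonal(n):
    # Gram-Schmidt on a random Gaussian matrix -> orthogonal Q
    m = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]
    q = []
    for row in m:
        for basis in q:
            dot = sum(x * y for x, y in zip(row, basis))
            row = [x - dot * y for x, y in zip(row, basis)]
        norm = math.sqrt(sum(x * x for x in row))
        q.append([x / norm for x in row])
    return q

def matvec(m, v):
    return [sum(x * y for x, y in zip(row, v)) for row in m]

def quantize(v, bits):
    # Symmetric uniform scalar quantization to `bits` bits per value
    levels = 2 ** (bits - 1) - 1
    scale = max(abs(x) for x in v) / levels
    return [round(x / scale) for x in v], scale

n = 8
Q = random_orthogonal(n)
Qt = [list(col) for col in zip(*Q)]            # transpose = inverse for orthogonal Q
key = [random.gauss(0, 1) for _ in range(n)]   # stand-in for one cached key vector

rotated = matvec(Q, key)                        # 1. rotate to spread out outliers
q, scale = quantize(rotated, bits=4)            # 2. store 4-bit ints + one scale
restored = matvec(Qt, [x * scale for x in q])   # dequantize, rotate back

err = math.sqrt(sum((a - b) ** 2 for a, b in zip(key, restored)))
print(f"4-bit round-trip L2 error: {err:.4f}")
```

The rotation spreads outlier coordinates across all dimensions, so a single per-vector scale wastes fewer quantization levels; that is the intuition behind step 1.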

Benchmarks (from the TurboQuant repository, measured with Llama 3.1 8B on RTX 5090 and Apple Silicon; results vary by model and hardware):

- 4-bit KV cache: bit-identical prefill logits
- ~1.4-1.7× speedup on Apple Silicon
- Up to 8× KV memory savings

Performance on MERaLiON-2-10B will differ; please open a discussion if you have independent results.

## Current Ecosystem Support

| Runtime | TurboQuant Support | Notes |
|---|---|---|
| Python transformers + turboquant | ✅ Full | Drop-in cache class |
| llama.cpp upstream | ❌ Not merged | Use the fork below |
| llama-cpp-turboquant fork | ✅ planar3, iso3 | GitHub |
| LM Studio | ❌ Requested | Use q8_0 as an alternative |
| Ollama | ❌ Not supported | Use OLLAMA_KV_CACHE_TYPE=q8_0 |
| vLLM | ❌ Not supported | - |
| koboldcpp | ❌ Not supported | - |
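For Ollama, the closest substitute today is its built-in KV cache quantization (this is Ollama's own feature, not TurboQuant). Note that Ollama requires flash attention to be enabled for a quantized KV cache:

```shell
# Ollama's native KV cache quantization (not TurboQuant)
export OLLAMA_FLASH_ATTENTION=1     # required for quantized KV cache
export OLLAMA_KV_CACHE_TYPE=q8_0    # or q4_0 for more aggressive compression
ollama serve
```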

## Pre-quantized weight variants

If you want combined weight + KV cache compression, majentik hosts pre-quantized versions:

## See Also
