Leanstral-RotorQuant-MLX-2bit

2-bit MLX weight-quantized Leanstral-2603 with RotorQuant KV-cache quantization for high-throughput Lean 4 formal proof generation on Apple Silicon.

Leanstral is the first open-source AI agent purpose-built for Lean 4 formal proofs -- generating both executable code and machine-checkable mathematical proofs. This variant combines dual compression: 2-bit MLX weight quantization for aggressive model size reduction plus RotorQuant KV-cache quantization, delivering 5.3x faster prefill and 28% faster decode compared to TurboQuant equivalents.

Overview

This repository provides the most aggressively compressed Leanstral configuration: MLX 2-bit weight quantization minimizes the static memory footprint, while RotorQuant's rotation-aware KV-cache compression delivers faster prefill and decode than the TurboQuant equivalent.

Spec                    Value
Base model              mistralai/Leanstral-2603
Architecture            Mistral MoE (~119B parameters, 7 consolidated shards)
Weight quantization     2-bit (MLX)
KV-cache quantization   RotorQuant
Weight memory           ~30 GB
Prefill speedup         5.3x vs TurboQuant
Decode speedup          28% vs TurboQuant
Runtime                 MLX (Apple Silicon)
License                 Apache 2.0
Use case                Lean 4 formal verification, theorem proving, mathematical proofs

Quickstart

from mlx_lm import load, generate

# Download (if needed) and load the quantized weights and tokenizer
model, tokenizer = load("majentik/Leanstral-RotorQuant-MLX-2bit")

prompt = "Prove that for all natural numbers n, n + 0 = n in Lean 4:"
response = generate(
    model,
    tokenizer,
    prompt=prompt,
    max_tokens=512,
)
print(response)
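For reference, the prompt above has a one-line answer in Lean 4: `n + 0 = n` holds definitionally because `Nat.add` recurses on its second argument, and the standard library also provides `Nat.add_zero`. An illustrative target proof (not actual model output):

```lean
-- n + 0 = n closes by rfl, since Nat.add recurses on the second argument
theorem my_add_zero (n : Nat) : n + 0 = n := rfl

-- Equivalently, via the standard-library lemma
example (n : Nat) : n + 0 = n := Nat.add_zero n
```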

What is RotorQuant?

RotorQuant is an advanced KV-cache quantization method that leverages rotation-aware quantization to achieve superior throughput compared to standard KV-cache compression. By exploiting the rotary positional embedding structure, RotorQuant achieves:

  • 5.3x faster prefill -- critical for long Lean 4 proof contexts
  • 28% faster decode -- faster token-by-token proof generation
  • Equivalent memory savings to TurboQuant with better computational efficiency
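RotorQuant's exact algorithm is not reproduced here, but the general idea of rotation-aware KV-cache quantization can be sketched: apply an orthogonal rotation to spread outliers across the head dimension, then quantize each group of values symmetrically at low bit-width. A minimal NumPy illustration under those assumptions (all function names hypothetical; this is not the actual RotorQuant implementation):

```python
import numpy as np

def quantize_kv_rotated(kv, rotation, bits=4, group_size=64):
    """Sketch: rotate a KV slice, then apply symmetric per-group quantization.

    kv:       (tokens, head_dim) float array
    rotation: (head_dim, head_dim) orthogonal matrix
    """
    rotated = kv @ rotation                       # spread outliers across dims
    flat = rotated.reshape(-1, group_size)
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(flat).max(axis=1, keepdims=True) / qmax
    scale = np.where(scale == 0, 1.0, scale)      # avoid division by zero
    q = np.clip(np.round(flat / scale), -qmax - 1, qmax)
    return q.astype(np.int8), scale

def dequantize_kv(q, scale, rotation, shape):
    flat = q.astype(np.float32) * scale
    return flat.reshape(shape) @ rotation.T       # undo the rotation

# Round-trip check on random data
rng = np.random.default_rng(0)
kv = rng.standard_normal((16, 128)).astype(np.float32)
rot, _ = np.linalg.qr(rng.standard_normal((128, 128)))  # random orthogonal
q, s = quantize_kv_rotated(kv, rot, bits=4)
recon = dequantize_kv(q, s, rot, kv.shape)
print(float(np.abs(kv - recon).max()))
```

The rotation leaves reconstruction exact up to the per-group rounding error; the throughput claims above come from how efficiently that rotate-quantize path fuses into prefill and decode kernels, which this sketch does not model.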

Note: 2-bit weight quantization is lossy. Expect some degradation in proof quality compared to the 4-bit variant. For critical formal verification work, prefer the 4-bit or full-precision variants.

Memory Estimates

Component               Estimate
Model weights (2-bit)   ~30 GB
KV-cache                Reduced via RotorQuant
Recommended hardware    MacBook Pro M2/M3/M4 Max (64 GB+) or Mac Studio
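The ~30 GB weight figure follows directly from the parameter count in the spec table. A back-of-the-envelope check (ignoring quantization scales, metadata, and any layers kept at higher precision, which add some overhead):

```python
params = 119e9            # ~119B parameters, per the spec table
bits_per_weight = 2       # 2-bit MLX quantization
bytes_total = params * bits_per_weight / 8
print(f"{bytes_total / 1e9:.1f} GB")  # prints "29.8 GB", consistent with ~30 GB
```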

Lean 4 Use Case

Leanstral excels at:

  • Formal verification -- generating machine-checkable proofs of mathematical theorems
  • Theorem proving -- interactive and automated proof search in Lean 4
  • Code generation -- writing verified Lean 4 programs with correctness guarantees
  • Proof repair -- fixing incomplete or broken proof scripts
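As a concrete illustration of the proof-repair task, the model is given a script that fails to type-check and asked to produce one that does. A hypothetical before/after pair (illustrative, not model output):

```lean
-- Broken: plain `simp` cannot close this goal
-- theorem sum_comm (a b : Nat) : a + b = b + a := by simp
-- Repaired: use the commutativity lemma directly
theorem sum_comm (a b : Nat) : a + b = b + a := Nat.add_comm a b
```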
