# Nemotron-3-Super-120B-A12B-MINT-MLX

A mixed-precision quantized version of nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-BF16, optimized by baa.ai.

A hybrid Mamba-MoE-Attention architecture (512 experts, 22 active per token), quantized to fit on a single Apple Silicon machine.

## Metrics

| Metric | Value |
|--------|-------|
| Size | 98 GB |
| Average bits | 7.0 |
| Framework | MLX |
| Architecture | Hybrid Mamba-2 + MoE + Attention |
| Parameters | 123.6B (12B active) |

## Bit Allocation

| Precision | Parameters | Percentage |
|-----------|-----------|------------|
| 16-bit | 18.2B | 14.8% |
| 8-bit | 43.9B | 35.5% |
| 4-bit | 61.4B | 49.7% |
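As a sanity check, the table above can be reduced to an effective bit width by weighting each precision by its parameter count. A minimal sketch (decimal GB; the raw-weight figure ignores quantization scales, metadata, and any layers stripped at export, so it overestimates the 98 GB on-disk size):

```python
# Per-precision parameter counts (in billions), from the Bit Allocation table.
allocation = {16: 18.2, 8: 43.9, 4: 61.4}

total_params = sum(allocation.values())  # ~123.5B
avg_bits = sum(bits * n for bits, n in allocation.items()) / total_params
print(f"{avg_bits:.2f} effective bits per weight")  # lands near the reported 7.0

# Raw weight footprint in decimal GB (billions of params x bits -> bytes).
# Overhead (scales, metadata) and stripped MTP layers are not modeled here.
weight_gb = sum(bits * n for bits, n in allocation.items()) / 8
print(f"~{weight_gb:.0f} GB of raw weights")
```

The small gap between this weighted average and the reported 7.0 comes down to rounding in the table and tensors not itemized there.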

## Usage


```python
from mlx_lm import load, generate

model, tokenizer = load("baa-ai/Nemotron-3-Super-120B-A12B-MINT-MLX")

messages = [{"role": "user", "content": "Hello!"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(response)
```

## Notes

- Requires mlx-lm ≥ 0.31.1 with latent MoE projection support
- Peak memory: ~100 GB (fits on an M2 Ultra 192 GB or M4 Max 128 GB)
- MTP (Multi-Token Prediction) layers are stripped for inference; they are not needed for standard autoregressive generation
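With peak usage near 100 GB, it is worth checking how much unified memory the GPU can actually wire before loading. A rough sketch: the ~75% default fraction is an assumption (the real macOS default varies by RAM size and OS version), and the cap can be raised with `sysctl iogpu.wired_limit_mb` when total RAM is sufficient but the default limit is not:

```python
def fits_in_unified_memory(ram_gb: float, peak_gb: float = 100.0,
                           wired_fraction: float = 0.75) -> bool:
    """Rough check: can the GPU-wireable slice of unified memory hold the model?

    macOS caps GPU-wireable memory at a fraction of total RAM by default
    (~75% is an assumed ballpark, not an exact figure). The cap can be
    raised via `sysctl iogpu.wired_limit_mb` on machines with enough RAM.
    """
    return ram_gb * wired_fraction >= peak_gb

for ram in (64, 128, 192):
    ok = fits_in_unified_memory(ram)
    print(f"{ram} GB machine -> {'fits' if ok else 'may need a raised wired limit'}")
```

Under this assumption a 192 GB M2 Ultra clears the default cap comfortably, while a 128 GB M4 Max holds the model in RAM but may need the wired limit raised.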


Quantized by baa.ai


## Black Sheep AI Products

**Shepherd**: Private AI deployment platform that shrinks frontier models by 50-60% through RAM compression, enabling enterprises to run sophisticated AI on single GPU instances or Apple Silicon hardware. Deploy in your VPC with zero data leaving your infrastructure. Includes CI/CD pipeline integration, fleet deployment across Apple Silicon clusters, air-gapped and sovereign deployment support, and multi-format export (MLX, GGUF). Annual cloud costs start around $2,700, or run on a Mac Studio for electricity only.

**Watchman**: Capability audit and governance platform for compressed AI models. Know exactly what your quantized model can do before it goes live. Watchman predicts which capabilities survive compression in minutes, replacing weeks of benchmarking. Includes compliance-ready reporting for regulated industries, quality-valley warnings for counterproductive memory allocations, instant regression diagnosis tracing issues to specific tensors, and 22 adversarial security probes scanning for injection, leakage, hallucination, and code vulnerabilities.

Learn more at baa.ai — Sovereign AI.
