# M2 Pro Max 96GB – Gemma 4 Setup Guide
Your machine is powerful enough to run **Gemma 4 31B-BF16 locally via MLX** – the best open alternative to Claude Opus. This guide sets up:
- **Primary**: NIM (cloud GPU)
- **Secondary**: Cloudflare Workers AI
- **Tertiary**: Google Gemini
- **Local**: Gemma 4 via MLX on Metal GPU
## Architecture
```
┌────────────────────────────────────────────────────────────────┐
│                   MacBook M2 Pro Max 96GB                      │
│                                                                │
│  ┌──────────────┐      ┌──────────────────────────────┐        │
│  │  MLX Server  │      │  Docker (uv + API + Infra)   │        │
│  │ (Metal GPU)  │      │  ────────────────────────    │        │
│  │  Port 8000   │◄─────│   • FastAPI                  │        │
│  │ Gemma 4 31B  │      │   • Redis                    │        │
│  │  ~65GB RAM   │      │   • Postgres                 │        │
│  └──────────────┘      │   • Nginx                    │        │
│         ▲              └──────────────────────────────┘        │
│         │                                                      │
│  ┌──────┴───────┐      ┌──────────────────┐                    │
│  │  NIM Cloud   │─────►│  Cloudflare AI   │                    │
│  │  (Primary)   │      │   (Secondary)    │                    │
│  └──────────────┘      └────────┬─────────┘                    │
│                                 │                              │
│        ┌────────────────────────┴─────────────┐                │
│        │      Google Gemini (Tertiary)        │                │
│        │  Best for coding + reasoning tasks   │                │
│        └──────────────────────────────────────┘                │
│                                                                │
└────────────────────────────────────────────────────────────────┘
```
## Gemma 4 Model Recommendations
With 96GB unified memory, you have options:
| Model | RAM | Quality | Speed | Best For |
|-------|-----|---------|-------|----------|
| **gemma-4-31b-bf16** | ~65GB | ⭐⭐⭐⭐⭐ Highest | ~6 tok/s | Deep reasoning, code, complex tasks |
| **gemma-4-26b-a4b-it-bf16** | ~55GB | ⭐⭐⭐⭐⭐ Excellent | ~7 tok/s | General purpose, multimodal |
| **gemma-4-26b-a4b-it-8bit** | ~36GB | ⭐⭐⭐⭐ Great | ~12 tok/s | Fast inference with good quality |
| **gemma-4-e4b-it** | ~12GB | ⭐⭐⭐ Good | ~25 tok/s | Quick Q&A, simple tasks |
**Recommendation**: Start with `gemma-4-31b-bf16`. It's the best open alternative to Claude Opus and still leaves ~30GB for system + context.
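Before committing to the 31B download, it's worth confirming the headroom math on your own machine. A minimal sketch using macOS's `sysctl` (the 65GB figure comes from the table above):

```bash
# Estimate headroom left after loading gemma-4-31b-bf16 (~65GB resident)
TOTAL_GB=$(( $(sysctl -n hw.memsize) / 1024 / 1024 / 1024 ))
MODEL_GB=65
echo "Unified memory: ${TOTAL_GB}GB"
echo "After model:    $(( TOTAL_GB - MODEL_GB ))GB free (target: ~30GB for system + context)"
```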
---
## Quick Start (5 Minutes)
### 1. Install uv + MLX Server
```bash
# Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh
# Create MLX environment
mkdir ~/mlx-gemma4 && cd ~/mlx-gemma4
uv venv --python 3.11
source .venv/bin/activate
# Install mlx-vlm (supports Gemma 4)
uv pip install mlx-vlm
```
### 2. Download Gemma 4 Model
```bash
# Download 31B BF16 (best quality, ~58GB on disk)
# This fits in 96GB with room for context
python -c "
from mlx_vlm.utils import load
model, processor = load('mlx-community/gemma-4-31b-bf16')
print('Gemma 4 31B loaded successfully')
"
# Or the 26B variant (slightly smaller, still excellent)
# python -c "from mlx_vlm.utils import load; load('mlx-community/gemma-4-26b-a4b-it-bf16')"
```
### 3. Start MLX Server
```bash
# Terminal 1: Start Gemma 4 MLX server
# Uses Metal GPU automatically – no config needed
python -m mlx_vlm.server \
--model mlx-community/gemma-4-31b-bf16 \
--host 0.0.0.0 \
--port 8000
# Verify
curl http://localhost:8000/v1/models
```
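Beyond listing models, you can smoke-test generation against the server directly. This sketch assumes it exposes the usual OpenAI-style `/v1/chat/completions` route alongside the `/v1/models` route verified above:

```bash
# Direct generation test against the local MLX server (bypasses the API stack)
curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mlx-community/gemma-4-31b-bf16",
    "messages": [{"role":"user","content":"Say hello in five words."}],
    "max_tokens": 32
  }' | jq -r '.choices[0].message.content'
```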
### 4. Configure API Server
```bash
# In the ml-intern repo
cd production
# Copy minimal env
cp .env.minimal .env
```
Edit `.env` – **just add your API keys** (only a few lines):
```env
# Cloudflare (required – always works)
CLOUDFLARE_API_KEY=sk-your-cloudflare-key
CLOUDFLARE_ACCOUNT_ID=your-account-id
# NIM (optional – faster, free tier)
NVIDIA_API_KEY=nvapi-your-nvidia-key
# Gemini (optional – great for coding/reasoning)
GEMINI_API_KEY=your-gemini-key
# Enable Gemma 4 local
MLX_ENABLED=true
MLX_API_BASE=http://host.docker.internal:8000/v1
```
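Before starting the stack, a quick check that the required variables actually landed in `.env` can save a failed boot:

```bash
# Verify the minimum keys are present and non-empty
for var in CLOUDFLARE_API_KEY CLOUDFLARE_ACCOUNT_ID MLX_ENABLED; do
  grep -q "^${var}=..*" .env && echo "OK      ${var}" || echo "MISSING ${var}"
done
```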
### 5. Start the Stack
```bash
# Terminal 2: Launch API + infrastructure
docker-compose -f docker-compose.m2.yml up -d
# Verify
curl http://localhost/health | jq
curl http://localhost/v1/models | jq
curl http://localhost/v1/fallback/status | jq
```
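The containers take a few seconds to come up, so if the first `curl` fails, give it a moment with a short wait loop:

```bash
# Poll /health until the stack is ready (gives up after ~60s)
for i in $(seq 1 30); do
  curl -sf http://localhost/health >/dev/null && { echo "stack is up"; break; }
  sleep 2
done
```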
### 6. Test Everything
```bash
# Test 1: Fallback status – see active provider
curl http://localhost/v1/fallback/status | jq
# Test 2: Chat via active provider (auto-fallback)
curl -X POST http://localhost/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "cloudflare/@cf/google/gemma-4-26b-a4b-it",
"messages": [{"role":"user","content":"Explain quantum computing"}]
}'
# Test 3: Force Gemma 4 local via MLX
curl -X POST http://localhost/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "mlx/gemma-4-31b-bf16",
"messages": [{"role":"user","content":"Write a Python web scraper"}],
"provider_override": "mlx"
}'
# Test 4: Gemini for coding
curl -X POST http://localhost/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "gemini/gemini-2.5-pro-preview",
"messages": [{"role":"user","content":"Debug this code: def foo(): pass"}]
}'
```
---
## Fallback Chain (Automatic)
```
Request comes in
         │
         ▼
┌──────────────────┐
│    Check NIM     │──── Circuit breaker CLOSED?
│    (Primary)     │     Yes → Send to NIM (fastest cloud)
└──────────────────┘     No  → Fall through
         │
         ▼
┌──────────────────┐
│ Check Cloudflare │──── Circuit breaker CLOSED?
│   (Secondary)    │     Yes → Send to Cloudflare
└──────────────────┘     No  → Fall through
         │
         ▼
┌──────────────────┐
│   Check Gemini   │──── Circuit breaker CLOSED?
│    (Tertiary)    │     Yes → Send to Gemini (great for code)
└──────────────────┘     No  → Fall through
         │
         ▼
┌──────────────────┐
│    Check MLX     │──── Enabled + Gemma 4 loaded?
│  (Local Gemma)   │     Yes → Send to local Gemma 4
└──────────────────┘     No  → Return 503
```
Force any provider:
```bash
curl -X POST http://localhost/v1/chat/completions \
-d '{"model":"mlx/gemma-4-31b-bf16","messages":[...],"provider_override":"mlx"}'
```
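To exercise every link in the chain, loop over the overrides. This sketch assumes the `provider_override` values mirror the provider names (`nim`, `cloudflare`, `gemini`, `mlx`); only `mlx` appears verbatim earlier in this guide:

```bash
# Hit each provider in turn; a non-200 status points at an open breaker
for p in nim cloudflare gemini mlx; do
  printf '%-12s ' "$p"
  curl -s -o /dev/null -w '%{http_code}\n' \
    -X POST http://localhost/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d "{\"messages\":[{\"role\":\"user\",\"content\":\"ping\"}],\"provider_override\":\"$p\"}"
done
```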
---
## Advanced: Gemma 4 with Speculative Decoding (2x Speed)
Speculative decoding uses a small drafter model to predict several tokens ahead; the full model then verifies them in a single pass, so output quality is unchanged:
```bash
# Terminal 1: Gemma 4 with MTP drafter (2x faster!)
python -m mlx_vlm.server \
--model mlx-community/gemma-4-31b-bf16 \
--draft-model mlx-community/gemma-4-31B-it-assistant-bf16 \
--draft-kind mtp \
--host 0.0.0.0 \
--port 8000
```
> **Note**: Temperature must be 0 for byte-identical output with MTP.
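To measure the speedup on your own hardware, time an identical request with and without the drafter (restart the server between runs); temperature 0 keeps the outputs comparable:

```bash
# Wall-clock a fixed request; run once per server configuration
time curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mlx-community/gemma-4-31b-bf16",
    "messages": [{"role":"user","content":"List the first 20 prime numbers."}],
    "temperature": 0,
    "max_tokens": 256
  }' > /dev/null
```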
---
## Multi-Model Setup (Run Multiple Locally)
Your 96GB can hold **both** Gemma 4 31B and the small, fast E4B model at once:
```bash
# Terminal 1: Gemma 4 31B for deep reasoning
python -m mlx_vlm.server \
--model mlx-community/gemma-4-31b-bf16 \
--host 0.0.0.0 --port 8000
# Terminal 2: Gemma 4 E4B for quick tasks
python -m mlx_vlm.server \
--model mlx-community/gemma-4-e4b-it \
--host 0.0.0.0 --port 8001
```
Then send quick tasks directly to port 8001 and complex ones to port 8000.
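To make that split convenient, here's a minimal shell helper; `ask` is a hypothetical name, and the request body assumes each server defaults to its loaded model when no `model` field is sent:

```bash
# ask "prompt" [port] – hypothetical helper; 8001 = fast E4B, 8000 = 31B
ask() {
  local prompt="$1" port="${2:-8001}"
  # Add a "model" field below if your server requires one
  curl -s "http://localhost:${port}/v1/chat/completions" \
    -H "Content-Type: application/json" \
    -d "{\"messages\":[{\"role\":\"user\",\"content\":\"${prompt}\"}]}" \
    | jq -r '.choices[0].message.content'
}

ask "What is 2+2?"             # quick task  → E4B on 8001
ask "Design a REST API" 8000   # complex task → 31B on 8000
```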
---
## Provider Selection Guide
| Task Type | Recommended Provider | Model |
|-----------|---------------------|-------|
| **General reasoning** | MLX local | `gemma-4-31b-bf16` |
| **Coding/debugging** | Gemini | `gemini-2.5-pro-preview` |
| **Fast Q&A** | Cloudflare | `@cf/google/gemma-4-26b-a4b-it` |
| **High throughput** | NIM | `llama-3.1-405b` |
| **Multimodal (image+text)** | MLX local | `gemma-4-26b-a4b-it` |
---
## Minimal Configuration (Just 3 Lines)
```bash
# .env β the bare minimum
CLOUDFLARE_API_KEY=sk-your-key
CLOUDFLARE_ACCOUNT_ID=your-account-id
MLX_ENABLED=true
```
Everything else auto-configures. Even without NIM or Gemini keys, Cloudflare + MLX gives you a robust setup.
---
## Monitoring
```bash
# Check active provider and fallback status
curl http://localhost/v1/fallback/status | jq
# View all available models
curl http://localhost/v1/models | jq '.data[] | {id, owned_by}'
# Grafana: http://localhost:3000 (admin/admin)
# - Dashboard: "ml-intern Production"
# - Panels: provider latency, fallback count, cache hit rate
# Prometheus queries:
curl 'http://localhost:9090/api/v1/query?query=ml_intern_fallback_total'
curl 'http://localhost:9090/api/v1/query?query=ml_intern_circuit_breaker_state'
```
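For a live view during load tests, poll the fallback endpoint on a timer (`watch` is preinstalled on most Linux distros; on macOS, `brew install watch`). The `active_provider` field name is an assumption about the status payload:

```bash
# Refresh provider status every 5 seconds
watch -n 5 "curl -s http://localhost/v1/fallback/status | jq '.active_provider'"
```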
---
## Troubleshooting
### Gemma 4 download is slow
```bash
# Use huggingface-cli for resumable download
uv pip install huggingface-hub
huggingface-cli download mlx-community/gemma-4-31b-bf16 --local-dir ~/models/gemma-4-31b
```
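If bandwidth is still the bottleneck, Hugging Face's optional Rust transfer backend usually helps; it needs the extra `hf_transfer` package:

```bash
# Optional: multi-threaded downloads for huggingface-cli
uv pip install hf_transfer
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download \
  mlx-community/gemma-4-31b-bf16 --local-dir ~/models/gemma-4-31b
```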
### MLX server says "out of memory"
```bash
# Try the smaller 26B model instead
python -m mlx_vlm.server --model mlx-community/gemma-4-26b-a4b-it-8bit --port 8000
# Or the tiny E4B:
python -m mlx_vlm.server --model mlx-community/gemma-4-e4b-it --port 8000
```
### Docker can't reach MLX on host
```bash
# On macOS, host.docker.internal works
# On Linux, use your machine's IP:
MLX_API_BASE=http://192.168.1.5:8000/v1
```
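To find the address to plug in there:

```bash
# macOS: address of the primary interface (en0 is typical)
ipconfig getifaddr en0
# Linux: first address the host reports
hostname -I | awk '{print $1}'
```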
---
## Gemma 4 vs Claude Opus
| Capability | Gemma 4 31B-BF16 (MLX) | Claude Opus 4 |
|-----------|------------------------|---------------|
| **Context window** | 128K tokens | 200K tokens |
| **Multimodal** | ✅ Image + Text | ✅ Image + Text |
| **Code quality** | ⭐⭐⭐⭐ Excellent | ⭐⭐⭐⭐⭐ Best |
| **Reasoning** | ⭐⭐⭐⭐⭐ Excellent | ⭐⭐⭐⭐⭐ Best |
| **Speed** | ~6 tok/s (M2 Pro Max) | Cloud-based |
| **Cost** | $0 (local) | ~$15/1M input tokens |
| **Privacy** | ✅ 100% local | ❌ Cloud |
| **Offline** | ✅ Works offline | ❌ Requires internet |
**Verdict**: For most tasks, Gemma 4 31B-BF16 on your M2 Pro Max is a genuine Claude Opus alternative. For edge cases where you need the absolute best, Gemini 2.5 Pro fills the gap.