GLM-5.1-JANG_1L
744B-parameter Mixture-of-Experts at ~2.15 bits/weight
Created by Jinho Jang — eric@jangq.ai
⚠ EXPERIMENTAL
This is an early research release. Benchmarks (MMLU, HumanEval, GSM8K, etc.) are not yet finalized and will be uploaded in a follow-up revision. Expect rough edges in long-form reasoning outputs until tuning is complete.
Requires MLX Studio
This model only runs on MLX Studio — Jinho Jang's native MLX inference app for Apple Silicon.
- Standard `mlx_lm` will NOT work with this model. MLX Studio contains a patched `deepseek_v32` runtime path that is required for coherent decode on quantized GLM-5.1 at bf16. Without the patched runtime, the model produces repetition loops during generation.
- MLX Studio auto-detects the JANG v2 format and loads instantly via mmap (~50 s on a Mac Studio for this model size).
- All quantization, loading, and inference tuning are handled by MLX Studio — no extra setup required.

If you want to run this model and do not have MLX Studio, wait for the public release.
Model summary
| Field | Value |
|---|---|
| Base architecture | GLM-5.1 (ZhipuAI / THUDM) — MoE, 744B total params, 40B active, 256 routed experts top-8, 78 transformer layers + 1 MTP |
| Attention | MLA (Multi-head Latent Attention) with DSA (Dense Sparse Attention) indexer |
| Context window | 202,752 tokens |
| Quantization method | JANG_1L — mixed-precision importance quantization (8-bit critical tier, 8-bit important tier, 2-bit compress tier) |
| Effective bits | 2.15 bits/weight |
| On-disk size | 233 GB |
| Active RAM during inference | ~235 GB (fits on 256 GB+ Apple Silicon w/ raised iogpu.wired_limit_mb) |
| Format | JANG v2 — MLX-native safetensors, instant mmap load |
| Source | Converted from the official GLM-5.1 FP8 release |
| Mode | Text-only |
Why JANG_1L specifically? The JANG_1L profile applies maximum-quality protection to the critical tensors (attention MLA embed_q/unembed_out, router gates, lm_head, token embeddings, MLA KV compression) while allowing the routed expert MLPs to go to 2 bits. At 744B params with 256 experts, most of the weight budget lives in the routed experts — compressing them aggressively while keeping the attention and routing fully-precise is the sweet spot for MoE at 2-bit average.
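The arithmetic behind that sweet spot is easy to sketch. Assuming just two effective precision levels — the 8-bit protected tiers and the 2-bit expert tier (the exact per-tensor split is not published, so the fraction below is solved for, not measured) — the 2.15 bits/weight average implies how little of the model actually needs protection:

```python
# Back-of-envelope check of the 2.15 bits/weight figure.
# Assumption: two effective bit levels only (8-bit protected tiers,
# 2-bit routed experts); per-block scales and metadata are ignored.
TOTAL_PARAMS = 744e9
EFFECTIVE_BITS = 2.15

# Solve 8*f + 2*(1 - f) = 2.15 for the protected fraction f.
protected_fraction = (EFFECTIVE_BITS - 2) / (8 - 2)
print(f"~{protected_fraction:.1%} of weights sit in the 8-bit tiers")

# Implied raw weight payload in GB (the card lists 233 GB on disk;
# the difference is quantization scales, indices, and metadata).
payload_gb = TOTAL_PARAMS * EFFECTIVE_BITS / 8 / 1e9
print(f"~{payload_gb:.0f} GB raw weight payload")
```

Under these assumptions only about 2.5% of the weights need the 8-bit tiers, which is consistent with the routed experts dominating the parameter count.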
Running the model
Short-form factual or instruction prompts (recommended default):
```python
from mlx_studio import load, generate, make_sampler

model, tokenizer = load("GLM-5.1-JANG_1L")

messages = [{"role": "user", "content": "What is the capital of France? Answer in one word."}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False,
    enable_thinking=False,  # direct-answer mode for short-form prompts
)
print(generate(model, tokenizer, prompt=prompt, max_tokens=60,
               sampler=make_sampler(temp=0.0)))
# → "Paris"
```
Multi-step reasoning (larger budget, thinking mode on):
```python
messages = [{"role": "user", "content": "If I drop a glass on a hard floor, what will happen? Explain."}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False,
    enable_thinking=True,
)
print(generate(model, tokenizer, prompt=prompt, max_tokens=1024,
               sampler=make_sampler(temp=0.0)))
```
Sampling recommendations
| Task | `enable_thinking` | `temp` | `top_p` | `max_tokens` |
|---|---|---|---|---|
| Short factual QA (one-word, one-number answers) | `False` | 0.0 (greedy) | — | 60 |
| Conversational / general | `False` | 0.7 | 0.9 | 256 |
| Multi-step reasoning | `True` | 0.0 or 1.0 | 0.95 | 1024+ |
Do not apply a repetition penalty to math or factual prompts — it suppresses the legitimate digit repetition in correct answers (e.g. "47+38=85" degrades into "47+38=5, 7, 10").
Reasoning mode needs room: the `<think>...</think>` block can consume 300–800 tokens before the final answer. Budget at least 1024 `max_tokens` for any serious reasoning task.
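The table and notes above can be folded into a small helper so callers don't re-derive settings per prompt. This is a hypothetical convenience wrapper, not part of MLX Studio — the presets are just the table rows, and the key names mirror the `make_sampler` / `generate` arguments used in the examples:

```python
# Hypothetical presets mirroring the sampling table above. Note the
# deliberate absence of any repetition penalty (it corrupts math and
# factual answers, as noted above).
SAMPLING_PRESETS = {
    "factual_qa":     dict(enable_thinking=False, temp=0.0, top_p=None, max_tokens=60),
    "conversational": dict(enable_thinking=False, temp=0.7, top_p=0.9,  max_tokens=256),
    "reasoning":      dict(enable_thinking=True,  temp=0.0, top_p=0.95, max_tokens=1024),
}

def sampling_config(task: str) -> dict:
    """Return a copy of the preset for `task`, raising on unknown names."""
    if task not in SAMPLING_PRESETS:
        raise KeyError(f"unknown task {task!r}; choose from {sorted(SAMPLING_PRESETS)}")
    return dict(SAMPLING_PRESETS[task])

cfg = sampling_config("reasoning")
print(cfg["max_tokens"])  # → 1024
```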
Performance snapshot (informal — full benchmarks TBD)
Tested on Apple M3 Ultra (256 GB unified memory) via MLX Studio.
| Metric | Value |
|---|---|
| Cold load time (mmap) | ~54 s |
| Short-form answer latency | <1 s after load |
| Reasoning generation speed | ~5–7 tok/s |
| RAM footprint during generation | ~235 GB wired |
Qualitative coherence (10-prompt private benchmark, greedy)
| Mode | Coherent | Notes |
|---|---|---|
| `enable_thinking=False` short-form | 7/10 | Correct on Paris/Au/85/buenos días/sky-blue/glass-breaks/ocean poem; partial on pi digits and code one-liner; fails on multi-step word problems |
| `enable_thinking=True` reasoning | 9/10 coherent reasoning chains | Most prompts need `max_tokens ≥ 1000` to emit the final `</think>` + answer; some chains of thought reach the correct conclusion inside the `<think>` block |
Formal benchmarks (MMLU, GSM8K, HumanEval, BBH, GPQA, etc.) coming in a follow-up revision.
Known limitations
- Reasoning budget — many `enable_thinking=True` prompts need `max_tokens ≥ 1024` to fully emit their reasoning chain and final answer. Lower budgets truncate mid-analysis.
- Code generation at 2-bit — simple Python one-liners sometimes get stuck in slicing-notation patterns. Expect rough edges on code tasks until future revisions.
- Word problems under short budget — multi-step word problems (Alice-apples-style) sometimes degenerate into numeric repetition with `enable_thinking=False`. Use `enable_thinking=True` plus a larger budget for any word problem that requires more than one algebraic step.
- Memory requirement — this model requires ≥250 GB of GPU-wired memory. On Mac Studio, verify that `sysctl iogpu.wired_limit_mb` returns `250000` or higher before loading.
- MLX Studio only — the model depends on MLX Studio's inference runtime. It will not run under stock `mlx_lm` or `mlx_vlm`; attempting to do so produces repetition loops during generation.
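The wired-memory check above can be done up front from a terminal. This is a sketch for macOS on Apple Silicon — raising the limit requires sudo, resets on reboot, and the 250000 MB value comes from the memory requirement above:

```shell
# Check the current GPU wired-memory limit in MB (0 means "system default").
sysctl iogpu.wired_limit_mb

# Raise it to 250 GB for this session (does not persist across reboots).
sudo sysctl iogpu.wired_limit_mb=250000
```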
Credits
- Quantization & conversion — Jinho Jang, eric@jangq.ai
- Runtime — MLX Studio by Jinho Jang
- Base model — GLM-5.1 by ZhipuAI / THUDM / zai-org (see the original model card for training data, intended use, and safety information)
All JANG tooling and MLX Studio are commercial products of Jinho Jang. Please refer to the MLX Studio project page for licensing terms.
Status & roadmap
- Initial conversion + runtime validation on Apple M3 Ultra
- Short-form factual QA verified coherent
- Reasoning mode (`enable_thinking=True`) verified coherent through multi-step chains
- Formal benchmark sweep — MMLU, GSM8K, HumanEval, BBH, GPQA — uploading in a follow-up revision
- Sampling-config tuning for code and multi-step word problems
- `GLM-5.1-JANG_2S` — currently converting. JANG_2S uses the `(6, 4, 2)` bit tuple — tighter critical and important tiers vs JANG_1L's `(8, 8, 2)` — for users who want a slightly smaller file footprint at the cost of attention-layer precision. Upload to follow once conversion completes and benchmarks are run head-to-head against JANG_1L.
- Additional profile variants (JANG_2L, JANG_3M) under evaluation
Questions or issues — contact eric@jangq.ai.