# GLM-5.1-JANG_2S

744B-parameter Mixture-of-Experts at ~2.10 bits/weight — the smallest variant in the 2-bit family.

Created by Jinho Jang — eric@jangq.ai
⚠ EXPERIMENTAL
This is an early research release. Benchmarks (MMLU, HumanEval, GSM8K, etc.) are not yet finalized and will be uploaded in a follow-up revision.
🔬 Comparison model
JANG_2S is a smaller-floor variant in the same 2-bit family as GLM-5.1-JANG_1L. It uses the `(6, 4, 2)` bit tuple (critical=6, important=4, compress=2) versus JANG_1L's `(8, 8, 2)`, trading ~5 GB of on-disk size for lower precision on the critical/important attention and routing tensors. For most users, JANG_1L is the recommended default. Use JANG_2S only if you want the smallest 2-bit-family variant for comparison, or for hardware where every GB matters.
Requires MLX Studio
This model only runs on MLX Studio — Jinho Jang's native MLX inference app for Apple Silicon.
Standard `mlx_lm` will NOT work with this model. MLX Studio contains a patched `deepseek_v32` runtime path that is required for coherent decoding of quantized GLM-5.1 at bf16. Without the patched runtime, the model produces repetition loops during generation.
If you want to run this model and do not have MLX Studio, wait for the public release.
## Model summary
| Field | Value |
|---|---|
| Base architecture | GLM-5.1 (ZhipuAI / THUDM) — MoE, 744B total params, 40B active, 256 routed experts (top-8 routing), 78 transformer layers + 1 MTP layer |
| Attention | MLA (Multi-head Latent Attention) with DSA (Dense Sparse Attention) indexer |
| Context window | 202,752 tokens |
| Quantization method | JANG_2S — mixed-precision with (critical=6, important=4, compress=2) bits |
| Effective bits | 2.10 bits/weight |
| On-disk size | 228 GB |
| Active RAM during inference | ~232 GB wired |
| Format | JANG v2 — MLX-native safetensors, instant mmap load |
| Source | Converted from the official GLM-5.1 FP8 release |
| Mode | Text-only |
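The effective-bits figure above is a weighted average over the three precision tiers. Here is a back-of-envelope check — the only fraction stated in this card is the ~97% expert share, so the 6-bit/4-bit split below is an assumption for illustration:

```python
# Sanity-check the ~2.10 bits/weight figure as a weighted average of tiers.
# Fractions are assumptions: only the ~97% compress-tier share is stated
# in the card; the critical/important split is illustrative.
tiers = {
    6: 0.015,  # critical tier (router gate, lm_head) — assumed fraction
    4: 0.020,  # important tier (attention projections, embeddings) — assumed
    2: 0.965,  # compress tier (routed experts, ~97% of params)
}

assert abs(sum(tiers.values()) - 1.0) < 1e-9  # fractions must cover all params

avg_bits = sum(bits * frac for bits, frac in tiers.items())
print(f"≈ {avg_bits:.2f} bits/weight")  # → ≈ 2.10 bits/weight
```

With these assumed fractions the average lands on 2.10, matching the table; the real figure depends on the exact per-tensor parameter counts.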
## JANG_2S vs JANG_1L at a glance

| | JANG_2S (this) | JANG_1L |
|---|---|---|
| Bit tuple (critical, important, compress) | (6, 4, 2) | (8, 8, 2) |
| Actual avg bits | 2.10 | 2.15 |
| On-disk size | 228 GB | 233 GB |
| `gate` (MoE router) | 6-bit | 8-bit |
| `k_proj`, `q_proj`, `kv_a_proj_with_mqa`, `embed_tokens` | 4-bit | 4–8-bit |
| `lm_head` | 6-bit | 6-bit |
| Routed experts (`gate/up/down_proj`) | 2-bit (97% of params) | 2-bit (97% of params) |
| FAST-mode short answer accuracy (internal 10-prompt benchmark) | 7/10 | 7/10 |
| THINK-mode 500-tok reasoning coherence | 7/10 | 7/10 |
Takeaway: the compress tier (routed experts, 97% of params) is identical between the two profiles. JANG_2S's savings come from lower precision on the critical/important attention path, which produces marginally different failure patterns on the same 10-prompt test but no measurable overall quality gain. Choose based on whether those 5 GB matter to your hardware.
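As a rough illustration of how a `(6, 4, 2)` tuple could map onto tensor names — a sketch inferred from the comparison table, not the actual JANG tooling:

```python
# Illustrative tier assignment for the (6, 4, 2) profile. The matching
# rules are inferred from the per-tensor table in this card; the real
# quantizer may use different criteria.
CRITICAL, IMPORTANT, COMPRESS = 6, 4, 2

def bits_for(name: str) -> int:
    # MoE router gate and output head get the critical tier.
    # (endswith avoids matching expert "gate_proj" weights.)
    if name.endswith(("gate.weight", "lm_head.weight")):
        return CRITICAL
    # Attention projections and embeddings get the important tier.
    if any(k in name for k in ("q_proj", "k_proj",
                               "kv_a_proj_with_mqa", "embed_tokens")):
        return IMPORTANT
    # Everything else — routed experts, ~97% of params — is compressed.
    return COMPRESS

print(bits_for("model.layers.0.mlp.gate.weight"))               # 6
print(bits_for("model.layers.0.self_attn.q_proj.weight"))       # 4
print(bits_for("model.layers.0.mlp.experts.3.up_proj.weight"))  # 2
```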
## Running the model
Same as JANG_1L — see that model card for the recommended sampling config table. Short-form QA:
```python
from mlx_studio import load, generate, make_sampler

model, tokenizer = load("GLM-5.1-JANG_2S")

messages = [{"role": "user", "content": "What is the capital of France? Answer in one word."}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False,
    enable_thinking=False,
)
print(generate(model, tokenizer, prompt=prompt, max_tokens=60,
               sampler=make_sampler(temp=0.0)))
# → "Paris"
```
Reasoning mode (needs `max_tokens ≥ 1024` to reliably emit the final answer):
```python
tmpl = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False,
    enable_thinking=True,
)
print(generate(model, tokenizer, prompt=tmpl, max_tokens=1024,
               sampler=make_sampler(temp=0.0)))
```
Do NOT apply a repetition penalty — on math prompts GLM-5.1 legitimately repeats the correct answer, and the penalty suppresses it.
## Performance snapshot (informal — full benchmarks TBD)
Tested on Apple M3 Ultra (256 GB unified memory) via MLX Studio.
| Metric | Value |
|---|---|
| Cold load time (mmap) | ~54 s |
| Short-form answer latency | <1 s after load |
| Reasoning generation speed | ~5–7 tok/s |
| RAM footprint during generation | ~232 GB wired |
### Qualitative coherence (10-prompt private benchmark, greedy)

| Mode | Coherent |
|---|---|
| `enable_thinking=False` short-form | 7/10 |
| `enable_thinking=True` reasoning | 7/10 |
The failure pattern differs from JANG_1L: JANG_2S reaches the "Paris" answer about 2× faster in THINK mode (15 s vs 32 s) but fails on `fact_gold` (the "Au" prompt) with a looping pattern JANG_1L doesn't exhibit.
Formal benchmarks (MMLU, GSM8K, HumanEval, BBH, GPQA, etc.) coming in a follow-up revision.
## Known limitations
- Same reasoning budget requirement as JANG_1L — `enable_thinking=True` needs `max_tokens ≥ 1024`.
- Code generation at 2-bit is rough — the model sometimes refuses with an essay rather than emitting code.
- Memory requirement: ≥250 GB of GPU-wired memory. On Mac Studio, verify that `sysctl iogpu.wired_limit_mb` returns `250000` or higher before loading.
- MLX Studio only — see the notice at the top of this file.
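The wired-memory check above can be scripted before loading. A minimal sketch — `iogpu.wired_limit_mb` is a macOS/Apple Silicon sysctl key, and the helper names here are illustrative, not part of MLX Studio:

```python
import subprocess

REQUIRED_MB = 250_000  # 250 GB floor stated in Known limitations

def wired_limit_ok(limit_mb: int, required_mb: int = REQUIRED_MB) -> bool:
    """True when the GPU wired-memory limit meets the stated floor."""
    return limit_mb >= required_mb

def read_wired_limit_mb() -> int:
    # Reads `sysctl -n iogpu.wired_limit_mb` (macOS / Apple Silicon);
    # returns 0 when the key is unavailable, e.g. on other platforms.
    try:
        out = subprocess.run(
            ["sysctl", "-n", "iogpu.wired_limit_mb"],
            capture_output=True, text=True, check=True,
        )
        return int(out.stdout.strip())
    except (OSError, subprocess.CalledProcessError, ValueError):
        return 0

limit = read_wired_limit_mb()
if wired_limit_ok(limit):
    print(f"OK: wired limit {limit} MB")
else:
    print(f"Too low ({limit} MB) — raise with: "
          f"sudo sysctl iogpu.wired_limit_mb={REQUIRED_MB}")
```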
## Credits

- Quantization & conversion — Jinho Jang, eric@jangq.ai
- Runtime — MLX Studio by Jinho Jang
- Base model — GLM-5.1 by ZhipuAI / THUDM / zai-org (see the original model card for training data, intended use, and safety information)
All JANG tooling and MLX Studio are commercial products of Jinho Jang. Please refer to the MLX Studio project page for licensing terms.
## Status
- Initial conversion + runtime validation on Apple M3 Ultra
- Short-form factual QA verified coherent
- Reasoning mode (`enable_thinking=True`) verified coherent through multi-step chains
- Formal benchmark sweep — MMLU, GSM8K, HumanEval, BBH, GPQA — uploading in a follow-up revision
- A true sub-2-bit variant (`JANG_1S`) is in active research — target: ~150 GB on disk via codebook quantization of routed experts
Questions or issues — contact eric@jangq.ai.