CRITICAL FIX (2026-03-19): Fixed eos_token_id; previous versions caused infinite thinking loops. If you downloaded this model before this date, you MUST re-download it.
Update (2026-03-18): Models have been updated to v2.1.0 with VLM support, proper tokenizer, and fixed configs. If you downloaded before this date, please re-download for full MLX Studio compatibility.
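If you are unsure which version a local copy is, one way to check is to inspect the EOS token id recorded in the downloaded config files. The sketch below uses huggingface_hub.snapshot_download to fetch only the JSON configs; the exact fields present in this repo are an assumption, not confirmed from the card.

from pathlib import Path
import json
from huggingface_hub import snapshot_download

# Fetch only the JSON configs and print whatever EOS id each one records.
# Field names follow common HF conventions; this repo's exact layout is an assumption.
local_dir = Path(snapshot_download("JANGQ-AI/Qwen3.5-122B-A10B-JANG_2S", allow_patterns=["*.json"]))
for name in ("config.json", "generation_config.json", "tokenizer_config.json"):
    path = local_dir / name
    if path.exists():
        cfg = json.loads(path.read_text())
        print(name, "->", cfg.get("eos_token_id", cfg.get("eos_token", "n/a")))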
MLX Studio — the only app that natively supports JANG models
Early Adoption: LM Studio, Ollama, oMLX, and Inferencer do not support JANG yet. Use MLX Studio or `pip install "jang[mlx]"`. Ask your favorite app's creators to add JANG support!
Qwen3.5-122B-A10B — JANG_2S (MoE, 2-bit) — VLM
JANG — Jang Adaptive N-bit Grading | Mixed-Precision Quantization for Apple Silicon
JANG is fully open-source. Quantization engine, research, and full commit history: github.com/jjang-ai/jangq. Created by Jinho Jang.
Results (200-question MMLU)
| Model | MMLU | Size |
|---|---|---|
| JANG_4K | 86% | 69 GB |
| JANG_2S | 79% | 38 GB |
| MLX 4-bit | 85% | 64 GB |
| MLX 2-bit | 56.5% | 36 GB |
JANG_2S at 38 GB scores 79%, while MLX 2-bit at 36 GB scores 56.5%: a +22.5-point gain at nearly the same size. On MoE models with 256 experts, JANG's tier-based allocation protects the <2% of critical parameters while compressing the remaining ~98% (the expert MLP weights) to 2-bit.
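As a rough illustration of what tier-based allocation means, the sketch below maps parameter names to the JANG_2S bit widths listed under Specs (CRITICAL=6, IMPORTANT=4, COMPRESS=2). The matching rules here are illustrative assumptions, not the actual JANG grading heuristics:

# Illustrative sketch of tier-based bit allocation (not the actual JANG rules).
# CRITICAL layers (embeddings, router/gates, norms) get 6-bit, attention gets
# 4-bit, and the bulk expert MLP weights are compressed to 2-bit.
JANG_2S = {"CRITICAL": 6, "IMPORTANT": 4, "COMPRESS": 2}

def tier_for(param_name: str) -> str:
    if any(k in param_name for k in ("embed", "lm_head", "router", "gate", "norm")):
        return "CRITICAL"
    if "self_attn" in param_name or "linear_attn" in param_name:
        return "IMPORTANT"
    return "COMPRESS"  # expert MLPs and everything else

for name in ("model.embed_tokens.weight",
             "model.layers.0.self_attn.q_proj.weight",
             "model.layers.0.mlp.experts.17.down_proj.weight"):
    print(f"{name}: {JANG_2S[tier_for(name)]}-bit")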
Specs
| Metric | Value |
|---|---|
| Source | Qwen3.5-122B-A10B |
| Architecture | MoE (256 experts, 8 active) + GatedDeltaNet SSM |
| Profile | JANG_2S (CRITICAL=6, IMPORTANT=4, COMPRESS=2) |
| GPU Memory | ~35 GB |
| Best for | 64+ GB Mac |
| VLM | Yes |
| Speed | 54 tok/s |
| Format | v2 (MLX-native, instant load) |
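A quick back-of-envelope check, assuming 122B total parameters and the 38 GB file size from the results table, shows why the footprint lands close to a pure 2-bit model:

# Back-of-envelope: effective bits per weight, ignoring quantization metadata.
size_bits = 38e9 * 8   # 38 GB file size, decimal gigabytes (assumption)
params = 122e9         # total parameters, per the model name
print(f"{size_bits / params:.2f} bits/weight")  # ~2.49

The effective average sits slightly above 2 bits because the small CRITICAL and IMPORTANT tiers, plus quantization scales, add overhead on top of the 2-bit expert weights.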
Install
pip install "jang[mlx]"
For Vision-Language models:
pip install "jang[vlm]"
Quick Start
from jang_tools.loader import load_jang_model
from mlx_lm.sample_utils import make_sampler
from mlx_lm.generate import generate_step
import mlx.core as mx

model, tokenizer = load_jang_model("JANGQ-AI/Qwen3.5-122B-A10B-JANG_2S")
sampler = make_sampler(temp=0.7)

tokens = tokenizer.encode("What is photosynthesis?")
# Generate up to 200 tokens, stopping early at the EOS token
for tok, _ in generate_step(prompt=mx.array(tokens), model=model,
                            max_tokens=200, sampler=sampler):
    # generate_step may yield MLX scalars or Python ints depending on version
    t = tok.item() if hasattr(tok, "item") else int(tok)
    if t == tokenizer.eos_token_id:
        break
    print(tokenizer.decode([t]), end="", flush=True)
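The loop above feeds a raw prompt; for chat-style use it is usually better to run the message through the tokenizer's chat template first. A minimal sketch, continuing from the variables above (the enable_thinking flag mirrors the VLM example below and is an assumption for this tokenizer):

# Sketch: build the prompt via the chat template before encoding.
messages = [{"role": "user", "content": "What is photosynthesis?"}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False, enable_thinking=False)
tokens = tokenizer.encode(prompt)
# ...then reuse the generate_step loop above unchanged.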
VLM Inference
from jang_tools.loader import load_jang_vlm_model
from mlx_vlm import generate

model, processor = load_jang_vlm_model("JANGQ-AI/Qwen3.5-122B-A10B-JANG_2S")

# Build a chat prompt that references the image, with thinking disabled
prompt = processor.tokenizer.apply_chat_template(
    [{"role": "user", "content": [
        {"type": "image", "image": "photo.jpg"},
        {"type": "text", "text": "Describe this image."},
    ]}],
    add_generation_prompt=True, tokenize=False, enable_thinking=False)

result = generate(model, processor, prompt, ["photo.jpg"], max_tokens=200)
print(result.text)
Links
- GitHub | HuggingFace | MLX Studio | PyPI | Format Spec
한국어 (Korean)
Qwen3.5-122B (MoE) — JANG 2S
JANG is a mixed-precision quantization format for Apple Silicon. It plays the same role for MLX that GGUF plays elsewhere.
| Model | MMLU | Size |
|---|---|---|
| JANG_2S | 79% | 38 GB |
| MLX 2-bit | 56.5% | 36 GB |
Install
pip install "jang[mlx]"
Compatibility
Currently only **MLX Studio** natively supports the JANG format; LM Studio, Ollama, and others do not support it yet.
GitHub · HuggingFace · MLX Studio · PyPI
Created by Jinho Jang · jangq.ai · @dealignai