This model is text-only; the vision tower was removed.
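As a rough sketch of what removing a vision tower amounts to, the snippet below drops vision-tower tensors from a safetensors shard. The `visual.` weight prefix is an assumption borrowed from earlier Qwen VL releases, not a confirmed detail of this conversion.

```python
# Illustrative only: drop vision-tower tensors from one safetensors shard.
# Assumes the vision weights live under a "visual." prefix, as in earlier
# Qwen VL models; the actual Qwen3.5 layout may differ.
from safetensors.torch import load_file, save_file

weights = load_file("model.safetensors")
text_only = {k: v for k, v in weights.items() if not k.startswith("visual.")}
save_file(text_only, "model-text-only.safetensors")
```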
Brainwaves
| Quant   | arc   | arc/e | boolq | hswag | obkqa | piqa  | wino  |
|---------|-------|-------|-------|-------|-------|-------|-------|
| qx85    | 0.456 | 0.519 | 0.622 | 0.704 | 0.392 | 0.774 | 0.680 |
| qx64-hi | 0.445 | 0.513 | 0.622 | 0.704 | 0.384 | 0.786 | 0.702 |
| mxfp4   | 0.455 | 0.510 | …     |       |       |       |       |
| Quant   | Perplexity    | Speed (t/s) | Memory   |
|---------|---------------|-------------|----------|
| qx85    | 3.762 ± 0.024 | 448         | 98.86 GB |
| qx64-hi | 3.778 ± 0.024 | 435         | 98.28 GB |
| mxfp4   | 3.909 ± 0.025 | 535         | 71.94 GB |
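The column abbreviations correspond to the usual lm-evaluation-harness tasks (arc = arc_challenge, arc/e = arc_easy, hswag = hellaswag, obkqa = openbookqa, wino = winogrande). A hedged sketch of reproducing such scores, assuming your mlx-lm version ships the mlx_lm.evaluate entry point and that these task names match:

```bash
mlx_lm.evaluate --model Qwen3.5-122B-A10B-Text-qx64-hi-mlx \
    --tasks arc_challenge arc_easy boolq hellaswag openbookqa piqa winogrande
```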
For comparison, here are the brainwaves from Qwen3-Coder-Next. The two architectures are structurally different, so the scores are not directly comparable.
| Quant    | arc   | arc/e | boolq | hswag | obkqa | piqa  | wino  |
|----------|-------|-------|-------|-------|-------|-------|-------|
| mxfp8    | 0.514 | 0.709 | 0.884 | 0.639 | 0.420 | 0.748 | 0.611 |
| mxfp4    | 0.528 | 0.713 | 0.880 | 0.630 | 0.428 | 0.744 | 0.619 |
| qx53n    | 0.520 | 0.714 | 0.872 | 0.630 | 0.438 | 0.744 | 0.599 |
| qx64n-hi | 0.527 | 0.707 | 0.880 | 0.631 | 0.426 | 0.744 | 0.580 |
| qx64n    | 0.511 | 0.703 | 0.881 | 0.631 | 0.420 | 0.746 | 0.598 |
| qx86n-hi | 0.518 | 0.710 | 0.882 | 0.626 | 0.416 | 0.745 | 0.601 |
| qx86n    | 0.515 | 0.712 | 0.881 | 0.627 | 0.414 | 0.744 | 0.590 |
The qx85 quant uses 5-bit data stores, with 8-bit attention paths, embeddings, and head.
It was shaped to fit in the available RAM of a 128 GB Mac while leaving room for a decent-sized context.
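For readers who want to experiment with similar mixes, here is a minimal sketch using mlx-lm's convert() and its quant_predicate hook. The layer-name patterns, group sizes, and output path are illustrative assumptions, not the actual qx85 recipe used for this model.

```python
# A minimal sketch of a mixed-precision recipe in the spirit of qx85,
# assuming mlx-lm's convert() and its quant_predicate hook.
# Path patterns and group sizes are guesses, NOT the actual qx recipe.
from mlx_lm import convert

def qx85_like(path, module, config):
    # 8-bit for attention paths, embeddings, and the output head.
    if any(k in path for k in ("self_attn", "embed_tokens", "lm_head")):
        return {"group_size": 64, "bits": 8}
    # 5-bit for the remaining data stores (MLP weights, etc.).
    return {"group_size": 64, "bits": 5}

convert(
    "Qwen/Qwen3.5-122B-A10B",
    mlx_path="Qwen3.5-122B-A10B-Text-qx85-mlx",
    quantize=True,
    quant_predicate=qx85_like,
)
```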
The qx formula for Qwen3.5 is still being refined; the model will be updated in place if a more stable layer combination is found.
-G
This model, Qwen3.5-122B-A10B-Text-qx64-hi-mlx, was converted to MLX format from Qwen/Qwen3.5-122B-A10B using mlx-lm version 0.30.8.

```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Load the quantized model and its tokenizer from the local path or HF repo.
model, tokenizer = load("Qwen3.5-122B-A10B-Text-qx64-hi-mlx")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_dict=False,
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
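The model can also be exercised from the command line via the mlx_lm.generate console script that ships with mlx-lm; the prompt and token budget below are just examples:

```bash
mlx_lm.generate --model Qwen3.5-122B-A10B-Text-qx64-hi-mlx \
    --prompt "hello" --max-tokens 256
```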