# Qwen3.5 Heretic Quants
FP8 block-wise quantization of coder3101/Qwen3.5-122B-A10B-heretic-v2 (abliterated via Heretic).
Quantization format matches Qwen/Qwen3.5-122B-A10B-FP8 exactly (same tensor names, scale format, skip list).
| Parameter | Value |
|---|---|
| Method | FP8 E4M3 block-wise |
| Block size | 128x128 |
| Scale dtype | BF16 |
| Scale naming | weight_scale_inv |
| Activation scheme | Dynamic |
| Original size | ~227 GB (BF16) |
| Quantized size | ~117 GB |
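The block-wise scheme in the table above can be sketched as follows. This is a minimal NumPy illustration, not the actual conversion code: FP8 storage is simulated in float32 clamped to the E4M3 dynamic range, and the function names are made up for this example. The returned `scale_inv` plays the role of the checkpoint's `weight_scale_inv` tensors (dequantization multiplies each 128×128 block by its inverse scale).

```python
import numpy as np

E4M3_MAX = 448.0   # largest finite magnitude representable in FP8 E4M3
BLOCK = 128        # block size used by the FP8 checkpoints

def blockwise_fp8_quantize(w: np.ndarray):
    """Quantize a 2-D weight with one scale per 128x128 block.

    FP8 is simulated: values are scaled into [-E4M3_MAX, E4M3_MAX] and kept
    as float32; a real kernel would cast to fp8_e4m3 at this point.
    Returns (q, scale_inv) with scale_inv shaped (n_block_rows, n_block_cols).
    """
    rows, cols = w.shape
    nbr, nbc = -(-rows // BLOCK), -(-cols // BLOCK)  # ceil-divide
    q = np.empty_like(w, dtype=np.float32)
    scale_inv = np.empty((nbr, nbc), dtype=np.float32)
    for i in range(nbr):
        for j in range(nbc):
            blk = w[i * BLOCK:(i + 1) * BLOCK, j * BLOCK:(j + 1) * BLOCK]
            amax = np.abs(blk).max()
            scale = E4M3_MAX / amax if amax > 0 else 1.0
            scale_inv[i, j] = 1.0 / scale  # stored as weight_scale_inv
            q[i * BLOCK:(i + 1) * BLOCK, j * BLOCK:(j + 1) * BLOCK] = np.clip(
                blk * scale, -E4M3_MAX, E4M3_MAX)
    return q, scale_inv

def blockwise_dequantize(q: np.ndarray, scale_inv: np.ndarray):
    """Expand per-block inverse scales to full size and multiply."""
    rows, cols = q.shape
    s = np.kron(scale_inv, np.ones((BLOCK, BLOCK), dtype=np.float32))
    return q * s[:rows, :cols]
```

In the real checkpoint the inverse scales are stored in BF16 alongside each FP8 weight, matching the upstream Qwen FP8 release so existing inference kernels pick them up unchanged.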
**Quantized (FP8):** all linear projections in attention (`q/k/v/o_proj`), GatedDeltaNet (`in_proj_qkv`, `in_proj_z`, `out_proj`), and the MLP experts (`gate/up/down_proj` for all experts plus the shared expert).

**Kept in BF16:** embeddings, `lm_head`, norms, MoE router gates, precision-sensitive GatedDeltaNet parameters (`conv1d`, `in_proj_a`, `in_proj_b`), and the vision tower.
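A name-based filter implementing this quantize/skip split might look like the sketch below. The exact tensor-name patterns are assumptions based on common Qwen-style checkpoints, not read from this repo, so treat them as illustrative.

```python
# Hypothetical filter mirroring the quantized/skipped split described above.
QUANT_SUFFIXES = (
    "q_proj.weight", "k_proj.weight", "v_proj.weight", "o_proj.weight",
    "in_proj_qkv.weight", "in_proj_z.weight", "out_proj.weight",
    "gate_proj.weight", "up_proj.weight", "down_proj.weight",
)
SKIP_SUBSTRINGS = (
    "embed_tokens", "lm_head", "norm",
    "mlp.gate.weight",      # MoE router gate (distinct from gate_proj)
    "conv1d", "in_proj_a", "in_proj_b",  # precision-sensitive GDN params
    "visual",               # vision tower
)

def should_quantize(name: str) -> bool:
    """True if tensor `name` goes to FP8, False if it stays in BF16."""
    if any(s in name for s in SKIP_SUBSTRINGS):
        return False
    return name.endswith(QUANT_SUFFIXES)
```

The skip list matters because per-channel-sensitive tensors (router gates, GDN state projections, norms) degrade model quality disproportionately when block-quantized, while costing little memory to keep in BF16.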
Serve with SGLang:

```shell
python -m sglang.launch_server \
  --model-path nivvis/Qwen3.5-122B-A10B-heretic-v2-FP8 \
  --tp 2 \
  --port 30000 --host 0.0.0.0 \
  --mem-fraction-static 0.85 \
  --context-length 32768 \
  --attention-backend triton \
  --reasoning-parser qwen3 \
  --tool-call-parser qwen3_coder \
  --mamba-scheduler-strategy extra_buffer \
  --trust-remote-code
```
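Once up, the server exposes an OpenAI-compatible endpoint on port 30000. A request might look like the following; the prompt is illustrative, and `chat_template_kwargs` is SGLang's passthrough for chat-template options such as toggling thinking mode:

```shell
curl http://localhost:30000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "nivvis/Qwen3.5-122B-A10B-heretic-v2-FP8",
    "messages": [{"role": "user", "content": "Write a binary search in Python."}],
    "chat_template_kwargs": {"enable_thinking": false},
    "max_tokens": 512
  }'
```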
Or load with vLLM:

```python
from vllm import LLM

model = LLM(
    "nivvis/Qwen3.5-122B-A10B-heretic-v2-FP8",
    tensor_parallel_size=2,
    trust_remote_code=True,
)
```
**Notes**

- Speculative decoding (`--speculative-algo NEXTN`) works, but the accept rate is only ~0.25. Restoring MTP by copying the head from the original Qwen model and fine-tuning it against heretic hidden states is a potential future improvement.
- The model emits `<think>` reasoning by default. Pass `"chat_template_kwargs": {"enable_thinking": false}` in the request to disable it.

**Base model:** Qwen/Qwen3.5-122B-A10B