# Qwen3.5 Heretic Quants
FP8 block-wise quantization of llmfan46/Qwen3.5-35B-A3B-heretic-v2 (abliterated via Heretic v1.2.0 MPOA+SOMA).
The quantization format matches Qwen/Qwen3.5-35B-A3B-FP8 exactly: same tensor names, same scale format, same skip list.
| Parameter | Value |
|---|---|
| Method | FP8 E4M3 block-wise |
| Block size | 128×128 |
| Scale dtype | BF16 |
| Scale naming | weight_scale_inv |
| Activation scheme | Dynamic |
| Original size | ~66 GB (BF16) |
| Quantized size | ~35 GB |
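The block-wise scheme in the table can be sketched in a few lines. This is an illustrative NumPy mock-up, not the converter's actual code: it derives one dequantization scale per 128×128 block (stored as `weight_scale_inv`, so that `dequantized ≈ fp8_value * weight_scale_inv`) without performing real E4M3 rounding.

```python
import numpy as np

FP8_E4M3_MAX = 448.0   # largest finite value representable in FP8 E4M3
BLOCK = 128            # block size from the table above

def blockwise_scale_inv(weight: np.ndarray) -> np.ndarray:
    """One dequant scale per 128x128 block (illustrative sketch only)."""
    rows, cols = weight.shape
    n_r = (rows + BLOCK - 1) // BLOCK   # number of block rows (ceil div)
    n_c = (cols + BLOCK - 1) // BLOCK   # number of block cols
    scale_inv = np.empty((n_r, n_c), dtype=np.float32)
    for i in range(n_r):
        for j in range(n_c):
            block = weight[i * BLOCK:(i + 1) * BLOCK,
                           j * BLOCK:(j + 1) * BLOCK]
            amax = float(np.abs(block).max())
            # Map the block's max magnitude onto the E4M3 range; the stored
            # tensor is the multiplier used at dequantization time.
            scale_inv[i, j] = amax / FP8_E4M3_MAX if amax > 0 else 1.0
    return scale_inv

w = np.random.randn(256, 384).astype(np.float32)
s = blockwise_scale_inv(w)
print(s.shape)  # → (2, 3): one scale per 128x128 block
```

In the real checkpoint these scales are cast to BF16, matching the "Scale dtype" row above.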
**Quantized:** all linear projections in attention (`q_proj`/`k_proj`/`v_proj`/`o_proj`), GatedDeltaNet (`in_proj_qkv`, `in_proj_z`, `out_proj`), and MLP experts (`gate_proj`/`up_proj`/`down_proj` for all 256 routed experts plus the shared expert).
**Skipped (kept in BF16):** embeddings, `lm_head`, norms, MoE router gates, precision-sensitive GatedDeltaNet parameters (`conv1d`, `in_proj_a`, `in_proj_b`), and the vision tower.
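The quantize/skip split above can be expressed as a simple name predicate. This is a hypothetical sketch (the tensor-name patterns are assumptions based on the lists above, not the converter's actual logic):

```python
# Linear projection weights that receive FP8 block-wise quantization.
QUANT_SUFFIXES = (
    "q_proj.weight", "k_proj.weight", "v_proj.weight", "o_proj.weight",
    "in_proj_qkv.weight", "in_proj_z.weight", "out_proj.weight",
    "gate_proj.weight", "up_proj.weight", "down_proj.weight",
)

# Name fragments kept in BF16: embeddings, lm_head, norms, router
# gates, precision-sensitive GatedDeltaNet params, vision tower.
SKIP_SUBSTRINGS = (
    "embed_tokens", "lm_head", "norm", "gate.weight",
    "conv1d", "in_proj_a", "in_proj_b", "visual",
)

def should_quantize(name: str) -> bool:
    """Return True if this tensor would be stored as FP8."""
    if any(frag in name for frag in SKIP_SUBSTRINGS):
        return False
    return name.endswith(QUANT_SUFFIXES)
```

Note that `"gate.weight"` (the MoE router) does not collide with `"gate_proj.weight"` (the expert projection), so routers stay in BF16 while expert gates are quantized.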
```bash
CUDA_DEVICE_ORDER=PCI_BUS_ID \
CUDA_VISIBLE_DEVICES=0 \
SGLANG_ENABLE_JIT_DEEPGEMM=0 \
SGLANG_ENABLE_SPEC_V2=1 \
python -m sglang.launch_server \
  --model-path nivvis/Qwen3.5-35B-A3B-heretic-v2-FP8 \
  --tool-call-parser qwen3_coder \
  --port 30000 --host 0.0.0.0 \
  --mem-fraction-static 0.85 \
  --context-length 32768 \
  --attention-backend triton \
  --reasoning-parser qwen3 \
  --mamba-scheduler-strategy extra_buffer \
  --trust-remote-code
```
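Once the server is up, it can be queried through SGLang's OpenAI-compatible endpoint. A minimal sketch (the prompt and `max_tokens` value are arbitrary; `chat_template_kwargs` is the request field used to control thinking mode):

```python
import json
import urllib.request

# Chat request payload for the server launched above (port 30000).
payload = {
    "model": "nivvis/Qwen3.5-35B-A3B-heretic-v2-FP8",
    "messages": [
        {"role": "user", "content": "Explain FP8 block-wise quantization."}
    ],
    "max_tokens": 256,
    # Suppress <think> reasoning for this request.
    "chat_template_kwargs": {"enable_thinking": False},
}

req = urllib.request.Request(
    "http://localhost:30000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# with urllib.request.urlopen(req) as resp:   # uncomment with the
#     print(json.load(resp))                  # server running
```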
```python
from vllm import LLM

model = LLM("nivvis/Qwen3.5-35B-A3B-heretic-v2-FP8", trust_remote_code=True)
```
Tested on an NVIDIA RTX PRO 6000 Blackwell (98 GB).
Speculative decoding via `--speculative-algo NEXTN` works, but with a low accept rate (~0.25). Restoring MTP by copying the head from the original Qwen model and fine-tuning it against heretic hidden states is a potential future improvement.

The model emits `<think>` reasoning by default. Pass `"chat_template_kwargs": {"enable_thinking": false}` in the request to disable it.

Base model: Qwen/Qwen3.5-35B-A3B-Base