GPTQ quantization
Hi! Could you tell me how feasible it is to independently prepare a GPTQ version of this model? We're exploring production deployment of Huihui-Qwen3-Coder-Next-abliterated via vLLM with 4-bit quantization. Due to the custom qwen3_next architecture and the MoE setup, standard tools (AutoGPTQ/AutoAWQ) fail to recognize the model type. Any guidance on quantization workflows or pre-quantized weights would be greatly appreciated.
Architecture & Quantization Support
Does Qwen3NextForCausalLM (custom qwen3_next architecture with mixed linear_attention/full_attention layers and 512-expert MoE) support standard 4-bit quantization frameworks like AutoGPTQ or AutoAWQ?
If not officially supported, are there known workarounds or custom quantization scripts you recommend?
Pre-quantized Versions
Do you provide official GPTQ or AWQ 4-bit quantized versions of this model?
If yes, please share the Hugging Face Hub link or the conversion script used.
MoE-Specific Quantization Guidance
Given the extreme MoE configuration (num_experts=512, num_experts_per_tok=10):
Should all 512 experts be quantized, or only the top-k activated ones?
Are there known accuracy drops when quantizing MoE layers vs. dense layers?
Any recommended group_size or desc_act settings for MoE stability?
Critical Parameters for Quantization
For optimal quantization quality, please confirm:
Recommended group_size (128 for Marlin compatibility?)
desc_act=True or False?
Required calibration dataset (e.g., c4-new, domain-specific code data)?
Minimum VRAM needed for quantization (e.g., 24GB for 7B-class models?)
Since our storage space on hf.co is currently limited, there are no plans to publish the weights in other formats.
Maybe you can try llm-compressor.
Creation details
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.utils import dispatch_for_generation
# NOTE: Requires a minimum of transformers 4.57.0
MODEL_ID = "/data/model-cache/Huihui-Qwen3-Coder-Next-abliterated"
# Load model.
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
# Select calibration dataset.
DATASET_ID = "/data/model-cache/ultrachat_200k"
DATASET_SPLIT = "train_sft"
# Select the number of calibration samples. 512 is a good place to start;
# increasing the number of samples can improve accuracy (1024 are used here).
NUM_CALIBRATION_SAMPLES = 1024
MAX_SEQUENCE_LENGTH = 4096
# Load dataset and preprocess.
ds = load_dataset(DATASET_ID, split=f"{DATASET_SPLIT}[:{NUM_CALIBRATION_SAMPLES}]")
ds = ds.shuffle(seed=42)
def preprocess(example):
    return {
        "text": tokenizer.apply_chat_template(
            example["messages"],
            tokenize=False,
        )
    }
ds = ds.map(preprocess)
# Tokenize inputs.
def tokenize(sample):
    return tokenizer(
        sample["text"],
        padding=False,
        max_length=MAX_SEQUENCE_LENGTH,
        truncation=True,
        add_special_tokens=False,
    )
ds = ds.map(tokenize, remove_columns=ds.column_names)
# Configure the quantization algorithm to run:
# quantize the weights to 4-bit with GPTQ using a group size of 128 (the W4A16 default).
recipe = GPTQModifier(
    targets="Linear",
    scheme="W4A16",
    weight_observer="mse",
    # Keep quantization-sensitive layers in full precision: the LM head,
    # the MoE router gates, and the linear-attention projections.
    ignore=[
        "lm_head",
        "re:.*mlp.gate$",
        "re:.*mlp.shared_expert_gate$",
        "re:.*linear_attn.*",
    ],
)
# Apply algorithms.
oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
)
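# Optional sanity check (not part of the original recipe): confirm the
# quantized model still generates coherent text before saving.
# dispatch_for_generation moves the model onto the available GPU(s);
# this assumes at least one CUDA device is present.
dispatch_for_generation(model)
sample = tokenizer("def fibonacci(n):", return_tensors="pt").input_ids.to(model.device)
output = model.generate(sample, max_new_tokens=64)
print(tokenizer.decode(output[0]))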
# Save to disk compressed.
SAVE_DIR = MODEL_ID + "-quantized.w4a16"
model.save_pretrained(SAVE_DIR, save_compressed=True)
tokenizer.save_pretrained(SAVE_DIR)
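For deployment, the compressed checkpoint should load directly in vLLM, which detects the compressed-tensors quantization format from the saved config. A minimal sketch, assuming a recent vLLM build with qwen3_next support and SAVE_DIR pointing at the output directory from the script above:
from vllm import LLM, SamplingParams
llm = LLM(model=SAVE_DIR)  # quantization format is picked up from the checkpoint config
params = SamplingParams(temperature=0.0, max_tokens=128)
outputs = llm.generate(["Write a Python function that reverses a string."], params)
print(outputs[0].outputs[0].text)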
I will try it later.