VLLM + MTP + NVFP4 doesn't work

#16 opened by catplusplus

This seems like a promising model, and I am grateful to NVIDIA for contributing it to the open-weights community. However, MTP does not seem to work, at least with vLLM and the NVFP4 version. My coding agent has identified at least the first problem, but there are further issues. Without the speculative config, the model works well.

--- a/vllm/transformers_utils/model_arch_config_convertor.py
+++ b/vllm/transformers_utils/model_arch_config_convertor.py
@@ -445,4 +445,5 @@ MODEL_ARCH_CONFIG_CONVERTORS = {
     "ernie_mtp": ErnieMTPModelArchConfigConvertor,
     "pangu_ultra_moe_mtp": PanguUltraMoeMTPModelArchConfigConvertor,
     "longcat_flash_mtp": LongCatFlashMTPModelArchConfigConvertor,
+    "nemotron_h_mtp": DeepSeekMTPModelArchConfigConvertor,  # Reuse DeepSeek MTP convertor - both use num_nextn_predict_layers

So as a quick fix I would suggest clarifying the limits of MTP support for quantized models, and then hopefully we can get a proper fix?
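For context, the diff registers the `nemotron_h_mtp` architecture by reusing the DeepSeek MTP convertor, on the grounds that both configs expose `num_nextn_predict_layers`. A minimal sketch of that registry pattern (the class body here is illustrative, not vLLM's actual implementation):

```python
# Illustrative sketch of the registry the diff patches; only the dict-of-
# convertors pattern and the num_nextn_predict_layers field come from the diff.
class DeepSeekMTPModelArchConfigConvertor:
    """Stand-in for vLLM's convertor class (hypothetical body)."""

    @staticmethod
    def num_speculative_layers(hf_config: dict) -> int:
        # Both DeepSeek MTP and Nemotron-H MTP configs carry this field.
        return hf_config.get("num_nextn_predict_layers", 0)


MODEL_ARCH_CONFIG_CONVERTORS = {
    "deepseek_mtp": DeepSeekMTPModelArchConfigConvertor,
    # The proposed fix: map the Nemotron architecture to the same convertor.
    "nemotron_h_mtp": DeepSeekMTPModelArchConfigConvertor,
}

convertor = MODEL_ARCH_CONFIG_CONVERTORS["nemotron_h_mtp"]
print(convertor.num_speculative_layers({"num_nextn_predict_layers": 1}))  # → 1
```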

What is the startup command that you are using here?

https://github.com/NVIDIA-NeMo/Nemotron/tree/main/usage-cookbook/Nemotron-3-Super/SparkDeploymentGuide

This is where we will keep the most up-to-date configs for Spark for this model, including the fixes for MTP!
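For reference, a generic launch sketch: vLLM takes speculative-decoding settings via the `--speculative-config` JSON flag. The model ID placeholder and the exact `method` string for Nemotron MTP are assumptions; check the cookbook linked above for the verified command.

```shell
# Hypothetical example only: <model-id> is a placeholder, and the "method"
# value is an assumption borrowed from DeepSeek-style MTP configs.
vllm serve <model-id> \
  --speculative-config '{"method": "deepseek_mtp", "num_speculative_tokens": 1}'
```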
