MTP results with vLLM inside

#10
by unoid - opened

vLLM Qwen3.5 122B Benchmark Results
NVFP4 Quantization - MTP Comparison - March 3, 2026

RTX Pro 6000 Blackwell - 300W - vLLM 0.16.1rc1.dev173+g8fa68a8ce

[benchmark charts attached: throughput with MTP enabled vs. disabled]

The 122B model benefits from MTP, but the 27B FP8 model scales much better with MTP speculative decoding: the 27B's dense next-token pass is expensive, so drafting saves more per step than it does for the 122B, whose A10B MoE decode (only ~10B active parameters per token) is already comparatively cheap.
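As a back-of-the-envelope sketch of why acceptance behavior dominates MTP gains: with k draft tokens per step and a per-token acceptance probability p, the expected tokens committed per target-model verification step follow a geometric sum (a rejection discards all later drafts). The 70% acceptance rate below is a hypothetical illustration, not a measured number from these benchmarks.

```python
def expected_tokens_per_step(p: float, k: int) -> float:
    """Expected tokens committed per verify step with k speculative
    tokens and per-token acceptance probability p: 1 + p + ... + p^k
    (the leading 1 is the token the target model samples itself)."""
    return sum(p**i for i in range(k + 1))

# num_speculative_tokens=3 (as in the launch command) with a
# hypothetical 70% acceptance rate:
print(round(expected_tokens_per_step(0.7, 3), 3))  # → 2.533
```

Note the diminishing returns: even at p=0.7, the fourth draft token only contributes 0.343 expected tokens, which is part of why raising num_speculative_tokens past 3-4 rarely pays off when the verify pass is cheap (as with an A10B MoE decode).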

Launch command?

docker run -d \
  --name vllm-server \
  --gpus all \
  --ulimit nofile=1048576:1048576 \
  --ipc=host \
  --shm-size=32g \
  -p 8000:8000 \
  -v /mnt/user/AI/models:/models \
  --env VLLM_MARLIN_USE_ATOMIC_ADD=1 \
  --env VLLM_USE_DEEP_GEMM=1 \
  --env TORCH_CUDA_ARCH_LIST="12.0" \
  vllm/vllm-openai:cu130-nightly \
  /models/Qwen3.5-122B-A10B-NVFP4-v3 \
  --served-model-name Qwen3.5 \
  --tensor-parallel-size 1 \
  --dtype auto \
  --trust-remote-code \
  --max-model-len 262144 \
  --gpu-memory-utilization 0.90 \
  --kv-cache-dtype fp8_e4m3 \
  --max-num-seqs 6 \
  --max-num-batched-tokens 8192 \
  --enable-prefix-caching \
  --enable-auto-tool-choice \
  --tool-call-parser qwen3_coder \
  --reasoning-parser qwen3 \
  --mm-encoder-tp-mode data \
  --mm-processor-cache-type shm \
  --override-generation-config '{"repetition_penalty": 1.0, "temperature": 1.0, "top_p": 0.95, "presence_penalty": 1.5, "top_k": 20}' \
  --speculative-config '{"method": "qwen3_next_mtp", "num_speculative_tokens": 3}'
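Once the container is up, a minimal smoke-test request against the OpenAI-compatible endpoint might look like the following (the model name must match --served-model-name above; the prompt and max_tokens are arbitrary placeholders):

```shell
curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Qwen3.5",
    "messages": [{"role": "user", "content": "Say hello."}],
    "max_tokens": 128
  }'
```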

Here is the issue I'm running into: vllm/vllm-openai:cu130-nightly "works" despite being buggy, and it avoids the '!!!!' output issue from the model (with MTP enabled or disabled). The problem is that I don't think the nightly image matches the main branch of the vLLM git repo. When building from source, I've applied all of the GB10 patches and I'm still getting '!!!' output, and I'm at a loss for how to fix it. The nightly image sort of works (buggy, occasional crashes), but I'm only getting around 10-11 TPS even with MTP enabled.

@unoid
The server starts up fine, but when I try to run a curl request I get this:

(EngineCore_DP0 pid=213) /usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/fla/ops/utils.py:113: UserWarning: Input tensor shape suggests potential format mismatch: seq_len (16) < num_heads (64). This may indicate the inputs were passed in head-first format [B, H, T, ...] when head_first=False was specified. Please verify your input tensor format matches the expected shape [B, T, H, ...].
(EngineCore_DP0 pid=213) return fn(*contiguous_args, **contiguous_kwargs)
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [dump_input.py:72] Dumping input data for V1 LLM engine (v0.16.1rc1.dev206+g097eb544e) with config: model='/models/Qwen3.5-122B-A10B-NVFP4-v3', speculative_config=SpeculativeConfig(method='mtp', model='/models/Qwen3.5-122B-A10B-NVFP4-v3', num_spec_tokens=3), tokenizer='/models/Qwen3.5-122B-A10B-NVFP4-v3', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=262144, download_dir=None, load_format=auto, tensor_parallel_size=1, pipeline_parallel_size=1, data_parallel_size=1, disable_custom_all_reduce=False, quantization=compressed-tensors, enforce_eager=False, enable_return_routed_experts=False, kv_cache_dtype=fp8_e4m3, device_config=cuda, structured_outputs_config=StructuredOutputsConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_parser='qwen3', reasoning_parser_plugin='', enable_in_reasoning=False), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, kv_cache_metrics=False, kv_cache_metrics_sample=0.01, cudagraph_metrics=False, enable_layerwise_nvtx_tracing=False, enable_mfu_metrics=False, enable_mm_processor_stats=False, enable_logging_iteration_details=False), seed=0, served_model_name=Qwen3.5, enable_prefix_caching=True, enable_chunked_prefill=True, pooler_config=None, compilation_config={'level': None, 'mode': <CompilationMode.VLLM_COMPILE: 3>, 'debug_dump_path': None, 'cache_dir': '/root/.cache/vllm/torch_compile_cache/27918726ec', 'compile_cache_save_format': 'binary', 'backend': 'inductor', 'custom_ops': ['none'], 'splitting_ops': ['vllm::unified_attention', 'vllm::unified_attention_with_output', 'vllm::unified_mla_attention', 'vllm::unified_mla_attention_with_output', 'vllm::mamba_mixer2', 'vllm::mamba_mixer', 'vllm::short_conv', 'vllm::linear_attention', 
'vllm::plamo2_mamba_mixer', 'vllm::gdn_attention_core', 'vllm::kda_attention', 'vllm::sparse_attn_indexer', 'vllm::rocm_aiter_sparse_attn_indexer', 'vllm::unified_kv_cache_update', 'vllm::unified_mla_kv_cache_update'], 'compile_mm_encoder': False, 'compile_sizes': [], 'compile_ranges_split_points': [8192], 'inductor_compile_config': {'enable_auto_functionalized_v2': False, 'combo_kernels': True, 'benchmark_combo_kernel': True}, 'inductor_passes': {}, 'cudagraph_mode': <CUDAGraphMode.PIECEWISE: 1>, 'cudagraph_num_of_warmups': 1, 'cudagraph_capture_sizes': [1, 2, 4, 8, 16, 24, 32, 40, 48], 'cudagraph_copy_inputs': False, 'cudagraph_specialize_lora': True, 'use_inductor_graph_partition': False, 'pass_config': {'fuse_norm_quant': False, 'fuse_act_quant': True, 'fuse_attn_quant': False, 'enable_sp': False, 'fuse_gemm_comms': False, 'fuse_allreduce_rms': False}, 'max_cudagraph_capture_size': 48, 'dynamic_shapes_config': {'type': <DynamicShapesType.BACKED: 'backed'>, 'evaluate_guards': False, 'assume_32_bit_indexing': False}, 'local_cache_dir': '/root/.cache/vllm/torch_compile_cache/27918726ec/rank_0_0/eagle_head', 'fast_moe_cold_start': False, 'static_all_moe_layers': ['language_model.model.layers.0.mlp.experts', 'language_model.model.layers.1.mlp.experts', 'language_model.model.layers.2.mlp.experts', 'language_model.model.layers.3.mlp.experts', 'language_model.model.layers.4.mlp.experts', 'language_model.model.layers.5.mlp.experts', 'language_model.model.layers.6.mlp.experts', 'language_model.model.layers.7.mlp.experts', 'language_model.model.layers.8.mlp.experts', 'language_model.model.layers.9.mlp.experts', 'language_model.model.layers.10.mlp.experts', 'language_model.model.layers.11.mlp.experts', 'language_model.model.layers.12.mlp.experts', 'language_model.model.layers.13.mlp.experts', 'language_model.model.layers.14.mlp.experts', 'language_model.model.layers.15.mlp.experts', 'language_model.model.layers.16.mlp.experts', 'language_model.model.layers.17.mlp.experts', 
'language_model.model.layers.18.mlp.experts', 'language_model.model.layers.19.mlp.experts', 'language_model.model.layers.20.mlp.experts', 'language_model.model.layers.21.mlp.experts', 'language_model.model.layers.22.mlp.experts', 'language_model.model.layers.23.mlp.experts', 'language_model.model.layers.24.mlp.experts', 'language_model.model.layers.25.mlp.experts', 'language_model.model.layers.26.mlp.experts', 'language_model.model.layers.27.mlp.experts', 'language_model.model.layers.28.mlp.experts', 'language_model.model.layers.29.mlp.experts', 'language_model.model.layers.30.mlp.experts', 'language_model.model.layers.31.mlp.experts', 'language_model.model.layers.32.mlp.experts', 'language_model.model.layers.33.mlp.experts', 'language_model.model.layers.34.mlp.experts', 'language_model.model.layers.35.mlp.experts', 'language_model.model.layers.36.mlp.experts', 'language_model.model.layers.37.mlp.experts', 'language_model.model.layers.38.mlp.experts', 'language_model.model.layers.39.mlp.experts', 'language_model.model.layers.40.mlp.experts', 'language_model.model.layers.41.mlp.experts', 'language_model.model.layers.42.mlp.experts', 'language_model.model.layers.43.mlp.experts', 'language_model.model.layers.44.mlp.experts', 'language_model.model.layers.45.mlp.experts', 'language_model.model.layers.46.mlp.experts', 'language_model.model.layers.47.mlp.experts', 'mtp.layers.0.mlp.experts']},
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [dump_input.py:79] Dumping scheduler output for model execution: SchedulerOutput(scheduled_new_reqs=[NewRequestData(req_id=chatcmpl-9c6d24d863947cee-aca8c64d,prompt_token_ids_len=16,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=1.0, top_p=0.95, top_k=20, min_p=0.0, seed=None, stop=[], stop_token_ids=[248044], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=128, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, structured_outputs=None, extra_args=None),block_ids=([1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13]),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None)], scheduled_cached_reqs=CachedRequestData(req_ids=[],resumed_req_ids=set(),new_token_ids_lens=[],all_token_ids_lens={},new_block_ids=[],num_computed_tokens=[],num_output_tokens=[]), num_scheduled_tokens={chatcmpl-9c6d24d863947cee-aca8c64d: 16}, total_num_scheduled_tokens=16, scheduled_spec_decode_tokens={}, scheduled_encoder_inputs={}, num_common_prefix_blocks=[0, 0, 0, 1], finished_req_ids=[], free_encoder_mm_hashes=[], preempted_req_ids=[], has_structured_output_requests=false, pending_structured_output_tokens=false, num_invalid_spec_tokens=null, kv_connector_metadata=null, ec_connector_metadata=null)
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [dump_input.py:81] Dumping scheduler stats: SchedulerStats(num_running_reqs=1, num_waiting_reqs=0, step_counter=0, current_wave=0, kv_cache_usage=0.0948905109489051, encoder_cache_usage=0.0, prefix_cache_stats=PrefixCacheStats(reset=False, requests=1, queries=16, hits=0, preempted_requests=0, preempted_queries=0, preempted_hits=0), connector_prefix_cache_stats=None, kv_cache_eviction_events=[], spec_decoding_stats=None, kv_connector_stats=None, waiting_lora_adapters={}, running_lora_adapters={}, cudagraph_stats=None, perf_stats=None)
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] EngineCore encountered a fatal error.
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] Traceback (most recent call last):
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 1093, in run_engine_core
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] engine_core.run_busy_loop()
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 1128, in run_busy_loop
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] self._process_engine_step()
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 1165, in _process_engine_step
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] outputs, model_executed = self.step_fn()
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] ^^^^^^^^^^^^^^
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 497, in step_with_batch_queue
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] model_output = future.result()
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] ^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File "/usr/lib/python3.12/concurrent/futures/_base.py", line 449, in result
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] return self.__get_result()
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] ^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File "/usr/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] raise self._exception
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/executor/uniproc_executor.py", line 80, in collective_rpc
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] result = run_method(self.driver_worker, method, args, kwargs)
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/serial_utils.py", line 459, in run_method
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] return func(*args, **kwargs)
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] ^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File "/usr/local/lib/python3.12/dist-packages/torch/utils/_contextlib.py", line 124, in decorate_context
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] return func(*args, **kwargs)
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] ^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/worker/gpu_worker.py", line 657, in sample_tokens
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] return self.model_runner.sample_tokens(grammar_output)
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File "/usr/local/lib/python3.12/dist-packages/torch/utils/_contextlib.py", line 124, in decorate_context
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] return func(*args, **kwargs)
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] ^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/worker/gpu_model_runner.py", line 3796, in sample_tokens
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] propose_draft_token_ids(sampled_token_ids)
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/worker/gpu_model_runner.py", line 3762, in propose_draft_token_ids
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] self._draft_token_ids = self.propose_draft_token_ids(
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/worker/gpu_model_runner.py", line 4223, in propose_draft_token_ids
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] draft_token_ids = self.drafter.propose(
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] ^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/spec_decode/eagle.py", line 653, in propose
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] ret_hidden_states = self.model(**model_kwargs)
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] ^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File "/usr/local/lib/python3.12/dist-packages/vllm/compilation/decorators.py", line 402, in call
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] return self.aot_compiled_fn(self, *args, **kwargs)
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/aot_compile.py", line 124, in call
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] return self.fn(*args, **kwargs)
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] ^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/qwen3_5_mtp.py", line 405, in forward
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] def forward(
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File "/usr/local/lib/python3.12/dist-packages/vllm/compilation/caching.py", line 198, in call
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] return self.optimized_call(*args, **kwargs)
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File "/usr/local/lib/python3.12/dist-packages/torch/fx/graph_module.py", line 936, in call_wrapped
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] return self._wrapped_call(self, *args, **kwargs)
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File "/usr/local/lib/python3.12/dist-packages/torch/fx/graph_module.py", line 455, in call
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] raise e
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File "/usr/local/lib/python3.12/dist-packages/torch/fx/graph_module.py", line 442, in call
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] return super(self.cls, obj).call(*args, **kwargs) # type: ignore[misc]
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1776, in _wrapped_call_impl
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] return self._call_impl(*args, **kwargs)
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1787, in _call_impl
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] return forward_call(*args, **kwargs)
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File ".583", line 27, in forward
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] submod_1 = self.submod_1(getitem, s59, getitem_1, getitem_2, getitem_3); getitem = getitem_1 = getitem_2 = submod_1 = None
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File "/usr/local/lib/python3.12/dist-packages/torch/fx/graph_module.py", line 936, in call_wrapped
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] return self._wrapped_call(self, *args, **kwargs)
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File "/usr/local/lib/python3.12/dist-packages/torch/fx/graph_module.py", line 455, in call
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] raise e
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File "/usr/local/lib/python3.12/dist-packages/torch/fx/graph_module.py", line 442, in call
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] return super(self.cls, obj).call(*args, **kwargs) # type: ignore[misc]
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1776, in _wrapped_call_impl
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] return self._call_impl(*args, **kwargs)
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1787, in _call_impl
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] return forward_call(*args, **kwargs)
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File ".585", line 6, in forward
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] unified_attention_with_output = torch.ops.vllm.unified_attention_with_output(query_2, key_2, value, output_3, 'mtp.layers.0.self_attn.attn', kv_cache_dummy_dep = unified_kv_cache_update); query_2 = key_2 = value = output_3 = unified_kv_cache_update = unified_attention_with_output = None
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File "/usr/local/lib/python3.12/dist-packages/torch/_ops.py", line 1209, in call
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] return self._op(*args, **kwargs)
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] ^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/attention/kv_transfer_utils.py", line 39, in wrapper
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] return func(*args, **kwargs)
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] ^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/attention/attention.py", line 702, in unified_attention_with_output
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] self.impl.forward(
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/attention/backends/flashinfer.py", line 1513, in forward
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] decode_wrapper.run(
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File "/usr/local/lib/python3.12/dist-packages/flashinfer/decode.py", line 1395, in run
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] self._cached_module.paged_run(*run_args)
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File "/usr/local/lib/python3.12/dist-packages/flashinfer/prefill.py", line 666, in paged_run
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] paged_run_func(
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File "python/tvm_ffi/cython/function.pxi", line 929, in tvm_ffi.core.Function.call
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File "", line 0, in __tvm_ffi_paged_run
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] File "/workspace/build/aot/generated/batch_prefill_with_kv_cache_dtype_q_bf16_dtype_kv_e4m3_dtype_o_bf16_dtype_idx_i32_head_dim_qk_256_head_dim_vo_256_posenc_0_use_swa_False_use_logits_cap_False_f16qk_False/batch_prefill.cu", line 330, in BatchPrefillWithPagedKVCacheRun(tvm::ffi::TensorView, tvm::ffi::TensorView, tvm::ffi::Array, tvm::ffi::TensorView, tvm::ffi::TensorView, tvm::ffi::TensorView, tvm::ffi::TensorView, tvm::ffi::TensorView, tvm::ffi::TensorView, tvm::ffi::TensorView, tvm::ffi::TensorView, tvm::ffi::Optionaltvm::ffi::TensorView, int64_t, int64_t, int64_t, bool, tvm::ffi::Optionaltvm::ffi::Tensor, tvm::ffi::Optionaltvm::ffi::Tensor, tvm::ffi::Optionaltvm::ffi::Tensor, tvm::ffi::Optionaltvm::ffi::Tensor, tvm::ffi::Optionaltvm::ffi::Tensor, tvm::ffi::Optionaltvm::ffi::Tensor, double, double, double, double, int64_t)::<lambda()>
(EngineCore_DP0 pid=213) ERROR 03-05 17:47:37 [core.py:1102] RuntimeError: Check failed: (status == cudaSuccess) is false: BatchPrefillWithPagedKVCache failed with error an illegal memory access was encountered
(EngineCore_DP0 pid=213) Process EngineCore_DP0:
(EngineCore_DP0 pid=213) Traceback (most recent call last):
(EngineCore_DP0 pid=213) [... same traceback as above, repeated as the EngineCore_DP0 process exits ...]
(EngineCore_DP0 pid=213) RuntimeError: Check failed: (status == cudaSuccess) is false: BatchPrefillWithPagedKVCache failed with error an illegal memory access was encountered
(APIServer pid=1) ERROR 03-05 17:47:37 [async_llm.py:708] AsyncLLM output_handler failed.
(APIServer pid=1) ERROR 03-05 17:47:37 [async_llm.py:708] Traceback (most recent call last):
(APIServer pid=1) ERROR 03-05 17:47:37 [async_llm.py:708] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/async_llm.py", line 664, in output_handler
(APIServer pid=1) ERROR 03-05 17:47:37 [async_llm.py:708] outputs = await engine_core.get_output_async()
(APIServer pid=1) ERROR 03-05 17:47:37 [async_llm.py:708] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1) ERROR 03-05 17:47:37 [async_llm.py:708] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core_client.py", line 1009, in get_output_async
(APIServer pid=1) ERROR 03-05 17:47:37 [async_llm.py:708] raise self._format_exception(outputs) from None
(APIServer pid=1) ERROR 03-05 17:47:37 [async_llm.py:708] vllm.v1.engine.exceptions.EngineDeadError: EngineCore encountered an issue. See stack trace (above) for the root cause.
(APIServer pid=1) INFO: 172.17.0.1:52342 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
[rank0]:[W305 17:47:38.520443441 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())

RTX 6000 Pro Server Edition

This model is causing me a lot of pain... I'm going to give it a shot on SGLang.

@chadbek ,

RuntimeError during inference with vLLM and MTP enabled: "BatchPrefillWithPagedKVCache failed with error an illegal memory access was encountered." This occurs in the FlashInfer backend's CUDA kernel (batch_prefill.cu) while handling the prefill phase for the paged KV cache in a batched setup.

A few things to test: run it with MTP disabled (drop the spec decode arg entirely) — does that work? Then try MTP with enforce-eager. Also search for how to use a different attention backend for MTP instead of FlashInfer.
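As a sketch, the three experiments above map onto the launch command posted earlier in the thread roughly like this (flag and env-var names are standard vLLM options; `FLASH_ATTN` is one possible backend value — check which backends your build actually supports, and `...` stands for the unchanged parts of the original command):

```shell
# 1. Disable MTP spec decode: launch as before but omit the line
#      --speculative-config '{"method": "qwen3_next_mtp", "num_speculative_tokens": 3}'

# 2. Rule out CUDA-graph / compilation issues by forcing eager execution:
docker run ... vllm/vllm-openai:cu130-nightly \
  /models/Qwen3.5-122B-A10B-NVFP4-v3 \
  --enforce-eager \
  ...

# 3. Try a non-FlashInfer attention backend via environment variable:
docker run ... \
  --env VLLM_ATTENTION_BACKEND=FLASH_ATTN \
  vllm/vllm-openai:cu130-nightly \
  ...
```

If the crash disappears with MTP off but reappears with it on, that points at the spec-decode path rather than the base model; if only the backend swap helps, it points at the FlashInfer kernel.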

Any updates after vLLM 0.17.0 was released? I still can't get good enough performance compared to even gpt-oss:120b MXFP4.
