Corrected typo in TensorRT-LLM serving command (#8, by 1MrazorT1)
Problem
The TRT-LLM serving command in the README uses --reasoning_parser nano_v3
(underscore), but the valid options listed by trtllm-serve --help are:
--reasoning_parser [deepseek-r1|qwen3|nano-v3]
Using nano_v3 therefore causes trtllm-serve to reject the command.
Fix
Changed nano_v3 to nano-v3 to match the actual CLI option.
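For reference, a minimal sketch of the corrected invocation. Only the model name and the --reasoning_parser flag come from this PR; any other flags a real deployment needs (ports, parallelism, etc.) are intentionally omitted here:

```shell
# Corrected serving command: parser name uses a hyphen, matching
# the choices printed by `trtllm-serve --help`
# (deepseek-r1 | qwen3 | nano-v3).
trtllm-serve nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-FP8 \
  --reasoning_parser nano-v3
```

The parser name must match one of the listed choices exactly; the underscore variant nano_3 is not accepted.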
Verified
Tested on the TensorRT-LLM release:1.3.0rc5 container with
nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-FP8 on 4× H100-80GB.