Does it work on DGX?

#3
by johnlockejrr - opened
==========
== CUDA ==
==========

CUDA Version 13.2.0

Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.

Traceback (most recent call last):
  File "/usr/local/bin/vllm", line 4, in <module>
    from vllm.entrypoints.cli.main import main
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/cli/__init__.py", line 3, in <module>
    from vllm.entrypoints.cli.benchmark.latency import BenchmarkLatencySubcommand
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/cli/benchmark/latency.py", line 5, in <module>
    from vllm.benchmarks.latency import add_cli_args, main
  File "/usr/local/lib/python3.12/dist-packages/vllm/benchmarks/latency.py", line 15, in <module>
    from vllm.engine.arg_utils import EngineArgs
  File "/usr/local/lib/python3.12/dist-packages/vllm/engine/arg_utils.py", line 35, in <module>
    from vllm.config import (
  File "/usr/local/lib/python3.12/dist-packages/vllm/config/__init__.py", line 6, in <module>
    from vllm.config.compilation import (
  File "/usr/local/lib/python3.12/dist-packages/vllm/config/compilation.py", line 22, in <module>
    from vllm.platforms import current_platform
  File "/usr/local/lib/python3.12/dist-packages/vllm/platforms/__init__.py", line 279, in __getattr__
    _current_platform = resolve_obj_by_qualname(platform_cls_qualname)()
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/utils/import_utils.py", line 111, in resolve_obj_by_qualname
    module = importlib.import_module(module_name)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/importlib/__init__.py", line 90, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/platforms/cuda.py", line 21, in <module>
    import vllm._C  # noqa
    ^^^^^^^^^^^^^^
ImportError: /usr/local/lib/python3.12/dist-packages/vllm/_C.abi3.so: undefined symbol: _ZN2at4cuda24getCurrentCUDABlasHandleEv

Looks like an ABI mismatch in the image: the unresolved symbol demangles to `at::cuda::getCurrentCUDABlasHandle()`, which is a PyTorch (libtorch) symbol, so the prebuilt `vllm._C` extension was compiled against a different PyTorch build than the one installed in the container.
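If you want to verify this on your own box, a quick sketch (assuming `c++filt` from binutils is installed; the torch check only works inside the container where torch is present):

```shell
# Demangle the unresolved symbol to see which library should provide it
# (c++filt ships with binutils).
c++filt _ZN2at4cuda24getCurrentCUDABlasHandleEv
# prints: at::cuda::getCurrentCUDABlasHandle(), a PyTorch/libtorch symbol

# If torch is importable, print its version and CUDA build to compare
# against what the vllm wheel in the image was built for.
python3 -c "import torch; print(torch.__version__, torch.version.cuda)" 2>/dev/null \
  || echo "torch not importable in this environment"
```

If the torch version printed inside the container differs from the one the vllm build expects, that mismatch is the cause of the `undefined symbol` error.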

You need to rebuild the Docker image.

In case anyone is still having this issue: I got it working yesterday by pinning vLLM to a specific commit when building the Docker container:

./build-and-copy.sh -t vllm-gemma4-spark-tf5 --vllm-ref dd9342e6b --tf5
