
Getting undefined symbol: _ZN3c1013MessageLoggerC1EPKciib when following instructions

#12
by costelter - opened

Hi!

When I follow your instructions on installing vllm + vllm-omni I get the following error:

  File "/home/ubuntu/src/voxtral-tts/.venv/lib/python3.12/site-packages/vllm/utils/import_utils.py", line 111, in resolve_obj_by_qualname
    module = importlib.import_module(module_name)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/importlib/__init__.py", line 90, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/src/voxtral-tts/.venv/lib/python3.12/site-packages/vllm/platforms/cuda.py", line 19, in <module>
    import vllm._C  # noqa
    ^^^^^^^^^^^^^^
ImportError: /home/ubuntu/src/voxtral-tts/.venv/lib/python3.12/site-packages/vllm/_C.abi3.so: undefined symbol: _ZN3c1013MessageLoggerC1EPKciib

when running vllm serve. I thought this might be a Blackwell problem, so I re-ran the steps on an Ada Lovelace GPU (L40), but got the same error.

What version of CUDA did you use? This particular machine has cuda-toolkit 12.9 installed and is running Ubuntu 24.04.

I also tried the "latest" vLLM built from source, but got the same error.
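For anyone debugging this: an undefined c10 symbol in vllm's _C.abi3.so usually means the vllm binary was compiled against a different torch build than the one installed. A minimal sketch of a sanity check (the version strings below are illustrative, not read from this machine) that compares the CUDA tags carried in the wheel versions:

```python
# Sanity check for torch/vllm build mismatches: compare the CUDA local-version
# tags (the "+cu129" part) carried by the installed wheel versions. A vllm
# wheel built against one CUDA/torch combo can hit undefined c10 symbols when
# the installed torch was built for another.

def cuda_tag(version):
    """Return the CUDA tag of a wheel version, e.g. '2.8.0+cu129' -> 'cu129'."""
    _, _, local = version.partition("+")
    return local if local.startswith("cu") else None

def tags_match(torch_version, other_version):
    """True when both versions carry the same CUDA tag (or neither does)."""
    return cuda_tag(torch_version) == cuda_tag(other_version)

if __name__ == "__main__":
    # Illustrative version strings; on a real machine read them via
    # importlib.metadata.version("torch") / importlib.metadata.version("vllm").
    print(tags_match("2.8.0+cu129", "0.10.2+cu129"))   # True
    print(tags_match("2.10.0+cu130", "0.10.2+cu129"))  # False
```

Note that some vllm wheels carry no +cu tag at all (the default-CUDA build), in which case the only reliable check is installing the torch version that wheel was built for.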

Mistral AI_ org

Any chance you could open an issue on vLLM-Omni for this one: https://github.com/vllm-project/vllm-omni

Aye, will do.

Here are the package versions that fixed this issue for me, good luck:

python -c "
import torch, torchvision, transformers, vllm
print('torch:', torch.__version__)
print('torchvision:', torchvision.__version__)
print('transformers:', transformers.__version__)
print('vllm:', vllm.__version__)
"
torch: 2.10.0+cu130
torchvision: 0.25.0+cu130
transformers: 5.3.0
vllm: 0.18.0

Yes, indeed this version combo works! Thank you. I will try that on a Spark over the weekend, too. ;-)

That combo does not work at all for me? It does not even support omni?

Mistral AI_ org

@Kwissbeats It uses a vLLM docker image and builds vllm-omni within the docker.

The following version combination worked when I hit the same problem while trying to load the Qwen/Qwen2.5-14B-Instruct model. Although this is the Mistral repo, I am mentioning it here for anyone facing a similar problem.

torch==2.8.0
torchvision==0.23.0
torchaudio==2.8.0
vllm==0.10.2

However, I installed CUDA-specific versions like so, since I understood that vLLM needs a particular torch version, and the flash-attn version (see below) needs to match it:
uv pip install torch==2.8.0 torchvision==0.23.0 torchaudio==2.8.0 --index-url https://download.pytorch.org/whl/cu129 --force-reinstall

Then download the specific vllm wheel from https://github.com/vllm-project/vllm/releases/tag/v0.10.2 (Assets section) and install it:
uv pip install vllm-0.10.2+cu129-cp38-abi3-manylinux1_x86_64.whl --no-build-isolation
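Picking the right wheel matters here: the filename encodes the Python tag, ABI tag, and platform. As a side illustration (not one of the install steps), a small sketch that parses those PEP 427 tags and roughly checks whether a wheel fits the current interpreter:

```python
import sys

def parse_wheel_tags(filename):
    """Split a wheel filename into its PEP 427 parts:
    {name}-{version}-{python tag}-{abi tag}-{platform tag}.whl"""
    stem = filename[:-len(".whl")] if filename.endswith(".whl") else filename
    name, version, py_tag, abi_tag, plat_tag = stem.rsplit("-", 4)
    return {"name": name, "version": version,
            "python": py_tag, "abi": abi_tag, "platform": plat_tag}

def fits_interpreter(py_tag, abi_tag):
    """Rough compatibility check: an abi3 wheel runs on any CPython at or
    above its cpXY floor; a cpXYZ wheel needs that exact minor version."""
    if abi_tag == "abi3" and py_tag.startswith("cp3"):
        return sys.version_info >= (3, int(py_tag[3:]))
    return py_tag == "cp%d%d" % (sys.version_info.major, sys.version_info.minor)

if __name__ == "__main__":
    tags = parse_wheel_tags("vllm-0.10.2+cu129-cp38-abi3-manylinux1_x86_64.whl")
    print(tags["python"], tags["abi"])  # cp38 abi3
    print(fits_interpreter(tags["python"], tags["abi"]))
```

The flash-attn wheel mentioned below (cp312-cp312) only fits Python 3.12, for example, which matches the .venv in the original traceback.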

Extra: for me, flash-attn wasn't loading either, so I installed flash_attn==2.8.3. I downloaded the wheel from https://github.com/Dao-AILab/flash-attention/releases/tag/v2.8.3 (Assets section) and installed it like so:
uv pip install flash_attn-2.8.3+cu12torch2.8cxx11abiFALSE-cp312-cp312-linux_x86_64.whl

Next, I had to reinstall flashinfer-python and flashinfer-cubin:
uv pip install -U --pre flashinfer-python --index-url https://flashinfer.ai/whl/nightly/ --no-deps --force-reinstall
uv pip install -U --pre flashinfer-cubin --index-url https://flashinfer.ai/whl/nightly/ --force-reinstall

And finally, install flashinfer-jit-cache:
uv pip install -U --pre flashinfer-jit-cache --index-url https://flashinfer.ai/whl/nightly/cu129
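After a pinning exercise like the above, it is easy to end up with one stray version. A small sketch (pin values copied from the steps above) that verifies the installed versions against the pins from package metadata, without importing the heavy libraries themselves:

```python
from importlib import metadata

def matches(installed, pinned):
    """True when the installed version equals the pin, ignoring a +cu129-style local tag."""
    return installed is not None and installed.split("+", 1)[0] == pinned

def check_pins(pins):
    """Return {package: (installed_version, ok)} using package metadata only."""
    report = {}
    for pkg, want in pins.items():
        try:
            have = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            have = None
        report[pkg] = (have, matches(have, want))
    return report

if __name__ == "__main__":
    # Pins from the steps above.
    pins = {"torch": "2.8.0", "torchvision": "0.23.0",
            "torchaudio": "2.8.0", "vllm": "0.10.2"}
    for pkg, (have, ok) in check_pins(pins).items():
        print(pkg, have, "OK" if ok else "MISMATCH/MISSING")
```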

Thanks @mwalol for pointing me in the right direction.
