extremely slow on vLLM

#1
by ktsaou

On the latest vLLM nightly I get:

(EngineCore_DP0 pid=447633) INFO 01-31 20:52:24 [gpu_model_runner.py:4108] Starting to load model cyankiwi/GLM-4.7-Flash-REAP-23B-A3B-AWQ-4bit...
(EngineCore_DP0 pid=447633) INFO 01-31 20:52:25 [cuda.py:364] Using TRITON_MLA attention backend out of potential backends: ('TRITON_MLA',)
(EngineCore_DP0 pid=447633) INFO 01-31 20:52:25 [mla_attention.py:1920] Using FlashAttention prefill for MLA
(EngineCore_DP0 pid=447633) INFO 01-31 20:52:25 [compressed_tensors_wNa16.py:114] Using MarlinLinearKernel for CompressedTensorsWNA16
(EngineCore_DP0 pid=447633) INFO 01-31 20:52:25 [compressed_tensors_moe.py:199] Using CompressedTensorsWNA16MarlinMoEMethod
(EngineCore_DP0 pid=447633) INFO 01-31 20:52:25 [compressed_tensors_moe.py:1265] Using Marlin backend for WNA16 MoE (group_size=32, num_bits=4)

This setup is extremely slow: vLLM runs several times slower than llama.cpp for the same model.

Is there a solution?
