Highest-performance inference on <8 RTX 6000 Pro setups
#6
by curiouspp8
Is there any way to run any of these quants via a high-performance engine like SGLang or vLLM?
The quantized safetensors versions don't fit on 1/2/4 GPUs the way vLLM wants, and I'm not sure where those engines are with GGUF support. Just wondering if anyone has done a setup like this. vLLM's tensor parallelism needs 1, 2, 4, or 8 GPUs.
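For GPU counts that aren't powers of two, one workaround (a sketch, not something I've verified on this model) is combining tensor parallelism with pipeline parallelism in vLLM, since the world size is TP × PP. The model path and quantization flag below are placeholders; match them to the actual checkpoint.

```shell
# Hedged sketch: serve a quantized checkpoint across 6 GPUs.
# TP must evenly divide the model's attention heads (typically a power of two);
# PP can be any size, so world size = 2 * 3 = 6 here.
vllm serve <model-path> \
  --tensor-parallel-size 2 \
  --pipeline-parallel-size 3
```

On a full 8-GPU box, plain `--tensor-parallel-size 8` avoids the pipeline-parallel hop entirely.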