Run on 3090

#1 · opened by faheemraza1

Hi, will I be able to run inference for this model using vLLM on an RTX 3090?
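For context, an RTX 3090 has 24 GB of VRAM, so the answer mostly depends on the model's parameter count and dtype: fp16 weights take ~2 bytes per parameter, so a ~7B model (~14 GB) fits while a 13B model (~26 GB of weights alone) would need a quantized build (e.g. AWQ or GPTQ). A minimal, hypothetical launch sketch — `<model-id>` is a placeholder for this repo's model ID, and the flag values are illustrative, not tuned for this model:

```shell
# Hypothetical vLLM invocation; <model-id> is a placeholder.
# --dtype float16          : 2 bytes/param weights (3090 has no fp8 support)
# --max-model-len          : cap context length to shrink the KV cache
# --gpu-memory-utilization : fraction of the 24 GB vLLM may reserve
vllm serve <model-id> \
  --dtype float16 \
  --max-model-len 4096 \
  --gpu-memory-utilization 0.90
```

If the server OOMs at startup, lowering `--max-model-len` or switching to a quantized checkpoint are the usual first adjustments.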
