Qwen3.6-27B-AWQ-INT4 for running in vLLM

#19
by Duonglv - opened

Please release Qwen3.6-27B-AWQ-INT4 or a similar quantization that runs in vLLM.
I found many community-quantized models, but they are not stable in use.
Thanks a lot.

Do you have a plan, or could you share a timeline, to release a 4-bit quantization of this 27B dense model?
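For reference, once a 4-bit checkpoint is available, serving it in vLLM is typically a one-liner; a sketch (the model ID below is hypothetical, and vLLM usually auto-detects the quantization method from the checkpoint config, so the explicit flag is optional):

```shell
# Serve a hypothetical AWQ-quantized checkpoint with vLLM's OpenAI-compatible server.
# --quantization can usually be omitted; vLLM reads it from the model's config.
vllm serve Qwen/Qwen3.6-27B-AWQ-INT4 \
  --quantization awq \
  --max-model-len 32768
```

The same invocation works for a GPTQ checkpoint with `--quantization gptq`.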

This is probably a dumb question, but has anyone tried https://huggingface.co/btbtyler09/Qwen3.6-27B-GPTQ-4bit ?