30.3 GB?
#6
by pedalnomica - opened
Anyone know why a 4-bit quant of a 27B model is showing as over 30 GB? It is basically as large as the fp8 version. I know not all layers get quantized... but this seems pretty extreme.
Am I missing something? Is there a mistake?
Thanks!
Some layers are in 4 bits and others in 16 bits, but for the fp8 version everything is 8 bits, so it kind of ends up being the same size.
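A rough sketch of the arithmetic behind that explanation. The 40% 16-bit fraction below is a hypothetical illustration chosen to reproduce the observed ~30 GB, not the model's actual layer split:

```python
# Back-of-envelope weight-size estimate for a 27B-parameter model.
# Ignores embeddings, quantization scales/zero-points, and file overhead.
def size_gb(n_params: float, bits: float) -> float:
    """Raw weight storage in decimal GB for n_params at a given bit width."""
    return n_params * bits / 8 / 1e9

N = 27e9
print(size_gb(N, 4))   # pure 4-bit: 13.5 GB
print(size_gb(N, 8))   # pure fp8:   27.0 GB

# Mixed precision: fraction f_16bit of weights kept in 16-bit, rest in 4-bit.
def mixed_size_gb(n_params: float, f_16bit: float) -> float:
    return size_gb(n_params * f_16bit, 16) + size_gb(n_params * (1 - f_16bit), 4)

# With ~40% of weights left in 16-bit (hypothetical), the "4-bit"
# checkpoint already approaches the pure-8-bit size:
print(mixed_size_gb(N, 0.4))  # 29.7 GB
```

So a "4-bit" checkpoint only stays near 13.5 GB if nearly all weights are actually quantized; keeping a large fraction in 16-bit erases most of the savings.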
Oh, no, no, this is simply because this model is broken.
A GPTQ INT4 model of this size should typically take up about 20 GB.
And I get continuous output of !!!!!!!!!!!!!! on my vLLM setup with 4x V100.
The 35B and 122B GPTQ models work perfectly on the exact same setup with the same parameters, so maybe only this 27B model is broken.
Please, please, official NVFP4, please!!! PLEASE!