This model wasn't trained with FP4 or NVFP4
#8 opened 3 days ago by yangus87
Failed on 1×H100 with vLLM 0.19.0
#7 opened 3 days ago by JeffreySheng
Question about q_scale / KV cache scale fallback in vLLM for Gemma-4-31B-IT-NVFP4: expected accuracy impact?
#6 opened 4 days ago by Shaoqing
Why not quantize the MATRICES of Wq, Wk, Wv, Wo?
#5 opened 8 days ago by BeetSoup
This version is still too large for a single 5090 card
#4 opened 9 days ago by iwaitu
Why does this 4-bit version have a size of 32.7 GB?
#3 opened 9 days ago by alexcardo