Hello, dear author.

#1 by someshijun - opened

I am truly, deeply grateful for your hard work. This is exactly the model I have been looking forward to for a long time. Since it was unclear whether the official Nunchaku team would eventually support it, I took the initiative to write a loader node based on your existing documentation. The node can load both svdq-fp4_r32-FLUX.2-klein-9B-Nunchaku.safetensors and svdq-int4_r32-FLUX.2-klein-9B-Nunchaku.safetensors, and I have integrated it into my custom node suite (https://github.com/shigjfg/ComfyUI-Magic-Assistant). My goal is to make your quantized models easy for more people to use within ComfyUI. Thank you again, from the bottom of my heart, for your incredible work on this quantization! 💕❤️
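For anyone curious how the node works, it boils down to the standard ComfyUI custom-node pattern: list the available .safetensors files, then hand the selected path to Nunchaku's loader. Below is a minimal sketch; the `NunchakuFluxTransformer2dModel` import and its single-file `from_pretrained` call are assumptions based on Nunchaku's published Python API, not the exact code in ComfyUI-Magic-Assistant:

```python
# Minimal sketch of a ComfyUI loader node for the quantized FLUX.2-klein files.
# Assumption: nunchaku exposes NunchakuFluxTransformer2dModel.from_pretrained()
# and it accepts a single-file checkpoint path; the real node may differ.
import folder_paths  # ComfyUI's model-path helper
from nunchaku import NunchakuFluxTransformer2dModel  # assumed import


class NunchakuFlux2KleinLoader:
    @classmethod
    def INPUT_TYPES(cls):
        # Offer every file under models/diffusion_models, including the
        # svdq-fp4_r32 and svdq-int4_r32 FLUX.2-klein variants.
        return {
            "required": {
                "model_name": (folder_paths.get_filename_list("diffusion_models"),),
            }
        }

    RETURN_TYPES = ("MODEL",)
    FUNCTION = "load"
    CATEGORY = "Magic-Assistant/loaders"

    def load(self, model_name):
        path = folder_paths.get_full_path("diffusion_models", model_name)
        # Assumed call: load the SVDQuant transformer from the single file.
        transformer = NunchakuFluxTransformer2dModel.from_pretrained(path)
        return (transformer,)


NODE_CLASS_MAPPINGS = {"NunchakuFlux2KleinLoader": NunchakuFlux2KleinLoader}
NODE_DISPLAY_NAME_MAPPINGS = {"NunchakuFlux2KleinLoader": "Nunchaku FLUX.2-klein Loader"}
```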

Would you consider releasing an r256 quantized version in the future? While the current r32 version is already impressive, it doesn't seem to work particularly well with LoRAs. 🥹

In fact, I was already planning to add LoRA support. However, the changes are quite significant, and Nunchaku's official PR review process is rather slow, so I'll tidy up the code and release the source later. Please watch this model for updates.
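For reference, on the FLUX.1 models Nunchaku already applies LoRAs in plain Python roughly as below; whether the same calls carry over to FLUX.2-klein is exactly what the pending update has to sort out. This is a minimal sketch: the repo IDs and the LoRA file path are placeholders, and the call signatures follow Nunchaku's documented FLUX.1 API.

```python
# Minimal sketch of Nunchaku's existing LoRA path on FLUX.1, for reference only.
# Assumptions: repo IDs and the LoRA path are placeholders; update_lora_params()
# and set_lora_strength() follow Nunchaku's FLUX.1 API and may change for FLUX.2.
import torch
from diffusers import FluxPipeline
from nunchaku import NunchakuFluxTransformer2dModel

# Load the SVDQuant transformer, then fold the LoRA weights into it.
transformer = NunchakuFluxTransformer2dModel.from_pretrained(
    "mit-han-lab/svdq-int4-flux.1-dev"
)
transformer.update_lora_params("path/to/your_lora.safetensors")  # placeholder path
transformer.set_lora_strength(0.8)

# Drop the patched transformer into a normal diffusers pipeline.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", transformer=transformer, torch_dtype=torch.bfloat16
).to("cuda")
image = pipe("a photo of a cat", num_inference_steps=28).images[0]
```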

Okay, I'll keep waiting.

Dear author, may I ask when the LoRA-compatible files might be released? I tried to set things up myself using your current files and the official documentation, but I kept failing 🥹, so I've given up for now.
