How to use this?
Pardon my ignorance, but I'm not sure how to use it.
- I'm running ComfyUI on RunPod. I uploaded the Q6 file to the text-encoders folder, but ComfyUI doesn't see it. Where do I need to save it?
- I've also installed the custom node; where do I need to connect it?
Thanks a lot! I'd really like to use this, as it sounds very promising.
Below is the default workflow I included in this repo:

You'll need the City96 GGUF nodes (or an equivalent GGUF CLIP loader) to use this as an enhanced CLIP model (which I highly recommend you try!). After installing a GGUF CLIP node, you should be able to just drop the file into your text-encoders folder. To use the LLM prompt-enhancement feature you MUST run the model in LM Studio or another local host and configure the node appropriately for your setup (by default it's set up for an LM Studio implementation).
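For the prompt-enhancement side, LM Studio exposes an OpenAI-compatible local server (by default at `http://localhost:1234/v1`). Here's a minimal sketch of the kind of request such a node would send; the model name and system prompt below are placeholders for illustration, not the node's actual defaults:

```python
import json
from urllib.request import Request, urlopen

# LM Studio's local server speaks the OpenAI chat-completions protocol.
url = "http://localhost:1234/v1/chat/completions"  # LM Studio's default port

payload = {
    "model": "local-model",  # placeholder; LM Studio serves whichever model is loaded
    "messages": [
        # Example enhancement instruction -- substitute the node's own system prompt.
        {"role": "system", "content": "Expand the user's prompt into a detailed image prompt."},
        {"role": "user", "content": "a lighthouse at dusk"},
    ],
    "temperature": 0.7,
}

req = Request(
    url,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)

# Requires LM Studio running with a model loaded:
# with urlopen(req) as resp:
#     enhanced = json.load(resp)["choices"][0]["message"]["content"]
```

If the node points at a different host or port than LM Studio's default, change `url` to match your setup.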
As a CLIP model, dropping it into your text-encoders folder should do it once you have the City96 nodes installed (they appear under the bootleg category by default in the custom node selector, or just double-click, search "GGUF CLIP", and choose the appropriate loader).
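On a headless RunPod pod it can be easiest to move the file into place with a few lines of Python. A minimal sketch, assuming a `/workspace/ComfyUI` install root and an example filename (adjust both to your pod's layout):

```python
from pathlib import Path

# Assumed locations -- adjust for your RunPod volume layout.
comfy_root = Path("/workspace/ComfyUI")
downloaded = Path("/workspace/downloads/clip-model-Q6_K.gguf")  # hypothetical filename

# ComfyUI scans models/text_encoders for CLIP weights
# (older installs may use models/clip instead).
dest_dir = comfy_root / "models" / "text_encoders"
dest_dir.mkdir(parents=True, exist_ok=True)

if downloaded.exists():
    downloaded.rename(dest_dir / downloaded.name)
```

After moving the file, restart ComfyUI (or refresh the node list) so the loader picks it up.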
On the VAE, any Z-Image Turbo compatible VAE (ae.safetensors, UltraFlux VAE, Z-Image_clear_vae, etc.) will work.
Benny, can you add the non-GGUF version to the repo? (Yes, I can manually build a safetensors version from the GGUF, but it would be nice to get whatever originals you have in the highest quality (bf16?), rather than a reconstruction.)
Yeah, for sure! To be honest, some of it is a mess: I was manually transferring files between my Mac and PC with external media, and as a result some of the base files for the older versions may or may not be intact. So it may take me a minute, but I'll try to update when I get a chance!