How to run the new dev fp8 model?

#3
by SherifOneway - opened

So I downloaded the new scaled model, but sadly neither of the two ways to run the model works with the fp8 ones, so I'm asking if there is a way to run it right now?

Comfy Org org

Only by using this pull request right now; it's a work in progress: https://github.com/Comfy-Org/ComfyUI/pull/13817
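For anyone unfamiliar with trying an unmerged pull request, one common approach is to fetch the PR's head ref from GitHub into a local branch. This is a generic sketch, not instructions from the PR itself; the local branch name `pr-13817` is arbitrary, and it assumes the PR targets the repo linked above:

```shell
# Clone ComfyUI if you don't already have it
git clone https://github.com/Comfy-Org/ComfyUI
cd ComfyUI

# GitHub exposes every PR as the ref pull/<id>/head;
# fetch PR #13817 into a local branch and switch to it
git fetch origin pull/13817/head:pr-13817
git checkout pr-13817

# Then launch ComfyUI as usual
python main.py
```

Once the PR is merged you can switch back to the main branch and pull normally.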

Also, the way I currently run it at least gives very underwhelming results (bf16 is not much better): Flux.1 (with high guidance) levels of waxy and shiny output. I hope at least that aspect changes when it is better supported; I doubt the fundamental weaknesses will change much. But it's fairly fast and at least handles anatomy better than Ernie (which is a very low bar).

Also, I hope some way to use an external text encoder can be found, because it also feels very sanitized.

@Andyx1976 There is no external text encoder for this model; it is a single monolithic model that takes in a prompt and spits out an image. There are no individual components to swap out.

In any case, swapping out the text encoder of a diffusion model that does use an external one (basically all pre-existing local models) will not magically uncensor the model. What matters for the text encoder is whether it understands the concept; whether it would refuse is irrelevant.

I'm aware, the exception being prompt enhancers that can simply refuse. But since it's fairly small and fairly fast, I assume the built-in text encoder is also limited; it seems to get lost on longer, more complicated multi-person prompts and with text placement.
But the main issue is the wax. There is hopefully at least room for improvement there.
