LoRA-friendly
Hey, just wanted to say thanks for the uploads. I was wondering about the planned LoRA-friendly version you're working on. Will that one not require those nodes? I can't seem to get your models to work well with any LoRA, regardless of whether I use the nodes you suggested.
yes, the "mel" models are intended to be more lora friendly. it's just a battle of keeping it small while not having the lora issues, so i'm tweaking over and over till i find that balance without it becoming 20gb.
what are you seeing with loras?
if you are still struggling, check that image and look at my settings on the "mirror" that i have been using when stacking loras, because noise started to build up again. you can try those settings.
the bottom should be block 37 @ .95, 38 @ .85, 39 @ .75, and so on down until block 46. block 46 needs to be 0 in most cases, then 47 @ 1.0. also, don't try to run loras at full strength against them; on average, my top end for the overall strength on the sink is 0.65-0.75.
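just so the ramp is unambiguous, here's a minimal sketch of the block-weight pattern described above (this is not the actual node code, and `ramp_weights` is a made-up helper name): blocks 37-45 step down by 0.10 starting from 0.95, block 46 is zeroed, and block 47 goes back to full strength.

```python
# Hypothetical sketch of the block-weight ramp: 37 @ .95, 38 @ .85, ...
# stepping down by 0.10, with block 46 forced to 0 and block 47 at 1.0.
def ramp_weights():
    weights = {}
    for i, block in enumerate(range(37, 46)):  # blocks 37..45
        weights[block] = round(0.95 - 0.10 * i, 2)
    weights[46] = 0.0  # usually needs to be fully off
    weights[47] = 1.0  # back to full strength
    return weights

print(ramp_weights())
```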
if you let me know what lora you are trying, i can let you know the settings (don't worry about the type of lora, im the guy who made the first adult lora's for hyv1.5, lol)
Thanks for the response. I followed your image on the main page and your info in this post. Blocks 21 to 36 are missing between the two. What are those supposed to be? The default 1.00? Does the LoRA out connector on your mirror node stay unconnected like it's shown in your image?
Also, you stated the LoRA has to have a strength no greater than .75. Does that apply to the strength of the bathroom sink node as well, or just the Toothbrush node that you select your LoRA from?
As for the one naughty LoRA I was trying to use with your model, it's called DR34ML4Y. Regardless of whether it's this one or any of the well-mannered LoRAs I try, they all have a washed-out look. I find your node makes no difference at all: disabled or enabled, I get the same quality output with a LoRA enabled.
Without any LoRAs, your model, as well as the other NVFP4 models I tried, all look great, and I would gladly stick to using them 100% of the time for the faster generation times. It just sucks that adding any kind of LoRA messes them up and forces me to switch to a slower fp8 model to fix it.
In regards to your upcoming LoRA-friendly model, I doubt many people care too much about the size. It's the speed and the minimal loss in quality that matter most. I've tried both large and small models for LTX and found zero difference in vram/ram usage. Unless Windows Task Manager is bogus and giving me bad information, there is little to no change in memory usage for me.
Regardless, I'm very much looking forward to your fixed lora version for the much faster NVFP4. Thanks a bunch for trying to fix this and offering it here on this site for free.
well, the size is a concern, because if it's just 20gb you might as well use the official one (although they still haven't released an official distilled), and there are a lot of 20gb distills out there.
and yeah, that lora fight was a nightmare for me when I created that model, because it looked great by itself but fell apart when you added a lora.
and yes, 21-36 are all 1.00.
and yeah, the mirror can work 2 ways. it can either pass block weights into the custom_weights input on the toothbrush (this is only triggered if preset is set to custom), then the toothbrush connects to the sink.
or you can wire it: toothbrush lora output into lora_packet on the mirror, then lora_out into the lora input on the bathroom sink (this overrides/mixes presets).
and about the strength: overall, i designed all this to wire into the app mode so i can use it on my phone, so that global strength is a multiplier.
so, like if you set sink to .75, and then set dr34ml4y to 1, it loads that lora at .75.
this is so i can just set a bunch of loras at strength 1, and then set global to .25, and it loads them all at .25, just easy to adjust a lot of loras at once.
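the multiplier behavior above can be sketched in a few lines (a hypothetical illustration, not the node's actual code; `effective_strengths` is a made-up name): each lora's effective strength is its own strength times the sink's global strength.

```python
# Hypothetical sketch: the sink's global strength scales every lora's
# individual strength before it gets loaded.
def effective_strengths(global_strength, lora_strengths):
    return {name: round(s * global_strength, 4)
            for name, s in lora_strengths.items()}

# sink at .75, dr34ml4y at 1 -> loads at .75
print(effective_strengths(0.75, {"dr34ml4y": 1.0}))  # {'dr34ml4y': 0.75}

# a bunch of loras at strength 1, global at .25 -> all load at .25
print(effective_strengths(0.25, {"a": 1.0, "b": 1.0, "c": 1.0}))
```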
but that lora in particular i run really low, because it's strong. i usually have it at like .35, mixed with plora and penile paraxis (however that's spelled), both of those at .25. then i can prompt anything i want, but one of those loras does inject a lot of pov stuff, so that's a fight.
if you do still have issues, let me know and i'll see if we can figure it out. are you seeing the info in the console when it's loading loras? it tells you what strength everything is loading at.
also, what sampler/scheduler? custom sigmas?
beyond that, i am working on another solution. not sure how far this will get, but it's running mostly stable on my machine atm. in this run i made the model 8gb, and it swaps bf16 "experts" in and out so it can run loras in full bf16 without injecting them into fp4 weights, which completely solves the issue. not sure how "universal" this will be, but yeah, i kind of made a moe for a non-moe model.
it's in early testing, but i just ran a 10 second 640x480 generation in less than 10gb of vram, without lora artifacts.
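the core idea described above — keep the lora math in full precision on the side instead of merging it into quantized weights — can be sketched like this. this is a minimal illustration under my own assumptions (numpy stand-ins, made-up function name), not the author's actual implementation:

```python
# Hypothetical sketch: run the quantized base path as-is, compute the
# low-rank lora delta separately in full precision, and add it on top,
# instead of merging the lora into the (fp4) base weights.
import numpy as np

def lora_forward(x, W_base_q, A, B, scale=1.0):
    """Quantized base matmul plus a full-precision low-rank delta."""
    y = x @ W_base_q.T           # stands in for the fp4 base path
    delta = (x @ A.T) @ B.T      # lora computed outside the base weights
    return y + scale * delta

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 8))
W = rng.standard_normal((8, 8))
A = rng.standard_normal((4, 8))  # rank-4 down-projection
B = np.zeros((8, 4))             # zero-init B -> lora starts as a no-op

out = lora_forward(x, W, A, B)
assert np.allclose(out, x @ W.T)  # with B = 0, base output is untouched
```

the point of the side path is that the base weights never get rewritten, so quantization artifacts from injecting a lora into fp4 can't appear.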
First off, thanks so much for trying to help me out with this. It is very much appreciated.
If I run that LoRA at .35, it doesn't do anything to change the scene or make the necessary corrections to fix certain body parts. Perhaps it works well for you on account of the combination between it and the other LoRAs you are using. I have no clue what those LoRAs you suggested are. I did a search on here and on Civitai but came up with nothing.
That current project you're working on sounds really promising. I hope it works out nicely in the end.
Going back to the size of the model: so the other NVFP4 distilled 20gig models will still work with your node? I thought it needed your model in order to work.
I've tested many models at many different sizes. Even the much larger NVFP4 models still generate faster than the smaller FP8s. That's what I meant earlier when I said size shouldn't really matter if the main focus is generation speed and maintaining quality.
hugging face is bad at telling me when i have messages ;)
yes, those nodes do work with the other nvfp4 models, but they shouldn't even be needed there. these nodes are designed to correct issues with injecting loras into over-quantized models, and the official 20gb model, at least, doesn't have those issues.
and it's weird that you are having issues like that. taking penile peraxis for example, it will fully load and work at 50% strength for me on the fp4s. to be honest, i don't use the dreamlay (ltxxx, as it's really called) as i've never liked "generalized" loras; they tend to not do well for me.