head_swap_extracted_mix_rank_adaptive_fro_0.98_fp16_00001_.safetensors

#13
by IIMacGyverII - opened

Your workflow calls for the LoRA "head_swap_extracted_mix_rank_adaptive_fro_0.98_fp16_00001_.safetensors", but I cannot find it in this repo or as a link in the workflow's notes.

I swapped it for head_swap_extracted_mix_rank_adaptive_fro_0.98.safetensors and it looks to be working.

This is impressive. Nice work, thank you. BTW, I am currently doing 19-second swaps and it's holding the face well, running on an RTX 6000 Pro Blackwell. I will eventually push the length as far as I can.

Thank you for any tests and optimizations you find; they are welcome. I haven't had much time to explore many optimizations.

I tried everything but got no results. The link contains the workflow and screen recordings of the results. The LoRA head_swap_extracted_mix_rank_adaptive_fro_0.98.safetensors was not found (maybe it will work with that one).
https://www.dropbox.com/scl/fo/1o9l7t5qcewajvu8nums5/AAlu4dDgAmxJQ7jPVqV_yrA?rlkey=7bnf7pc2wvj7ywqzstgk8jytj&st=k9eg579z&dl=0

"head_swap_extracted_mix_rank_adaptive_fro_0.98.safetensors" is in here https://huggingface.co/Alissonerdx/BFS-Best-Face-Swap-Video/tree/main/ltx-2.3

I don't see it. There are only 2 LoRAs there: head_swap_v3_rank_64.safetensors and head_swap_v3_rank_adaptive_fro_098.safetensors, but they don't work. Can you send it to me?


You're applying a bunch of nodes that didn't exist in my workflow. The chance of one of these nodes simply skipping blocks in the official model or doing something that harms the LoRa effect is huge, which is why I didn't include them. Try running the workflow the way I instructed.

Nodes that optimize often just mess everything up. If you want it to work, you have to use what I sent. Besides that, the odd LoRA loader you're using may not know how to correctly read the LTX model keys: if a node doesn't support a new model, it can't be expected to handle it on its own. That may be the case with the LoRA loader node you're using. There are many variables.

Start by using the native LoRA loader, not a random LoRA loader that may not be supported.
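A quick way to sanity-check whether a LoRA file even targets the right base model is to look at its tensor key names (for a .safetensors file you could read them with `safetensors.safe_open(...).keys()`) and see whether they carry the prefix the loader expects. The sketch below is just an illustration of the idea; the key names and the `diffusion_model.` prefix are assumptions, not the actual LTX key layout.

```python
# Sketch: check whether a LoRA's tensor keys look like they target the
# expected base model. The prefix "diffusion_model." and the sample key
# names below are illustrative assumptions, not the real LTX layout.

def lora_keys_match(lora_keys, expected_prefix="diffusion_model."):
    """Return the fraction of LoRA keys that start with expected_prefix."""
    if not lora_keys:
        return 0.0
    hits = sum(1 for k in lora_keys if k.startswith(expected_prefix))
    return hits / len(lora_keys)

# Hypothetical keys, as a loader might report them:
keys = [
    "diffusion_model.blocks.0.attn.lora_down.weight",
    "diffusion_model.blocks.0.attn.lora_up.weight",
    "text_encoder.layers.0.lora_down.weight",
]
print(f"match ratio: {lora_keys_match(keys):.2f}")  # 2 of 3 keys match
```

If the ratio is near zero, the loader is almost certainly not mapping the LoRA onto the model at all, which would explain a "no effect" result.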

Used the native LoRA loader - it doesn't work.
[image]
Result:
[image]

@editvideo20024
Dude, you're using 50% strength; how do you expect the LoRA to work? The minimum has to be 0.8, because otherwise the LoRA won't have enough impact to remove the face area in the video. Did you try with strength 1?
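The strength slider matters because a LoRA is applied as a scaled low-rank update to each base weight, roughly W' = W + strength * (up @ down), so at 0.5 strength the learned effect is literally halved. A minimal sketch of that math, with made-up 2x2 numbers purely for illustration:

```python
# Sketch of how LoRA strength scales the weight update:
#   W' = W + strength * (up @ down)
# Tiny rank-1 example with plain lists; the numbers are made up.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def apply_lora(w, up, down, strength):
    delta = matmul(up, down)          # low-rank update learned by the LoRA
    return [[w[i][j] + strength * delta[i][j]
             for j in range(len(w[0]))] for i in range(len(w))]

base = [[1.0, 0.0], [0.0, 1.0]]
up = [[1.0], [0.0]]                   # rank-1 factors
down = [[0.0, 2.0]]

half = apply_lora(base, up, down, 0.5)   # update halved: effect too weak
full = apply_lora(base, up, down, 1.0)   # full learned effect
print(half[0][1], full[0][1])  # 1.0 2.0: half strength = half the delta
```

Whether 0.8 is the practical floor depends on how this particular LoRA was trained, but the scaling itself is why a too-low strength can make it look like the LoRA "does nothing".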

on the first attempt

No comment... I'm a loser. I changed the LoRA loader but forgot about the LoRA weight. Thanks a lot for pointing that out!! By the way, it also works with my custom LoRA loader.

Where did you put the custom LoRA loader? Can you share a picture of the linking? Thanks!
