Could VACE be added to v10?

#109
by Pizzathehuttt - opened

V10 seems to follow degenerate prompts significantly better for me than MEGA v3, but I'd love to have VACE for more seamless long videos.
If that's too much work, would merging the models achieve the same thing? (Specifically, the 6 GB modular version of VACE meant to be added to base WAN.)

"Mega" is already VACE added to "v10" T2V. There is no I2V "VACE", as it works with the T2V model. "Mega v2" is probably closer to v10 in terms of merging technique, so perhaps give that version a try. I personally like v3 much better, though. Make sure you are using the recommended sampler combo (ipndm/beta).

Is it possible to continue upgrading V10 to a V11? V10 seems much better in terms of keeping face consistency. Thanks a lot anyway.

I think many of us would love to see the non-Mega versions continue, at least for a while. If Mega performed identically to non-Mega it obviously wouldn't matter, but it doesn't.

There really isn't anything worth "upgrading" with v10 to make a "v11".

"Mega" exists because of VACE, which opened up a new pathway and more use cases with just one model. That is why I've been working on tuning the "mega" merge now; I think v10 was the peak of the former merging method.

I will say I have not been maintaining newer NSFW merges for v10, though, since juggling LORAs is particularly time-consuming. "Mega" greatly helps out in this regard, as I only need to maintain one model.

First, I want to say your work is awesome and appreciated. But for those of us looking to expand on it in different ways, could you possibly list at least the main LORAs you've used for the v10 NSFW merge somewhere? Or is it just a case of using the most popular ones from Civitai?
I've literally upgraded to a 3090 and more RAM just because of how capable your model is, and I've learned much in the process. I'd like more control, and knowing your model better would be a great starting point for improving my own understanding.

It is mostly just using the most popular, general NSFW LORAs available when I do mixing. The LORAs and amounts change a bit each time I sit down to do it as new ones come out.

However, this is the list I used for my latest v3.1 "Mega":

body/genitals
<lora:PENISLORA_epoch42e62e30e16.safetensors:0.5:0>
<lora:Wan2.2-T2V-Femaled-Vaginus-Low-V1.safetensors:0.5:0>
<lora:Wan2.2-Low-MysticXXX.safetensors:0.5:0>

all-in-ones
<lora:DR34ML4Y_I2V_14B_LOW.safetensors:0.5:0>

cum/cumshots
<lora:Wan2.2 - T2V - Facial - LOW 14B.safetensors:0.2:0>
<lora:wan22-f4c3spl4sh-154epoc-low-k3nk.safetensors:0.2:0>
<lora:wan-cumshot-T2V-22epo-k3nk.safetensors:0.25:0>
<lora:wan22-cumshot-54epoc-low-k3nk.safetensors:0.25:0>
<lora:Wan2.2 v2 - T2V - Body Shots - LOW 14B.safetensors:0.25:0>
<lora:56Low noise-Cumshot Aesthetics.safetensors:0.3:0>
<lora:wan-thiccum-v3.safetensors:0.3:0>

unexpected dicks
<lora:T2V20Stroking20v22014b.safetensors:-0.1:0>
<lora:Wan2.2 - T2V - Futanari - LOW 14B.safetensors:-0.1:0>
<lora:Wan2.2 - T2V - Stroking It - LOW 14B.safetensors:-0.1:0>

sexual motions
<lora:Twerking_low_noise.safetensors:0.1:0>
<lora:nipple_stroke_WAN22_I2V_v1_low_noise.safetensors:0.1:0>
<lora:W22_LN_i2v_POV_Missionary_Insertion_v1.safetensors:0.2:0>
<lora:Titfuck_WAN14B_V3.0__I2V.safetensors:0.2:0>
<lora:BetterTitfuck_v4_July2025.safetensors:0.2:0>
<lora:Sensual_fingering_v1_low_noise.safetensors:0.3:0>
<lora:T2V - Rubbing Pussy - 14B.safetensors:0.3:0>
<lora:fingering_for_wan_v1.0_e184.safetensors:0.3:0>
<lora:jfj-deepthroat-W22-T2V-LN-v1.safetensors:0.3:0>
<lora:wan_t2v_pov_blowjob_v1.2.safetensors:0.3:0>
<lora:doggyPOV_v1_1.safetensors:0.5:0>
<lora:wan_cowgirl_v1.3.safetensors:0.3:0>
<lora:Wan2.2 v2 - T2V - Cowgirl - LOW 14B.safetensors:0.3:0>
<lora:wan_pov_missionary_i2v_v1.1.safetensors:0.3:0>
<lora:wan2.2_t2v_lownoise_pov_missionary_v1.0.safetensors:0.4:0>
<lora:wan22-pov-handjob-low.safetensors:0.25:0>
<lora:T2V - POV Handjob - 14b.safetensors:0.25:0>
<lora:Handjobs_low_noise.safetensors:0.25:0>
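For anyone wanting to reproduce this kind of merge offline: each entry above is a LoRA plus a merge strength, and baking one into the base model amounts to adding the LoRA's low-rank delta, scaled by its strength, to the matching base weight. A minimal NumPy sketch of that step (toy shapes and variable names are my own, not the author's pipeline):

```python
import numpy as np

def merge_lora(base_weight, lora_A, lora_B, strength, lora_alpha=None):
    """Add one LoRA's low-rank update to a single base weight matrix.

    base_weight: (out, in) base model matrix
    lora_A:      (rank, in) down-projection
    lora_B:      (out, rank) up-projection
    strength:    per-LoRA merge strength (e.g. 0.3 from the list above;
                 negative values subtract the concept, as in the
                 "unexpected dicks" section)
    """
    rank = lora_A.shape[0]
    alpha = lora_alpha if lora_alpha is not None else rank
    delta = (lora_B @ lora_A) * (alpha / rank)
    return base_weight + strength * delta

# Toy example: rank-4 LoRA applied to an 8x8 weight at strength 0.5
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
A = rng.normal(size=(4, 8))
B = rng.normal(size=(8, 4))
W_merged = merge_lora(W, A, B, strength=0.5)
```

In a real checkpoint this loop runs over every weight the LoRA targets, and multiple LoRAs from the list are simply added in sequence.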

I will note that I use a modified LORA loading system that normalizes the combined strengths to a fixed value of 2.7 before merging, which prevents overfitting (and also counteracts strange behavior when using the SaveCheckpoint node, which I think is related to scaling).
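The exact normalization isn't specified; one plausible reading is rescaling every strength so their absolute values sum to 2.7. A sketch under that assumption (the function name and the sum-of-absolute-values interpretation are mine):

```python
def normalize_strengths(loras, target_total=2.7):
    """Rescale LORA strengths so their absolute values sum to target_total.

    loras: list of (filename, strength) pairs, as in the list above.
    This is an assumed interpretation of the "2.7" normalization, not
    the author's actual loader code.
    """
    total = sum(abs(s) for _, s in loras)
    scale = target_total / total
    return [(name, s * scale) for name, s in loras]

# Usage with a few entries from the list above:
loras = [
    ("doggyPOV_v1_1.safetensors", 0.5),
    ("wan_cowgirl_v1.3.safetensors", 0.3),
    ("Handjobs_low_noise.safetensors", 0.25),
]
normalized = normalize_strengths(loras)
```

Normalizing the total this way keeps the merged delta's overall magnitude constant no matter how many LoRAs are stacked, which is consistent with the stated goal of preventing overfitting.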
