Temporal upscaler ghosting on anything longer than 5 seconds
First of all, it's amazing how much work you've put into creating all of your workflows. They are clean and very user-friendly, IMO. I have my own complex workflows that have worked well in production, but I've been plagued by a limitation of the temporal upscaler for some time. With any video longer than 121 frames at 24fps, there is significant ghosting on any movement in the video. If I use 25fps, the ghosting disappears completely at long video lengths, but the resulting video loses some detail and is slightly desaturated. I need high fidelity to the source, which I can only seem to get with videos that are 5 seconds or less. Also, the LTX-2.3 (and 2.0) math prefers to keep the total frame count based on 24fps, which obviously messes with a 25fps workflow. I hope that all makes sense.
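To illustrate the frame-count mismatch I mean, here's a quick sketch, assuming the 8n+1 frame constraint common to LTX-style video models (the exact rule for LTX-2.3 may differ, so treat this as an approximation):

```python
# Sketch of the frame-count math. Assumes valid frame counts have the
# form 8n+1, which is a common constraint for LTX-style video models;
# verify against your model version.

def nearest_valid_frames(seconds: float, fps: int) -> int:
    """Snap a target duration to the nearest 8n+1 frame count."""
    target = seconds * fps
    n = round((target - 1) / 8)
    return 8 * n + 1

# 5 s and 10 s at 24 fps land exactly on valid counts:
print(nearest_valid_frames(5, 24))   # 121
print(nearest_valid_frames(10, 24))  # 241

# At 25 fps the same durations no longer line up cleanly,
# e.g. 5 s snaps to 129 frames = 5.16 s of video:
print(nearest_valid_frames(5, 25))   # 129
```

This is why a 25fps workaround is awkward: the valid frame counts stop matching whole-second durations.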
Here's why I want to use the temporal upscaler: LTX-2.3 (and 2.0) distorts detail, and there is a lot of decoherence on moving objects when they are small in frame. The model is fantastic for closer shots of people/objects but falls apart when they are smaller in frame. The temporal upscaler fixes this, but it doesn't seem to be trained on anything over 5 seconds or at 25fps.
This leads me to why I'm messaging you. Out of desperation after my own temporal implementation failed, I tried your "LTX-2.3_-_I2V_T2V_Dev_Full-Steps.json" workflow with your default settings of 24fps and a 10-second video length (241 frames), with the temporal upscaler activated, hoping it would work. When I run your workflow, I get the exact same ghosting of moving objects. My question is: have you been able to get a clean output with the workflow you posted, and if so, how? I am baffled as to what I could be missing, but I'm always open to trying different techniques. By the way, your implementation is pretty much the same as mine. Do you have any insight into this?