Image to Video
Can this model (GGUF) do image-to-video? If yes, could someone share a ComfyUI workflow for that?
Any help appreciated.
I got the workflow shared by Unsloth running.
Yes, this model (GGUF) can make video from an image.
This is my workflow: https://drive.chenjia.org/api/public/dl/dcfCfOCn?inline=true
I tried a lot of configs; this one is the best I got.
Only 150-300 seconds per run, depending on video length.
Hey, this works great; much appreciated for sharing your workflow! Having working examples on my end will help me understand more and more of this stuff.
I guess we can use the UD distilled version now to preview what might be generated, then swap in the dev versions of the models for higher quality?
The dev model generates noticeably better quality video (but not by much).
You can try using the dev model for the HIGH sampler and the distilled LoRA for the LOW sampler.
For previews, you can try my workflow: https://drive.chenjia.org/api/public/dl/57gJI5Fq?inline=true
It's super fast but super low quality, only for prompt-testing purposes.
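The dev-for-HIGH / distilled-for-LOW split described above can be sketched in plain Python. This is a conceptual illustration with no ComfyUI dependency; the step counts, the switch point, and the function names are my own assumptions, standing in for two chained KSampler (Advanced) nodes where the first runs the early high-noise steps and the second resumes from the handoff step.

```python
# Conceptual sketch of a two-stage sampling split (assumed values,
# not a real ComfyUI API): the full "dev" model denoises the early
# high-noise steps, and the distilled model/LoRA finishes the
# low-noise steps, mirroring two chained KSampler (Advanced) nodes
# with start_at_step / end_at_step.

def split_steps(total_steps: int, switch_at: int):
    """Partition the denoising schedule at the handoff step.

    Returns (high_steps, low_steps): the first sampler would run
    steps [0, switch_at), the second [switch_at, total_steps).
    """
    if not 0 < switch_at < total_steps:
        raise ValueError("switch point must fall inside the schedule")
    high = list(range(0, switch_at))           # dev model, high noise
    low = list(range(switch_at, total_steps))  # distilled, low noise
    return high, low


def run_two_stage(total_steps: int, switch_at: int):
    """Record which (hypothetical) model handles each step."""
    high, low = split_steps(total_steps, switch_at)
    return [("dev", s) for s in high] + [("distilled", s) for s in low]


if __name__ == "__main__":
    for model, step in run_two_stage(total_steps=8, switch_at=4):
        print(model, step)
```

The second sampler must continue from the first sampler's leftover noise rather than start fresh, which is why in ComfyUI the first node is set to return the latent with leftover noise and the second node adds no new noise.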
This doesn't work on ComfyUI Desktop.

