Cosmos-Predict2.5-2B-base-distilled-LoRA

#1
by kdutt2000 - opened

Regarding your distilled LoRA here: is it for Preview 1 or 2? Also, I've used it and it's great. How does it work?

https://huggingface.co/nvidia/Cosmos-Predict2.5-2B/tree/main/base/distilled
βˆ’ https://huggingface.co/nvidia/Cosmos-Predict2.5-2B/tree/main/base/pre-trained
= Cosmos-Predict2.5-2B-base-distilled-LoRA

This is a LoRA produced by a simple weighted-difference extraction. Since it was extracted from the Cosmos-Predict2.5-2B-base distilled model, technically it is neither Preview 1 nor Preview 2.

Please refer to this document for details.
https://github.com/nvidia-cosmos/cosmos-predict2.5/blob/main/docs/post-training_video2world_action.md#4-distillation
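The weighted-difference extraction mentioned above can be sketched as follows: subtract the pre-trained weights from the distilled weights, then take a low-rank (SVD) approximation of that difference. This is a minimal illustration, not NVIDIA's actual extraction script; the function name and rank choice are my own.

```python
import numpy as np

def extract_lora(w_base, w_tuned, rank):
    """Sketch: extract a rank-limited LoRA from the difference of two weight matrices."""
    delta = w_tuned - w_base  # the "weighted difference" between distilled and pre-trained
    u, s, vt = np.linalg.svd(delta, full_matrices=False)
    # keep only the top-`rank` singular components as the LoRA factors
    lora_up = u[:, :rank] * s[:rank]   # shape (out_features, rank)
    lora_down = vt[:rank, :]           # shape (rank, in_features)
    return lora_up, lora_down

# At inference, the LoRA is applied as: w = w_base + strength * (lora_up @ lora_down)
```

If the true difference has low rank (as distillation deltas often roughly do), `lora_up @ lora_down` recovers it almost exactly; otherwise the SVD keeps the best rank-`rank` approximation.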

Do you know if Anima-Comradeship-v1T17H is based on Preview 1 or 2? Also, do I save it in the checkpoints folder or in the diffusion model folder for ComfyUI? I don't know if it includes the text encoder and the VAE. Thank you.

Anima-Comradeship-v1T17H is based on Preview 2. You can place it in the diffusion model folder.

Do you integrate the Cosmos-Predict2.5-2B-base-distilled-LoRA in your checkpoint? Also, what strength do you recommend if I do use that LoRA? In theory it should help with prompt adherence and quality. Great work as always. πŸ‘ I love the RDBT - Anima stability LoRAs and checkpoints too.

DMD2 LoRA is merged in the v1T16H and v1T17H models.

How do you merge all of the LoRAs together? Do you just stack them and lower the weights? There is a really cool custom node for LoRAs: https://github.com/ethanfel/ComfyUI-LoRA-Optimizer

Basically it lets you use different LoRAs together, and it has a node called autotuner that automatically tunes their weights so they work better together...
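The "stack them and lower the weights" idea amounts to summing each LoRA's scaled low-rank update into the base weights. A minimal sketch (the helper name and tuple layout are my own, not from ComfyUI or the linked node):

```python
import numpy as np

def stack_loras(w_base, loras):
    """Sketch: merge several LoRAs into one weight matrix.

    `loras` is a list of (lora_up, lora_down, strength) tuples;
    each contributes strength * (lora_up @ lora_down) to the base weights.
    """
    w = np.array(w_base, dtype=float)
    for lora_up, lora_down, strength in loras:
        w = w + strength * (lora_up @ lora_down)
    return w
```

Lowering each strength when stacking keeps the combined update from overshooting, which is roughly what an autotuner searches for automatically.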

It's pretty interesting πŸ€”

Also, RDBT - Anima has a great stability/DMD2 LoRA too, for Preview 2.
