Latent Observer PRO
obsxrver
AI & ML interests
Researcher
Recent Activity
updated a Space about 12 hours ago
obsxrver/wan2.2-i2v-lora-demo
liked a model about 19 hours ago
drbaph/Woosh
liked a model about 20 hours ago
HauhauCS/Qwen3.5-35B-A3B-Uncensored-HauhauCS-AggressiveOrganizations
posted an update about 23 hours ago
posted an update 12 days ago
Post
Announcement: PATREON!
https://www.patreon.com/c/LatentObserver
Hello friends! I have created a Patreon. If I get enough interested customers, I may start taking commissions for custom-trained LoRAs. If you want to check it out, the link is above.
replied to their post 2 months ago
Hello @obsxrver, I wonder if you use LTXV2 (the quality/generation time is impressive), and whether you plan to adapt your existing LoRAs and this Vast.ai template for LTXV2? Thanks
Thanks for the question. To answer: I haven't experimented with LTXV2 at all, and I'm not sure what it is capable of in, ahem, my domain. There are no plans to release a version of this geared toward LTXV2 at this time.
replied to their post 3 months ago
Sadly, no. It requires a powerful NVIDIA GPU.
replied to their post 5 months ago
You can either download the files from Jupyter in the instance portal, or, for cloud upload, go to Settings, set up a cloud connection, and add the API key to the WebUI. Check the Advanced Setup section on GitHub. I'll set up a local install script soon enough, too.
Post
https://github.com/obsxrver/wan22-lora-training
If you’ve been wanting to train your own Wan 2.2 Video LoRAs but are intimidated by the hardware requirements, parameter tweaking insanity, or the installation nightmare—I built a solution that handles it all for you.
This is currently the easiest, fastest, and cheapest way to get a high-quality training run done.
Why this method?
* Zero Setup: No installing Python, CUDA, or hunting for dependencies. You launch a pre-built Vast.ai template, and it's ready in minutes.
* Full WebUI: Drag-and-drop your videos/images, edit captions, and click "Start." No terminal commands required.
* Extremely Cheap: You can rent a dual RTX 5090 node, train a full LoRA in 2-3 hours, and auto-shutdown. Total cost is usually $3 or less.
* Auto-Save: It automatically uploads your finished LoRA to your Cloud Storage (Google Drive/S3/Dropbox) and kills the instance so you don't pay for a second longer than necessary.
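For the curious, the auto-save step above can be sketched roughly like this. This is a hypothetical illustration, not the template's actual code: it assumes an rclone remote named `gdrive` and the `vastai` CLI, and it only composes the commands rather than running them (the path, remote name, and instance ID are all placeholders).

```python
# Hypothetical sketch of an auto-save hook: upload the finished LoRA with
# rclone, then destroy the Vast.ai instance so billing stops. All names
# here are placeholders, not the repo's real scripts or paths.
import shlex

def autosave_commands(lora_path, remote="gdrive:wan22-loras", instance_id="INSTANCE_ID"):
    """Return, in order, the two shell commands such a hook would run."""
    return [
        f"rclone copy {shlex.quote(lora_path)} {remote}",  # upload to cloud storage
        f"vastai destroy instance {instance_id}",          # stop paying immediately
    ]
```

Order matters here: the second command kills the machine, so the upload has to finish first.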
How it works:
1. Click the Vast.AI template link (in the repo).
2. Open the WebUI in your browser.
3. Upload your dataset and press Train.
4. Come back in an hour to find your LoRA in your Google Drive.
It supports both Text-to-Video and Image-to-Video, and optimizes for dual-GPU setups (training High/Low noise simultaneously) to cut training time in half.
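To illustrate the dual-GPU trick described above, here is my own rough sketch (the script name and flags are made up, not the template's real entry point): pin the High-noise and Low-noise expert runs to separate GPUs via `CUDA_VISIBLE_DEVICES` and let them train in parallel.

```python
# Sketch of parallel High/Low-noise expert training on a dual-GPU node.
# "train_lora.py" and its flags are hypothetical placeholders.
import os
import subprocess

def build_jobs(dataset_dir):
    """Build one (env, argv) pair per expert, each pinned to its own GPU."""
    jobs = []
    for gpu, expert in enumerate(["high_noise", "low_noise"]):
        env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu))  # GPU 0 / GPU 1
        argv = ["python", "train_lora.py", "--expert", expert,
                "--dataset", dataset_dir]
        jobs.append((env, argv))
    return jobs

def launch(jobs):
    """Start both runs at once, then wait for both to finish."""
    procs = [subprocess.Popen(argv, env=env) for env, argv in jobs]
    for p in procs:
        p.wait()
```

Because Wan 2.2 splits denoising into separate High-noise and Low-noise experts, the two LoRA runs are independent, which is what lets this roughly halve wall-clock time versus training them back to back.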
Repo + Template Link:
https://github.com/obsxrver/wan22-lora-training
Let me know if you have questions.
posted an update 5 months ago