Cseti committed
Commit e49564c · 1 Parent(s): f1b1d64

first commit

Files changed (38)
  1. .gitattributes +1 -0
  2. .gitignore +2 -0
  3. CLAUDE.md +58 -0
  4. README.md +45 -3
  5. hunyuanvideo/rf-inversion/README.md +3 -0
  6. hunyuanvideo/rf-inversion/hunyuanvideo-rf-inversion.json +3 -0
  7. ltx/2.3/i2v-two-pass/README.md +47 -0
  8. ltx/2.3/i2v-two-pass/ltx2.3-i2v-two-pass.json +3 -0
  9. ltx/2.3/i2v-two-pass/media/preview.mp4 +3 -0
  10. ltx/2.3/i2v-two-pass/media/preview_thumb.png +3 -0
  11. posts/2026-03-30-moments-short-film/README.md +91 -0
  12. posts/2026-03-30-moments-short-film/media/cseti_moments_teaser_web.mp4 +3 -0
  13. posts/2026-03-30-moments-short-film/media/cseti_moments_teaser_web_thumb.png +3 -0
  14. posts/2026-03-30-moments-short-film/media/keyframes-montage.jpg +3 -0
  15. wan/2.2/face-detailer/README.md +64 -0
  16. wan/2.2/face-detailer/media/preview-backwardlook.mp4 +3 -0
  17. wan/2.2/face-detailer/media/preview-backwardlook_thumb.png +3 -0
  18. wan/2.2/face-detailer/media/preview-meeting.mp4 +3 -0
  19. wan/2.2/face-detailer/media/preview-meeting_thumb.png +3 -0
  20. wan/2.2/face-detailer/media/preview.png +3 -0
  21. wan/2.2/face-detailer/wan2.2-face-detailer.json +3 -0
  22. wan/2.2/lightning-gguf/README.md +3 -0
  23. wan/2.2/lightning-gguf/media/preview.mp4 +3 -0
  24. wan/2.2/lightning-gguf/wan2.2-lightning-gguf.json +3 -0
  25. wan/2.2/t2v-a14b/README.md +3 -0
  26. wan/2.2/t2v-a14b/media/preview.mp4 +3 -0
  27. wan/2.2/t2v-a14b/wan2.2-t2v-a14b.json +3 -0
  28. wan/2.2/upscaling/README.md +57 -0
  29. wan/2.2/upscaling/media/preview-v2-looping.png +3 -0
  30. wan/2.2/upscaling/wan2.2-upscaling-v1.json +3 -0
  31. wan/2.2/vace-endless-extension/README.md +3 -0
  32. wan/2.2/vace-endless-extension/wan2.2-vace-endless-extension.json +3 -0
  33. wan/2.2/vace/README.md +3 -0
  34. wan/2.2/vace/media/preview.webm +3 -0
  35. wan/2.2/vace/wan2.2-vace.json +3 -0
  36. z-image/two-stage/README.md +41 -0
  37. z-image/two-stage/media/preview.png +3 -0
  38. z-image/two-stage/z-image-two-stage.json +3 -0
.gitattributes CHANGED
@@ -58,3 +58,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  # Video files - compressed
  *.mp4 filter=lfs diff=lfs merge=lfs -text
  *.webm filter=lfs diff=lfs merge=lfs -text
+ *.json filter=lfs diff=lfs merge=lfs -text
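The line added above routes `.json` files through Git LFS, so the `.json` diffs later in this commit show three-line pointer files rather than raw workflow JSON. A minimal sketch of reading such a pointer (plain Python; field layout per the Git LFS pointer format, oid copied from a pointer in this commit):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:95431ecf28978a833268b76e51b5e7f88bdf5a2598ef2fc695087766af3d7836
size 47453"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # prints: 47453 — size in bytes of the real file
```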
.gitignore ADDED
@@ -0,0 +1,2 @@
+ .claude/
+ drafts/
CLAUDE.md ADDED
@@ -0,0 +1,58 @@
+ # CLAUDE.md
+
+ This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
+
+ ## Repository Overview
+
+ A personal collection of ComfyUI workflows for AI video generation, video editing, upscaling, and image generation. Workflows are distributed as `.json` files that can be drag-and-dropped into ComfyUI. Each workflow lives in its own directory alongside a `README.md` and a `media/` folder with preview assets.
+
+ ## Directory Structure
+
+ ```
+ <model-family>/<version>/<workflow-name>/
+   <workflow-name>.json   # The ComfyUI workflow
+   README.md              # Description, requirements, notes, changelog
+   media/                 # Preview images/videos (thumbnails, mp4s)
+
+ posts/<date-slug>/
+   README.md              # Long-form making-of post
+   media/                 # Supporting media
+
+ drafts/                  # Work-in-progress workflows (not published)
+ ```
+
+ ## Workflow Families
+
+ - **ltx/** — LTX Video 2.x workflows (image-to-video, two-pass, audio support)
+ - **wan/** — WAN 2.2 workflows (T2V, VACE, face detailing, upscaling, Lightning GGUF)
+ - **hunyuanvideo/** — HunyuanVideo workflows (RF-inversion video editing)
+ - **z-image/** — Image generation workflows (two-stage T2I with refinement)
+
+ ## README Convention for Workflows
+
+ Each workflow README must follow this structure (keep sections in order, no extras):
+
+ 1. `# Title` — model name, type (Image-to-Video / Video-to-Video / etc.)
+ 2. One-paragraph description of what the workflow does
+ 3. `## Preview` — YouTube embed or local media link
+ 4. `## Requirements` — ComfyUI version, models with HuggingFace download links, custom nodes with GitHub links
+ 5. `## Notes` — key parameters, sampler settings, VRAM tips, optional nodes
+ 6. `## Changelog` — dated entries, newest first, format: `- \`YYYY-MM-DD\` — note`
+
+ **No emojis** anywhere in README files. Use plain text markers if needed.
+
+ ## Adding a New Workflow
+
+ 1. Create the directory: `<model-family>/<version>/<workflow-slug>/`
+ 2. Export and place the `.json` from ComfyUI
+ 3. Create `README.md` following the convention above
+ 4. Create `media/` and add at minimum one preview thumbnail
+ 5. Add a row to the top-level `README.md` table under the correct model section (keep `| Workflow | Description | Type | Updated |` column order)
+
+ ## Updating the Main README
+
+ The root `README.md` is the index. It contains one table per model family. Column order is fixed: `Workflow | Description | Type | Updated`. The `Type` column uses these values: `Video gen`, `Video edit`, `Upscaling`, `Image gen`.
+
+ ## Posts
+
+ Long-form making-of articles go under `posts/<YYYY-MM-DD-slug>/README.md`. They are linked from the `## Posts` table in the root README. Posts reference workflows via relative paths (e.g. `../../ltx/2.3/i2v-two-pass/`).
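The "Adding a New Workflow" steps above can be sketched as a shell scaffold — a hypothetical example where `my-workflow` and the README contents are placeholders, not files from this repo:

```shell
# Scaffold a new workflow entry (placeholder slug and family)
fam=wan ver=2.2 slug=my-workflow
mkdir -p "$fam/$ver/$slug/media"

# Export the workflow from ComfyUI, then place it here
# (repo convention: <family><version>-<slug>.json, e.g. wan2.2-my-workflow.json)
touch "$fam/$ver/$slug/$fam$ver-$slug.json"

# Minimal README skeleton following the section order above
cat > "$fam/$ver/$slug/README.md" <<'EOF'
# Title

Description.

## Preview
## Requirements
## Notes
## Changelog

- `2026-01-01` — Initial upload
EOF
```

The remaining manual step is adding a row to the root `README.md` table under the matching model section.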
README.md CHANGED
@@ -1,3 +1,45 @@
- ---
- license: apache-2.0
- ---
+ # ComfyUI Workflows
+
+ A personal collection of ComfyUI workflows for AI video generation, upscaling, and video editing.
+ Each workflow entry includes a description, example output, requirements, and usage notes.
+
+ Drag-and-drop any `.json` file directly into ComfyUI to load the workflow.
+
+ ---
+
+ ## Posts
+
+ | Post | Date |
+ |---|---|
+ | [MOMENTS — Making of](posts/2026-03-30-moments-short-film/) | 2026-03-30 |
+
+ ---
+
+ ## LTX Video 2.3
+
+ | Workflow | Description | Type | Updated |
+ | ------------------------------------------------- | -------------------------------------------------------------------------------- | --------- | ---------- |
+ | [I2V Two-Pass](ltx/2.3/i2v-two-pass/) | Two-pass I2V with start/end guide images, 720p→1080p upscale, and audio support | Video gen | 2026-03-30 |
+
+ ## WAN 2.2
+
+ | Workflow | Description | Type | Updated |
+ | --------------------------------------------------------- | ------------------------------------------------------------------------- | ---------- | ---------- |
+ | [T2V A14B](wan/2.2/t2v-a14b/) | Text-to-video with the WAN 2.2 A14B model, GGUF quantized | Video gen | 2025-07-30 |
+ | [Lightning GGUF](wan/2.2/lightning-gguf/) | Fast T2V with cached T5 encoding and Lightning-tuned steps | Video gen | 2025-08-09 |
+ | [VACE](wan/2.2/vace/) | Video-to-video with automatic subject segmentation and depth conditioning | Video edit | 2025-07-15 |
+ | [VACE Endless Extension](wan/2.2/vace-endless-extension/) | Extend a video indefinitely using VACE looping | Video gen | 2025-11-10 |
+ | [Upscaling v1](wan/2.2/upscaling/) | Per-subject crop upscaling with VFI frame interpolation | Upscaling | 2025-10-13 |
+ | [Face Detailer](wan/2.2/face-detailer/) | SAM2-based segmentation and WAN 2.2 detail pass on existing video | Video edit | 2026-03-29 |
+
+ ## HunyuanVideo
+
+ | Workflow | Description | Type | Updated |
+ | ----------------------------------------------------------- | ---------------------------------------------------------------------------- | ---------- | ---------- |
+ | [RF-Inversion](hunyuanvideo/rf-inversion/) | Video editing via RF-inversion — relight, restyle, or modify existing video | Video edit | 2025-01-23 |
+
+ ## Z-Image
+
+ | Workflow | Description | Type | Updated |
+ | ------------------------------- | ------------------------------------------------------- | --------- | ---------- |
+ | [Two-Stage](z-image/two-stage/) | Two-pass T2I workflow with refinement and color grading | Image gen | 2026-03-29 |
hunyuanvideo/rf-inversion/README.md ADDED
@@ -0,0 +1,3 @@
+ ## Changelog
+
+ - `2025-01-23` — Initial upload
hunyuanvideo/rf-inversion/hunyuanvideo-rf-inversion.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:95431ecf28978a833268b76e51b5e7f88bdf5a2598ef2fc695087766af3d7836
+ size 47453
ltx/2.3/i2v-two-pass/README.md ADDED
@@ -0,0 +1,47 @@
+ # LTX Video 2.3 — I2V Two-Pass with Audio
+
+ **Model:** LTX Video 2.3 — 22B Distilled (FP8)
+ **Type:** Image-to-Video
+
+ Two-pass Image-to-Video workflow using LTX Video 2.3 with audio generation support. Takes two guide images (start and end frame) and generates a video in a 720p first pass, then upscales to 1080p in a second pass. Audio is encoded via the LTX 2.3 audio VAE and composited into the final output.
+
+ ---
+
+ ## Preview
+
+ [![Watch on YouTube — click to open](https://img.youtube.com/vi/5pMIWq4kwo8/maxresdefault.jpg)](https://www.youtube.com/watch?v=5pMIWq4kwo8)
+ *Click to open on YouTube*
+
+ ---
+
+ ## Requirements
+
+ - **ComfyUI:** recent stable build
+ - **Model:** `ltx-2.3-22b-distilled_transformer_only_fp8_input_scaled_v3.safetensors` — [Kijai/LTX2.3_comfy](https://huggingface.co/Kijai/LTX2.3_comfy/blob/main/diffusion_models/ltx-2.3-22b-distilled_transformer_only_fp8_input_scaled_v3.safetensors)
+ - **Video VAE:** `LTX23_video_vae_bf16.safetensors` — [Kijai/LTX2.3_comfy](https://huggingface.co/Kijai/LTX2.3_comfy/blob/main/vae/LTX23_video_vae_bf16.safetensors)
+ - **Audio VAE:** `LTX23_audio_vae_bf16.safetensors` — [Kijai/LTX2.3_comfy](https://huggingface.co/Kijai/LTX2.3_comfy/blob/main/vae/LTX23_audio_vae_bf16.safetensors)
+ - **Preview VAE:** `taeltx2_3.safetensors` — [Kijai/LTX2.3_comfy](https://huggingface.co/Kijai/LTX2.3_comfy/blob/main/vae/taeltx2_3.safetensors)
+ - **Text encoder:** `gemma_3_12B_it_fp8_scaled.safetensors` — [Kijai/LTX2.3_comfy](https://huggingface.co/Kijai/LTX2.3_comfy/tree/main/text_encoders)
+ - **Text projection:** `ltx-2.3_text_projection_bf16.safetensors` — [Kijai/LTX2.3_comfy](https://huggingface.co/Kijai/LTX2.3_comfy/blob/main/text_encoders/ltx-2.3_text_projection_bf16.safetensors)
+ - **Custom nodes:**
+   - [ComfyUI-KJNodes](https://github.com/kijai/ComfyUI-KJNodes)
+   - [ComfyUI-VideoHelperSuite](https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite)
+   - [rgthree-comfy](https://github.com/rgthree/rgthree-comfy)
+   - [ComfyUI_Fill-Nodes](https://github.com/filliptm/ComfyUI_Fill-Nodes)
+
+ ---
+
+ ## Notes
+
+ - The workflow uses two guide images (start and end frame) via `LTXVAddGuideMulti`. Image 1 is placed at frame 0, Image 2 at frame 161 by default.
+ - **Pass 1:** 1280x720, 241 frames @ 24fps, 8 steps, `lcm` sampler, `linear_quadratic` scheduler, CFG 1. NAG guidance is enabled (scale 11, rescale 0.25).
+ - **Pass 2:** The output is upscaled to 1920x1080 with `nvidia_rtx_vsr` via `ImageResizeKJv2`, then re-encoded and sampled again with the `euler_ancestral` sampler for 8 steps.
+ - An audio file can be supplied via `VHS_LoadAudioUpload` — the audio is encoded with the LTX 2.3 audio VAE and blended into the latent space during both passes.
+ - A LoRA (`ltx-2.3-22b-distilled-lora-dynamic`) is present in the graph but **disabled** — enable it if you want dynamic motion tuning.
+ - `LTX2MemoryEfficientSageAttentionPatch`, `LTXVChunkFeedForward`, and `ModelPatchTorchSettings` are used for VRAM optimization; these require SageAttention to be installed.
+
+ ---
+
+ ## Changelog
+
+ - `2026-03-30` — Initial upload
ltx/2.3/i2v-two-pass/ltx2.3-i2v-two-pass.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:60d153bde46947dd4b6888ad971d9a04c3df38a1619c7a9ed1bd4b3f06a53f2e
+ size 96759
ltx/2.3/i2v-two-pass/media/preview.mp4 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b67918595c92b6826c32402148576e910af3065de7910f28412bee1a7d497101
+ size 355345
ltx/2.3/i2v-two-pass/media/preview_thumb.png ADDED

Git LFS Details

  • SHA256: 678af3188bec20c48f5c6464fa9b3e0d6308a63c770edd34a8cbd31bf9875b2b
  • Pointer size: 131 Bytes
  • Size of remote file: 389 kB
posts/2026-03-30-moments-short-film/README.md ADDED
@@ -0,0 +1,91 @@
+ # MOMENTS — Making of
+
+ *2026-03-30*
+
+ Lately I've been watching a lot of podcasts. Scientists, philosophers, that kind of thing. Neil deGrasse Tyson is a big favorite of mine. At some point I thought — I want to make something like this. Just a short video, one idea, one voice talking directly to camera.
+
+ Then the ArcaGidan challenge came out with the theme of Time, and that was the trigger. The first thing that popped into my head was that old saying — it's not the destination, it's the journey. I kept thinking about it. The journey isn't just the road... It's made of specific moments. The ones where things actually changed. Where you went one way instead of another, whether you chose it or not.
+
+ And that's when a quote I'd known for years suddenly fit perfectly — Søren Kierkegaard: "Life can only be understood backwards; but it must be lived forwards." You're in the middle of something and you have no idea what it is yet. You only get it years later, when something random throws you right back there.
+
+ Those things together became MOMENTS: a ~2-minute short film about the pivot points of a life.
+
+ ---
+
+ ## Preview
+
+ [![Preview — MOMENTS teaser](media/cseti_moments_teaser_web_thumb.png)](media/cseti_moments_teaser_web.mp4)
+
+ Watch the full short film here: https://arcagidan.com/entry/06c37d91-060c-4a91-89e1-fae856888085
+
+ If you like it, please vote — that's the best way to support this kind of work and help me keep making more open-source stuff: new workflows, fine-tuned models (full finetune, LoRA, IC-LoRA...), and so on.
+
+ ---
+
+ ## Building the Pipeline
+
+ The basic flow was: write the script → write the prompts → generate key frames → animate them → fix the face → upscale → add voice → add music → cut it together.
+
+ ### Narration: VibeVoice
+
+ I didn't want something that obviously sounds like AI. I needed a voice that felt like someone actually sitting there and talking. I used [VibeVoice](../../) with a cloned voice based on P.J. Taylor from LibriVox ([reader 9165](https://librivox.org/reader/9165)).
+
+ ### Key-frames: Two-Stage T2I → I2V
+
+ So what I do is: first I generate one or more still images, get them looking exactly how I want, then use those images for the videos. That's it, basically.
+
+ If I need key-frames for specific shots, I take the base image and manipulate it with an image-edit model like Qwen-Image-Edit or Nanobanana.
+
+ For the stills I used [Z-Image Two-Stage](../../z-image/two-stage/) — it does two passes, the second one at low denoise to clean up details.
+
+ ![Key-frames](media/keyframes-montage.jpg)
+
+ ### Animation: Image-to-Video
+
+ I tried a lot of different workflows and settings for I2V. I still can't say I've figured it out — every project feels like I'm half-guessing. But here's one I put together that worked well for this film: [LTX 2.3 two-pass](../../ltx/2.3/i2v-two-pass/)
+
+ The key-frames go in, and the video prompt only describes motion, nothing visual. The visuals are already in the image.
+
+ ### Face Detailing: WAN 2.2
+
+ LTX 2.3 has a lot of artifacts, especially on smaller details — faces, hands, fine textures. It's just how it is.
+
+ So I made a WAN 2.2-based detailer workflow for this. It segments the face — or anything else you want, actually — with SAM2, crops it out, runs a WAN 2.2 pass on just that region, then puts it back into the original video. The rest of the clip stays untouched.
+
+ The workflow is here: [WAN 2.2 Face Detailer](../../wan/2.2/face-detailer/)
+
+ <table><tr>
+ <td><a href="../../wan/2.2/face-detailer/media/preview-meeting.mp4"><img src="../../wan/2.2/face-detailer/media/preview-meeting_thumb.png" width="100%"></a></td>
+ <td><a href="../../wan/2.2/face-detailer/media/preview-backwardlook.mp4"><img src="../../wan/2.2/face-detailer/media/preview-backwardlook_thumb.png" width="100%"></a></td>
+ </tr></table>
+
+ ### Music: Suno
+
+ The prompt I used:
+
+ > Minimalist ambient bed for spoken word; slowly evolving piano clusters over warm synth swells and distant reversed textures. Very soft sidechained pads breathe around the voice, with subtle granular noise drifting in and out. Energy stays low and steady, occasionally blooming into gentle chords before receding, perfect for underscoring intimate storytelling, emotional.
+
+ Suno did well with this.
+
+ ### Editing: Kdenlive
+
+ Nothing fancy here. The script was written with the cuts already in mind, so putting it together in Kdenlive was mostly just mechanical assembly.
+
+ ### Tools at a glance
+
+ | Role | Tool / Workflow |
+ | --------------- | ------------------------------------------------------------- |
+ | Music | Suno |
+ | TTS | VibeVoice — cloned voice (P.J. Taylor / LibriVox reader 9165) |
+ | Image/Video gen | ComfyUI |
+ | Image-to-Video | [LTX 2.3 two-pass](../../ltx/2.3/i2v-two-pass/) |
+ | Text-to-Image | [Z-Image Two-Stage](../../z-image/two-stage/) |
+ | Image editing | Qwen-Image-Edit / Nanobanana |
+ | Face detailing | [WAN 2.2 Face Detailer](../../wan/2.2/face-detailer/) |
+ | Video editing | Kdenlive |
+
+ ---
+
+ ## Changelog
+
+ - `2026-03-30` — Published
posts/2026-03-30-moments-short-film/media/cseti_moments_teaser_web.mp4 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e32a555dcae4dbd5499e4d0807ba6dcdbf77bce4a3e93fdb3c62df0ec93db17d
+ size 9213227
posts/2026-03-30-moments-short-film/media/cseti_moments_teaser_web_thumb.png ADDED

Git LFS Details

  • SHA256: cbe71249987956ce9464dcc82a39cb537253e037def679265536f2e7bdd33520
  • Pointer size: 131 Bytes
  • Size of remote file: 746 kB
posts/2026-03-30-moments-short-film/media/keyframes-montage.jpg ADDED

Git LFS Details

  • SHA256: 8aaefb85fd56900f3302014200796f31912e5edd8ec63c89da65063c9803c807
  • Pointer size: 130 Bytes
  • Size of remote file: 61.1 kB
wan/2.2/face-detailer/README.md ADDED
@@ -0,0 +1,64 @@
+ # WAN 2.2 Face Detailer
+
+ **Model:** WAN 2.2 T2V A14B (fp8 scaled)
+ **Type:** Video-to-Video
+
+ Face enhancement pipeline for WAN 2.2 video output. Loads an existing video, detects and segments faces using SAM2, crops each face region, runs a focused WAN 2.2 generation pass to add detail, then composites the result back into the original frame.
+
+ ---
+
+ ## Preview
+
+ [![Preview — meeting scene](media/preview-meeting_thumb.png)](media/preview-meeting.mp4)
+
+ [![Preview — backward look scene](media/preview-backwardlook_thumb.png)](media/preview-backwardlook.mp4)
+
+ ---
+
+ ## Requirements
+
+ - **ComfyUI:** recent stable build
+
+ ### Models
+
+ - **Main model:** `Wan2_2-T2V-A14B-LOW_fp8_e4m3fn_scaled_KJ.safetensors` — [Kijai/WanVideo_comfy_fp8_scaled](https://huggingface.co/Kijai/WanVideo_comfy_fp8_scaled/resolve/main/T2V/Wan2_2-T2V-A14B-LOW_fp8_e4m3fn_scaled_KJ.safetensors)
+ - **VACE module:** `Wan2_2_Fun_VACE_module_A14B_LOW_fp8_e4m3fn_scaled_KJ.safetensors` — [Kijai/WanVideo_comfy_fp8_scaled](https://huggingface.co/Kijai/WanVideo_comfy_fp8_scaled/resolve/main/VACE/Wan2_2_Fun_VACE_module_A14B_LOW_fp8_e4m3fn_scaled_KJ.safetensors)
+ - **VAE:** `Wan2_1_VAE_bf16.safetensors` — [Kijai/WanVideo_comfy](https://huggingface.co/Kijai/WanVideo_comfy/resolve/main/Wan2_1_VAE_bf16.safetensors)
+ - **Text encoder:** `umt5-xxl-enc-bf16.safetensors` — [Kijai/WanVideo_comfy](https://huggingface.co/Kijai/WanVideo_comfy/resolve/main/umt5-xxl-enc-bf16.safetensors)
+ - **LoRA (step distill):** `lightx2v_T2V_14B_cfg_step_distill_v2_lora_rank64_bf16.safetensors` — [Kijai/WanVideo_comfy](https://huggingface.co/Kijai/WanVideo_comfy/resolve/main/Lightx2v/lightx2v_T2V_14B_cfg_step_distill_v2_lora_rank64_bf16.safetensors)
+ - **LoRA (reward):** `Wan2.2-Fun-A14B-InP-low-noise-HPS2.1.safetensors` — example LoRA, replace with your own if needed — [alibaba-pai/Wan2.2-Fun-Reward-LoRAs](https://huggingface.co/alibaba-pai/Wan2.2-Fun-Reward-LoRAs/resolve/main/Wan2.2-Fun-A14B-InP-low-noise-HPS2.1.safetensors)
+ - **Segmentation:** `sam2.1_hiera_small.safetensors` — [Kijai/sam2-safetensors](https://huggingface.co/Kijai/sam2-safetensors/resolve/main/sam2.1_hiera_small.safetensors)
+
+ ### Custom nodes
+
+ - [ComfyUI-WanVideoWrapper](https://github.com/kijai/ComfyUI-WanVideoWrapper)
+ - [ComfyUI-KJNodes](https://github.com/kijai/ComfyUI-KJNodes)
+ - [ComfyUI-VideoHelperSuite](https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite)
+ - [ComfyUI-segment-anything-2](https://github.com/kijai/ComfyUI-segment-anything-2)
+ - [rgthree-comfy](https://github.com/rgthree/rgthree-comfy)
+ - [ComfyUI-Florence2](https://github.com/kijai/ComfyUI-Florence2) — delete these nodes if you don't want to use Florence2 for segmentation.
+
+ ---
+
+ ## Notes
+
+ - The VACE module is optional — disconnect it to run without VACE.
+ - SAM2 is used for face segmentation. The face bounding box index can be adjusted directly in the workflow if the wrong face is selected.
+
+ ---
+
+ ## Changelog
+
+ - `2026-03-29` — Initial upload
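As a rough illustration of the segment → crop → refine → composite idea described in this README (not the actual workflow graph), here is a toy Python sketch where `enhance` stands in for the WAN 2.2 detail pass and a frame is a 2-D list of pixel values:

```python
def bbox(mask):
    """Bounding box (top, left, bottom, right) of True cells in a 2-D mask."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for c in range(len(mask[0])) if any(row[c] for row in mask)]
    return rows[0], cols[0], rows[-1] + 1, cols[-1] + 1

def detail_pass(frame, mask, enhance):
    """Crop the masked region, run `enhance` on it, paste it back.

    Everything outside the mask's bounding box stays untouched,
    mirroring how the detailer leaves the rest of the clip alone.
    """
    t, l, b, r = bbox(mask)
    crop = [row[l:r] for row in frame[t:b]]
    refined = enhance(crop)
    out = [row[:] for row in frame]  # copy so the input frame is unchanged
    for i, row in enumerate(refined):
        out[t + i][l:r] = row
    return out

# Toy example: the "detail pass" just brightens the cropped region by 10
frame = [[0] * 4 for _ in range(4)]
mask = [[False] * 4 for _ in range(4)]
mask[1][1] = mask[1][2] = mask[2][1] = mask[2][2] = True
result = detail_pass(frame, mask, lambda c: [[p + 10 for p in r] for r in c])
```

In the real workflow the mask comes from SAM2 (optionally seeded by Florence2 detections) and the composite happens per frame across the whole clip.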
wan/2.2/face-detailer/media/preview-backwardlook.mp4 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fdc913059a210cddd43ee12ad80fa2c73aad205e30351d778ea08f8a591262c0
+ size 734426
wan/2.2/face-detailer/media/preview-backwardlook_thumb.png ADDED

Git LFS Details

  • SHA256: a8367a77683dfab4b0897f2d8f190ecc4bcb9133ebc84900fbac1be92022b3dc
  • Pointer size: 132 Bytes
  • Size of remote file: 3.98 MB
wan/2.2/face-detailer/media/preview-meeting.mp4 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0301b94b775cf3dc5c943c8f56b6b87e5a1bc2a0db9efae6db802ba96fc04b5b
+ size 1082711
wan/2.2/face-detailer/media/preview-meeting_thumb.png ADDED

Git LFS Details

  • SHA256: 1e9de6b640a4d83ae44a5862a6d03eab2414e83f8fb4c4bd2bd9312eb3180048
  • Pointer size: 132 Bytes
  • Size of remote file: 1.46 MB
wan/2.2/face-detailer/media/preview.png ADDED

Git LFS Details

  • SHA256: 0bcc8e963e371edf07792aadc15b57d31e9e113bef35bac4b6ff284bf46ecf5a
  • Pointer size: 132 Bytes
  • Size of remote file: 2.75 MB
wan/2.2/face-detailer/wan2.2-face-detailer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a51bc897a65aadac4623d21dc9e85d5c0b044781d194cda5645711714f3fb63d
+ size 262508
wan/2.2/lightning-gguf/README.md ADDED
@@ -0,0 +1,3 @@
+ ## Changelog
+
+ - `2025-08-09` — Initial upload
wan/2.2/lightning-gguf/media/preview.mp4 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1f47907d5ee9ca478ba68a5a3d25459b7bea17a138318a08cb2dd73d8b570dfe
+ size 1226780
wan/2.2/lightning-gguf/wan2.2-lightning-gguf.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9e605ebccb8f061cf20f3858826e3a57f16c8413b50762cd98fdea62768a7b7a
+ size 42711
wan/2.2/t2v-a14b/README.md ADDED
@@ -0,0 +1,3 @@
+ ## Changelog
+
+ - `2025-07-30` — Initial upload
wan/2.2/t2v-a14b/media/preview.mp4 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f3310a49cd6a46c315965f02994aa6812622b7f09d90396a5ab1bfc819d0f589
+ size 2459097
wan/2.2/t2v-a14b/wan2.2-t2v-a14b.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e39c02428dd6554640ab0dae22f25ef366a7829c4d42daf7af0b1b84ee515dc8
+ size 38151
wan/2.2/upscaling/README.md ADDED
@@ -0,0 +1,57 @@
+ # WAN 2.2 Upscaling v1
+
+ **Model:** WAN 2.2 A14B LowNoise (GGUF Q4_K_M)
+ **Type:** Upscaling (Video-to-Video)
+
+ Per-subject crop upscaling pipeline for WAN 2.2 video output.
+ Subjects and faces are detected automatically via SAM2 segmentation and Florence2 face detection,
+ then cropped, upscaled with pixel-based models, refined with WAN video-to-video, and composited back.
+ RIFE VFI is used for frame interpolation on the final output.
+
+ ---
+
+ ## Preview
+
+ ![Preview v2 (looping)](media/preview-v2-looping.png)
+
+ ---
+
+ ## Requirements
+
+ - **ComfyUI:** recent stable build
+ - **Model:** `wan2.2/Wan2.2-T2V-A14B-LowNoise-Q4_K_M.gguf` — [download](https://huggingface.co/QuantStack/Wan2.2-T2V-A14B-GGUF/tree/main) (place in `ComfyUI/models/unet/`)
+ - **VAE:** `wan/Wan2_1_VAE_bf16.safetensors` — [download](https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan2_1_VAE_bf16.safetensors) (place in `ComfyUI/models/VAE/`)
+ - **Text encoder:** `umt5-xxl-enc-bf16.safetensors` — [download](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/tree/main/split_files/text_encoders) (place in `ComfyUI/models/text_encoders/`)
+ - **LoRA (speed):** `Wan/lightx2v/lightx2v_T2V_14B_cfg_step_distill_v2_lora_rank64_bf16.safetensors` — [download](https://huggingface.co/Kijai/WanVideo_comfy/tree/main/Lightx2v)
+ - **LoRA (quality, example):** `Wan/wan2.2_fun-reward/Wan2.2-Fun-A14B-InP-low-noise-HPS2.1.safetensors` — reward LoRA for quality improvement; replace with any WAN 14B-compatible LoRA or remove
+ - **SAM2 model:** `sam2.1_hiera_base_plus.safetensors` — auto-downloaded by `DownloadAndLoadSAM2Model`
+ - **RIFE model:** `rife49.pth` — auto-downloaded by `RIFE VFI`
+ - **Upscale models** (place in `ComfyUI/models/upscale_models/`):
+   - `4x-ClearRealityV1.pth` — [download](https://openmodeldb.info/models/4x-ClearRealityV1) — fast photo-realistic upscaler
+   - `phhofm/1xDeJPG_realplksr_otf.pth` — [download](https://openmodeldb.info/models/1x-DeJPG-realplksr-otf) — JPEG artifact remover / enhancer
+   - `1xSkinContrast-HighAlternative-SuperUltraCompact.pth` — [download](https://openmodeldb.info/models/1x-SkinContrast-HighAlternative-SuperUltraCompact) — skin detail enhancer
+   - `DF2K_JPEG.pth` — [download](https://openmodeldb.info/models/4x-realsr-df2k-jpeg) — slower but higher-quality upscaler
+ - **Custom nodes:**
+   - [ComfyUI-WanVideoWrapper](https://github.com/kijai/ComfyUI-WanVideoWrapper)
+   - [ComfyUI-VideoHelperSuite](https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite)
+   - [ComfyUI-KJNodes](https://github.com/kijai/ComfyUI-KJNodes)
+   - [ComfyUI-Florence2](https://github.com/kijai/ComfyUI-Florence2)
+   - [ComfyUI-segment-anything-2](https://github.com/kijai/ComfyUI-segment-anything-2)
+   - [ComfyUI-Frame-Interpolation](https://github.com/Fannovel16/ComfyUI-Frame-Interpolation)
+   - [rgthree-comfy](https://github.com/rgthree/rgthree-comfy)
+
+ ---
+
+ ## Notes
+
+ - **Face index:** After running Florence2, each detected face gets a numbered ID visible above the bounding box in the preview. Set the `Index` field in the face crop node to the ID of the face you want to upscale.
+ - **crop_size_mult:** Controls the crop area around the face. Use a higher value for faces that are large on screen, lower for small faces. Follow the preview output to calibrate.
+ - **VRAM:** For 1080p output on under 16 GB of VRAM, reduce `context_frames` to 33 or 41 in `WanVideoContextOptions`.
+ - **WAN denoise strength:** Upscale passes use 0.3 denoise — low enough to preserve structure, high enough for texture refinement.
+ - Input resolution: 832x480, 81 frames. The pipeline outputs at a higher resolution after upscaling.
+
+ ---
+
+ ## Changelog
+
+ - `2025-10-13` — Initial upload
wan/2.2/upscaling/media/preview-v2-looping.png ADDED

Git LFS Details

  • SHA256: 7ad437d59838e52db5e89c70ce9d5c8509d31a35b94912c20ec042bc85a2fb0b
  • Pointer size: 132 Bytes
  • Size of remote file: 6.91 MB
wan/2.2/upscaling/wan2.2-upscaling-v1.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8045ccdb02922a699d81e78c54a9aeb9d52a361f3469ca1ae29b3cdae2ad54ca
+ size 265934
wan/2.2/vace-endless-extension/README.md ADDED
@@ -0,0 +1,3 @@
+ ## Changelog
+
+ - `2025-11-10` — Initial upload
wan/2.2/vace-endless-extension/wan2.2-vace-endless-extension.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0759d7a20d0b7e50eeaccaad5e1eb1df840333cc0a2192f6a2dbd0f15a11bb97
+ size 231460
wan/2.2/vace/README.md ADDED
@@ -0,0 +1,3 @@
+ ## Changelog
+
+ - `2025-07-15` — Initial upload
wan/2.2/vace/media/preview.webm ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5c5a387ccf4b4aadfb339f58dd01b1be31e327ffe66129365e6be3e84ac449c1
+ size 1881158
wan/2.2/vace/wan2.2-vace.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:806f7513c18449be7dacfa15cfc999e5f0317f12a5de171dc12dfc81e6429359
+ size 78894
z-image/two-stage/README.md ADDED
@@ -0,0 +1,41 @@
+ # Z-Image Two-Stage
+
+ **Model:** Z-Image (z_image_turbo) + MoodyMix DPO refinement
+ **Type:** Text-to-Image
+
+ Two-pass text-to-image workflow using Comfy-Org's Z-Image model. The first pass generates a full-resolution base image with the BF16 base model; the second pass applies a DPO-tuned refinement model at low denoise strength to improve detail and style consistency. A GLSL-based post-processing subgraph handles HSL color grading.
+
+ ---
+
+ ## Preview
+
+ ![Preview](media/preview.png)
+
+ ---
+
+ ## Requirements
+
+ - **ComfyUI:** recent stable build (subgraph support required, frontend >= 1.43)
+ - **Model (Stage 1):** `z-image/z_image_bf16.safetensors` — [download](https://huggingface.co/Comfy-Org/z_image_turbo/resolve/main/split_files/diffusion_models/z_image_turbo_bf16.safetensors)
+ - **Model (Stage 2):** `z-image/moodyMix_zitV10DPO.safetensors` — [CivitAI](https://civitai.com/models/620406) (verify V10 DPO version)
+ - **VAE:** `Flux/ultraflux.safetensors` — [CivitAI](https://civitai.com/models/2231253) — alternatively `ae.safetensors` from the official split ([download](https://huggingface.co/Comfy-Org/z_image_turbo/resolve/main/split_files/vae/ae.safetensors))
+ - **Text encoder:** `qwen_3_4b.safetensors` (lumina2) — [download](https://huggingface.co/Comfy-Org/z_image_turbo/resolve/main/split_files/text_encoders/qwen_3_4b.safetensors)
+ - **Custom nodes:**
+   - [ComfyUI-KJNodes](https://github.com/kijai/ComfyUI-KJNodes) — `DiffusionModelLoaderKJ`, `ImageResizeKJv2`, `Film Grain`, and others
+   - [rgthree-comfy](https://github.com/rgthree/rgthree-comfy) — `Seed (rgthree)`, `Image Comparer (rgthree)`
+
+ ---
+
+ ## Notes
+
+ - **Stage 1:** 20 steps, CFG 4, sampler `res_multistep`, scheduler `simple`, full denoise
+ - **Stage 2:** 2 steps, CFG 1, sampler `res_multistep`, denoise 0.36 — intended as a light refinement pass
+ - Base latent resolution: 1920×1088; the second pass includes a 2× upscale step
+ - The color grading subgraph applies global HSL adjustments (default: saturation -10, all else neutral); it can be bypassed or tuned per shot
+ - The workflow uses ComfyUI subgraphs — load it in a recent frontend build that supports the subgraph renderer (`LG`)
+
+ ---
+
+ ## Changelog
+
+ - `2026-03-29` — Initial upload
z-image/two-stage/media/preview.png ADDED

Git LFS Details

  • SHA256: 2b5e5f185ac2d7128d6a9c72b68e787c48dfd5cbed29d8eedaee53fc673eaf70
  • Pointer size: 132 Bytes
  • Size of remote file: 8.95 MB
z-image/two-stage/z-image-two-stage.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fe19f97eca330bc9c4f22a1b4a924ebe34c378d4aae14e39ae70885f8d334edd
+ size 103598