## Dataset Configuration
Please create a TOML file for dataset configuration.
Image and video datasets are supported. The configuration file can include multiple datasets, either image or video datasets, with caption text files or metadata JSONL files.
The cache directory must be different for each dataset.
Each video is extracted frame by frame without additional processing and used for training. It is recommended to use videos with a frame rate of 24 fps for HunyuanVideo, 16 fps for Wan2.1, and 30 fps for FramePack. You can check the videos that will be used for training with `--debug_mode video` when caching latents (see [here](/README.md#latent-caching)).
æ¥æ¬èª
ããŒã¿ã»ããã®èšå®ãè¡ãããã®TOMLãã¡ã€ã«ãäœæããŠãã ããã
ç»åããŒã¿ã»ãããšåç»ããŒã¿ã»ããããµããŒããããŠããŸããèšå®ãã¡ã€ã«ã«ã¯ãç»åãŸãã¯åç»ããŒã¿ã»ãããè€æ°å«ããããšãã§ããŸãããã£ãã·ã§ã³ããã¹ããã¡ã€ã«ãŸãã¯ã¡ã¿ããŒã¿JSONLãã¡ã€ã«ã䜿çšã§ããŸãã
ãã£ãã·ã¥ãã£ã¬ã¯ããªã¯ãåããŒã¿ã»ããããšã«ç°ãªããã£ã¬ã¯ããªã§ããå¿
èŠããããŸãã
åç»ã¯è¿œå ã®ããã»ã¹ãªãã§ãã¬ãŒã ããšã«æœåºãããåŠç¿ã«çšããããŸãããã®ãããHunyuanVideoã¯24fpsãWan2.1ã¯16fpsãFramePackã¯30fpsã®ãã¬ãŒã ã¬ãŒãã®åç»ã䜿çšããããšããå§ãããŸããlatentãã£ãã·ã¥æã®`--debug_mode video`ã䜿çšãããšãåŠç¿ãããåç»ã確èªã§ããŸãïŒ[ãã¡ã](/README.ja.md#latentã®äºåãã£ãã·ã¥)ãåç
§ïŒã
### Sample for Image Dataset with Caption Text Files
```toml
# resolution, caption_extension, batch_size, num_repeats, enable_bucket, bucket_no_upscale should be set in either general or datasets
# otherwise, the default values will be used for each item
# general configurations
[general]
resolution = [960, 544]
caption_extension = ".txt"
batch_size = 1
enable_bucket = true
bucket_no_upscale = false
[[datasets]]
image_directory = "/path/to/image_dir"
cache_directory = "/path/to/cache_directory"
num_repeats = 1 # optional, default is 1. Number of times to repeat the dataset. Useful to balance the multiple datasets with different sizes.
# multiple_target = true # optional, default is false. Set to true for Qwen-Image-Layered training.
# other datasets can be added here. each dataset can have different configurations
```
`image_directory` is the directory containing images. The captions are stored in text files with the same filename as the image, but with the extension specified by `caption_extension` (for example, `image1.jpg` and `image1.txt`).
`cache_directory` is optional; the default is None, which uses the image directory itself. However, we recommend setting a cache directory to avoid accidentally sharing cache files between different datasets.
`num_repeats` is also optional, with a default of 1 (no repeat). It repeats the images (or videos) that many times to expand the dataset. For example, with `num_repeats = 2` and 20 images in the dataset, each image is used twice (with the same caption), for a total of 40 images. This is useful for balancing multiple datasets of different sizes.
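For example, a smaller dataset can be repeated to roughly match a larger one. A minimal sketch (the paths and image counts are illustrative):

```toml
[[datasets]]
image_directory = "/path/to/large_dataset"   # e.g. 100 images
cache_directory = "/path/to/cache_large"
num_repeats = 1

[[datasets]]
image_directory = "/path/to/small_dataset"   # e.g. 20 images
cache_directory = "/path/to/cache_small"
num_repeats = 5                              # 20 * 5 = 100 effective images
```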
For Qwen-Image-Layered training, set `multiple_target = true`. Also, in the `image_directory`, for each "image to be trained + segmentation (layer) results" combination, store the following (if `caption_extension` is `.txt`):
|Item|Example|Note|
|---|---|---|
|Caption file|`image1.txt`| |
|Image to be trained (image to be layered)|`image1.png`| |
|Segmentation (layer) result images|`image1_1.png`, `image1_2.png`, ...|Alpha channel required|
The next combination would be stored as `/path/to/layer_images/image2.txt` for caption, and `/path/to/layer_images/image2.png`, `/path/to/layer_images/image2_0.png`, `/path/to/layer_images/image2_1.png` for images.
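As a sketch, the directory layout for the first combination above might look like this (filenames follow the table; the directory path is illustrative):

```
/path/to/layer_images/
  image1.txt       # caption
  image1.png       # image to be trained
  image1_1.png     # layer 1 (alpha channel required)
  image1_2.png     # layer 2 (alpha channel required)
  image2.txt       # next combination
  image2.png
  ...
```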
æ¥æ¬èª
`image_directory`ã¯ç»åãå«ããã£ã¬ã¯ããªã®ãã¹ã§ãããã£ãã·ã§ã³ã¯ãç»åãšåããã¡ã€ã«åã§ã`caption_extension`ã§æå®ããæ¡åŒµåã®ããã¹ããã¡ã€ã«ã«æ ŒçŽããŠãã ããïŒäŸïŒ`image1.jpg`ãš`image1.txt`ïŒã
`cache_directory` ã¯ãªãã·ã§ã³ã§ããããã©ã«ãã¯ç»åãã£ã¬ã¯ããªãšåããã£ã¬ã¯ããªã«èšå®ãããŸãããã ããç°ãªãããŒã¿ã»ããéã§ãã£ãã·ã¥ãã¡ã€ã«ãå
±æãããã®ãé²ãããã«ãæç€ºçã«å¥ã®ãã£ãã·ã¥ãã£ã¬ã¯ããªãèšå®ããããšããå§ãããŸãã
`num_repeats` ã¯ãªãã·ã§ã³ã§ãããã©ã«ã㯠1 ã§ãïŒç¹°ãè¿ããªãïŒãç»åïŒãåç»ïŒãããã®åæ°ã ãåçŽã«ç¹°ãè¿ããŠããŒã¿ã»ãããæ¡åŒµããŸããããšãã°`num_repeats = 2`ãšãããšããç»å20æã®ããŒã¿ã»ãããªããåç»åã2æãã€ïŒåäžã®ãã£ãã·ã§ã³ã§ïŒèš40æååšããå Žåãšåãã«ãªããŸããç°ãªãããŒã¿æ°ã®ããŒã¿ã»ããéã§ãã©ã³ã¹ãåãããã«äœ¿çšå¯èœã§ãã
resolution, caption_extension, batch_size, num_repeats, enable_bucket, bucket_no_upscale 㯠general ãŸã㯠datasets ã®ã©ã¡ããã«èšå®ããŠãã ãããçç¥æã¯åé
ç®ã®ããã©ã«ãå€ã䜿çšãããŸãã
`[[datasets]]`以äžã远å ããããšã§ãä»ã®ããŒã¿ã»ããã远å ã§ããŸããåããŒã¿ã»ããã«ã¯ç°ãªãèšå®ãæãŠãŸãã
Qwen-Image-Layeredã®åŠç¿ã®å Žåã`multiple_target = true`ãèšå®ããŠãã ããããŸãã`image_directory`å
ã«ãããããã®ãåŠç¿ããç»åïŒåå²çµæãçµã¿åããããšã«ã以äžãæ ŒçŽããŠãã ããïŒ`caption_extension`ã`.txt`ã®å ŽåïŒã
|é
ç®|äŸ|åè|
|---|---|---|
|ãã£ãã·ã§ã³ãã¡ã€ã«|`image1.txt`| |
|åŠç¿ããç»åïŒåå²å¯Ÿè±¡ã®ç»åïŒ|`image1.png`| |
|åå²çµæã®ã¬ã€ã€ãŒç»å矀|`image1_1.png`, `image1_2.png`, ...|ã¢ã«ãã¡ãã£ã³ãã«å¿
é |
次ã®çµã¿åããã¯ã`/path/to/layer_images/image2.txt`ã«å¯ŸããŠã`/path/to/layer_images/image2.png`, `/path/to/layer_images/image2_0.png`, `/path/to/layer_images/image2_1.png`ã®ããã«æ ŒçŽããŸãã
### Sample for Image Dataset with Metadata JSONL File
```toml
# resolution, batch_size, num_repeats, enable_bucket, bucket_no_upscale should be set in either general or datasets
# caption_extension is not required for metadata jsonl file
# cache_directory is required for each dataset with metadata jsonl file
# general configurations
[general]
resolution = [960, 544]
batch_size = 1
enable_bucket = true
bucket_no_upscale = false
[[datasets]]
image_jsonl_file = "/path/to/metadata.jsonl"
cache_directory = "/path/to/cache_directory" # required for metadata jsonl file
num_repeats = 1 # optional, default is 1. Same as above.
# multiple_target = true # optional, default is false. Set to true for Qwen-Image-Layered training.
# other datasets can be added here. each dataset can have different configurations
```
JSONL file format for metadata:
```json
{"image_path": "/path/to/image1.jpg", "caption": "A caption for image1"}
{"image_path": "/path/to/image2.jpg", "caption": "A caption for image2"}
```
For Qwen-Image-Layered training, set `multiple_target = true`. Also, in the metadata JSONL file, for each "image to be trained + segmentation (layer) results" combination, specify the image paths with numbered attributes like `image_path_0`, `image_path_1`, etc.
```json
{"image_path_0": "/path/to/image1_base.png", "image_path_1": "/path/to/image1_layer1.png", "image_path_2": "/path/to/image1_layer2.png", "caption": "A caption for image1"}
{"image_path_0": "/path/to/image2_base.png", "image_path_1": "/path/to/image2_layer1.png", "image_path_2": "/path/to/image2_layer2.png", "caption": "A caption for image2"}
```
æ¥æ¬èª
resolution, batch_size, num_repeats, enable_bucket, bucket_no_upscale 㯠general ãŸã㯠datasets ã®ã©ã¡ããã«èšå®ããŠãã ãããçç¥æã¯åé
ç®ã®ããã©ã«ãå€ã䜿çšãããŸãã
metadata jsonl ãã¡ã€ã«ã䜿çšããå Žåãcaption_extension ã¯å¿
èŠãããŸããããŸããcache_directory ã¯å¿
é ã§ãã
ãã£ãã·ã§ã³ã«ããããŒã¿ã»ãããšåæ§ã«ãè€æ°ã®ããŒã¿ã»ããã远å ã§ããŸããåããŒã¿ã»ããã«ã¯ç°ãªãèšå®ãæãŠãŸãã
Qwen-Image-Layeredã®åŠç¿ã®å Žåã`multiple_target = true`ãèšå®ããŠãã ããããŸããmetadata jsonl ãã¡ã€ã«å
ã§ãåç»åã«å¯ŸããŠè€æ°ã®ã¿ãŒã²ããç»åãæå®ããå Žåã¯ã`image_path_0`, `image_path_1`ã®ããã«æ°åãä»äžããŠãã ããã
### Sample for Video Dataset with Caption Text Files
```toml
# Common parameters (resolution, caption_extension, batch_size, num_repeats, enable_bucket, bucket_no_upscale)
# can be set in either general or datasets sections
# Video-specific parameters (target_frames, frame_extraction, frame_stride, frame_sample, max_frames, source_fps)
# must be set in each datasets section
# general configurations
[general]
resolution = [960, 544]
caption_extension = ".txt"
batch_size = 1
enable_bucket = true
bucket_no_upscale = false
[[datasets]]
video_directory = "/path/to/video_dir"
cache_directory = "/path/to/cache_directory" # recommended to set cache directory
target_frames = [1, 25, 45]
frame_extraction = "head"
source_fps = 30.0 # optional, source fps for videos in the directory, decimal number
[[datasets]]
video_directory = "/path/to/video_dir2"
cache_directory = "/path/to/cache_directory2" # recommended to set cache directory
frame_extraction = "full"
max_frames = 45
# other datasets can be added here. each dataset can have different configurations
```
`video_directory` is the directory containing videos. The captions are stored in text files with the same filename as the video, but with the extension specified by `caption_extension` (for example, `video1.mp4` and `video1.txt`).
__In HunyuanVideo and Wan2.1, each value in `target_frames` must be "N\*4+1" (N=0,1,2,...).__ Otherwise, the value will be truncated to the nearest "N\*4+1".
In FramePack, it is recommended to set `frame_extraction` to `full` and `max_frames` to a sufficiently large value, as it can handle longer videos. However, if the video is too long, an Out of Memory error may occur during VAE encoding. The videos in FramePack are trimmed to "N * latent_window_size * 4 + 1" frames (for example, 37, 73, 109... if `latent_window_size` is 9).
If the `source_fps` is specified, the videos in the directory are considered to be at this frame rate, and some frames will be skipped to match the model's frame rate (24 for HunyuanVideo and 16 for Wan2.1). __The value must be a decimal number, for example, `30.0` instead of `30`.__ The skipping is done automatically and does not consider the content of the images. Please check if the converted data is correct using `--debug_mode video`.
If `source_fps` is not specified (default), all frames of the video will be used regardless of the video's frame rate.
æ¥æ¬èª
å
±éãã©ã¡ãŒã¿ïŒresolution, caption_extension, batch_size, num_repeats, enable_bucket, bucket_no_upscaleïŒã¯ãgeneralãŸãã¯datasetsã®ããããã«èšå®ã§ããŸãã
åç»åºæã®ãã©ã¡ãŒã¿ïŒtarget_frames, frame_extraction, frame_stride, frame_sample, max_frames, source_fpsïŒã¯ãådatasetsã»ã¯ã·ã§ã³ã«èšå®ããå¿
èŠããããŸãã
`video_directory`ã¯åç»ãå«ããã£ã¬ã¯ããªã®ãã¹ã§ãããã£ãã·ã§ã³ã¯ãåç»ãšåããã¡ã€ã«åã§ã`caption_extension`ã§æå®ããæ¡åŒµåã®ããã¹ããã¡ã€ã«ã«æ ŒçŽããŠãã ããïŒäŸïŒ`video1.mp4`ãš`video1.txt`ïŒã
__HunyuanVideoããã³Wan2.1ã§ã¯ãtarget_framesã®æ°å€ã¯ãN\*4+1ãã§ããå¿
èŠããããŸãã__ ãã以å€ã®å€ã®å Žåã¯ãæãè¿ãN\*4+1ã®å€ã«åãæšãŠãããŸãã
FramePackã§ãåæ§ã§ãããFramePackã§ã¯åç»ãé·ããŠãåŠç¿å¯èœãªããã `frame_extraction`ã«`full` ãæå®ãã`max_frames`ãååã«å€§ããªå€ã«èšå®ããããšããå§ãããŸãããã ããããŸãã«ãé·ããããšVAEã®encodeã§Out of Memoryãšã©ãŒãçºçããå¯èœæ§ããããŸããFramePackã®åç»ã¯ããN * latent_window_size * 4 + 1ããã¬ãŒã ã«ããªãã³ã°ãããŸãïŒlatent_window_sizeã9ã®å Žåã37ã73ã109âŠâŠïŒã
`source_fps`ãæå®ããå Žåããã£ã¬ã¯ããªå
ã®åç»ããã®ãã¬ãŒã ã¬ãŒããšã¿ãªããŠãã¢ãã«ã®ãã¬ãŒã ã¬ãŒãã«ããããã«ããã€ãã®ãã¬ãŒã ãã¹ãããããŸãïŒHunyuanVideoã¯24ãWan2.1ã¯16ïŒã__å°æ°ç¹ãå«ãæ°å€ã§æå®ããŠãã ããã__ äŸïŒ`30`ã§ã¯ãªã`30.0`ãã¹ãããã¯æ©æ¢°çã«è¡ãããç»åã®å
容ã¯èæ
®ããŸããã倿åŸã®ããŒã¿ãæ£ãããã`--debug_mode video`ã§ç¢ºèªããŠãã ããã
`source_fps`ãæå®ããªãå Žåãåç»ã®ãã¬ãŒã ã¯ïŒåç»èªäœã®ãã¬ãŒã ã¬ãŒãã«é¢ä¿ãªãïŒãã¹ãŠäœ¿çšãããŸãã
ä»ã®æ³šæäºé
ã¯ç»åããŒã¿ã»ãããšåæ§ã§ãã
### Sample for Video Dataset with Metadata JSONL File
```toml
# Common parameters (resolution, caption_extension, batch_size, num_repeats, enable_bucket, bucket_no_upscale)
# can be set in either general or datasets sections
# Video-specific parameters (target_frames, frame_extraction, frame_stride, frame_sample, max_frames, source_fps)
# must be set in each datasets section
# caption_extension is not required for metadata jsonl file
# cache_directory is required for each dataset with metadata jsonl file
# general configurations
[general]
resolution = [960, 544]
batch_size = 1
enable_bucket = true
bucket_no_upscale = false
[[datasets]]
video_jsonl_file = "/path/to/metadata.jsonl"
target_frames = [1, 25, 45]
frame_extraction = "head"
cache_directory = "/path/to/cache_directory_head"
source_fps = 30.0 # optional, source fps for videos in the jsonl file
# same metadata jsonl file can be used for multiple datasets
[[datasets]]
video_jsonl_file = "/path/to/metadata.jsonl"
target_frames = [1]
frame_stride = 10
cache_directory = "/path/to/cache_directory_stride"
# other datasets can be added here. each dataset can have different configurations
```
JSONL file format for metadata:
```json
{"video_path": "/path/to/video1.mp4", "caption": "A caption for video1"}
{"video_path": "/path/to/video2.mp4", "caption": "A caption for video2"}
```
`video_path` can be a directory containing multiple images.
æ¥æ¬èª
metadata jsonl ãã¡ã€ã«ã䜿çšããå Žåãcaption_extension ã¯å¿
èŠãããŸããããŸããcache_directory ã¯å¿
é ã§ãã
`video_path`ã¯ãè€æ°ã®ç»åãå«ããã£ã¬ã¯ããªã®ãã¹ã§ãæ§ããŸããã
ä»ã®æ³šæäºé
ã¯ä»ãŸã§ã®ããŒã¿ã»ãããšåæ§ã§ãã
### frame_extraction Options
- `head`: Extract the first N frames from the video.
- `chunk`: Extract frames by splitting the video into chunks of N frames.
- `slide`: Extract frames from the video with a stride of `frame_stride`.
- `uniform`: Extract `frame_sample` samples uniformly from the video.
- `full`: Extract all frames from the video.
In the case of `full`, the entire video is used, but it is trimmed to "N*4+1" frames. It is also trimmed to the `max_frames` if it exceeds that value. To avoid Out of Memory errors, please set `max_frames`.
The frame extraction methods other than `full` are recommended when the video contains repeated actions. `full` is recommended when each video represents a single complete motion.
For example, consider a video with 40 frames. The following diagrams illustrate each extraction:
æ¥æ¬èª
- `head`: åç»ããæåã®Nãã¬ãŒã ãæœåºããŸãã
- `chunk`: åç»ãNãã¬ãŒã ãã€ã«åå²ããŠãã¬ãŒã ãæœåºããŸãã
- `slide`: `frame_stride`ã«æå®ãããã¬ãŒã ããšã«åç»ããNãã¬ãŒã ãæœåºããŸãã
- `uniform`: åç»ããäžå®ééã§ã`frame_sample`åã®Nãã¬ãŒã ãæœåºããŸãã
- `full`: åç»ããå
šãŠã®ãã¬ãŒã ãæœåºããŸãã
`full`ã®å Žåãååç»ã®å
šäœãçšããŸããããN*4+1ãã®ãã¬ãŒã æ°ã«ããªãã³ã°ãããŸãããŸã`max_frames`ãè¶
ããå Žåããã®å€ã«ããªãã³ã°ãããŸããOut of Memoryãšã©ãŒãé¿ããããã«ã`max_frames`ãèšå®ããŠãã ããã
`full`以å€ã®æœåºæ¹æ³ã¯ãåç»ãç¹å®ã®åäœãç¹°ãè¿ããŠããå Žåã«ãå§ãããŸãã`full`ã¯ããããã®åç»ãã²ãšã€ã®å®çµããã¢ãŒã·ã§ã³ã®å Žåã«ãå§ãããŸãã
äŸãã°ã40ãã¬ãŒã ã®åç»ãäŸãšããæœåºã«ã€ããŠã以äžã®å³ã§èª¬æããŸãã
```
Original Video, 40 frames: x = frame, o = no frame
oooooooooooooooooooooooooooooooooooooooo
head, target_frames = [1, 13, 25] -> extract head frames:
xooooooooooooooooooooooooooooooooooooooo
xxxxxxxxxxxxxooooooooooooooooooooooooooo
xxxxxxxxxxxxxxxxxxxxxxxxxooooooooooooooo
chunk, target_frames = [13, 25] -> extract frames by splitting into chunks, into 13 and 25 frames:
xxxxxxxxxxxxxooooooooooooooooooooooooooo
oooooooooooooxxxxxxxxxxxxxoooooooooooooo
ooooooooooooooooooooooooooxxxxxxxxxxxxxo
xxxxxxxxxxxxxxxxxxxxxxxxxooooooooooooooo
NOTE: Do not include 1 in target_frames when using frame_extraction "chunk", as it will cause all frames to be extracted.
slide, target_frames = [1, 13, 25], frame_stride = 10 -> extract N frames with a stride of 10:
xooooooooooooooooooooooooooooooooooooooo
ooooooooooxooooooooooooooooooooooooooooo
ooooooooooooooooooooxooooooooooooooooooo
ooooooooooooooooooooooooooooooxooooooooo
xxxxxxxxxxxxxooooooooooooooooooooooooooo
ooooooooooxxxxxxxxxxxxxooooooooooooooooo
ooooooooooooooooooooxxxxxxxxxxxxxooooooo
xxxxxxxxxxxxxxxxxxxxxxxxxooooooooooooooo
ooooooooooxxxxxxxxxxxxxxxxxxxxxxxxxooooo
uniform, target_frames =[1, 13, 25], frame_sample = 4 -> extract `frame_sample` samples uniformly, N frames each:
xooooooooooooooooooooooooooooooooooooooo
oooooooooooooxoooooooooooooooooooooooooo
oooooooooooooooooooooooooxoooooooooooooo
ooooooooooooooooooooooooooooooooooooooox
xxxxxxxxxxxxxooooooooooooooooooooooooooo
oooooooooxxxxxxxxxxxxxoooooooooooooooooo
ooooooooooooooooooxxxxxxxxxxxxxooooooooo
oooooooooooooooooooooooooooxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxooooooooooooooo
oooooxxxxxxxxxxxxxxxxxxxxxxxxxoooooooooo
ooooooooooxxxxxxxxxxxxxxxxxxxxxxxxxooooo
oooooooooooooooxxxxxxxxxxxxxxxxxxxxxxxxx
Three Original Videos, 20, 25, 35 frames: x = frame, o = no frame
full, max_frames = 31 -> extract all frames (trimmed to the maximum length):
video1: xxxxxxxxxxxxxxxxx (trimmed to 17 frames)
video2: xxxxxxxxxxxxxxxxxxxxxxxxx (25 frames)
video3: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx (trimmed to 31 frames)
```
### Sample for Image Dataset with Control Images
This dataset type includes control images. It is used for the one-frame training of FramePack, and for FLUX.1 Kontext, FLUX.2, and Qwen-Image-Edit training.
The dataset configuration with caption text files is similar to the image dataset, but with an additional `control_directory` parameter.
The control images are used from the `control_directory` with the same filename (or different extension) as the image, for example, `image_dir/image1.jpg` and `control_dir/image1.png`. The images in `image_directory` should be the target images (the images to be generated during inference, the changed images). The `control_directory` should contain the starting images for inference. The captions should be stored in `image_directory`.
If multiple control images are specified, the filenames of the control images should be numbered (excluding the extension). For example, specify `image_dir/image1.jpg` and `control_dir/image1_0.png`, `control_dir/image1_1.png`. You can also specify the numbers with four digits, such as `image1_0000.png`, `image1_0001.png`.
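A minimal caption-file configuration with control images might look like this (paths are illustrative):

```toml
[[datasets]]
image_directory = "/path/to/image_dir"       # target images and captions
control_directory = "/path/to/control_dir"   # starting (control) images
cache_directory = "/path/to/cache_directory"
```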
The metadata JSONL file format is the same as the image dataset, but with an additional `control_path` parameter.
```json
{"image_path": "/path/to/image1.jpg", "control_path": "/path/to/control1.png", "caption": "A caption for image1"}
{"image_path": "/path/to/image2.jpg", "control_path": "/path/to/control2.png", "caption": "A caption for image2"}
```
If multiple control images are specified, the attribute names should be `control_path_0`, `control_path_1`, etc.
```json
{"image_path": "/path/to/image1.jpg", "control_path_0": "/path/to/control1_0.png", "control_path_1": "/path/to/control1_1.png", "caption": "A caption for image1"}
{"image_path": "/path/to/image2.jpg", "control_path_0": "/path/to/control2_0.png", "control_path_1": "/path/to/control2_1.png", "caption": "A caption for image2"}
```
The control images can also have an alpha channel. In this case, the alpha channel of the image is used as a mask for the latent. This is only for the one frame training of FramePack.
æ¥æ¬èª
å¶åŸ¡ç»åãæã€ããŒã¿ã»ããã§ããçŸæç¹ã§ã¯FramePackã®åäžãã¬ãŒã åŠç¿ãFLUX.1 KontextãFLUX.2ãQwen-Image-Editã®åŠç¿ã«äœ¿çšããŸãã
ãã£ãã·ã§ã³ãã¡ã€ã«ãçšããå Žåã¯`control_directory`ã远å ã§æå®ããŠãã ãããå¶åŸ¡ç»åã¯ãç»åãšåããã¡ã€ã«åïŒãŸãã¯æ¡åŒµåã®ã¿ãç°ãªããã¡ã€ã«åïŒã®ã`control_directory`ã«ããç»åã䜿çšãããŸãïŒäŸïŒ`image_dir/image1.jpg`ãš`control_dir/image1.png`ïŒã`image_directory`ã®ç»åã¯åŠç¿å¯Ÿè±¡ã®ç»åïŒæšè«æã«çæããç»åãå€ååŸã®ç»åïŒãšããŠãã ããã`control_directory`ã«ã¯æšè«æã®éå§ç»åãæ ŒçŽããŠãã ããããã£ãã·ã§ã³ã¯`image_directory`ãžæ ŒçŽããŠãã ããã
è€æ°æã®å¶åŸ¡ç»åãæå®å¯èœã§ãããã®å Žåãå¶åŸ¡ç»åã®ãã¡ã€ã«åïŒæ¡åŒµåãé€ãïŒãžæ°åãä»äžããŠãã ãããäŸãã°ã`image_dir/image1.jpg`ãš`control_dir/image1_0.png`, `control_dir/image1_1.png`ã®ããã«æå®ããŸãã`image1_0000.png`, `image1_0001.png`ã®ããã«æ°åã4æ¡ã§æå®ããããšãã§ããŸãã
ã¡ã¿ããŒã¿JSONLãã¡ã€ã«ã䜿çšããå Žåã¯ã`control_path`ã远å ããŠãã ãããè€æ°æã®å¶åŸ¡ç»åãæå®ããå Žåã¯ã`control_path_0`, `control_path_1`ã®ããã«æ°åãä»äžããŠãã ããã
FramePackã®åäžãã¬ãŒã åŠç¿ã§ã¯ãå¶åŸ¡ç»åã¯ã¢ã«ãã¡ãã£ã³ãã«ãæã€ããšãã§ããŸãããã®å Žåãç»åã®ã¢ã«ãã¡ãã£ã³ãã«ã¯latentãžã®ãã¹ã¯ãšããŠäœ¿çšãããŸãã
### Resizing Control Images for Image Dataset
By default, the control images are resized to the same size as the target images. You can change the resizing method with the following options:
- `no_resize_control`: Do not resize the control images. They will be cropped to match the rounding unit of each architecture (for example, 16 pixels).
- `control_resolution`: Resize the control images to the specified resolution. For example, specify `control_resolution = [1024, 1024]`. Aspect Ratio Bucketing will be applied.
```toml
[[datasets]]
# Image directory or metadata jsonl file as above
image_directory = "/path/to/image_dir"
control_directory = "/path/to/control_dir"
control_resolution = [1024, 1024]
no_resize_control = false
```
If both are specified, `control_resolution` is treated as the maximum resolution. That is, if the total number of pixels of the control image exceeds that of `control_resolution`, it will be resized to `control_resolution`.
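As a rough illustration of the maximum-resolution behavior with `control_resolution = [1024, 1024]` (about 1,048,576 pixels):

```
control image  960x544  ->   522,240 px, under the limit: kept as-is
control image 2048x1536 -> 3,145,728 px, over the limit: resized to the [1024, 1024] area
```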
The recommended resizing method for control images may vary depending on the architecture. Please refer to the section for each architecture.
The previous options `flux_kontext_no_resize_control` and `qwen_image_edit_no_resize_control` are still available, but it is recommended to use `no_resize_control`.
The `qwen_image_edit_control_resolution` is also available, but it is recommended to use `control_resolution`.
**The technical details of `no_resize_control`:**
When this option is specified, the control image is trimmed to a multiple of 16 pixels (depending on the architecture) and converted to latent and passed to the model.
Each element in the batch must have the same resolution, which is adjusted by advanced Aspect Ratio Bucketing (buckets are divided by the resolution of the target image and also the resolution of the control image).
æ¥æ¬èª
ããã©ã«ãã§ã¯ãå¶åŸ¡ç»åã¯ã¿ãŒã²ããç»åãšåããµã€ãºã«ãªãµã€ãºãããŸãã以äžã®ãªãã·ã§ã³ã§ããªãµã€ãºæ¹åŒã倿Žã§ããŸãã
- `no_resize_control`: å¶åŸ¡ç»åããªãµã€ãºããŸãããã¢ãŒããã¯ãã£ããšã®äžžãåäœïŒ16ãã¯ã»ã«ãªã©ïŒã«åãããŠããªãã³ã°ãããŸãã
- `control_resolution`: å¶åŸ¡ç»åãæå®ããè§£å床ã«ãªãµã€ãºããŸããäŸãã°ã`control_resolution = [1024, 1024]`ãšæå®ããŸããAspect Ratio Bucketingãé©çšãããŸãã
äž¡æ¹ãåæã«æå®ããããšã`control_resolution`ã¯æå€§è§£å床ãšããŠæ±ãããŸããã€ãŸããå¶åŸ¡ç»åã®ç·ãã¯ã»ã«æ°ã`control_resolution`ã®ç·ãã¯ã»ã«æ°ãè¶
ããå Žåã`control_resolution`ã«ãªãµã€ãºãããŸãã
ã¢ãŒããã¯ãã£ã«ãããæšå¥šã®å¶åŸ¡ç»åã®ãªãµã€ãºæ¹æ³ã¯ç°ãªããŸããåã¢ãŒããã¯ãã£ã®ç¯ãåç
§ããŠãã ããã
以åã®ãªãã·ã§ã³`flux_kontext_no_resize_control`ãš`qwen_image_edit_no_resize_control`ã¯äœ¿çšå¯èœã§ããã`no_resize_control`ã䜿çšããããšãæšå¥šããŸãã
`qwen_image_edit_control_resolution`ã䜿çšå¯èœã§ããã`control_resolution`ã䜿çšããããšãæšå¥šããŸãã
**`no_resize_control`ã®æè¡çãªè©³çް:**
ãã®ãªãã·ã§ã³ãæå®ãããå Žåãå¶åŸ¡ç»åã¯16ãã¯ã»ã«ã®åæ°ïŒã¢ãŒããã¯ãã£ã«äŸåïŒã«ããªãã³ã°ãããlatentã«å€æãããŠã¢ãã«ã«æž¡ãããŸãã
ãããå
ã®åèŠçŽ ã¯åãè§£å床ã§ããå¿
èŠããããŸãããã¿ãŒã²ããç»åã®è§£å床ãšå¶åŸ¡ç»åã®è§£å床ã®äž¡æ¹ã§ãã±ãããåå²ãããé«åºŠãªAspect Ratio Bucketingã«ãã£ãŠèª¿æŽãããŸãã
### Sample for Video Dataset with Control Images
The dataset with control videos is used for training ControlNet models.
The dataset configuration with caption text files is similar to the video dataset, but with an additional `control_directory` parameter.
The control video for a video is used from the `control_directory` with the same filename (or different extension) as the video, for example, `video_dir/video1.mp4` and `control_dir/video1.mp4` or `control_dir/video1.mov`. The control video can also be a directory without an extension, for example, `video_dir/video1.mp4` and `control_dir/video1`.
```toml
[[datasets]]
video_directory = "/path/to/video_dir"
control_directory = "/path/to/control_dir" # required for dataset with control videos
cache_directory = "/path/to/cache_directory" # recommended to set cache directory
target_frames = [1, 25, 45]
frame_extraction = "head"
```
The dataset configuration with a metadata JSONL file is the same as for the video dataset, but the metadata JSONL file must include the control video paths. The control video path can be a directory containing multiple images.
```json
{"video_path": "/path/to/video1.mp4", "control_path": "/path/to/control1.mp4", "caption": "A caption for video1"}
{"video_path": "/path/to/video2.mp4", "control_path": "/path/to/control2.mp4", "caption": "A caption for video2"}
```
æ¥æ¬èª
å¶åŸ¡åç»ãæã€ããŒã¿ã»ããã§ããControlNetã¢ãã«ã®åŠç¿ã«äœ¿çšããŸãã
ãã£ãã·ã§ã³ãçšããå Žåã®ããŒã¿ã»ããèšå®ã¯åç»ããŒã¿ã»ãããšäŒŒãŠããŸããã`control_directory`ãã©ã¡ãŒã¿ã远å ãããŠããŸããäžã«ããäŸãåç
§ããŠãã ãããããåç»ã«å¯Ÿããå¶åŸ¡çšåç»ãšããŠãåç»ãšåããã¡ã€ã«åïŒãŸãã¯æ¡åŒµåã®ã¿ãç°ãªããã¡ã€ã«åïŒã®ã`control_directory`ã«ããåç»ã䜿çšãããŸãïŒäŸïŒ`video_dir/video1.mp4`ãš`control_dir/video1.mp4`ãŸãã¯`control_dir/video1.mov`ïŒããŸããæ¡åŒµåãªãã®ãã£ã¬ã¯ããªå
ã®ãè€æ°æã®ç»åãå¶åŸ¡çšåç»ãšããŠäœ¿çšããããšãã§ããŸãïŒäŸïŒ`video_dir/video1.mp4`ãš`control_dir/video1`ïŒã
ããŒã¿ã»ããèšå®ã§ã¡ã¿ããŒã¿JSONLãã¡ã€ã«ã䜿çšããå Žåã¯ãåç»ãšå¶åŸ¡çšåç»ã®ãã¹ãå«ããå¿
èŠããããŸããå¶åŸ¡çšåç»ã®ãã¹ã¯ãè€æ°æã®ç»åãå«ããã£ã¬ã¯ããªã®ãã¹ã§ãæ§ããŸããã
## Architecture-specific Settings
The dataset configuration is shared across all architectures. However, some architectures may require additional settings or have specific requirements for the dataset.
### FramePack
For FramePack, you can set the latent window size for training. It is recommended to set it to 9 for FramePack training. The default value is 9, so you can usually omit this setting.
```toml
[[datasets]]
fp_latent_window_size = 9
```
æ¥æ¬èª
åŠç¿æã®latent window sizeãæå®ã§ããŸããFramePackã®åŠç¿ã«ãããŠã¯ã9ãæå®ããããšãæšå¥šããŸããçç¥æã¯9ã䜿çšãããŸãã®ã§ãéåžžã¯çç¥ããŠæ§ããŸããã
### FramePack One Frame Training
For the default one frame training of FramePack, you need to set the following parameters in the dataset configuration:
```toml
[[datasets]]
fp_1f_clean_indices = [0]
fp_1f_target_index = 9
fp_1f_no_post = false
```
**Advanced Settings:**
**Note that these parameters are still experimental, and the optimal values are not yet known.** The parameters may also change in the future.
`fp_1f_clean_indices` sets the `clean_indices` value passed to the FramePack model. You can specify multiple indices. `fp_1f_target_index` sets the index of the frame to be trained (generated). `fp_1f_no_post` sets whether to add a zero value as `clean_latent_post`, default is `false` (add zero value).
The number of control images should match the number of indices specified in `fp_1f_clean_indices`.
The default values mean that the first image (control image) is at index `0`, and the target image (the changed image) is at index `9` (values around 5 to 15 can also be used).
For training with 1f-mc, set `fp_1f_clean_indices` to `[0, 1]` and `fp_1f_target_index` to `9` (or another value). This allows you to use multiple control images (two in this case) to train a single generated image.
```toml
[[datasets]]
fp_1f_clean_indices = [0, 1]
fp_1f_target_index = 9
fp_1f_no_post = false
```
For training with kisekaeichi, set `fp_1f_clean_indices` to `[0, 10]` and `fp_1f_target_index` to `1` (or another value). This allows you to use the starting image (the image just before the generation section, equivalent to `clean_latent_pre`) and the image following the generation section (equivalent to `clean_latent_post`) to train the first image of the generated video. Two control images are used in this case. `fp_1f_no_post` should be set to `true`.
```toml
[[datasets]]
fp_1f_clean_indices = [0, 10]
fp_1f_target_index = 1
fp_1f_no_post = true
```
With `fp_1f_clean_indices` and `fp_1f_target_index`, you can specify any number of control images and any index of the target image for training.
If you set `fp_1f_no_post` to `false`, the `clean_latent_post_index` will be `1 + fp_latent_window_size`.
You can also set the `no_2x` and `no_4x` options for cache scripts to disable the clean latents 2x and 4x.
The 2x indices are the two indices starting at `1 + fp_latent_window_size + 1` (usually `11, 12`), and the 4x indices are the sixteen indices starting at `1 + fp_latent_window_size + 1 + 2` (usually `13, 14, ..., 28`), regardless of the `fp_1f_no_post`, `no_2x`, and `no_4x` settings.
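With the default `latent_window_size` of 9, the index layout described above works out to:

```
clean_latent_post_index = 1 + 9 = 10
2x indices: 11, 12           (2 indices,  starting at 1 + 9 + 1)
4x indices: 13, 14, ..., 28  (16 indices, starting at 1 + 9 + 1 + 2)
```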
æ¥æ¬èª
â» **以äžã®ãã©ã¡ãŒã¿ã¯ç ç©¶äžã§æé©å€ã¯ãŸã äžæã§ãã** ãŸããã©ã¡ãŒã¿èªäœã倿Žãããå¯èœæ§ããããŸãã
ããã©ã«ãã®1ãã¬ãŒã åŠç¿ãè¡ãå Žåã`fp_1f_clean_indices`ã«`[0]`ãã`fp_1f_target_index`ã«`9`ïŒãŸãã¯5ãã15çšåºŠã®å€ïŒãã`no_post`ã«`false`ãèšå®ããŠãã ãããïŒèšè¿°äŸã¯è±èªçããã¥ã¡ã³ããåç
§ã以éåããïŒ
**ããé«åºŠãªèšå®ïŒ**
`fp_1f_clean_indices`ã¯ãFramePackã¢ãã«ã«æž¡ããã `clean_indices` ã®å€ãèšå®ããŸããè€æ°æå®ãå¯èœã§ãã`fp_1f_target_index`ã¯ãåŠç¿ïŒçæïŒå¯Ÿè±¡ã®ãã¬ãŒã ã®ã€ã³ããã¯ã¹ãèšå®ããŸãã`fp_1f_no_post`ã¯ã`clean_latent_post` ããŒãå€ã§è¿œå ãããã©ãããèšå®ããŸãïŒããã©ã«ãã¯`false`ã§ããŒãå€ã§è¿œå ããŸãïŒã
å¶åŸ¡ç»åã®ææ°ã¯`fp_1f_clean_indices`ã«æå®ããã€ã³ããã¯ã¹ã®æ°ãšããããŠãã ããã
ããã©ã«ãã®1ãã¬ãŒã åŠç¿ã§ã¯ãéå§ç»åïŒå¶åŸ¡ç»åïŒ1æãã€ã³ããã¯ã¹`0`ãçæå¯Ÿè±¡ã®ç»åïŒå€ååŸã®ç»åïŒãã€ã³ããã¯ã¹`9`ã«èšå®ããŠããŸãã
1f-mcã®åŠç¿ãè¡ãå Žåã¯ã`fp_1f_clean_indices`ã« `[0, 1]`ãã`fp_1f_target_index`ã«`9`ãèšå®ããŠãã ãããããã«ããåç»ã®å
é ã®2æã®å¶åŸ¡ç»åã䜿çšããŠãåŸç¶ã®1æã®çæç»åãåŠç¿ããŸããå¶åŸ¡ç»åã¯2æã«ãªããŸãã
kisekaeichiã®åŠç¿ãè¡ãå Žåã¯ã`fp_1f_clean_indices`ã« `[0, 10]`ãã`fp_1f_target_index`ã«`1`ïŒãŸãã¯ä»ã®å€ïŒãèšå®ããŠãã ãããããã¯ãéå§ç»åïŒçæã»ã¯ã·ã§ã³ã®çŽåã®ç»åïŒïŒ`clean_latent_pre`ã«çžåœïŒãšãçæã»ã¯ã·ã§ã³ã«ç¶ã1æã®ç»åïŒ`clean_latent_post`ã«çžåœïŒã䜿çšããŠãçæåç»ã®å
é ã®ç»åïŒ`target_index=1`ïŒãåŠç¿ããŸããå¶åŸ¡ç»åã¯2æã«ãªããŸãã`f1_1f_no_post`ã¯`true`ã«èšå®ããŠãã ããã
`fp_1f_clean_indices`ãš`fp_1f_target_index`ãå¿çšããããšã§ãä»»æã®ææ°ã®å¶åŸ¡ç»åããä»»æã®ã€ã³ããã¯ã¹ãæå®ããŠåŠç¿ããããšãå¯èœã§ãã
`fp_1f_no_post`ã`false`ã«èšå®ãããšã`clean_latent_post_index`㯠`1 + fp1_latent_window_size` ã«ãªããŸãã
æšè«æã® `no_2x`ã`no_4x`ã«å¯Ÿå¿ããèšå®ã¯ããã£ãã·ã¥ã¹ã¯ãªããã®åŒæ°ã§è¡ããŸãããªãã2xã®index㯠`1 + fp1_latent_window_size + 1` ããã®2åïŒéåžžã¯`11, 12`ïŒã4xã®index㯠`1 + fp1_latent_window_size + 1 + 2` ããã®16åã«ãªããŸãïŒéåžžã¯`13, 14, ..., 28`ïŒã§ãããããã®å€ã¯`fp_1f_no_post`ã`no_2x`, `no_4x`ã®èšå®ã«é¢ããããåžžã«åãã§ãã
### FLUX.1 Kontext [dev]
The FLUX.1 Kontext dataset configuration uses an image dataset with control images. However, only one control image can be used.
`fp_1f_*` settings are not used in FLUX.1 Kontext. Masks are also not used.
If you set `no_resize_control`, it disables resizing of the control image.
Since FLUX.1 Kontext assumes a fixed [resolution of control images](https://github.com/black-forest-labs/flux/blob/1371b2bc70ac80e1078446308dd5b9a2ebc68c87/src/flux/util.py#L584), it may be better to prepare the control images in advance to match these resolutions and use `no_resize_control`.
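If the control images are prepared in advance at one of the fixed resolutions, the dataset could be configured as follows (paths are illustrative):

```toml
[[datasets]]
image_directory = "/path/to/image_dir"
control_directory = "/path/to/control_dir" # pre-resized to a supported resolution
no_resize_control = true
```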
æ¥æ¬èª
FLUX.1 Kontextã®ããŒã¿ã»ããèšå®ã¯ãå¶åŸ¡ç»åãæã€ç»åããŒã¿ã»ããã䜿çšããŸãããã ããå¶åŸ¡ç»åã¯1æãã䜿çšã§ããŸããã
`fp_1f_*`ã®èšå®ã¯FLUX.1 Kontextã§ã¯äœ¿çšããŸããããŸããã¹ã¯ã䜿çšãããŸããã
ãŸãã`no_resize_control`ãèšå®ãããšãå¶åŸ¡ç»åã®ãªãµã€ãºãç¡å¹ã«ããŸãã
FLUX.1 Kontextã¯[å¶åŸ¡ç»åã®åºå®è§£å床](https://github.com/black-forest-labs/flux/blob/1371b2bc70ac80e1078446308dd5b9a2ebc68c87/src/flux/util.py#L584)ãæ³å®ããŠããããããããã®è§£å床ã«ããããŠå¶åŸ¡ç»åãäºåã«çšæãã`no_resize_control`ã䜿çšããæ¹ãè¯ãå ŽåããããŸãã
### Qwen-Image-Edit and Qwen-Image-Edit-2509/2511
The Qwen-Image-Edit dataset configuration uses an image dataset with control images. However, only one control image can be used for the standard model (not `2509` or `2511`).
By default, the control image is resized to the same resolution (and aspect ratio) as the image.
If you set `no_resize_control`, it disables resizing of the control image. For example, if the image is 960x544 and the control image is 512x512, the control image will remain 512x512.
Also, you can specify the resolution of the control images separately from the training image resolution by using `control_resolution`. To resize the control images in the same way as the official code, specify `[1024, 1024]`. **We strongly recommend specifying this value.**
`no_resize_control` can be specified together with `control_resolution`.
If `no_resize_control` or `control_resolution` is specified, each control image can have a different resolution. The control image is resized according to the specified settings.
```toml
[[datasets]]
no_resize_control = false # optional, default is false. Disable resizing of control image
control_resolution = [1024, 1024] # optional, default is None. Specify the resolution of the control image.
```
`fp_1f_*` settings are not used in Qwen-Image-Edit.
æ¥æ¬èª
Qwen-Image-Editã®ããŒã¿ã»ããèšå®ã¯ãå¶åŸ¡ç»åãæã€ç»åããŒã¿ã»ããã䜿çšããŸããè€æ°æã®å¶åŸ¡ç»åã䜿çšå¯èœã§ãããç¡å°ïŒ`2509`ãŸãã¯`2511`ã§ãªãïŒã¢ãã«ã§ã¯1æã®ã¿äœ¿çšå¯èœã§ãã
ããã©ã«ãã§ã¯ãå¶åŸ¡ç»åã¯ç»åãšåãè§£å床ïŒããã³ã¢ã¹ãã¯ãæ¯ïŒã«ãªãµã€ãºãããŸãã
`no_resize_control`ãèšå®ãããšãå¶åŸ¡ç»åã®ãªãµã€ãºãç¡å¹ã«ããŸããããšãã°ãç»åã960x544ã§å¶åŸ¡ç»åã512x512ã®å Žåãå¶åŸ¡ç»åã¯512x512ã®ãŸãŸã«ãªããŸãã
ãŸãã`control_resolution`ã䜿çšããããšã§ãå¶åŸ¡ç»åã®è§£å床ãåŠç¿ç»åã®è§£å床ãšç°ãªãå€ã«æå®ã§ããŸããå
¬åŒã®ã³ãŒããšåãããã«å¶åŸ¡ç»åããªãµã€ãºãããå Žåã¯ã[1024, 1024]ãæå®ããŠãã ããã**ãã®å€ã®æå®ãåŒ·ãæšå¥šããŸãã**
`no_resize_control`ãš `control_resolution`ã¯åæã«æå®ã§ããŸãã
`no_resize_control`ãŸãã¯`control_resolution`ãæå®ãããå Žåãåå¶åŸ¡ç»åã¯ç°ãªãè§£å床ãæã€ããšãã§ããŸããå¶åŸ¡ç»åã¯æå®ãããèšå®ã«åŸã£ãŠãªãµã€ãºãããŸãã
```toml
[[datasets]]
no_resize_control = false # ãªãã·ã§ã³ãããã©ã«ãã¯falseãå¶åŸ¡ç»åã®ãªãµã€ãºãç¡å¹ã«ããŸã
control_resolution = [1024, 1024] # ãªãã·ã§ã³ãããã©ã«ãã¯Noneãå¶åŸ¡ç»åã®è§£å床ãæå®ããŸã
```
`fp_1f_*`ã®èšå®ã¯Qwen-Image-Editã§ã¯äœ¿çšããŸããã
### FLUX.2
The FLUX.2 dataset configuration uses an image dataset with control images (it can also be trained without control images). Multiple control images can be used.
`fp_1f_*` settings are not used in FLUX.2.
If you set `no_resize_control`, it disables resizing of the control images. If you want to follow the official FLUX.2 inference settings, please specify this option.
You can specify the resolution of the control images separately from the training image resolution by using `control_resolution`. If you want to follow the official FLUX.2 inference settings, specify [2024, 2024] (note that it is not 2048) when there is one control image, and [1024, 1024] when there are multiple control images, together with the `no_resize_control` option.
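As a concrete illustration, a dataset entry matching the official FLUX.2 inference settings for a single control image might look like the sketch below. The paths are placeholders.

```toml
# Hypothetical paths; adjust to your environment.
[[datasets]]
image_directory = "/path/to/image_dir"
control_directory = "/path/to/control_dir"
cache_directory = "/path/to/cache_directory"
caption_extension = ".txt"
resolution = [1024, 1024]
no_resize_control = true          # follow the official FLUX.2 inference behavior
control_resolution = [2024, 2024] # one control image; use [1024, 1024] for multiple control images
```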
## Specifications
```toml
# general configurations
[general]
resolution = [960, 544] # optional, [W, H], default is [960, 544]. This is the default resolution for all datasets
caption_extension = ".txt" # optional, default is None. This is the default caption extension for all datasets
batch_size = 1 # optional, default is 1. This is the default batch size for all datasets
num_repeats = 1 # optional, default is 1. Number of times to repeat the dataset. Useful for balancing multiple datasets of different sizes.
enable_bucket = true # optional, default is false. Enable bucketing for datasets
bucket_no_upscale = false # optional, default is false. Disable upscaling for bucketing. Ignored if enable_bucket is false
### Image Dataset
# sample image dataset with caption text files
[[datasets]]
image_directory = "/path/to/image_dir"
caption_extension = ".txt" # required for caption text files, if general caption extension is not set
resolution = [960, 544] # required if general resolution is not set
batch_size = 4 # optional, overwrite the default batch size
num_repeats = 1 # optional, overwrite the default num_repeats
enable_bucket = false # optional, overwrite the default bucketing setting
bucket_no_upscale = true # optional, overwrite the default bucketing setting
cache_directory = "/path/to/cache_directory" # optional, default is None to use the same directory as the image directory. NOTE: caching is always enabled
control_directory = "/path/to/control_dir" # optional, required for dataset with control images
# sample image dataset with metadata **jsonl** file
[[datasets]]
image_jsonl_file = "/path/to/metadata.jsonl" # includes pairs of image files and captions
resolution = [960, 544] # required if general resolution is not set
cache_directory = "/path/to/cache_directory" # required for metadata jsonl file
# caption_extension is not required for metadata jsonl file
# batch_size, num_repeats, enable_bucket, bucket_no_upscale are also available for metadata jsonl file
### Video Dataset
# sample video dataset with caption text files
[[datasets]]
video_directory = "/path/to/video_dir"
caption_extension = ".txt" # required for caption text files, if general caption extension is not set
resolution = [960, 544] # required if general resolution is not set
control_directory = "/path/to/control_dir" # optional, required for dataset with control images
# following configurations must be set in each [[datasets]] section for video datasets
target_frames = [1, 25, 79] # required for video dataset. list of video lengths to extract frames. each element must be N*4+1 (N=0,1,2,...)
# NOTE: Do not include 1 in target_frames when using the "chunk" frame_extraction; otherwise all frames will be extracted.
frame_extraction = "head" # optional, one of "head", "chunk", "full", "slide", or "uniform". Default is "head"
frame_stride = 1 # optional, default is 1, available for "slide" frame extraction
frame_sample = 4 # optional, default is 1 (same as "head"), available for "uniform" frame extraction
max_frames = 129 # optional, default is 129. Maximum number of frames to extract, available for "full" frame extraction
# batch_size, num_repeats, enable_bucket, bucket_no_upscale, cache_directory are also available for video dataset
# sample video dataset with metadata jsonl file
[[datasets]]
video_jsonl_file = "/path/to/metadata.jsonl" # includes pairs of video files and captions
target_frames = [1, 79]
cache_directory = "/path/to/cache_directory" # required for metadata jsonl file
# frame_extraction, frame_stride, frame_sample, max_frames are also available for metadata jsonl file
```
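To make the frame-extraction options more concrete, the sketch below shows one plausible way `frame_extraction = "uniform"` could choose clip start frames, spacing `frame_sample` clips of `target_frames` length evenly across the video. The helper `uniform_starts` is hypothetical and may differ from the actual implementation.

```python
def uniform_starts(video_len: int, target: int, samples: int) -> list[int]:
    """Evenly spaced start frames for `samples` clips of length `target`
    from a video of `video_len` frames (hypothetical sketch)."""
    if samples <= 1:
        return [0]
    span = video_len - target  # last valid start frame
    return [round(i * span / (samples - 1)) for i in range(samples)]

# A 100-frame video, target_frames includes 25, frame_sample = 4:
print(uniform_starts(100, 25, 4))  # -> [0, 25, 50, 75]
```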
Metadata in .json format will be supported in the near future.
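For reference, each line of a metadata JSONL file is a standalone JSON object pairing a media file with its caption. The field names below (`image_path`, `video_path`, `caption`) are assumptions based on the descriptions above; check the repository documentation for the exact keys.

```json
{"image_path": "/path/to/image1.jpg", "caption": "A caption for image1"}
{"video_path": "/path/to/video1.mp4", "caption": "A caption for video1"}
```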