---
license: apache-2.0
---

# Templates - Image Editing (FLUX.2-klein-base-4B)

This model is part of the open-source Diffusion Templates series by [DiffSynth-Studio](https://github.com/modelscope/DiffSynth-Studio). It is an image editing model that takes an input image and precisely modifies specific objects, actions, or attributes within it according to natural language instructions.

* Open-source code: [DiffSynth-Studio](https://github.com/modelscope/DiffSynth-Studio)
* Technical report: [arXiv](https://arxiv.org/abs/2604.24351)
* Project page: [GitHub](https://modelscope.github.io/diffusion-templates-web/)
* Documentation: [English Version](https://diffsynth-studio-doc.readthedocs.io/en/latest/Diffusion_Templates/Introducing_Diffusion_Templates.html), [Chinese Version](https://diffsynth-studio-doc.readthedocs.io/zh-cn/latest/Diffusion_Templates/Introducing_Diffusion_Templates.html)
* Online demo: [ModelScope](https://modelscope.cn/studios/DiffSynth-Studio/Diffusion-Templates)
* Models: [ModelScope](https://modelscope.cn/collections/DiffSynth-Studio/KleinBase4B-Templates), [ModelScope International](https://modelscope.ai/collections/DiffSynth-Studio/KleinBase4B-Templates), [HuggingFace](https://huggingface.co/collections/DiffSynth-Studio/kleinbase4b-templates)
* Datasets: [ModelScope](https://modelscope.cn/collections/DiffSynth-Studio/ImagePulseV2), [ModelScope International](https://modelscope.cn/collections/DiffSynth-Studio/ImagePulseV2), [HuggingFace](https://huggingface.co/collections/DiffSynth-Studio/imagepulsev2)

## Demo Results

|Reference|Prompt: Put a hat on this cat.|Prompt: Make the cat turn its head to look to the right.|
|-|-|-|
|![](./assets/cat.jpg)|![](./assets/cat_Edit_hat.jpg)|![](./assets/cat_Edit_head.jpg)|

|Reference|Prompt: Change the color of the car to matte black.|Prompt: Make it a rainy night.|
|-|-|-|
|![](./assets/car.jpg)|![](./assets/car_Edit_color.jpg)|![](./assets/car_Edit_rain.jpg)|

|Reference|Prompt: Make her hair long and blonde.|Prompt: Change the book in her hands into coffee.|
|-|-|-|
|![](./assets/girl.jpg)|![](./assets/girl_Edit_hair.jpg)|![](./assets/girl_Edit_coffee.jpg)|

## Inference Code

* Install [DiffSynth-Studio](https://github.com/modelscope/DiffSynth-Studio)

```shell
git clone https://github.com/modelscope/DiffSynth-Studio.git
cd DiffSynth-Studio
pip install -e .
```

* Direct inference (requires 40GB of GPU memory)

```python
from diffsynth.diffusion.template import TemplatePipeline
from diffsynth.pipelines.flux2_image import Flux2ImagePipeline, ModelConfig
import torch
from modelscope import dataset_snapshot_download
from PIL import Image
```

```python
pipe = Flux2ImagePipeline.from_pretrained(
    torch_dtype=torch.bfloat16,
    device="cuda",
    model_configs=[
        ModelConfig(model_id="black-forest-labs/FLUX.2-klein-base-4B", origin_file_pattern="transformer/*.safetensors"),
        ModelConfig(model_id="black-forest-labs/FLUX.2-klein-4B", origin_file_pattern="text_encoder/*.safetensors"),
        ModelConfig(model_id="black-forest-labs/FLUX.2-klein-4B", origin_file_pattern="vae/diffusion_pytorch_model.safetensors"),
    ],
    tokenizer_config=ModelConfig(model_id="black-forest-labs/FLUX.2-klein-4B", origin_file_pattern="tokenizer/"),
)
template = TemplatePipeline.from_pretrained(
    torch_dtype=torch.bfloat16,
    device="cuda",
    model_configs=[ModelConfig(model_id="DiffSynth-Studio/Template-KleinBase4B-Edit")],
)
dataset_snapshot_download(
    "DiffSynth-Studio/examples_in_diffsynth",
    allow_file_pattern=["templates/*"],
    local_dir="data/examples",
)
image = template(
    pipe,
    prompt="Put a hat on this cat.",
    seed=0,
    cfg_scale=4,
    num_inference_steps=50,
    template_inputs=[{
        "image": Image.open("data/examples/templates/image_reference.jpg"),
        "prompt": "Put a hat on this cat.",
    }],
    negative_template_inputs=[{
        "image": Image.open("data/examples/templates/image_reference.jpg"),
        "prompt": "",
    }],
)
image.save("image_Edit_hat.jpg")
image = template(
    pipe,
    prompt="Make the cat turn its head to look to the right.",
    seed=0,
    cfg_scale=4,
    num_inference_steps=50,
    template_inputs=[{
        "image": Image.open("data/examples/templates/image_reference.jpg"),
        "prompt": "Make the cat turn its head to look to the right.",
    }],
    negative_template_inputs=[{
        "image": Image.open("data/examples/templates/image_reference.jpg"),
        "prompt": "",
    }],
)
image.save("image_Edit_head.jpg")
```

* Enable lazy loading and memory management (requires 24GB of GPU memory)

```python
from diffsynth.diffusion.template import TemplatePipeline
from diffsynth.pipelines.flux2_image import Flux2ImagePipeline, ModelConfig
import torch
from modelscope import dataset_snapshot_download
from PIL import Image
```

```python
vram_config = {
    "offload_dtype": "disk",
    "offload_device": "disk",
    "onload_dtype": torch.float8_e4m3fn,
    "onload_device": "cpu",
    "preparing_dtype": torch.float8_e4m3fn,
    "preparing_device": "cuda",
    "computation_dtype": torch.bfloat16,
    "computation_device": "cuda",
}
pipe = Flux2ImagePipeline.from_pretrained(
    torch_dtype=torch.bfloat16,
    device="cuda",
    model_configs=[
        ModelConfig(model_id="black-forest-labs/FLUX.2-klein-base-4B", origin_file_pattern="transformer/*.safetensors", **vram_config),
        ModelConfig(model_id="black-forest-labs/FLUX.2-klein-4B", origin_file_pattern="text_encoder/*.safetensors", **vram_config),
        ModelConfig(model_id="black-forest-labs/FLUX.2-klein-4B", origin_file_pattern="vae/diffusion_pytorch_model.safetensors"),
    ],
    tokenizer_config=ModelConfig(model_id="black-forest-labs/FLUX.2-klein-4B", origin_file_pattern="tokenizer/"),
    vram_limit=torch.cuda.mem_get_info("cuda")[1] / (1024 ** 3) - 0.5,
)
template = TemplatePipeline.from_pretrained(
    torch_dtype=torch.bfloat16,
    device="cuda",
    model_configs=[ModelConfig(model_id="DiffSynth-Studio/Template-KleinBase4B-Edit")],
    lazy_loading=True,
)
dataset_snapshot_download(
    "DiffSynth-Studio/examples_in_diffsynth",
    allow_file_pattern=["templates/*"],
    local_dir="data/examples",
)
image = template(
    pipe,
    prompt="Put a hat on this cat.",
    seed=0,
    cfg_scale=4,
    num_inference_steps=50,
    template_inputs=[{
        "image": Image.open("data/examples/templates/image_reference.jpg"),
        "prompt": "Put a hat on this cat.",
    }],
    negative_template_inputs=[{
        "image": Image.open("data/examples/templates/image_reference.jpg"),
        "prompt": "",
    }],
)
image.save("image_Edit_hat.jpg")
image = template(
    pipe,
    prompt="Make the cat turn its head to look to the right.",
    seed=0,
    cfg_scale=4,
    num_inference_steps=50,
    template_inputs=[{
        "image": Image.open("data/examples/templates/image_reference.jpg"),
        "prompt": "Make the cat turn its head to look to the right.",
    }],
    negative_template_inputs=[{
        "image": Image.open("data/examples/templates/image_reference.jpg"),
        "prompt": "",
    }],
)
image.save("image_Edit_head.jpg")
```

## Training Code

After installing DiffSynth-Studio, use the following script to start training. For more information, please refer to the [DiffSynth-Studio Documentation](https://diffsynth-studio-doc.readthedocs.io/zh-cn/latest/).

```shell
modelscope download --dataset DiffSynth-Studio/diffsynth_example_dataset --include "flux2/Template-KleinBase4B-Edit/*" --local_dir ./data/diffsynth_example_dataset

accelerate launch examples/flux2/model_training/train.py \
  --dataset_base_path data/diffsynth_example_dataset/flux2/Template-KleinBase4B-Edit \
  --dataset_metadata_path data/diffsynth_example_dataset/flux2/Template-KleinBase4B-Edit/metadata.jsonl \
  --extra_inputs "template_inputs" \
  --max_pixels 1048576 \
  --dataset_repeat 50 \
  --model_id_with_origin_paths "black-forest-labs/FLUX.2-klein-4B:text_encoder/*.safetensors,black-forest-labs/FLUX.2-klein-base-4B:transformer/*.safetensors,black-forest-labs/FLUX.2-klein-4B:vae/diffusion_pytorch_model.safetensors" \
  --template_model_id_or_path "DiffSynth-Studio/Template-KleinBase4B-Edit:" \
  --tokenizer_path "black-forest-labs/FLUX.2-klein-4B:tokenizer/" \
  --learning_rate 1e-4 \
  --num_epochs 2 \
  --remove_prefix_in_ckpt "pipe.template_model." \
  --output_path "./models/train/Template-KleinBase4B-Edit_full" \
  --trainable_models "template_model" \
  --use_gradient_checkpointing \
  --find_unused_parameters
```
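Each `template(...)` call in the inference examples pairs a positive input (reference image plus edit instruction) with a negative input (the same image with an empty prompt), which classifier-free guidance at `cfg_scale=4` contrasts against. When editing the same image with several prompts, that pairing can be factored into a small helper; `make_template_inputs` below is a local convenience for this model card, not part of the DiffSynth-Studio API:

```python
def make_template_inputs(image, prompt):
    # Positive branch: the reference image plus the edit instruction.
    template_inputs = [{"image": image, "prompt": prompt}]
    # Negative branch: the same reference image with an empty prompt,
    # which the cfg_scale guidance contrasts against.
    negative_template_inputs = [{"image": image, "prompt": ""}]
    return template_inputs, negative_template_inputs

# Example: build the input pair once per edit instruction.
pos, neg = make_template_inputs("data/examples/templates/image_reference.jpg",
                                "Put a hat on this cat.")
```

The returned lists can be passed directly as `template_inputs=pos, negative_template_inputs=neg` (with a `PIL.Image` in place of the path string, as in the examples above).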
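In the low-memory variant, `vram_limit` is computed from `torch.cuda.mem_get_info`, which returns `(free_bytes, total_bytes)`; index `[1]` is the card's total memory, and the expression keeps roughly 0.5 GiB of headroom below it. A sketch of that arithmetic (the helper name `vram_limit_gib` is illustrative only):

```python
def vram_limit_gib(total_bytes: int, headroom_gib: float = 0.5) -> float:
    # Mirrors: torch.cuda.mem_get_info("cuda")[1] / (1024 ** 3) - 0.5
    # Convert total device memory from bytes to GiB, then subtract headroom.
    return total_bytes / (1024 ** 3) - headroom_gib

# On a 24 GiB card the limit lands just below physical capacity:
print(vram_limit_gib(24 * 1024 ** 3))  # 23.5
```

Leaving this headroom keeps the pipeline's offload/onload machinery from exhausting the device when activations spike during denoising.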