Correlation-Weighted Multi-Reward Optimization for Compositional Generation
Paper: arXiv:2603.18528
import torch
from diffusers import DiffusionPipeline

# Load the base model and attach the CMO LoRA adapter.
pipe = DiffusionPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.to("cuda")  # switch to "mps" for Apple devices
pipe.load_lora_weights("Bruece/FLUX.1-dev-CMO")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
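If the full bfloat16 pipeline does not fit in GPU memory, diffusers' built-in CPU offloading can be used instead of moving the whole pipeline to the device; this is a standard diffusers option, not something specific to this adapter:

import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("Bruece/FLUX.1-dev-CMO")
# Keep weights in CPU RAM; each submodule is moved to the GPU only while it runs.
pipe.enable_model_cpu_offload()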
Official LoRA Adapter for Correlation-Weighted Multi-Reward Optimization for Compositional Generation
This repository contains the official LoRA adapter for black-forest-labs/FLUX.1-dev, fine-tuned with CMO (Correlation-Weighted Multi-Reward Optimization) to improve compositional generation, that is, correctly binding attributes such as colors to the right objects in multi-object prompts.
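As a rough intuition for what a correlation-weighted multi-reward objective might look like, here is a minimal sketch based only on the method's name; it is not the paper's actual formulation, and correlation_weighted_reward and its weighting rule are hypothetical:

import torch

def correlation_weighted_reward(rewards: torch.Tensor) -> torch.Tensor:
    # rewards: (num_rewards, batch) scores from several reward models.
    # Hypothetical rule: a reward that correlates strongly with the
    # others carries redundant signal, so it gets a smaller weight.
    corr = torch.corrcoef(rewards)                 # (R, R) Pearson correlations
    r = rewards.shape[0]
    redundancy = (corr.abs() - torch.eye(r)).sum(dim=1) / (r - 1)
    weights = (1.0 - redundancy).clamp(min=0.0)
    weights = weights / weights.sum()              # normalize to sum to 1
    return weights @ rewards                       # (batch,) combined reward

rewards = torch.randn(3, 16)  # e.g. 3 reward models scored on a batch of 16 images
combined = correlation_weighted_reward(rewards)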
Below is the code to load and merge the LoRA adapter with the base FLUX.1-dev model.
import torch
from diffusers import FluxPipeline
from peft import PeftModel

model_id = "black-forest-labs/FLUX.1-dev"
lora_ckpt_path = "Bruece/FLUX.1-dev-CMO"
device = "cuda"

# Load the base pipeline in bfloat16.
pipe = FluxPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Wrap the transformer with the LoRA adapter, then fold the LoRA
# deltas into the base weights so inference runs at full speed.
pipe.transformer = PeftModel.from_pretrained(pipe.transformer, lora_ckpt_path)
pipe.transformer = pipe.transformer.merge_and_unload()
pipe = pipe.to(device)

prompt = "a photo of a black kite and a green bear"
image = pipe(prompt, height=512, width=512, num_inference_steps=40, guidance_scale=4.5).images[0]
image.save("flux_cmo_lora.png")
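Since the adapter targets compositional prompts, a quick way to eyeball its effect is to run a few attribute-binding prompts with a fixed seed and compare against the base model. The prompts below are illustrative examples, not an official benchmark:

# Reuses the merged `pipe` and `device` from the snippet above.
prompts = [
    "a photo of a black kite and a green bear",
    "a red book on a blue table",
    "a yellow car next to a purple bicycle",
]
generator = torch.Generator(device=device).manual_seed(0)
for i, p in enumerate(prompts):
    img = pipe(p, height=512, width=512, num_inference_steps=40,
               guidance_scale=4.5, generator=generator).images[0]
    img.save(f"flux_cmo_sample_{i}.png")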
If you find this model useful for your research, please cite:
@article{wi2026correlation,
  title={Correlation-Weighted Multi-Reward Optimization for Compositional Generation},
  author={Wi, Jungmyung and Kim, Hyunsoo and Kim, Donghyun},
  journal={arXiv preprint arXiv:2603.18528},
  year={2026}
}
Base model: black-forest-labs/FLUX.1-dev