---
language: en
license: apache-2.0
library_name: diffusers
base_model: black-forest-labs/FLUX.1-dev
tags:
- flux
- diffusers
- lora
- cmo
- text-to-image
pipeline_tag: text-to-image
---
# FLUX.1-dev-CMO
<p align="center">
🤗 <a href="https://huggingface.co/Bruece/FLUX.1-dev-CMO"><b>Hugging Face</b></a> |
📄 <a href="https://arxiv.org/abs/2603.18528"><b>arXiv</b></a>
</p>
**📢 Official LoRA Adapter for [Correlation-Weighted Multi-Reward Optimization for Compositional Generation](https://arxiv.org/abs/2603.18528)**
This repository contains the official LoRA adapter for [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev) fine-tuned using **CMO (Correlation-Weighted Multi-Reward Optimization)** to enhance compositional generation capabilities.
## 🚀 Usage
Below is the code to load and merge the LoRA adapter with the base FLUX.1-dev model.
```python
import torch
from diffusers import FluxPipeline
from peft import PeftModel

model_id = "black-forest-labs/FLUX.1-dev"
lora_ckpt_path = "Bruece/FLUX.1-dev-CMO"
device = "cuda"

# Load the base FLUX.1-dev pipeline in bfloat16.
pipe = FluxPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Wrap the transformer with the CMO LoRA adapter, then merge the LoRA
# weights into the base weights so inference runs without PEFT overhead.
pipe.transformer = PeftModel.from_pretrained(pipe.transformer, lora_ckpt_path)
pipe.transformer = pipe.transformer.merge_and_unload()
pipe = pipe.to(device)

prompt = "a photo of a black kite and a green bear"
image = pipe(prompt, height=512, width=512, num_inference_steps=40, guidance_scale=4.5).images[0]
image.save("flux_cmo_lora.png")
```
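If GPU memory is tight, one optional variant (not part of the original snippet above) is to let diffusers offload idle submodules to the CPU via `enable_model_cpu_offload()` (requires `accelerate`) instead of moving the whole pipeline to the GPU:

```python
import torch
from diffusers import FluxPipeline
from peft import PeftModel

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.transformer = PeftModel.from_pretrained(pipe.transformer, "Bruece/FLUX.1-dev-CMO")
pipe.transformer = pipe.transformer.merge_and_unload()

# Offload idle submodules to CPU instead of calling pipe.to("cuda");
# slower per image, but fits FLUX.1-dev on cards with less VRAM.
pipe.enable_model_cpu_offload()

image = pipe(
    "a photo of a black kite and a green bear",
    height=512, width=512, num_inference_steps=40, guidance_scale=4.5,
).images[0]
image.save("flux_cmo_lora_offload.png")
```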
## 🖼️ Qualitative Results
<details>
<summary>ConceptMix (<a href="https://arxiv.org/abs/2408.14339">Link</a>)</summary>
<br>
<img src="./conceptmix_results.png" alt="ConceptMix Results">
</details>
<details>
<summary>GenEval 2 (<a href="https://arxiv.org/abs/2512.16853">Link</a>)</summary>
<br>
<img src="./GenEval2_results.png" alt="GenEval 2 Results">
</details>
<details>
<summary>T2I-CompBench (<a href="https://arxiv.org/pdf/2307.06350v2">Link</a>)</summary>
<br>
<img src="./T2I-CompBench_results.png" alt="T2I-CompBench Results">
</details>
## 🛠️ Training Details
- **Base Model:** FLUX.1-dev
- **Algorithm:** Correlation-Weighted Multi-Reward Optimization (CMO) (see the illustrative sketch after this list)
- **Precision:** bfloat16
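For the full CMO objective, see the paper. As a rough, purely illustrative sketch (not the authors' implementation), correlation-weighted multi-reward aggregation can be thought of as down-weighting reward signals that are highly correlated with the others, so redundant rewards do not dominate the combined objective. The reward sources and the inverse-mean-correlation weighting below are assumptions for illustration only:

```python
import torch

def combine_rewards(rewards: torch.Tensor) -> torch.Tensor:
    """Illustrative correlation-weighted combination of per-sample rewards.

    rewards: (num_samples, num_reward_models) tensor; columns could be scores
    from e.g. CLIP alignment, an object detector, or a VQA model (these reward
    choices are hypothetical, not taken from the paper).
    """
    # Pearson correlation matrix between reward models across the batch.
    corr = torch.corrcoef(rewards.T)                      # (R, R)
    r = rewards.shape[1]
    # Mean absolute correlation of each reward with the others (excluding itself).
    off_diag = corr.abs() * (1 - torch.eye(r))
    mean_corr = off_diag.sum(dim=1) / (r - 1)
    # Down-weight highly redundant rewards; normalize weights to sum to 1.
    weights = 1.0 / (1e-6 + mean_corr)
    weights = weights / weights.sum()
    # Weighted sum gives a single scalar reward per sample.
    return rewards @ weights
```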
## 📖 Citation
If you find this model useful for your research, please cite:
```bibtex
@article{wi2026correlation,
title={Correlation-Weighted Multi-Reward Optimization for Compositional Generation},
author={Wi, Jungmyung and Kim, Hyunsoo and Kim, Donghyun},
journal={arXiv preprint arXiv:2603.18528},
year={2026}
}
```