---
base_model:
- black-forest-labs/FLUX.2-klein-base-9B
library_name: diffusers
license: other
license_name: flux-non-commercial-license
license_link: LICENSE.md
pipeline_tag: text-to-image
tags:
- flow-matching
- pixel-diffusion
- pixel-generation
- flux2
---
# Asymmetric Flow Models

**AsymFLUX.2-klein** is a pixel-space text-to-image model finetuned from black-forest-labs/FLUX.2-klein-base-9B using the AsymFlow method proposed in the paper:

**Asymmetric Flow Models**
arXiv 2026
Hansheng Chen, Jan Ackermann, Minseo Kim, Gordon Wetzstein, Leonidas Guibas
Stanford University

Project Page | arXiv | Code | AsymFLUX.2 klein Demo🤗
## Usage

Please first install LakonLab v0.2.

We provide a Diffusers-style pipeline for AsymFLUX.2 klein. The example below loads the FLUX.2 klein Base 9B model, attaches the AsymFlow adapter, and generates an image directly in pixel space.
```python
import math

import torch

from lakonlab.models.architectures import OklabColorEncoder
from lakonlab.models.diffusions.schedulers import FlowAdapterScheduler
from lakonlab.pipelines.pipeline_pixelflux2_klein import PixelFlux2KleinPipeline

pipe = PixelFlux2KleinPipeline.from_pretrained(
    'black-forest-labs/FLUX.2-klein-base-9B',
    vae=OklabColorEncoder(
        use_affine_norm=True,
        mean=(0.56, 0.0, 0.01),
        std=0.16),
    scheduler=FlowAdapterScheduler(
        shift=17.0,
        use_dynamic_shifting=True,
        base_seq_len=1024 ** 2,
        max_seq_len=2048 ** 2,
        base_logshift=math.log(17.0),
        max_logshift=math.log(34.0),
        dynamic_shifting_type='sqrt',
        base_scheduler='UniPCMultistep'),
    torch_dtype=torch.bfloat16)
adapter_name = pipe.load_lakonlab_adapter(  # you may later call `pipe.set_adapters([adapter_name, ...])` to combine other adapters (e.g., style LoRAs)
    'Lakonik/AsymFLUX.2-klein-9B',
    target_module_name='transformer')
pipe = pipe.to('cuda')

# Text-to-image generation example
prompt = 'Restored color photo from the 1900s. A middle-aged man with cybernetic metal hands is sitting on an old wooden chair and reading the newspaper. The newspaper has the prominent headline "AsymFLOW RELEASED" in large bold font. Close-up shot focusing on the newspaper.'
neg_prompt = 'Low quality, worst quality, blurry, deformed, bad anatomy, unclear text'

out = pipe(
    prompt=prompt,
    negative_prompt=neg_prompt,
    width=960,
    height=1280,
    num_inference_steps=38,
    guidance_scale=4.0,
    generator=torch.Generator().manual_seed(42),
).images[0]
out.save('asymflux2_klein.png')
```
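For intuition on the `use_dynamic_shifting` settings above: a sqrt-type schedule plausibly interpolates the log-shift between `base_logshift` and `max_logshift` as the pixel sequence length grows from `base_seq_len` to `max_seq_len`, so larger images get a stronger timestep shift. The helper below is a hypothetical sketch of that interpolation (`dynamic_logshift` is our name, not a LakonLab API), not the scheduler's actual implementation:

```python
import math

def dynamic_logshift(seq_len,
                     base_seq_len=1024 ** 2,
                     max_seq_len=2048 ** 2,
                     base_logshift=math.log(17.0),
                     max_logshift=math.log(34.0)):
    # Interpolate in sqrt(sequence-length) space, clamped to [base, max].
    t = (math.sqrt(seq_len) - math.sqrt(base_seq_len)) / (
        math.sqrt(max_seq_len) - math.sqrt(base_seq_len))
    t = min(max(t, 0.0), 1.0)
    return base_logshift + t * (max_logshift - base_logshift)

# Effective shift for the 960x1280 example: slightly above the base of 17,
# since 960 * 1280 pixels is just over the 1024**2 base sequence length.
shift = math.exp(dynamic_logshift(960 * 1280))
```

Under this sketch, a 1024×1024 image would use the base shift of 17 and a 2048×2048 image the maximum of 34, with resolutions in between scaled smoothly.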
## Citation

```bibtex
@article{chen2026asymmetric,
    title={Asymmetric Flow Models},
    author={Hansheng Chen and Jan Ackermann and Minseo Kim and Gordon Wetzstein and Leonidas Guibas},
    journal={arXiv preprint arXiv:2605.12964},
    url={https://arxiv.org/abs/2605.12964},
    year={2026},
}
```
