# FLUX.2 Klein 4B – Overlap-Ownership LoRA
LoRA adapter fine-tuned on FLUX.2-klein-base-4B for count-preserving dense image generation with overlap-ownership attention zones.
## Model Details
- Base model: black-forest-labs/FLUX.2-klein-base-4B
- LoRA rank: 16, alpha: 32
- Target modules: to_k, to_v, to_q, to_out.0, to_qkv_mlp_proj
- Training steps: 2000
- Training framework: PEFT + diffusers
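The adapter settings above map onto a small set of PEFT LoRA config fields. A hypothetical reconstruction of the relevant `adapter_config.json` entries (field names follow the standard PEFT schema; dropout is assumed, as it is not stated in this card):

```python
import json

# Illustrative adapter configuration mirroring the details listed above.
adapter_config = {
    "r": 16,                                # LoRA rank
    "lora_alpha": 32,                       # LoRA scaling alpha
    "target_modules": [
        "to_k", "to_v", "to_q", "to_out.0", "to_qkv_mlp_proj",
    ],
    "base_model_name_or_path": "black-forest-labs/FLUX.2-klein-base-4B",
}

print(json.dumps(adapter_config, indent=2))
```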
## Training Configuration
- Effective batch size: 4 (bs=1 x grad_accum=4)
- Learning rate: 5e-5
- Optimizer: AdamW
- Precision: bf16
- Features: GLIGEN-style layout attention + AIBL loss + Overlap-Ownership zones
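The effective batch size of 4 comes from accumulating gradients over four micro-batches of size 1. A minimal sketch of that loop, using a stand-in model and loss (the actual training code is not part of this card):

```python
import torch

# Stand-in model; the real training target is the FLUX.2 transformer's LoRA weights.
model = torch.nn.Linear(8, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
grad_accum = 4  # micro-batch 1 x 4 accumulation steps = effective batch size 4

for step, batch in enumerate(torch.randn(8, 1, 8)):  # micro-batches of size 1
    loss = model(batch).pow(2).mean()     # placeholder loss
    (loss / grad_accum).backward()        # scale so the accumulated grad is an average
    if (step + 1) % grad_accum == 0:
        optimizer.step()
        optimizer.zero_grad()
```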
## Overlap-Ownership Zones (Idea 7)
Three-zone classification for overlapping bboxes:
- Winner zone: pixels owned by one bbox (strong positive bias)
- Contested zone: pixels claimed by multiple bboxes (moderate positive bias)
- Loser zone: pixels occluded by other bboxes (reduced bias)
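One plausible reading of the three-zone scheme, sketched in plain Python. The ownership rule for shared pixels is not specified in this card; here it is assumed that the bbox later in z-order owns (and occludes) the overlap:

```python
def classify_zones(bboxes, width, height):
    """For each bbox, map zone name -> set of (x, y) pixels.

    bboxes: list of (x0, y0, x1, y1), in z-order (later boxes occlude earlier ones).
    """
    zones = [{"winner": set(), "contested": set(), "loser": set()} for _ in bboxes]
    for y in range(height):
        for x in range(width):
            covering = [i for i, (x0, y0, x1, y1) in enumerate(bboxes)
                        if x0 <= x < x1 and y0 <= y < y1]
            if len(covering) == 1:
                zones[covering[0]]["winner"].add((x, y))   # strong positive bias
            elif len(covering) > 1:
                owner = covering[-1]  # assumed: front-most box owns the pixel
                zones[owner]["contested"].add((x, y))      # moderate positive bias
                for i in covering[:-1]:
                    zones[i]["loser"].add((x, y))          # reduced bias
    return zones
```

For two 2x2 boxes overlapping in one pixel, that pixel is contested for the front box and a loser-zone pixel for the back box, while the non-overlapping pixels stay in each box's winner zone.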
## Evaluation Results (1000 samples, val split)
| Metric | Value | Baseline |
|---|---|---|
| Exact Match | 14.72% | 15.34% |
| ±1 Tolerance | 36.00% | – |
| ±2 Tolerance | 51.26% | – |
| MAE | 4.285 | 4.22 |
| RMSE | 6.844 | 8.12 |
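A hedged sketch of how count-based metrics like those above could be computed from predicted vs. ground-truth object counts (function and variable names are illustrative, not from the evaluation code):

```python
import math

def count_metrics(pred, true, tolerances=(1, 2)):
    """Exact match, ±k tolerance rates, MAE, and RMSE over paired counts."""
    n = len(pred)
    errors = [p - t for p, t in zip(pred, true)]
    metrics = {
        "exact_match": sum(e == 0 for e in errors) / n,
        "mae": sum(abs(e) for e in errors) / n,
        "rmse": math.sqrt(sum(e * e for e in errors) / n),
    }
    for k in tolerances:
        metrics[f"within_{k}"] = sum(abs(e) <= k for e in errors) / n
    return metrics
```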
## Usage
```python
from diffusers import Flux2KleinPipeline
from peft import PeftModel

# Load the base pipeline, then attach the LoRA adapter to its transformer.
pipe = Flux2KleinPipeline.from_pretrained("black-forest-labs/FLUX.2-klein-base-4B")
pipe.transformer = PeftModel.from_pretrained(
    pipe.transformer, "BachNgoH/flux2-klein-4b-overlap-ownership-lora"
)
```
## Framework Versions
- PEFT 0.18.1
- Diffusers
- PyTorch 2.x