---
license: apache-2.0
pipeline_tag: any-to-any
library_name: bagel-mot
tags:
- sgt
- semantic-generative-tuning
- unified-multimodal
- image-segmentation
- visual-understanding
- visual-generation
---

# SGT: Semantic Generative Tuning for Unified Multimodal Models

This repository hosts checkpoints fine-tuned with **Semantic Generative Tuning (SGT)** — a training
paradigm that couples visual *understanding* and *generation* in Unified Multimodal Models (UMMs)
by using **image segmentation as a generative proxy**.

> Unified multimodal models typically optimize understanding and generation with *misaligned*
> objectives (sparse text tokens vs. dense pixel targets), which isolates the two capabilities.
> SGT introduces segmentation — a **high-level semantic task** — as a unified generative objective
> that aligns the two branches, improves feature linear separability, and optimizes visual-textual
> attention allocation.

## 🧠 Method Overview

SGT reformulates classical visual tasks as generative proxies and establishes a **hierarchical
taxonomy** of them (low-/mid-/high-level). Extensive experiments show that **high-level semantic
tasks (e.g., image segmentation) are the optimal proxy**, outperforming depth, edge, pixel
reconstruction, and MAE/inpainting at synergizing understanding and generation (a toy sketch of
the pairing idea follows the key findings below).

Key findings:

1. **High-level > low-level**: segmentation yields larger gains in both understanding and
   generation than depth / edge / pixel reconstruction.
2. **Perception, not reasoning**: visual supervision mainly strengthens vision-centric perception
   (spatial understanding, hallucination resistance, OCR) rather than abstract reasoning.
3. **Architecture-agnostic**: the gains hold for both **BAGEL** and **OmniGen2**.

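To make the proxy formulation concrete, here is a toy sketch of how a segmentation annotation could be packed as a generative training pair. This is *not* the released training code: the palette, instruction template, and field names are illustrative assumptions.

```python
# Toy sketch of "segmentation as a generative proxy".
# NOTE: the palette, instruction template, and field names below are
# illustrative assumptions, not the released SGT training code.
import numpy as np
from PIL import Image

# Hypothetical palette: one RGB color per integer class id.
PALETTE = np.array(
    [[0, 0, 0], [230, 25, 75], [60, 180, 75], [255, 225, 25]],
    dtype=np.uint8,
)

def mask_to_generation_target(mask: np.ndarray) -> Image.Image:
    """Render an integer class mask of shape (H, W) as an RGB image,
    so the generation branch can be supervised on it with the same
    dense image-generation loss used for ordinary images."""
    return Image.fromarray(PALETTE[mask])

def build_sgt_pair(image: Image.Image, mask: np.ndarray, caption: str) -> dict:
    """Pack one (instruction, source image, target image) training triple."""
    return {
        "instruction": f"Segment the image: {caption}",  # assumed template
        "source": image,                                 # conditioning image
        "target": mask_to_generation_target(mask),       # dense semantic target
    }
```
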
## 📦 Released Artifacts

| Repo | Type | Base Model | Content |
|---|---|---|---|
| [`Two-hot/SGT-BAGEL`](https://huggingface.co/Two-hot/SGT-BAGEL) | model | BAGEL-7B-MoT | SGT fine-tuned BAGEL checkpoint |
| [`Two-hot/SGT-Gen2`](https://huggingface.co/Two-hot/SGT-Gen2) | model | OmniGen2 | SGT fine-tuned OmniGen2 checkpoint (`transformer/` only) |
| [`Two-hot/SAM-SGT`](https://huggingface.co/datasets/Two-hot/SAM-SGT) | dataset | — | Segmentation training data (tar-sharded) used by SGT |

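All three repos can be fetched with the standard `huggingface_hub` client; the local directories below are placeholders, so point them wherever you like:

```python
# Download the SGT checkpoints and dataset from the Hugging Face Hub.
from huggingface_hub import snapshot_download

bagel_dir = snapshot_download("Two-hot/SGT-BAGEL", local_dir="ckpts/SGT-BAGEL")
gen2_dir = snapshot_download("Two-hot/SGT-Gen2", local_dir="ckpts/SGT-Gen2")
data_dir = snapshot_download(
    "Two-hot/SAM-SGT", repo_type="dataset", local_dir="data/SAM-SGT"
)
```
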
### Use the SAM-SGT dataset

See [`Two-hot/SAM-SGT`](https://huggingface.co/datasets/Two-hot/SAM-SGT) for the data
layout and extraction instructions (files are stored as 5 GB tar shards to stay under
Hugging Face's per-file size limits).

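If you prefer streaming to a full download, a `webdataset` pipeline along these lines should work. The shard name pattern and per-sample keys below are assumptions; match them to the actual layout documented on the dataset card:

```python
# Stream SAM-SGT tar shards with the webdataset library.
# ASSUMPTIONS: the shard naming ("shard-{...}.tar") and the sample keys
# ("jpg" image / "png" mask) -- verify against the dataset card.
import webdataset as wds

url = ("https://huggingface.co/datasets/Two-hot/SAM-SGT/resolve/main/"
       "shard-{000000..000099}.tar")
dataset = (
    wds.WebDataset(url)
    .decode("pil")            # decode both entries with Pillow
    .to_tuple("jpg", "png")   # -> (image, segmentation mask)
)

image, mask = next(iter(dataset))
print(image.size, mask.size)
```
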
## 📊 Highlights

- **+6.02%** average gain over the BAGEL baseline on **CV-Bench**.
- Consistent improvements in **spatial reasoning**, **hallucination resistance**, and **OCR**.
- Generation: gains across **GenEval** dimensions (Position, Color, Counting, Single Object, etc.).
- Verified on two representative UMM architectures (**BAGEL** and **OmniGen2**).

## 📝 License

Apache-2.0. The base models remain under their original licenses:
BAGEL (Apache-2.0; built on Qwen2.5-7B, SigLIP, and the FLUX VAE) and
OmniGen2 (built on Qwen2.5-VL and a diffusion transformer).

## ✍️ Citation

If you find this work useful, please cite our paper (anonymous ECCV 2026 submission, paper ID #3064):

```bibtex
@article{sgt2026,
  title   = {Semantic Generative Tuning for Unified Multimodal Models},
  author  = {Songsong Yu and Yuxin Chen and Ying Shan and Yanwei Li},
  journal = {arXiv preprint},
  year    = {2026}
}
```