# FlowGuard Dataset
<div align="center">
<img src="assets/main_fig.png" alt="main">
</div>
## 📦 Overview
The **FlowGuard Dataset** is designed to support the training and evaluation of the *FlowGuard* framework, with a focus on **safety-aware image generation and detection**.
To ensure scalability and efficient storage, the dataset is:
- organized **by model architecture**
- packed into **size-balanced tar shards**
- filtered to retain only essential supervision signals
We include generations from **9 different diffusion / generative architectures**, enabling diverse and robust evaluation.
---
## 🚀 Loading and Usage
The dataset is stored on Hugging Face as **Parquet shards** organized by split, model, and label:
```text
train/{model}/{label}/*.parquet
test/{model}/{label}/*.parquet
```
Each row contains one image sample with metadata such as model name, split, safety label, case ID, step type, and step index.
### Load the full dataset
```python
from datasets import load_dataset
dataset = load_dataset(
    "parquet",
    data_files={
        "train": "hf://datasets/YeQingWen/FlowGuard-Dataset/train/*/*/*.parquet",
        "test": "hf://datasets/YeQingWen/FlowGuard-Dataset/test/*/*/*.parquet",
    },
)
print(dataset)
print(dataset["train"][0])
```
### Load a specific model
```python
from datasets import load_dataset
dataset = load_dataset(
    "parquet",
    data_files={
        "train": "hf://datasets/YeQingWen/FlowGuard-Dataset/train/flux1/*/*.parquet",
        "test": "hf://datasets/YeQingWen/FlowGuard-Dataset/test/flux1/*/*.parquet",
    },
)
```
### Load a specific model and label
```python
from datasets import load_dataset
safe_flux1_train = load_dataset(
    "parquet",
    data_files={
        "train": "hf://datasets/YeQingWen/FlowGuard-Dataset/train/flux1/safe/*.parquet"
    },
    split="train",
)
```
### Access an image
```python
example = dataset["train"][0]
image = example["image"]
label = example["label"]
model = example["model"]
step_type = example["step_type"]
step_index = example["step_index"]
image.show()
```
Each example contains:
- `model`: generation architecture
- `split`: `train` or `test`
- `label`: `safe` or `unsafe`
- `case_id`: generation case identifier
- `step_type`: `linear_step` or `step49`
- `step_index`: diffusion step index
- `image`: decoded image object
## NSFW Category Distribution
The distribution of unsafe samples across the seven NSFW categories is shown below:
<div align="center">
  <img src="assets/nsfw_pie.png" alt="Pie chart of the 7-category NSFW distribution" style="width:70%; max-width:600px; height:auto;">
</div>
---
## 📊 Statistics
The table below reflects the **full, unbalanced dataset distribution**.
> ⚠️ During training, we **subsample to achieve a balanced distribution**.
> For details, see: https://arxiv.org/abs/2604.07879
| Model | Train Safe | Train Unsafe | Train Total | Test Safe | Test Unsafe | Test Total | Overall Total |
|--------------|-----------|--------------|-------------|-----------|-------------|------------|----------------|
| flux1 | 2,687 | 1,953 | **4,640** | 200 | 237 | **437** | **5,077** |
| flux2 | 671 | 2,017 | **2,688** | 227 | 181 | **408** | **3,096** |
| pixart | 2,725 | 4,246 | **6,971** | 195 | 231 | **426** | **7,397** |
| Qwen-Image | 2,643 | 2,055 | **4,698** | 196 | 496 | **692** | **5,390** |
| sd3 | 1,250 | 1,293 | **2,543** | 200 | 191 | **391** | **2,934** |
| sd3.5 | N/A | N/A | N/A | 391 | 317 | **708** | **708** |
| sdv1.5 | 2,659 | 3,537 | **6,196** | 199 | 253 | **452** | **6,648** |
| sdxl | 1,899 | 1,759 | **3,658** | 243 | 282 | **525** | **4,183** |
| Zimage | 2,676 | 1,910 | **4,586** | 199 | 248 | **447** | **5,033** |
| **Total** | **17,210**| **18,770** | **35,980** | **2,050** | **2,436** | **4,486** | **40,466** |
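The label-balancing step mentioned above can be illustrated as follows. This is only a sketch of per-label subsampling under a fixed seed, not the paper's exact procedure; `balance_by_label` is a hypothetical helper, and the counts mirror the train totals from the table:

```python
import random
from collections import defaultdict

def balance_by_label(rows, seed=0):
    """Subsample rows so that every label appears equally often.

    `rows` is a list of dicts with a "label" key. Each label's group is
    downsampled to the size of the smallest group, then shuffled.
    """
    by_label = defaultdict(list)
    for row in rows:
        by_label[row["label"]].append(row)
    n = min(len(group) for group in by_label.values())
    rng = random.Random(seed)
    balanced = []
    for group in by_label.values():
        balanced.extend(rng.sample(group, n))
    rng.shuffle(balanced)
    return balanced

# Train split: 17,210 safe vs 18,770 unsafe (see the table above).
rows = [{"label": "safe"}] * 17210 + [{"label": "unsafe"}] * 18770
balanced = balance_by_label(rows)
print(len(balanced))  # 34420: both labels now appear 17,210 times
```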
---
## 🛡️ Auditing
To ensure dataset quality:
- The **training set** is automatically audited using `LlavaGuard-7B`
- The **test set** is curated under **strict human supervision**
This hybrid pipeline ensures both **scalability** and **high-quality evaluation signals**.
For implementation details (e.g., prompts and hyperparameters), please refer to:
https://arxiv.org/abs/2604.07879
---
## 📌 Notes
- Each sample belongs to a **generation case** (one subdirectory in the original layout)
- Each case contains:
  - multiple `linear_step` intermediate outputs
  - one final image
- `.pt` files are **excluded** to reduce storage overhead
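Since every row carries a `case_id`, the flattened samples can be regrouped into per-case trajectories (intermediate steps plus the final image). A minimal sketch using only the metadata fields described above; the row values are illustrative:

```python
from collections import defaultdict

# Illustrative metadata rows; real rows come from the Parquet shards.
rows = [
    {"case_id": "c001", "step_type": "linear_step", "step_index": 10},
    {"case_id": "c001", "step_type": "linear_step", "step_index": 30},
    {"case_id": "c001", "step_type": "step49", "step_index": 49},
    {"case_id": "c002", "step_type": "step49", "step_index": 49},
]

# Group samples back into generation cases, ordered by step index.
cases = defaultdict(list)
for row in rows:
    cases[row["case_id"]].append(row)
for steps in cases.values():
    steps.sort(key=lambda r: r["step_index"])

print(len(cases))  # 2
```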