---
license: apache-2.0
task_categories:
  - video-classification
  - image-to-image
tags:
  - object-removal
  - video-inpainting
  - image-inpainting
  - evaluation
  - benchmark
pretty_name: PROVE-Bench
size_categories:
  - n<1K
---

# PROVE-Bench: A Two-Tier Real-World Benchmark for Object Removal Evaluation

<div align="center">

[![Paper](https://img.shields.io/badge/arXiv-2605.14534-b31b1b)](https://arxiv.org/abs/2605.14534)
[![Project Page](https://img.shields.io/badge/🐳-Project%20Page-blue)](https://xiaomi-research.github.io/prove)
[![Code](https://img.shields.io/badge/GitHub-Code-black)](https://github.com/xiaomi-research/prove)

</div>

## Overview

**PROVE-Bench** is the benchmark component of the [PROVE](https://github.com/xiaomi-research/prove) (Perceptual RemOVal cohErence) evaluation framework. It provides two complementary real-world video subsets specifically designed for evaluating object removal methods:

- **PROVE-M**: 80 motion-augmented paired videos with ground truth
- **PROVE-H**: 100 challenging real-world videos without ground truth

Together, they address the fundamental realism–evaluability dilemma of existing benchmarks: real-world datasets lack paired references, while paired datasets are synthetic.

## Dataset Description

### PROVE-M: Motion-Augmented Real-World Paired Benchmark

| Attribute | Detail |
|---|---|
| Videos | 80 |
| Ground Truth | Yes (paired target-free video) |
| Resolution | 1080p |
| Frames per video | 81 |
| Format | Landscape / Portrait |
| Camera Motion | Dynamic (Ken Burns-style augmentation) |

**Construction pipeline:**
1. **Real-world paired capture** — For each scene, two consecutive videos are recorded using a tripod-mounted stationary camera: one with the target object, one without.
2. **Mask annotation** — Object masks are obtained using SAM3 and manually refined frame by frame.
3. **Pairwise quality control** — Three-stage filtering (BG-PSNR ranking, mask-difference filtering, human selection) yields 80 high-quality paired cases.
4. **Motion augmentation** — Ken Burns-style geometric transformations (cropping, scaling, translation) are applied synchronously to the input–mask–GT triplet, simulating handheld shake, push/pull zoom, and target-following motion.
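
For intuition on step 4, here is a minimal sketch of a synchronized Ken Burns-style transform using OpenCV. The function name, parameter values, and dummy frames are illustrative assumptions, not the authors' actual pipeline; the key point is that identical parameters are applied to the input, mask, and GT so the triplet stays pixel-aligned.

```python
import cv2
import numpy as np

def ken_burns_transform(frame, scale, tx, ty):
    """Scale about the image center, then translate by (tx, ty) pixels."""
    h, w = frame.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2), 0, scale)  # no rotation
    m[0, 2] += tx
    m[1, 2] += ty
    return cv2.warpAffine(frame, m, (w, h))

# Dummy 1080p frames standing in for one input/mask/GT triplet.
input_frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
mask_frame = np.zeros((1080, 1920), dtype=np.uint8)
gt_frame = np.zeros((1080, 1920, 3), dtype=np.uint8)

# The same per-frame parameters are applied to all three streams,
# so the augmented triplet remains pixel-aligned.
scale, tx, ty = 1.05, 8.0, -4.0  # illustrative per-frame values
input_aug, mask_aug, gt_aug = (
    ken_burns_transform(f, scale, tx, ty)
    for f in (input_frame, mask_frame, gt_frame)
)
```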

**Statistics:**

| Object Count | Illumination | Target Type | Target Motion | Small Target | Reflection-related |
|---|---|---|---|---|---|
| 40 single / 40 multi | 40 bright / 40 low-light | 60 person / 20 object | 67 dynamic / 13 static | 6 | 52 |

### PROVE-H: Hard Real-World Benchmark (without GT)

| Attribute | Detail |
|---|---|
| Videos | 100 |
| Ground Truth | No |
| Resolution | 1080p |
| Format | Landscape / Portrait |
| Masks | SAM3-generated (no manual refinement) |

**Scene categories:**

| General | Dynamic Background | Textured Background | Complex Reflections | Crowd | Fast Motion |
|---|---|---|---|---|---|
| 35 | 15 | 20 | 14 | 7 | 9 |

Challenging scenarios include flowing water, flames, rain/snow, grasslands, deserts, multiple puddle reflections, dense crowds, and fast-motion scenes.

## Dataset Structure

```
PROVE-Bench/
├── PROVE-M/
│   ├── inputs/          # Input videos with target objects
│   ├── masks/           # Per-frame binary masks (white = target)
│   └── gt/              # Target-free ground-truth videos
└── PROVE-H/
    ├── inputs/          # Input videos with target objects
    └── masks/           # SAM3-generated per-frame masks
```
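
For quick orientation, here is a minimal loading sketch in Python. It assumes mp4 inputs and that inputs, masks, and ground truth are matched by a shared filename stem; the per-video mask folder layout is an assumption, so adapt it to the actual files.

```python
from pathlib import Path

root = Path("PROVE-Bench/PROVE-M")  # adjust to your local copy

# Inputs, masks, and GT are matched by filename stem (assumed convention).
for input_video in sorted((root / "inputs").glob("*.mp4")):
    stem = input_video.stem
    mask_dir = root / "masks" / stem        # per-frame mask images (assumed layout)
    gt_video = root / "gt" / input_video.name
    print(stem, mask_dir.exists(), gt_video.exists())
```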

## Usage

### With PROVE Evaluation Code

```bash
# Clone the evaluation code
git clone https://github.com/xiaomi-research/prove.git
cd prove

# Configure dataset paths in utils/dataset.py
# Then run evaluation
python run_prove_metrics.py \
    --dataset PROVE-M \
    --result_dir /PATH/TO/YOUR_RESULTS \
    --metrics rc_s rc_t \
    --out_csv results.csv
```
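
If you need a local copy of the data first, it can be fetched with `huggingface_hub`. Note that the `repo_id` below is a placeholder; substitute this dataset's actual Hub identifier.

```python
from huggingface_hub import snapshot_download

# NOTE: placeholder repo_id; replace with this dataset's actual Hub identifier.
snapshot_download(
    repo_id="xiaomi-research/PROVE-Bench",
    repo_type="dataset",
    local_dir="PROVE-Bench",
)
```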

### Data Format

- **Input videos**: Standard video formats (mp4)
- **Masks**: Per-frame binary images where white (255) indicates the region to be removed
- **Ground truth** (PROVE-M only): Target-free videos aligned frame-by-frame with inputs

> **Important:** Your generated results must share the same filenames as the original inputs (extensions may differ).
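
A quick way to check the filename requirement before running the metrics is to compare stems, so that extensions are free to differ. The paths below are illustrative.

```python
from pathlib import Path

inputs = Path("PROVE-Bench/PROVE-M/inputs")
results = Path("/PATH/TO/YOUR_RESULTS")

# Compare by stem so extensions may differ (e.g. .mp4 inputs vs .avi results).
input_stems = {p.stem for p in inputs.iterdir() if p.is_file()}
result_stems = {p.stem for p in results.iterdir() if p.is_file()}

missing = sorted(input_stems - result_stems)
if missing:
    raise SystemExit(f"Missing results for {len(missing)} inputs, e.g. {missing[:3]}")
print("All result filenames match the inputs.")
```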

## Comparison with Existing Benchmarks

| Dataset | Real | GT | Shadows | Reflections | Multi-Effect | Crowds | Textured | Fast Motion | #Videos |
|---|---|---|---|---|---|---|---|---|---|
| [DAVIS](https://davischallenge.org/davis2017/code.html) | ✓ | ✗ | ✓ | ✓ | ✗ | ✗ | ✓ | ✓ | 90 |
| [Movies](https://omnimatte-rf.github.io/) | ✗ | ✓ | ✓ | ✓ | ✗ | ✗ | ✗ | ✓ | 5 |
| [Kubric](https://d2nerf.github.io/) | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | 5 |
| [GenProp](https://huggingface.co/datasets/Shaldon/GenProp-Data) | ✓ | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | 15 |
| [ROSE-Bench](https://huggingface.co/datasets/Kunbyte/ROSE-Dataset) | ✗ | ✓ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | 60 |
| **PROVE-M (Ours)** | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ | ✓ | 80 |
| **PROVE-H (Ours)** | ✓ | ✗ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | 100 |

## Note

Due to compliance requirements, the open-source data differs slightly from the data used in the paper. Evaluation results obtained on this version may therefore show minor numerical deviations from those reported in the paper, but the overall trends remain consistent.

## Citation

```bibtex
@article{li2026prove,
   title={PROVE: A Perceptual RemOVal cohErence Benchmark for Visual Media},
   author={Li, Fuhao and You, Shaofeng and Hu, Jiagao and Liu, Yu and Chen, Yuxuan and Wang, Zepeng and Wang, Fei and Zhou, Daiguo and Luan, Jian},
   journal={arXiv preprint arXiv:2605.14534},
   year={2026}
}
```

## License

This dataset is released under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).

## Contact

For questions about the dataset, please open an issue on the [GitHub repository](https://github.com/xiaomi-research/prove).