PROVE-Bench: A Two-Tier Real-World Benchmark for Object Removal Evaluation
Overview
PROVE-Bench is the benchmark component of the PROVE (Perceptual RemOVal cohErence) evaluation framework. It provides two complementary real-world video subsets specifically designed for evaluating object removal methods:
- PROVE-M: 80 motion-augmented paired videos with ground truth
- PROVE-H: 100 challenging real-world videos without ground truth
Together, they address the fundamental realism-evaluability dilemma of existing benchmarks: real-world datasets lack paired references, while paired datasets are synthetic.
Dataset Description
PROVE-M: Motion-Augmented Real-World Paired Benchmark
| Attribute | Detail |
|---|---|
| Videos | 80 |
| Ground Truth | Yes (paired target-free video) |
| Resolution | 1080p |
| Frames per video | 81 |
| Format | Landscape / Portrait |
| Camera Motion | Dynamic (Ken Burns-style augmentation) |
Construction pipeline:
- Real-world paired capture: For each scene, two consecutive videos are recorded with a tripod-mounted stationary camera, one with the target object and one without.
- Mask annotation: Object masks are obtained using SAM3 and manually refined frame by frame.
- Pairwise quality control: Three-stage filtering (BG-PSNR ranking, mask-difference filtering, human selection) yields 80 high-quality paired cases.
- Motion augmentation: Ken Burns-style geometric transformations (cropping, scaling, translation) are applied synchronously to the input/mask/GT triplet, simulating handheld shake, push/pull zoom, and target-following motion.
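The motion-augmentation step above can be sketched in a few lines. This is an illustrative reconstruction, not the official pipeline code: `ken_burns_windows` animates a single crop window across frames (zoom plus linear pan), and `augment_triplet` applies the *same* per-frame window to the input, mask, and ground-truth streams so they stay pixel-aligned. The parameter names and the `crop` callback are assumptions; frame I/O and resizing are elided.

```python
def ken_burns_windows(width, height, n_frames, zoom_start=1.0, zoom_end=1.2,
                      pan=(0.0, 0.1)):
    """Yield one (left, top, right, bottom) crop box per frame.

    zoom_* are scale factors (1.0 = full frame); pan is the total
    horizontal/vertical drift as a fraction of the frame size.
    """
    boxes = []
    for i in range(n_frames):
        t = i / max(n_frames - 1, 1)           # progress in [0, 1]
        zoom = zoom_start + t * (zoom_end - zoom_start)
        w, h = width / zoom, height / zoom     # window shrinks as zoom grows
        cx = width / 2 + t * pan[0] * width    # window centre drifts linearly
        cy = height / 2 + t * pan[1] * height
        # Clamp so the window never leaves the frame.
        left = min(max(cx - w / 2, 0), width - w)
        top = min(max(cy - h / 2, 0), height - h)
        boxes.append((left, top, left + w, top + h))
    return boxes


def augment_triplet(inputs, masks, gts, boxes, crop):
    """Apply the same per-frame window to input/mask/GT so they stay aligned."""
    return ([crop(f, b) for f, b in zip(inputs, boxes)],
            [crop(f, b) for f, b in zip(masks, boxes)],
            [crop(f, b) for f, b in zip(gts, boxes)])
```

Because one window drives all three streams, the augmented mask and ground truth remain valid for the augmented input at every frame.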
Statistics:
| Object Count | Illumination | Target Type | Target Motion | Small Target | Reflection-related |
|---|---|---|---|---|---|
| 40 single / 40 multi | 40 bright / 40 low-light | 60 person / 20 object | 67 dynamic / 13 static | 6 | 52 |
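The BG-PSNR ranking used in quality control can be sketched as follows. This is a hedged reconstruction, assuming BG-PSNR means PSNR between the with-object and without-object frames restricted to background pixels (mask value 0), so misaligned pairs score low and can be filtered; frames are flat lists of 8-bit grayscale values here, whereas the real pipeline presumably operates on full RGB video frames.

```python
import math


def bg_psnr(frame_a, frame_b, mask):
    """PSNR between two frames over background pixels only (mask == 0)."""
    bg = [(a - b) ** 2 for a, b, m in zip(frame_a, frame_b, mask) if m == 0]
    if not bg:
        return 0.0                       # no background pixels to compare
    mse = sum(bg) / len(bg)
    if mse == 0:
        return float("inf")              # identical backgrounds
    return 10 * math.log10(255 ** 2 / mse)


def rank_pairs(pairs):
    """Sort (name, frame_a, frame_b, mask) candidates by descending BG-PSNR."""
    return sorted(pairs, key=lambda p: bg_psnr(p[1], p[2], p[3]), reverse=True)
```

The lowest-ranked pairs are the natural candidates for rejection before the mask-difference and human-selection stages.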
PROVE-H: Hard Real-World Benchmark (without GT)
| Attribute | Detail |
|---|---|
| Videos | 100 |
| Ground Truth | No |
| Resolution | 1080p |
| Format | Landscape / Portrait |
| Masks | SAM3-generated (no manual refinement) |
Scene categories:
| General | Dynamic Background | Textured Background | Complex Reflections | Crowd | Fast Motion |
|---|---|---|---|---|---|
| 35 | 15 | 20 | 14 | 7 | 9 |
Challenging scenarios include: flowing water, flames, rain/snow, grasslands, deserts, multiple puddle reflections, dense crowds, and fast-motion scenes.
Dataset Structure
PROVE-Bench/
├── PROVE-M/
│   ├── inputs/   # Input videos with target objects
│   ├── masks/    # Per-frame binary masks (white = target)
│   └── gt/       # Target-free ground-truth videos
└── PROVE-H/
    ├── inputs/   # Input videos with target objects
    └── masks/    # SAM3-generated per-frame masks
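A local download can be checked against the layout above with a short script. Only the folder names come from this README; everything else (e.g. that each folder is a plain directory) is an assumption.

```python
from pathlib import Path

# Expected subdirectories per subset, taken from the tree above.
EXPECTED = {
    "PROVE-M": ["inputs", "masks", "gt"],
    "PROVE-H": ["inputs", "masks"],
}


def check_layout(root):
    """Return a list of expected subdirectories missing under `root`."""
    root = Path(root)
    return [f"{subset}/{sub}"
            for subset, subs in EXPECTED.items()
            for sub in subs
            if not (root / subset / sub).is_dir()]
```

An empty return value means the download matches the documented structure.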
Usage
With PROVE Evaluation Code
# Clone the evaluation code
git clone https://github.com/xiaomi-research/prove.git
cd prove
# Configure dataset paths in utils/dataset.py
# Then run evaluation
python run_prove_metrics.py \
--dataset PROVE-M \
--result_dir /PATH/TO/YOUR_RESULTS \
--metrics rc_s rc_t \
--out_csv results.csv
Data Format
- Input videos: Standard video formats (mp4)
- Masks: Per-frame binary images where white (255) indicates the region to be removed
- Ground truth (PROVE-M only): Target-free videos aligned frame-by-frame with inputs
Important: Your generated results must share the same filenames as the original inputs (extensions may differ).
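The filename requirement can be verified before running the metrics. A minimal stem-matching check (illustrative, not part of the official tooling): result files must share the input's filename up to the extension, so comparing stems is enough.

```python
from pathlib import Path


def unmatched_results(input_dir, result_dir):
    """Return input stems that have no result file with the same stem."""
    inputs = {p.stem for p in Path(input_dir).iterdir() if p.is_file()}
    results = {p.stem for p in Path(result_dir).iterdir() if p.is_file()}
    return sorted(inputs - results)
```

Any stems returned here would cause those videos to be skipped or mismatched during evaluation.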
Comparison with Existing Benchmarks
| Dataset | Real | GT | Shadows | Reflections | Multi-Effect | Crowds | Textured | Fast Motion | #Videos |
|---|---|---|---|---|---|---|---|---|---|
| DAVIS | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | 90 |
| Movies | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | 5 |
| Kubric | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | 5 |
| GenProp | ✓ | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | 15 |
| ROSE-Bench | ✗ | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ | ✗ | 60 |
| PROVE-M (Ours) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | 80 |
| PROVE-H (Ours) | ✓ | ✗ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | 100 |
Note
Due to compliance requirements, the open-source data differs slightly from the data used in the paper. The evaluation results based on this version may exhibit minor numerical differences from the paper, but the overall trends remain consistent.
Citation
@article{li2026prove,
title={PROVE: A Perceptual RemOVal cohErence Benchmark for Visual Media},
author={Li, Fuhao and You, Shaofeng and Hu, Jiagao and Liu, Yu and Chen, Yuxuan and Wang, Zepeng and Wang, Fei and Zhou, Daiguo and Luan, Jian},
journal={arXiv preprint arXiv:2605.14534},
year={2026}
}
License
This dataset is released under the Apache 2.0 License.
Contact
For questions about the dataset, please open an issue on the GitHub repository.