---
pretty_name: CDDB
task_categories:
  - image-classification
task_ids:
  - multi-class-image-classification
tags:
  - deepfake
  - continual-learning
  - computer-vision
  - image-forensics
  - wacv
size_categories:
  - unknown
annotations_creators:
  - no-annotation
language:
  - en
license: unknown
configs:
  - config_name: default
    data_files:
      - split: train
        path: CDDB.tar
dataset_info:
  features: []
---

# Dataset Card for CDDB

## Dataset Description

CDDB is a benchmark dataset introduced in the WACV 2023 paper *A Continual Deepfake Detection Benchmark: Dataset, Methods, and Essentials*. It is designed for continual deepfake detection, where manipulated images from different deepfake generation sources arrive sequentially instead of being observed all at once.

The benchmark is intended to evaluate both:

- binary deepfake detection (real vs. fake)
- continual and incremental learning under distribution shifts across deepfake sources

Compared with conventional static deepfake datasets, CDDB focuses on a more realistic setting in which new manipulation methods appear over time and a detector must adapt without catastrophically forgetting previously seen sources.
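In this sequential setting, models are typically scored not only on the current source but on all previously seen sources, which makes forgetting measurable. A minimal sketch of two standard continual-learning metrics (average accuracy and average forgetting), computed from a generic accuracy matrix; this is plain Python for illustration, not an official CDDB evaluation script:

```python
# Sketch: standard continual-learning metrics, not an official CDDB script.
# acc[i][j] = accuracy on task j measured after training on tasks 0..i.

def average_accuracy(acc):
    """Mean accuracy over all tasks after the final training stage."""
    final = acc[-1]
    return sum(final) / len(final)

def average_forgetting(acc):
    """For each earlier task: best accuracy ever achieved minus final accuracy."""
    n = len(acc)
    drops = []
    for j in range(n - 1):  # the most recent task cannot be forgotten yet
        best = max(acc[i][j] for i in range(j, n - 1))
        drops.append(best - acc[-1][j])
    return sum(drops) / len(drops)

# Toy example: three sequential deepfake sources.
acc = [
    [0.95, 0.00, 0.00],
    [0.80, 0.92, 0.00],
    [0.70, 0.85, 0.90],
]
print(round(average_accuracy(acc), 4))   # mean of the last row
print(round(average_forgetting(acc), 4)) # mean drop on tasks 0 and 1
```

A detector that adapts without forgetting keeps the forgetting value near zero while average accuracy stays high.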

## Supported Tasks

- Binary image classification: real vs. fake
- Multi-source deepfake classification
- Continual learning / class-incremental learning
- Domain generalization and robustness evaluation for deepfake detection
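The first two tasks differ only in how labels are derived from the same images. A small sketch of both labelings, assuming each sample is described by a (source, is_fake) pair; this representation is illustrative and not an official CDDB schema:

```python
# Sketch: deriving binary and multi-source labels from the same samples.
# The (source, is_fake) pair representation is illustrative only.

samples = [
    ("ProGAN", True),
    ("ProGAN", False),
    ("StyleGAN", True),
    ("FaceForensics++", False),
]

# Binary deepfake detection: real = 0, fake = 1.
binary_labels = [int(is_fake) for _, is_fake in samples]

# Multi-source classification: one class per generation source.
sources = sorted({src for src, _ in samples})
source_to_id = {src: i for i, src in enumerate(sources)}
multi_labels = [source_to_id[src] for src, _ in samples]

print(binary_labels)
print(multi_labels)
```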

## Dataset Sources

### Paper Information

**Title:** A Continual Deepfake Detection Benchmark: Dataset, Methods, and Essentials

**Authors:** Chuqiao Li, Zhiwu Huang, Danda Pani Paudel, Yabin Wang, Mohamad Shahbazi, Xiaopeng Hong, Luc Van Gool

**Venue:** IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)

**Year:** 2023

## Dataset Structure

This repository currently hosts the dataset archive:

- `CDDB.tar`

After extraction, the dataset is expected to contain benchmark splits and source-specific subsets used for continual deepfake detection experiments. According to the original paper and project repository, CDDB is built from a collection of real and manipulated images aggregated from multiple existing deepfake datasets and generation pipelines.
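As a starting point, the archive can be inspected and extracted with the Python standard library. The grouping below assumes the common layout of one top-level directory per deepfake source; this should be verified against the actual contents of `CDDB.tar`:

```python
import os
import tarfile
from collections import defaultdict
from pathlib import PurePosixPath

def list_sources(tar_path):
    """Group member files of a tar archive by top-level directory.

    Assumes one top-level directory per deepfake source; verify this
    against the real CDDB.tar layout before relying on it.
    """
    groups = defaultdict(list)
    with tarfile.open(tar_path) as tar:
        for member in tar.getmembers():
            if not member.isfile():
                continue
            parts = PurePosixPath(member.name).parts
            if len(parts) > 1:
                groups[parts[0]].append(member.name)
    return dict(groups)

def safe_extract(tar_path, dest):
    """Extract the archive, rejecting members that escape the target dir."""
    dest = os.path.realpath(dest)
    with tarfile.open(tar_path) as tar:
        for member in tar.getmembers():
            target = os.path.realpath(os.path.join(dest, member.name))
            if not target.startswith(dest + os.sep):
                raise ValueError(f"unsafe path in archive: {member.name}")
        tar.extractall(dest)
```

On Python 3.12+ the manual path check in `safe_extract` can be replaced with the built-in `tar.extractall(dest, filter="data")`.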

The benchmark includes deepfakes derived from multiple generative and manipulation pipelines, such as:

- ProGAN
- StyleGAN
- BigGAN
- CycleGAN
- GauGAN
- CRN
- IMLE
- SAN
- FaceForensics++
- WhichFaceReal
- GLOW
- StarGAN
- WildDeepfake

The original benchmark is organized around different task sequences, including *easy*, *hard*, and *long* continual streams.
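A task sequence is simply an ordered list of sources that the detector encounters one at a time. The sketch below illustrates this structure using sources from the list above; the actual composition of each stream is defined by the original paper and repository, so these groupings are hypothetical placeholders:

```python
# Hypothetical sketch of CDDB-style task streams. The exact source
# composition of the easy/hard/long sequences is defined by the original
# paper and repository; these groupings are illustrative only.

TASK_SEQUENCES = {
    "easy": ["GauGAN", "BigGAN", "WildDeepfake", "WhichFaceReal", "SAN"],
    "hard": ["GauGAN", "BigGAN", "WildDeepfake", "WhichFaceReal", "SAN",
             "FaceForensics++"],
    "long": ["ProGAN", "StyleGAN", "BigGAN", "CycleGAN", "GauGAN", "CRN",
             "IMLE", "SAN", "FaceForensics++", "WhichFaceReal", "GLOW",
             "StarGAN", "WildDeepfake"],
}

def stream(sequence_name):
    """Yield (task_index, source_name) pairs in arrival order."""
    for i, source in enumerate(TASK_SEQUENCES[sequence_name]):
        yield i, source

for i, src in stream("easy"):
    print(i, src)
```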

## Dataset Creation

CDDB was proposed to study continual deepfake detection in a more practical setting where deepfake generators evolve over time. Instead of treating detection as a stationary benchmark, the dataset groups data into sequential tasks so that models can be evaluated on adaptation, retention, and generalization.

The benchmark is assembled from previously released open-source deepfake datasets and generation sources, rather than being collected from a single acquisition pipeline.

## Intended Uses

CDDB is intended for research use in:

- deepfake detection
- continual learning
- incremental learning
- robustness analysis under source shift
- benchmarking anti-forgetting strategies

It is particularly suitable for evaluating methods that must maintain performance on previously seen deepfake sources while adapting to newly introduced manipulations.

## Out-of-Scope Uses

This dataset is not intended to:

- certify production-ready deepfake detectors
- serve as a complete benchmark for all real-world manipulations
- support identity, biometric, or surveillance decisions
- be used in safety-critical or high-stakes automated decision systems without additional validation

## Considerations and Limitations

- CDDB is assembled from multiple existing datasets and generation methods, so its licensing and redistribution conditions may depend on the underlying sources.
- The benchmark reflects the manipulation methods and dataset availability at the time of the original publication.
- Performance on CDDB does not guarantee robustness to newer generative models or real-world post-processing pipelines.
- Models trained on this dataset may learn source-specific artifacts instead of general manipulation cues.

## Licensing Information

The license for this redistributed archive is currently marked as unknown. Users should verify the licensing and redistribution terms of the original CDDB release and all upstream component datasets before commercial use or redistribution.

## Citation

If you use this dataset, please cite the original paper:

```bibtex
@InProceedings{Li_2023_WACV,
  author    = {Li, Chuqiao and Huang, Zhiwu and Paudel, Danda Pani and Wang, Yabin and Shahbazi, Mohamad and Hong, Xiaopeng and Van Gool, Luc},
  title     = {A Continual Deepfake Detection Benchmark: Dataset, Methods, and Essentials},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2023},
  pages     = {1339--1349}
}
```

## Acknowledgements

This dataset card is based on the original WACV 2023 paper and the official project repository. Credit for the benchmark, data construction, and experimental protocol belongs to the original authors.