---
license: other
license_name: scared-c-research-only
license_link: LICENSE
tags:
  - medical
  - surgical
  - endoscopy
  - depth-estimation
  - rgb-d
  - robotics
  - computer-vision
  - structure-from-motion
  - da-vinci
pretty_name: SCARED-C
size_categories:
  - 10K<n<100K
source_datasets:
  - extended|scared
task_categories:
  - depth-estimation
  - image-to-image
configs:
  - config_name: default
    data_files:
      - split: train
        path: dataset_*/keyframe_*/data/frame_data.tar.gz
      - split: validation
        path: dataset_*/keyframe_*/data/frame_data.tar.gz
---

Dataset Card for SCARED-C

SCARED-C is a corrected version of the SCARED endoscopic depth estimation dataset. By replacing the original kinematics-based camera poses with poses re-estimated through Structure-from-Motion (COLMAP) followed by a metric scale recovery step, SCARED-C expands the number of reliable RGB-D pairs in SCARED from 35 keyframes to 17,135 frames — a roughly 490× increase in reliably labeled real-tissue surgical data.

Figure: SCARED vs SCARED-C. Left: original SCARED depth maps misaligned with RGB due to robot kinematics errors. Right: corrected SCARED-C depth maps after COLMAP-based pose re-estimation.

Dataset Details

Dataset Description

The original SCARED dataset provides ground-truth depth maps for ex-vivo porcine abdominal scenes captured using a structured light sensor mounted on a da Vinci endoscope. Because the sensor can only capture depth from a static viewpoint, the dataset was extended by moving the endoscope arm and projecting the keyframe depth map into neighboring video frames using the robot's forward kinematics. In practice, the da Vinci system is cable-driven, and the resulting kinematics errors cause severe misalignment between the projected depth maps and their corresponding RGB images, rendering most non-keyframe data unsuitable for training.

SCARED-C addresses this limitation by:

  1. Running COLMAP (Structure-from-Motion) on the left camera frames of each video sequence, together with the keyframe image, to re-estimate camera poses directly from image data.
  2. Recovering metric scale via a simple algorithm that aligns the COLMAP sparse reconstruction to the structured-light keyframe depth map (using the median ratio of metric-to-unscaled depth).
  3. Reprojecting the keyframe depth map through the corrected metric poses to produce reliable RGB-D pairs across the full video sequence.
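Steps 2 and 3 above can be sketched in a few lines of NumPy. This is a minimal illustration under simplifying assumptions (a single keyframe depth map, pinhole intrinsics `K`, and a 4x4 metric relative pose), not the exact released pipeline; all function and variable names are hypothetical.

```python
import numpy as np

def recover_metric_scale(colmap_depth, metric_depth, valid):
    """Step 2: median ratio of structured-light (metric) depth to
    unscaled COLMAP depth at the keyframe, over valid pixels."""
    return np.median(metric_depth[valid] / colmap_depth[valid])

def reproject_depth(depth_kf, K, T_kf_to_i, shape):
    """Step 3: warp the keyframe depth map into frame i using pinhole
    intrinsics K and the metric-scaled relative pose T_kf_to_i (4x4)."""
    h, w = depth_kf.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_kf.ravel()
    # Back-project keyframe pixels to 3D, transform into frame i.
    pts = np.linalg.inv(K) @ np.vstack([u.ravel() * z, v.ravel() * z, z])
    pts_i = (T_kf_to_i @ np.vstack([pts, np.ones_like(z)]))[:3]
    # Project into frame i and splat depths at the nearest pixel.
    uv = K @ pts_i
    depth_i = np.full(shape, np.nan)
    valid = (uv[2] > 0) & (z > 0)
    px = np.round(uv[0][valid] / uv[2][valid]).astype(int)
    py = np.round(uv[1][valid] / uv[2][valid]).astype(int)
    inb = (px >= 0) & (px < shape[1]) & (py >= 0) & (py < shape[0])
    depth_i[py[inb], px[inb]] = pts_i[2][valid][inb]
    return depth_i
```

A real implementation would additionally handle occlusions (z-buffering when multiple keyframe pixels land on the same target pixel) and lens distortion.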

After excluding frames that COLMAP fails to co-register and the entirety of datasets 4 and 5 (which have known calibration issues in the original release), the resulting dataset contains 17,135 RGB-D pairs across 12 keyframe sequences spanning datasets 1, 2, 3, 6, and 7.

  • Language(s): N/A (image data)
  • License: Inherits the license of the original SCARED dataset. Intended for non-commercial academic research use only. Users should verify they have access rights to the original SCARED data before using SCARED-C.

Dataset Sources

Uses

Direct Use

SCARED-C is intended for research in surgical computer vision, particularly:

  • Training and evaluating monocular and stereo depth estimation models on real surgical tissue.
  • Benchmarking Structure-from-Motion and SLAM methods in endoscopic settings.
  • Evaluating zero-shot generalization of foundation models (e.g., FoundationStereo, Depth Anything) to surgical imagery.

Out-of-Scope Use

  • Clinical use of any kind. This dataset consists of ex-vivo porcine abdominal scenes and is not suitable for training models intended for direct clinical deployment without further validation, regulatory approval, and in-vivo data.
  • Commercial use is restricted by the underlying SCARED license terms.

Dataset Structure

The format of SCARED-C is identical to the original SCARED dataset, with each dataset_x/keyframe_y/data/frame_data.tar.gz replaced by a version containing the corrected poses. Three additional files are provided per sequence for convenience:

```
dataset_x/
  keyframe_y/
    data/
      frame_data.tar.gz       # Corrected per-frame poses + reprojected depth maps
      rgb_frames.tar.gz       # Pre-extracted RGB frames corresponding to frame_log.json
      rgb.mp4                 # Original video, included for completeness
    frame_log.json            # List of frames successfully co-registered by COLMAP
    intrinsics_colmap.yaml    # COLMAP-refined intrinsics (readable via cv2.FileStorage)
    endoscope_calibration.yaml  # Original stereo calibration (unchanged)
```

frame_log.json lists all frames that COLMAP successfully co-registered with the keyframe. The vast majority of original frames are retained; the most notable exception is dataset_2/keyframe_1, which drops from 88 to 11 frames due to limited visual overlap.

intrinsics_colmap.yaml contains the intrinsics refined by COLMAP during bundle adjustment. We did not observe a meaningful difference in downstream performance versus the original intrinsics, but they are included for completeness. They can be loaded with cv2.FileStorage() in the same way as endoscope_calibration.yaml; both files are needed if you require stereo calibration.

rgb_frames.tar.gz is provided as a convenience — these are the registered frames extracted from rgb.mp4, in correspondence with frame_log.json.
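A hypothetical loading helper for these convenience files is sketched below. It assumes frame_log.json is a flat JSON list of frame names; inspect the actual file to confirm its schema before relying on this.

```python
import json
import tarfile
from pathlib import Path

def load_sequence(seq_dir):
    """Read the co-registered frame list and extract the RGB frames
    for one dataset_x/keyframe_y sequence directory.

    Assumes frame_log.json is a JSON list of frame names (hypothetical
    schema) and rgb_frames.tar.gz sits under data/.
    """
    seq_dir = Path(seq_dir)
    frames = json.loads((seq_dir / "frame_log.json").read_text())
    with tarfile.open(seq_dir / "data" / "rgb_frames.tar.gz") as tar:
        tar.extractall(seq_dir / "data" / "rgb_frames")
    return frames
```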

We also recommend the SCARED-toolkit for data processing and extraction.

Splits

We provide a suggested ~70/30 train/validation split to enable consistent benchmarking across future work. Sequence codes follow the format {dataset}_{keyframe}:

| Split | Sequences | Frames |
|-------|-----------|--------|
| Train | 1_1, 1_3, 2_2, 2_4, 3_1, 3_2, 3_3, 6_1, 6_2, 6_3, 7_1, 7_4 | 12,330 |
| Validation | 1_2, 1_4, 1_5, 2_1, 2_3, 2_5, 3_4, 3_5, 6_4, 6_5, 7_2, 7_3, 7_5 | 4,805 |
| Total | 25 sequences | 17,135 |

The 25 keyframes (with structured-light ground-truth depth) from datasets 1, 2, 3, 6, and 7 are recommended as a held-out test set, since they are not part of any video sequence and therefore never appear in training.
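For reproducibility, the suggested split can be written down directly in code. A minimal sketch, transcribing the split table above:

```python
# Suggested train/validation split, using {dataset}_{keyframe} codes.
TRAIN = ["1_1", "1_3", "2_2", "2_4", "3_1", "3_2", "3_3",
         "6_1", "6_2", "6_3", "7_1", "7_4"]
VAL = ["1_2", "1_4", "1_5", "2_1", "2_3", "2_5", "3_4", "3_5",
       "6_4", "6_5", "7_2", "7_3", "7_5"]

# Sanity checks: 25 sequences total, no overlap between splits.
assert len(TRAIN) + len(VAL) == 25
assert not set(TRAIN) & set(VAL)
```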

Dataset Creation

Source Data

Data Collection and Processing

The source RGB images and structured-light keyframe depth maps come directly from the original SCARED dataset. Our processing pipeline is specified in the associated preprint.

Datasets 4 and 5 from the original SCARED release have known calibration issues and are excluded.

Who are the source data producers?

The original SCARED data was collected by Intuitive Surgical and released through the EndoVis sub-challenge at MICCAI 2019. The scenes are ex-vivo porcine abdomens captured with a structured-light sensor mounted on a da Vinci endoscope.

Annotations

Annotation process

No human annotation. Depth labels are derived from the structured-light keyframe (sensor-measured) and reprojected through COLMAP-estimated camera poses.

Personal and Sensitive Information

The dataset contains no human subjects. All scenes are ex-vivo porcine tissue.

Validation

Please see the associated preprint for validation experiments.

Citation

BibTeX:

```bibtex
@article{han2026scaredc,
  title   = {SCARED-C: Corrected Camera Poses for Endoscopic Depth Estimation},
  author  = {Han, John J. and Schmidt, Adam and Allan, Max and Wu, Jie Ying and Mohareri, Omid},
  journal = {arXiv preprint},
  year    = {2026}
}
```

More Information

Dataset Card Authors

John J. Han

Dataset Card Contact

Please open an issue on the GitHub repository.