---
license: cc-by-nc-sa-4.0
license_name: scared-c-research-only
license_link: LICENSE
tags:
- medical
- surgical
- endoscopy
- depth-estimation
- rgb-d
- robotics
- computer-vision
- structure-from-motion
- da-vinci
pretty_name: SCARED-C
size_categories:
- 10K<n<100K
source_datasets:
- extended|scared
task_categories:
- depth-estimation
- image-to-image
configs:
- config_name: default
  data_files:
  - split: train
    path: "dataset_*/keyframe_*/data/frame_data.tar.gz"
  - split: validation
    path: "dataset_*/keyframe_*/data/frame_data.tar.gz"
---
 
# Dataset Card for SCARED-C

SCARED-C is a corrected version of the [SCARED](https://endovissub2019-scared.grand-challenge.org/) endoscopic depth estimation dataset. By replacing the original kinematics-based camera poses with poses re-estimated through Structure-from-Motion (COLMAP) followed by a metric scale recovery step, SCARED-C expands the number of reliable RGB-D pairs in SCARED from 35 keyframes to **17,135 frames**, a roughly 490× increase in reliably labeled real-tissue surgical data.
 
 
![SCARED vs SCARED-C](https://example.com/teaser.png)
*Left: original SCARED depth maps misaligned with RGB due to robot kinematics errors. Right: corrected SCARED-C depth maps after COLMAP-based pose re-estimation.*

## Dataset Details

### Dataset Description

The original SCARED dataset provides ground-truth depth maps for ex-vivo porcine abdominal scenes captured using a structured light sensor mounted on a da Vinci endoscope. Because the sensor can only capture depth from a static viewpoint, the dataset was extended by moving the endoscope arm and projecting the keyframe depth map into neighboring video frames using the robot's forward kinematics. In practice, the da Vinci system is cable-driven, and the resulting kinematics errors cause severe misalignment between the projected depth maps and their corresponding RGB images, rendering most non-keyframe data unsuitable for training.

SCARED-C addresses this limitation by:

1. Running [COLMAP](https://colmap.github.io/) (Structure-from-Motion) on the left-camera frames of each video sequence to re-estimate camera poses directly from image data, with the keyframe image included so that it is registered in the same reconstruction.
2. Recovering metric scale via a simple algorithm that aligns the COLMAP sparse reconstruction to the structured-light keyframe depth map (using the median ratio of metric to unscaled depth).
3. Reprojecting the keyframe depth map through the corrected metric poses to produce reliable RGB-D pairs across the full video sequence.

After excluding frames that COLMAP fails to co-register, as well as the entirety of datasets 4 and 5 (which have known calibration issues in the original release), the resulting dataset contains 17,135 RGB-D pairs across 25 keyframe sequences spanning datasets 1, 2, 3, 6, and 7.

- **Language(s):** N/A (image data)
- **License:** Inherits the license of the original SCARED dataset. Intended for non-commercial academic research use only. Users should verify that they have access rights to the original SCARED data before using SCARED-C.

### Dataset Sources

- **Repository (data):** https://huggingface.co/datasets/juseonghan/SCARED-C
- **Repository (code):** https://github.com/juseonghan/SCARED-C
- **Paper:** SCARED-C: Corrected Camera Poses for Endoscopic Depth Estimation (Han et al.)
- **Original dataset:** [Allan et al., 2021](https://arxiv.org/abs/2101.01133): Stereo Correspondence and Reconstruction of Endoscopic Data Challenge

## Uses

### Direct Use

SCARED-C is intended for research in surgical computer vision, particularly:

- Training and evaluating monocular and stereo **depth estimation** models on real surgical tissue.
- **RGB-D self-supervised pretraining** for surgical scene understanding.
- Benchmarking **Structure-from-Motion** and **SLAM** methods in endoscopic settings.
- Evaluating zero-shot generalization of foundation models (e.g., FoundationStereo, Depth Anything) to surgical imagery.

### Out-of-Scope Use

- **Clinical use of any kind.** This dataset consists of ex-vivo porcine abdominal scenes and is not suitable for training models intended for direct clinical deployment without further validation, regulatory approval, and in-vivo data.
- **Training in-the-wild depth models without surgical-domain context.** The imagery is highly domain-specific (specular tissue, narrow field of view, controlled lighting); models trained solely on SCARED-C should not be expected to generalize to natural images.
- **Commercial use** is restricted by the underlying SCARED license terms.

## Dataset Structure

The format of SCARED-C is **identical to the original SCARED dataset**, with all `dataset_x/keyframe_y/data/frame_data.tar.gz` files replaced by corrected poses. Three additional files are provided per sequence for convenience:

```
dataset_x/
  keyframe_y/
    data/
      frame_data.tar.gz        # Corrected per-frame poses + reprojected depth maps
      rgb_frames.tar.gz        # Pre-extracted RGB frames corresponding to frame_log.json
      rgb.mp4                  # Original video, included for completeness
    frame_log.json             # List of frames successfully co-registered by COLMAP
    intrinsics_colmap.yaml     # COLMAP-refined intrinsics (readable via cv2.FileStorage)
    endoscope_calibration.yaml # Original stereo calibration (unchanged)
```
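
The `.tar.gz` archives can be unpacked with Python's standard library. A minimal sketch (the helper name and paths are illustrative, not part of the dataset tooling):

```python
import tarfile
from pathlib import Path

def extract_archive(archive_path: str, out_dir: str) -> list[str]:
    """Extract a .tar.gz archive and return the extracted member names."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    with tarfile.open(archive_path, "r:gz") as tar:
        members = tar.getnames()
        tar.extractall(out)  # archives come from a trusted source here
    return members

# e.g. extract_archive("dataset_1/keyframe_1/data/rgb_frames.tar.gz", "frames/")
```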

**`frame_log.json`** lists all frames that COLMAP successfully co-registered with the keyframe. The vast majority of the original frames are retained; the most notable exception is `dataset_2/keyframe_1`, which drops from 88 to 11 frames due to limited visual overlap.
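
Loading the registered-frame list is a one-liner; the sketch below assumes `frame_log.json` holds a flat JSON array of frame identifiers (the exact schema is not spelled out here, so adjust if the list is nested under a key):

```python
import json

def load_registered_frames(frame_log_path: str) -> list[str]:
    """Load the COLMAP-registered frame list from frame_log.json.

    Assumes a flat JSON array of frame names; adapt if the actual
    schema nests the list under a key.
    """
    with open(frame_log_path) as f:
        frames = json.load(f)
    return list(frames)
```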

**`intrinsics_colmap.yaml`** contains the intrinsics refined by COLMAP during bundle adjustment. We did not observe a meaningful difference in downstream performance versus the original intrinsics, but they are included for completeness. They can be loaded with `cv2.FileStorage()` in the same way as `endoscope_calibration.yaml`; note that both files are needed if you require the stereo calibration.

**`rgb_frames.tar.gz`** is provided for convenience: these are the registered frames extracted from `rgb.mp4`, in correspondence with `frame_log.json`.

### Splits

We provide a suggested ~70/30 train/validation split to enable consistent benchmarking across future work. Sequence codes follow the format `{dataset}_{keyframe}`:

| Split | Sequences | Frames |
|---|---|---|
| **Train** | 1_1, 1_3, 2_2, 2_4, 3_1, 3_2, 3_3, 6_1, 6_2, 6_3, 7_1, 7_4 | 12,330 |
| **Validation** | 1_2, 1_4, 1_5, 2_1, 2_3, 2_5, 3_4, 3_5, 6_4, 6_5, 7_2, 7_3, 7_5 | 4,805 |
| **Total** | 25 sequences | **17,135** |

The 25 keyframes (with structured-light ground-truth depth) from datasets 1, 2, 3, 6, and 7 are recommended as a held-out test set, since they are not part of any video sequence and therefore never appear in training.
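
The split assignment in the table above can be encoded directly; a small helper (the sets and function name are ours):

```python
# Suggested train/validation split, transcribed from the splits table.
TRAIN_SEQS = {"1_1", "1_3", "2_2", "2_4", "3_1", "3_2", "3_3",
              "6_1", "6_2", "6_3", "7_1", "7_4"}
VAL_SEQS = {"1_2", "1_4", "1_5", "2_1", "2_3", "2_5", "3_4",
            "3_5", "6_4", "6_5", "7_2", "7_3", "7_5"}

def split_of(sequence_code: str) -> str:
    """Map a '{dataset}_{keyframe}' code to its suggested split."""
    if sequence_code in TRAIN_SEQS:
        return "train"
    if sequence_code in VAL_SEQS:
        return "validation"
    raise ValueError(f"Unknown or excluded sequence: {sequence_code}")
```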

## Dataset Creation

### Curation Rationale

Real-tissue depth estimation datasets are scarce in surgical computer vision because collecting ground-truth depth in the operating environment is physically constrained. SCARED is one of the few such datasets with structured-light ground truth, but its kinematics-based pose extension is unreliable, effectively limiting practitioners to the 35 keyframes. SCARED-C was created to recover the latent training signal in the remaining ~17K video frames by replacing the noisy kinematics with more accurate vision-based pose estimates.

### Source Data

#### Data Collection and Processing

The source RGB images and structured-light keyframe depth maps come directly from the original SCARED dataset. Our processing pipeline:

1. **Pose estimation.** For each video sequence, COLMAP is run on the left-camera frames at native resolution (1024 × 1280). Because da Vinci images are natively undistorted, no additional undistortion is applied. The keyframe RGB image is included in the COLMAP input so that the keyframe is registered into the same reconstruction. The provided camera intrinsics initialize bundle adjustment and are refined during optimization.
2. **Scale recovery.** The COLMAP sparse point cloud is projected onto the image plane at the keyframe pose to obtain an unscaled depth map $\hat{D}$. The metric scale factor is computed as $s = \mathrm{median}(D / \hat{D})$, where $D$ is the structured-light keyframe depth. The metric per-frame pose is then $T_i = (R_i, s \cdot \hat{t}_i)$.
3. **Depth reprojection.** The keyframe depth map is reprojected through the corrected metric poses to produce a per-frame depth map for every co-registered video frame.
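
Step 2 can be sketched with NumPy; a minimal illustration, assuming `D` and `D_hat` are aligned depth maps with invalid pixels marked as zero (variable and function names are ours):

```python
import numpy as np

def recover_scale(D: np.ndarray, D_hat: np.ndarray) -> float:
    """Median-ratio metric scale between a structured-light depth map D
    and an unscaled COLMAP depth map D_hat (zeros mark invalid pixels)."""
    valid = (D > 0) & (D_hat > 0)
    return float(np.median(D[valid] / D_hat[valid]))

def to_metric_pose(R: np.ndarray, t_hat: np.ndarray, s: float):
    """Scale the translation of an up-to-scale pose (R, t_hat) to metric:
    T_i = (R_i, s * t_hat_i)."""
    return R, s * t_hat
```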

Datasets 4 and 5 from the original SCARED release have known calibration issues and are excluded.

#### Who are the source data producers?

The original SCARED data was collected by Intuitive Surgical and released through the EndoVis sub-challenge at MICCAI 2019. The scenes are ex-vivo porcine abdomens captured with a structured-light sensor mounted on a da Vinci endoscope.

### Annotations

#### Annotation process

No human annotation was performed. Depth labels are derived from the structured-light keyframe measurements and reprojected through COLMAP-estimated camera poses.

#### Personal and Sensitive Information

The dataset contains no human subjects. All scenes are ex-vivo porcine tissue.

### Recommendations

- Use the provided train/validation split for consistent benchmarking.
- Hold out the 25 keyframes as a test set; they are not part of any video sequence.
- For depth evaluation, report relative error metrics (Abs Rel, RMSE, δ₁) in addition to absolute metric error, since the corrected depth maps depend on the median-based scale recovery.
- When using SCARED-C alongside other endoscopic datasets, account for the ex-vivo porcine domain gap.
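
The standard depth metrics mentioned above can be computed as follows; a short reference sketch, assuming `pred` and `gt` are metric depth arrays with zeros marking invalid ground truth (the function name is ours):

```python
import numpy as np

def depth_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Abs Rel, RMSE, and delta_1 over pixels with valid ground truth."""
    valid = gt > 0
    p, g = pred[valid], gt[valid]
    abs_rel = float(np.mean(np.abs(p - g) / g))
    rmse = float(np.sqrt(np.mean((p - g) ** 2)))
    ratio = np.maximum(p / g, g / p)       # symmetric ratio per pixel
    delta1 = float(np.mean(ratio < 1.25))  # fraction within 1.25x of GT
    return {"abs_rel": abs_rel, "rmse": rmse, "delta1": delta1}
```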

## Validation

Two indirect experiments validate the corrected poses (full results in the paper):

**Stereo disparity evaluation** with FoundationStereo:

| Dataset | EPE ↓ | Abs Rel ↓ | δ₁ ↑ |
|---|---|---|---|
| Original SCARED | 6.062 | 0.046 | 96.3 |
| **SCARED-C (ours)** | **1.912** | **0.026** | **99.0** |
| Keyframes only | 0.998 | 0.014 | 99.7 |

**Monocular depth estimation** (U-Net, evaluated on 25 held-out keyframes):

| Training data | Abs Rel ↓ | RMSE ↓ | δ₁ ↑ |
|---|---|---|---|
| Original SCARED | 0.856 | 0.283 | 18.5 |
| **SCARED-C (ours)** | **0.528** | **0.184** | **26.3** |

## Citation

**BibTeX:**

```bibtex
@article{han2026scaredc,
  title   = {SCARED-C: Corrected Camera Poses for Endoscopic Depth Estimation},
  author  = {Han, John J. and Schmidt, Adam and Allan, Max and Wu, Jie Ying and Mohareri, Omid},
  journal = {arXiv preprint},
  year    = {2026}
}
```

If you use SCARED-C, please also cite the original SCARED dataset:

```bibtex
@article{allan2021stereo,
  title   = {Stereo correspondence and reconstruction of endoscopic data challenge},
  author  = {Allan, Max and Mcleod, Jonathan and Wang, Congcong and Rosenthal, Jean Claude and Hu, Zhenglei and Gard, Niklas and Eisert, Peter and Fu, Ke Xue and Zeffiro, Trevor and Xia, Wenyao and others},
  journal = {arXiv preprint arXiv:2101.01133},
  year    = {2021}
}
```

And, if applicable, COLMAP:

```bibtex
@inproceedings{schoenberger2016sfm,
  title     = {Structure-from-Motion Revisited},
  author    = {Sch\"{o}nberger, Johannes Lutz and Frahm, Jan-Michael},
  booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2016}
}
```

## More Information

- Scale-recovery and processing code: https://github.com/juseonghan/SCARED-C
- Original SCARED challenge: https://endovissub2019-scared.grand-challenge.org/

## Dataset Card Authors

John J. Han

## Dataset Card Contact

Please open an issue on the [GitHub repository](https://github.com/juseonghan/SCARED-C/issues).