---
license: cc-by-4.0
task_categories:
- object-detection
tags:
- 3d-object-detection
- 3d-bounding-box
- monocular-3d
- in-the-wild
- depth-estimation
pretty_name: WildDet3D-Data
size_categories:
- 1M<n<10M
---

# WildDet3D-Data

## Step 1: Extract Depth and Camera Data

### Extract Depth Maps

After extraction, you should have per-image files such as `depth/v3det_human/v3det_train_.npz`, etc.

### Extract Camera Parameters

```bash
mkdir -p camera && cd camera
for f in ../packed/camera_*.tar.gz; do tar xzf "$f"; done
cd ..
```

After extraction, you should have `depth/{split}/` and `camera/{split}/` directories with individual files per image.

## Step 2: Download Source Images

Images must be downloaded from their original sources and organized into the following structure:

```
images/
├── coco_train/    # COCO train2017 (includes LVIS images)
├── obj365_train/  # Objects365 training
└── v3det_train/   # V3Det training
```

### COCO train2017

```bash
wget http://images.cocodataset.org/zips/train2017.zip
unzip train2017.zip
mkdir -p images/coco_train
mv train2017/* images/coco_train/
```

### Objects365

```bash
# Objects365: download from https://www.objects365.org/
mkdir -p images/obj365_train
# Images should be named: obj365_train_000000XXXXXX.jpg
```

### V3Det

Used by: Train V3Det splits only

```bash
# V3Det: download from https://v3det.openxlab.org.cn/
mkdir -p images/v3det_train
# Directory structure: images/v3det_train/{category_folder}/{image}.jpg
# e.g., images/v3det_train/Q100507578/28_284_50119550013_7d06ded882_c.jpg
```

| Source | Directory |
|--------|-----------|
| COCO train2017 | `images/coco_train/` |
| Objects365 train | `images/obj365_train/` |
| V3Det train | `images/v3det_train/` |

## Annotation Format (COCO3D)

Each annotation JSON follows the COCO3D format:

```json
{
  "info": {"name": "InTheWild_v3_val"},
  "images": [{
    "id": 0,
    "width": 375,
    "height": 500,
    "file_path": "images/coco_val/000000000724.jpg",
    "K": [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]
  }],
  "categories": [{"id": 0, "name": "stop sign"}],
  "annotations": [{
    "id": 0,
    "image_id": 0,
    "category_id": 0,
    "category_name": "stop sign",
"bbox2D_proj": [x1, y1, x2, y2], "center_cam": [cx, cy, cz], "dimensions": [width, height, length], "R_cam": [[r00, r01, r02], [r10, r11, r12], [r20, r21, r22]], "bbox3D_cam": [[x, y, z], ...], "valid3D": true }] } ``` **Image fields:** - **`K`**: Camera intrinsic matrix (3x3), at original image resolution - **`file_path`**: Relative path to the source image **Annotation fields:** - **`valid3D`**: `true` = valid 3D annotation, `false` = 3D box is filtered out (see note below) - **`center_cam`**: 3D box center in camera coordinates (meters) - **`dimensions`**: `[width, height, length]` in meters (Omni3D convention) - **`R_cam`**: 3x3 rotation matrix in camera coordinates (gravity-aligned, local Y = up) - **`bbox3D_cam`**: 8 corner points of the 3D bounding box in camera coordinates - **`bbox2D_proj`**: 2D bounding box `[x1, y1, x2, y2]` at original image resolution **Important: `valid3D` filtering.** Each annotation always has a valid 2D bounding box (`bbox2D_proj`), but the 3D box fields (`center_cam`, `dimensions`, `R_cam`, `bbox3D_cam`) should only be used when `valid3D=true`. Annotations with `valid3D=false` have 3D boxes that were filtered out due to quality checks (human rejection, size/geometry filtering, or depiction filtering) — their 3D fields contain placeholder values and should be ignored. The annotation counts in the overview table refer to `valid3D=true` annotations only. For training, filter annotations by `valid3D`: ```python for ann in data["annotations"]: if ann["valid3D"]: # Use both 2D and 3D annotations ... else: # 2D box is still valid, but skip 3D box ... 
```

## Which Files to Use

| Use Case | Annotation Files |
|----------|------------------|
| Train (Human only) | `InTheWild_v3_train_human_only.json` + `InTheWild_v3_v3det_human_only.json` |
| Train (Essential) | `InTheWild_v3_train_human.json` + `InTheWild_v3_v3det_human.json` |
| Train (Synthetic) | `InTheWild_v3_train_synthetic.json` + `InTheWild_v3_v3det_synthetic.json` |
| Train (All) | Essential + Synthetic (all four files) |

## License

- **Annotations**: CC BY 4.0

## Paper

[WildDet3D: Scaling Promptable 3D Detection in the Wild](https://arxiv.org/abs/2604.08626)
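As a quick end-to-end check, the annotation format above can be exercised like this. The snippet builds a tiny in-memory record in the documented COCO3D layout (all values are made up for illustration, not taken from the dataset) and projects the `bbox3D_cam` corners into the image with the pinhole intrinsics `K`:

```python
import numpy as np

# Illustrative record in the COCO3D layout described above
# (synthetic values, not from the dataset).
data = {
    "images": [{
        "id": 0, "width": 640, "height": 480,
        "file_path": "images/coco_train/000000000001.jpg",
        "K": [[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]],
    }],
    "annotations": [{
        "id": 0, "image_id": 0, "category_id": 0,
        "valid3D": True,
        "bbox2D_proj": [280.0, 200.0, 360.0, 280.0],
        "center_cam": [0.0, 0.0, 5.0],
        "dimensions": [0.8, 0.8, 0.1],
        "bbox3D_cam": [[-0.4, -0.4, 4.95], [0.4, -0.4, 4.95],
                       [0.4, 0.4, 4.95], [-0.4, 0.4, 4.95],
                       [-0.4, -0.4, 5.05], [0.4, -0.4, 5.05],
                       [0.4, 0.4, 5.05], [-0.4, 0.4, 5.05]],
    }],
}

K = {img["id"]: np.array(img["K"]) for img in data["images"]}

def project_corners(ann):
    """Project the 8 bbox3D_cam corners to pixels with the pinhole model."""
    pts = np.array(ann["bbox3D_cam"])   # (8, 3) camera-space points, meters
    uvw = pts @ K[ann["image_id"]].T    # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]     # perspective divide -> (8, 2) pixels

for ann in data["annotations"]:
    if not ann["valid3D"]:
        continue  # 3D fields are placeholders when valid3D is false
    uv = project_corners(ann)
    print(uv.min(axis=0), uv.max(axis=0))
```

The projected corner extent should roughly match `bbox2D_proj`, which is a useful sanity check when wiring up a loader.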
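The per-image depth files are plain NumPy `.npz` archives. Since the array keys inside them are not documented here, a safe pattern is to inspect `files` before hard-coding a key; this sketch creates a stand-in file (the `depth` key and array shape are assumptions for illustration) to show the pattern:

```python
import numpy as np

# Stand-in .npz with an assumed key name and shape, for illustration only.
np.savez("example_depth.npz", depth=np.ones((480, 640), dtype=np.float32))

with np.load("example_depth.npz") as f:
    print(f.files)         # list the stored array names first
    depth = f[f.files[0]]  # then load by the key you actually find
print(depth.shape, depth.dtype)
```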
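For completeness, here is a hedged sketch of how `center_cam`, `dimensions`, and `R_cam` relate to `bbox3D_cam`. The mapping of `[width, height, length]` onto the box's local x/y/z axes and the corner ordering are assumptions; verify them against the dataset's own `bbox3D_cam` values before relying on this:

```python
import numpy as np

def corners_from_box(center_cam, dimensions, R_cam):
    """Reconstruct 8 box corners from center, dimensions, and rotation.

    Assumes dimensions [width, height, length] span the box's local
    x/y/z axes (Omni3D-style); corner ordering here is arbitrary.
    """
    w, h, l = dimensions
    # All 8 sign combinations of the half-extents in the box's local frame.
    signs = np.array([[sx, sy, sz] for sx in (-1, 1)
                                   for sy in (-1, 1)
                                   for sz in (-1, 1)], dtype=float)
    local = signs * (0.5 * np.array([w, h, l]))
    # Rotate local offsets into camera coordinates, then translate.
    return np.asarray(center_cam) + local @ np.asarray(R_cam).T

# Identity rotation: corners are axis-aligned around the center.
c = corners_from_box([0.0, 0.0, 5.0], [2.0, 1.0, 4.0], np.eye(3))
print(c.shape)  # (8, 3)
```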