How to run prep_refdrone_data.py

This script prepares the RefDrone test split for public inference. It is strictly web-only: every annotation and image is downloaded from a public mirror on each run. The script never reads any pre-existing local copy of RefDrone or VisDrone elsewhere on disk.

Prerequisites

  • Python 3.8 or later
  • Internet access (for HuggingFace and the VisDrone image mirrors)
  • About 1 GB of free disk for the temporary archive plus the extracted images

Web sources

  • Annotation JSON (RefDrone_test_mdetr.json), ~3.3 MB:
    https://huggingface.co/datasets/sunzc-sunny/RefDrone/resolve/main/RefDrone_test_mdetr.json
  • Image archive (VisDrone2019-DET-test-dev.zip), primary mirror, ~297 MB:
    https://github.com/ultralytics/assets/releases/download/v0.0.0/VisDrone2019-DET-test-dev.zip (Ultralytics GitHub Release)
  • Image archive, fallback mirror, ~297 MB:
    Google Drive id 1PFdW_VFSCfZ_sTSZAGjQdifF_Xd5mf0V via gdown (canonical link from https://github.com/VisDrone/VisDrone-Dataset)

The image-archive mirrors are tried in the order listed. If one fails (including the well-known Google Drive "too many users" quota error), the script automatically falls through to the next.
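The fall-through behavior amounts to a try-in-order loop. A minimal sketch of the idea (the function and parameter names here are illustrative, not the script's actual internals):

```python
def download_with_fallback(mirrors, fetch):
    """Try each (name, url) mirror in order; return (name, result) from the
    first one that succeeds. `fetch` is any callable that downloads a URL
    and raises on failure (HTTP errors, quota throttling, etc.)."""
    errors = []
    for name, url in mirrors:
        try:
            return name, fetch(url)
        except Exception as exc:
            # A failure (e.g. a Google Drive quota error) falls through
            # to the next mirror instead of aborting immediately.
            errors.append(f"{name}: {exc}")
    raise RuntimeError("All image-zip mirrors failed:\n" + "\n".join(errors))
```

Only when every mirror has raised does the loop itself raise, which is what produces the "All image-zip mirrors failed" abort described under Troubleshooting.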

Step 1 — Navigate to the repo root

cd /path/to/iib

The script's repo root is the iib/ directory. All commands below are run from there.

Step 2 — Install gdown (only required for the Google Drive fallback)

pip install gdown

If the primary HTTPS mirror works, gdown is not invoked. Installing it up-front is still recommended so the fallback works on machines where the primary mirror is unreachable.

Step 3 — Run the script

python3 scripts/refdrone/prep_refdrone_data.py

Flags:

  • --force      Re-download the annotation and image archive even if outputs exist.
  • --keep-zip   Keep the downloaded VisDrone2019-DET-test-dev.zip after extraction (default: deleted).
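The two flags map onto a small argparse setup. A minimal sketch, assuming standard argparse conventions (the help strings here are illustrative):

```python
import argparse

def build_parser():
    parser = argparse.ArgumentParser(
        description="Prepare the RefDrone test split (web-only).")
    parser.add_argument("--force", action="store_true",
                        help="re-download annotation and image archive "
                             "even if outputs exist")
    # argparse exposes --keep-zip as args.keep_zip (dash becomes underscore)
    parser.add_argument("--keep-zip", action="store_true",
                        help="keep VisDrone2019-DET-test-dev.zip after extraction")
    return parser
```

Both are plain boolean switches, so running with no flags gives the default behavior (reuse prior outputs, delete the zip).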

What the script does

  1. Downloads RefDrone_test_mdetr.json from HuggingFace (~3.3 MB).
  2. Parses the annotation and writes refdrone_data.jsonl (3,276 rows, one per referring expression; no ground-truth bounding boxes are ever written to this file).
  3. Downloads VisDrone2019-DET-test-dev.zip (~297 MB) from the first working mirror, validates size, validates SHA-256, extracts the 1,503 required images into images/, then deletes the zip (unless --keep-zip).
  4. Writes a data_summary.json report under reports/ recording which mirror was used.
  5. Hard-validates final counts: exactly 3,276 JSONL rows and exactly 1,503 .jpg files. Any mismatch aborts with a non-zero exit code.
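The hard validation in step 5 reduces to two exact counts over the output root. A sketch of that check, assuming the output layout described in this document (the function name is illustrative):

```python
import pathlib

EXPECTED_ROWS = 3276    # one JSONL row per referring expression
EXPECTED_IMAGES = 1503  # required .jpg files under images/

def validate_outputs(root):
    """Return a list of problems; an empty list means the counts match."""
    root = pathlib.Path(root)
    rows = sum(1 for _ in (root / "refdrone_data.jsonl").open())
    images = len(list((root / "images").glob("*.jpg")))
    problems = []
    if rows != EXPECTED_ROWS:
        problems.append(f"expected {EXPECTED_ROWS} JSONL rows, found {rows}")
    if images != EXPECTED_IMAGES:
        problems.append(f"expected {EXPECTED_IMAGES} .jpg files, found {images}")
    return problems
```

On any non-empty problem list the script would print the issues and exit non-zero, which is the "Final-validation failure" case under Troubleshooting.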

Output structure

Outputs are written under the repo root at:

iib/LMUData/Spatial/2d_referring_expressions/refdrone/
├── refdrone_data.jsonl                       # 3,276 rows
├── annotations_raw/
│   └── RefDrone_test_mdetr.json              # ~3.3 MB
├── images/                                    # 1,503 .jpg files
└── reports/
    └── data_summary.json

Step 4 — Verify success

ROOT=iib/LMUData/Spatial/2d_referring_expressions/refdrone

# Annotation row count — must be 3276
wc -l "$ROOT/refdrone_data.jsonl"

# Image count — must be 1503
find "$ROOT/images" -name '*.jpg' | wc -l

# Confirm files_missing is 0 in the summary report
python3 -c "import json; print(json.load(open('$ROOT/reports/data_summary.json'))['images']['files_missing'])"

A run is successful only when:

  • refdrone_data.jsonl has exactly 3,276 lines, and
  • images/ contains exactly 1,503 .jpg files, and
  • The script ended with Preparation complete (web-only download verified).

The script enforces these checks itself in step 5 of its own flow and exits non-zero on any deviation.

Re-running

python3 scripts/refdrone/prep_refdrone_data.py            # normal run; reuses prior outputs from this script
python3 scripts/refdrone/prep_refdrone_data.py --force    # ignore prior outputs, re-download from scratch
python3 scripts/refdrone/prep_refdrone_data.py --keep-zip # keep the 297 MB zip after extraction

A re-run after a partial failure is safe: if a previous run already extracted all 1,503 images into images/, the script logs that fact explicitly and skips re-downloading. This is the only on-disk reuse the script performs; external local copies of VisDrone or RefDrone elsewhere on the machine are never consulted.
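The resume check reduces to "are all 1,503 images already present, and was --force not given?". A minimal sketch of that decision (names are illustrative, not the script's actual functions):

```python
import pathlib

EXPECTED_IMAGES = 1503

def images_already_extracted(images_dir, force=False):
    """True when a prior run of this script left a complete images/
    directory and --force was not given, so the ~297 MB zip download
    can be skipped. This is the only on-disk reuse performed."""
    if force:
        return False
    images_dir = pathlib.Path(images_dir)
    return (images_dir.is_dir()
            and len(list(images_dir.glob("*.jpg"))) == EXPECTED_IMAGES)
```

Anything short of the full count (a partially extracted directory, for example) fails the check, so the archive is fetched again rather than trusted.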

Troubleshooting

Primary HTTPS mirror unreachable

[images] Mirror FAILED: ultralytics-github-release — <error>
[images] Trying mirror: google-drive-original (gdrive)

The script automatically advances to the Google Drive fallback. No action needed unless both mirrors fail.

Google Drive quota error

[images] Mirror FAILED: google-drive-original — Google Drive refused the
request (likely quota / 'too many users' throttling): ...

If this is the only failure (primary mirror succeeded), the run already finished correctly. If both mirrors failed, the script aborts with ERROR: All image-zip mirrors failed. Wait an hour and retry, or download the zip in a browser from https://github.com/ultralytics/assets/releases/download/v0.0.0/VisDrone2019-DET-test-dev.zip and re-run; the script will resume.

Behind a firewall that blocks GitHub release assets

Allowlist release-assets.githubusercontent.com (the Microsoft Azure Blob backend that serves GitHub Release assets). If that domain is blocked, the primary mirror cannot complete the download even after the redirect from github.com resolves.

gdown not installed

ERROR: gdown is required ... Install it with: pip install gdown

Run pip install gdown. The fallback Google Drive mirror needs it; the primary HTTPS mirror does not.

Checksum mismatch on the downloaded zip

ERROR: Checksum mismatch for VisDrone2019-DET-test-dev.zip
  The file has been deleted.  Re-run the script to download again.

The download was truncated or the upstream file changed. The corrupt zip has already been removed; just re-run the script. The script checks the download against each mirror's own SHA-256, so a successful download from any mirror is byte-verified.
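Byte-level verification of a ~297 MB archive is a streaming SHA-256 compare. A sketch of the idea, including the delete-on-mismatch behavior described above (`expected_sha256` is a placeholder argument, not the archive's actual digest):

```python
import hashlib
import pathlib

def verify_sha256(path, expected_sha256, chunk_size=1 << 20):
    """Hash the file in 1 MiB chunks (no need to hold 297 MB in memory).
    On mismatch, delete the corrupt download and return False so the
    caller can prompt a re-run."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    if digest.hexdigest() != expected_sha256:
        pathlib.Path(path).unlink()  # remove the corrupt zip
        return False
    return True
```

Streaming in fixed-size chunks keeps memory flat regardless of archive size, and deleting on failure guarantees a later re-run starts from a clean download.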

Final-validation failure

[validate] FAILED:
  - <one or more issues>
ERROR: Output validation failed.

The dataset is incomplete or corrupted. Run with --force to redo the downloads from scratch.