# How to run `prep_refdrone_data.py`

This script prepares the **RefDrone test split** for public inference. It is
strictly **web-only**: every annotation and image is downloaded from a public
mirror on each run. The script never reads any pre-existing local copy of
RefDrone or VisDrone elsewhere on disk.

## Prerequisites

- Python 3.8 or later
- Internet access (for HuggingFace and the VisDrone image mirrors)
- About **1 GB** of free disk for the temporary archive plus the extracted images

## Web sources

| What | Source | Size |
| --- | --- | --- |
| Annotation JSON (`RefDrone_test_mdetr.json`) | `https://huggingface.co/datasets/sunzc-sunny/RefDrone/resolve/main/RefDrone_test_mdetr.json` | ~3.3 MB |
| Image archive (`VisDrone2019-DET-test-dev.zip`) — primary | `https://github.com/ultralytics/assets/releases/download/v0.0.0/VisDrone2019-DET-test-dev.zip` (Ultralytics GitHub Release) | ~297 MB |
| Image archive — fallback | Google Drive id `1PFdW_VFSCfZ_sTSZAGjQdifF_Xd5mf0V` via `gdown` (canonical link from `https://github.com/VisDrone/VisDrone-Dataset`) | ~297 MB |

The image archive is tried in the order listed. The script falls through to the
next mirror automatically if one fails (including the well-known Google Drive
"too many users" quota error).

## Step 1 — Navigate to the repo root

```bash
cd /path/to/iib
```

The script treats the `iib/` directory as the repo root; all commands below are
run from there.

## Step 2 — Install `gdown` (only required for the Google Drive fallback)

```bash
pip install gdown
```

If the primary HTTPS mirror works, `gdown` is not invoked. Installing it
up-front is still recommended so the fallback works on machines where the
primary mirror is unreachable.

## Step 3 — Run the script

```bash
python3 scripts/refdrone/prep_refdrone_data.py
```

Flags:

| Flag | Effect |
| --- | --- |
| `--force` | Re-download annotation and image archive even if outputs exist |
| `--keep-zip` | Keep the downloaded `VisDrone2019-DET-test-dev.zip` after extraction (default: deleted) |

## What the script does

1. Downloads `RefDrone_test_mdetr.json` from HuggingFace (~3.3 MB).
2. Parses the annotation and writes `refdrone_data.jsonl` (3,276 rows, one per
   referring expression — no GT bboxes are ever written to this file).
3. Downloads `VisDrone2019-DET-test-dev.zip` (~297 MB) from the first working
   mirror, validates size, validates SHA-256, extracts the 1,503 required
   images into `images/`, then deletes the zip (unless `--keep-zip`).
4. Writes a `data_summary.json` report under `reports/` recording which
   mirror was used.
5. Hard-validates final counts: exactly **3,276** JSONL rows and exactly
   **1,503** `.jpg` files. Any mismatch aborts with a non-zero exit code.
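
The hard validation in step 5 amounts to two exact-count checks. A minimal sketch, assuming illustrative names (`validate_outputs` is not necessarily the script's own function):

```python
# Hypothetical sketch of the step-5 hard validation (names illustrative).
from pathlib import Path

EXPECTED_ROWS = 3276    # one JSONL row per referring expression
EXPECTED_IMAGES = 1503  # required VisDrone test-dev .jpg files

def validate_outputs(root):
    """Return a list of problems; an empty list means the outputs are valid."""
    root = Path(root)
    problems = []
    rows = sum(1 for _ in (root / "refdrone_data.jsonl").open())
    if rows != EXPECTED_ROWS:
        problems.append(f"expected {EXPECTED_ROWS} JSONL rows, found {rows}")
    jpgs = len(list((root / "images").glob("*.jpg")))
    if jpgs != EXPECTED_IMAGES:
        problems.append(f"expected {EXPECTED_IMAGES} .jpg files, found {jpgs}")
    return problems
```

Any non-empty problem list maps to the non-zero exit code described above.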

## Output structure

Outputs are written under the repo root at:

```
iib/LMUData/Spatial/2d_referring_expressions/refdrone/
├── refdrone_data.jsonl                       # 3,276 rows
├── annotations_raw/
│   └── RefDrone_test_mdetr.json              # ~3.3 MB
├── images/                                    # 1,503 .jpg files
└── reports/
    └── data_summary.json
```

## Step 4 — Verify success

```bash
ROOT=iib/LMUData/Spatial/2d_referring_expressions/refdrone

# Annotation row count — must be 3276
wc -l "$ROOT/refdrone_data.jsonl"

# Image count — must be 1503
find "$ROOT/images" -name '*.jpg' | wc -l

# Confirm files_missing is 0 in the summary report
python3 -c "import json; print(json.load(open('$ROOT/reports/data_summary.json'))['images']['files_missing'])"
```

A run is successful only when:

- `refdrone_data.jsonl` has **exactly 3,276 lines**, and
- `images/` contains **exactly 1,503 `.jpg` files**, and
- The script ended with `Preparation complete (web-only download verified).`

The script enforces these checks itself in step 5 of its own flow and exits
non-zero on any deviation.

## Re-running

```bash
python3 scripts/refdrone/prep_refdrone_data.py            # normal run; reuses prior outputs from this script
python3 scripts/refdrone/prep_refdrone_data.py --force    # ignore prior outputs, re-download from scratch
python3 scripts/refdrone/prep_refdrone_data.py --keep-zip # keep the 297 MB zip after extraction
```

A re-run after a partial failure is safe: if a previous run already extracted
all 1,503 images into `images/`, the script logs that fact explicitly and
skips re-downloading. This is the only on-disk reuse the script performs;
external local copies of VisDrone or RefDrone elsewhere on the machine are
never consulted.
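
The resume check is essentially a count of already-extracted images. A sketch under the assumption that the script gates re-download on an exact count (function name is hypothetical):

```python
# Hypothetical sketch of the resume check (illustrative name; the script's
# own implementation may differ).
from pathlib import Path

def images_already_complete(images_dir, expected=1503):
    """True when a prior run already extracted every required image,
    in which case the ~297 MB zip download can be skipped."""
    return len(list(Path(images_dir).glob("*.jpg"))) == expected
```

Note the check is deliberately exact: a partially extracted `images/` directory does not pass, so an interrupted run re-downloads rather than proceeding with missing files.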

## Troubleshooting

### Primary HTTPS mirror unreachable

```
[images] Mirror FAILED: ultralytics-github-release — <error>
[images] Trying mirror: google-drive-original (gdrive)
```

The script automatically advances to the Google Drive fallback. No action
needed unless both mirrors fail.

### Google Drive quota error

```
[images] Mirror FAILED: google-drive-original — Google Drive refused the
request (likely quota / 'too many users' throttling): ...
```

If this is the *only* failure (primary mirror succeeded), the run already
finished correctly. If both mirrors failed, the script aborts with `ERROR: All
image-zip mirrors failed.` Wait an hour and retry, or download the zip in a
browser from
`https://github.com/ultralytics/assets/releases/download/v0.0.0/VisDrone2019-DET-test-dev.zip`
and re-run; the script will resume.

### Behind a firewall that blocks GitHub release assets

Allowlist `release-assets.githubusercontent.com` (the Microsoft Azure Blob
backend for GitHub Releases). GitHub redirects release-asset downloads to that
domain, so blocking it makes the primary mirror fail even when `github.com`
itself is reachable.

### `gdown` not installed

```
ERROR: gdown is required ... Install it with: pip install gdown
```

Run `pip install gdown`. The fallback Google Drive mirror needs it; the
primary HTTPS mirror does not.

### Checksum mismatch on the downloaded zip

```
ERROR: Checksum mismatch for VisDrone2019-DET-test-dev.zip
  The file has been deleted.  Re-run the script to download again.
```

The download was truncated or the upstream file changed. The corrupt zip is
already removed; just re-run the script. The script tries each mirror's own
SHA-256, so a successful download from any mirror is byte-verified.
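
The delete-on-mismatch behavior can be sketched as follows; the expected hex digest would come from the mirror table inside the script, and `verify_sha256` is an illustrative name, not the script's actual function:

```python
# Hypothetical sketch of the byte-verification step (illustrative name;
# the expected digest is supplied per-mirror by the script).
import hashlib
from pathlib import Path

def verify_sha256(path, expected_hex, chunk=1 << 20):
    """Hash the file in 1 MiB chunks; on mismatch, delete it and return
    False so the next run re-downloads from scratch."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    if h.hexdigest() != expected_hex:
        Path(path).unlink()  # corrupt zip removed, matching the error message
        return False
    return True
```

Chunked hashing keeps memory flat even for the ~297 MB archive.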

### Final-validation failure

```
[validate] FAILED:
  - <one or more issues>
ERROR: Output validation failed.
```

The dataset is incomplete or corrupted. Run with `--force` to redo the
downloads from scratch.