# prepare_sot_dataset.py

Prepares an evaluator-ready **Single Object Tracking (SOT)** dataset from a
benchmark JSON/JSONL file and source camera videos hosted on Hugging Face.

---

## Files

| File | Description |
|---|---|
| `sot_benchmark.jsonl` | Benchmark file defining all evaluation sequences — scene, camera, object, init bounding box, and canonical 8-frame IDs. |
| `prepare_sot_dataset.py` | Script that downloads source videos and extracts frames into an evaluator-ready directory. |

---

## Overview

The script:
1. Reads `sot_benchmark.jsonl` (or a custom benchmark file) describing SOT sequences (scene, camera, target bounding box, frame IDs).
2. Downloads the corresponding source `.mp4` files from the `nvidia/PhysicalAI-SmartSpaces` Hugging Face dataset (or a custom repo).
3. Extracts the requested frames with `ffmpeg`.
4. Annotates the initialization frame with a bounding-box overlay (`f00_ann.png`) and saves a cropped target thumbnail (`crop.png`).
5. Writes a `sequence_meta.json` per sequence.
6. Produces a `gt_requests.json` manifest. **Ground-truth bounding boxes are never written to the output directory** — see [GT Request Submission](#gt-request-submission) below.

---

## Prerequisites

| Requirement | Notes |
|---|---|
| Python 3.8+ | |
| `ffmpeg` | Must be on `PATH` (or in a few well-known locations). |
| `huggingface_hub` | `pip install huggingface_hub` |
| `opencv-python` **or** `Pillow` | Used for bbox drawing and cropping. `opencv-python` is preferred. |
| HF access token | Read access to the source video dataset. |

```bash
pip install huggingface_hub opencv-python
```

---

## Hugging Face Authentication

A token with **read** access to `nvidia/PhysicalAI-SmartSpaces` is required.
Pass it in one of two ways:

```bash
# Option A – command-line flag
--hf-token hf_xxxxxxxxxxxx

# Option B – environment variable (recommended for scripts/CI)
export HF_TOKEN=hf_xxxxxxxxxxxx
```
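For scripted use, the resolution order can be sketched as follows. This is an illustrative helper, not the script's actual internals; it only assumes what the table below states, namely that the flag takes precedence and `HF_TOKEN` is the fallback:

```python
import os

def resolve_hf_token(cli_token=None):
    """Prefer the --hf-token flag value; fall back to the HF_TOKEN env var."""
    token = cli_token or os.environ.get("HF_TOKEN")
    if not token:
        raise SystemExit("A Hugging Face token is required (--hf-token or HF_TOKEN).")
    return token
```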

---

## Input Formats

The script accepts four flavors of JSON / JSONL input.

### 1. Standard benchmark JSON/JSONL *(most common)*

```json
[
  {
    "seq_id": "Warehouse_016__Camera_11__5600__obj353",
    "scene": "Warehouse_016",
    "camera": "Camera_11",
    "object_id": "353",
    "object_type": "Robot",
    "init_frame_id": 5600,
    "init_bbox": [799.0, 601.9, 918.8, 956.5],
    "canonical_frame_ids": [5600, 5615, 5630, 5645, 5660, 5675, 5690, 5705],
    "clip_fps": 30.0
  }
]
```

- `init_bbox` — normalized coordinates in **thousandths** of image dimensions
  (`[x1, y1, x2, y2]` where 1000 = full width/height).
- `canonical_frame_ids` — preferred source-video frame indices to extract.
  When provided and long enough, they take priority over the stride calculation.
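Given the thousandths convention above, converting `init_bbox` to pixel coordinates is a simple scaling. A minimal sketch (the 1920×1080 resolution in the example is illustrative, not a property of the dataset):

```python
def thousandths_to_pixels(bbox, width, height):
    """Convert [x1, y1, x2, y2] in thousandths (1000 = full width/height)
    to absolute pixel coordinates for a frame of the given size."""
    x1, y1, x2, y2 = bbox
    return [
        x1 / 1000.0 * width,
        y1 / 1000.0 * height,
        x2 / 1000.0 * width,
        y2 / 1000.0 * height,
    ]

# e.g. the init_bbox from the example above, on a 1920x1080 frame
pixel_box = thousandths_to_pixels([799.0, 601.9, 918.8, 956.5], 1920, 1080)
```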

### 2. Benchmark JSONL with explicit GT

```json
{"seq_id": "...", "canonical_frame_ids": [...], "gt_bboxes": {"5600": [...]}}
```

### 3. Dataset JSONL (metadata + conversations)

```json
{
  "id": "...",
  "metadata": {"scene": "...", "camera": "...", "init_bbox": [...], ...},
  "conversations": [{"role": "user", "value": "..."}, {"role": "assistant", "value": "{\"5600\": [...]}"}]
}
```

### 4. Sequence-only JSON/JSONL

```json
{"id": "...", "scene": "...", "camera": "...", "source_frame_ids": [...], "init_bbox": [...]}
```

---

## Output Structure

```
<output_dir>/
  gt_requests.json            # submit this to us to receive GT annotations (see below)
  <seq_id>/
    frames/
      f00.png                 # initialization frame
      f00_ann.png             # initialization frame with target bbox drawn
      crop.png                # cropped target region
      f01.png
      f02.png
      ...
      f{N-1}.png
    sequence_meta.json        # per-sequence metadata
```

### `sequence_meta.json` fields

| Field | Description |
|---|---|
| `frame_ids` | Source-video frame indices that were extracted |
| `init_bbox` | Target bounding box (thousandths) |
| `label` | Human-readable sequence label |
| `scene` / `camera` | Source identifiers |
| `object_id` / `object_type` | Target object metadata |
| `stride` | Frame stride used during extraction |
| `nframes` | Number of frames extracted |
| `clip_fps` | Frame rate of the source video |
| `gt_available` | Always `false` (GT is private) |

---

## Usage

### Minimal

```bash
python scripts/tracking/prepare_sot_dataset.py \
    --benchmark sot_benchmark.jsonl \
    --output-dir ./SOT_prepared_8f
```

### Extract 32 frames per sequence

```bash
python scripts/tracking/prepare_sot_dataset.py \
    --benchmark sot_benchmark.jsonl \
    --output-dir ./SOT_prepared_32f \
    --nframes 32
```

### Custom frame stride

```bash
python scripts/tracking/prepare_sot_dataset.py \
    --benchmark sot_benchmark.jsonl \
    --output-dir ./SOT_prepared \
    --nframes 16 \
    --stride 10
```

### Process only specific sequences

```bash
python scripts/tracking/prepare_sot_dataset.py \
    --benchmark sot_benchmark.jsonl \
    --output-dir ./SOT_prepared \
    --sequences Warehouse_016__Camera_11__5600__obj353 Warehouse_016__Camera_05__704__obj352
```

### Use a custom Hugging Face cache directory

```bash
python scripts/tracking/prepare_sot_dataset.py \
    --benchmark sot_benchmark.jsonl \
    --output-dir ./SOT_prepared \
    --hf-token hf_xxxxxxxxxxxx \
    --hf-cache-dir /data/hf_cache
```

### Windows (PowerShell)

```powershell
python scripts/tracking/prepare_sot_dataset.py `
    --benchmark sot_benchmark.jsonl `
    --output-dir .\SOT_prepared_8f `
    --hf-token hf_xxxxxxxxxxxx
```

---

## Command-Line Reference

| Argument | Required | Default | Description |
|---|---|---|---|
| `--benchmark` | Yes | — | Path to the input benchmark JSON or JSONL file. |
| `--output-dir` | Yes | — | Directory where prepared sequences are written. Created if it does not exist. |
| `--nframes` | No | `8` | Number of frames to extract per sequence. |
| `--stride` | No | auto | Source-video frame stride. When omitted, auto-computed from `clip_end_frame` or taken directly from `canonical_frame_ids`. |
| `--hf-token` | No* | `$HF_TOKEN` | Hugging Face access token. Falls back to the `HF_TOKEN` environment variable. |
| `--hf-cache-dir` | No | HF default | Cache directory for downloaded videos. |
| `--repo-id` | No | `nvidia/PhysicalAI-SmartSpaces` | Hugging Face dataset repository. |
| `--repo-subdir` | No | `MTMC_Tracking_2025` | Subdirectory inside the repository. |
| `--sequences` | No | all | Space-separated list of `seq_id` values to prepare. All others are skipped. |

\* Required in practice unless `HF_TOKEN` is set in the environment.

---

## Resuming an Interrupted Run

The script checks how many frames already exist in each sequence's `frames/`
directory. If the count equals or exceeds `--nframes`, that sequence is skipped
automatically. You can safely re-run the command after an interruption; only
incomplete sequences will be processed.

---

## Frame Stride Logic

Frames are selected using this priority order:

1. **`canonical_frame_ids`** in the benchmark — used directly when the list
   has at least `--nframes` entries and `--stride` is not explicitly set.
2. **`--stride`** — fixed stride supplied by the user.
3. **Auto** — computed as `min(15, (clip_end_frame - init_frame_id) / (nframes - 1))`,
   capped at the default stride of 15.
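The priority order can be sketched as below. Function and parameter names are illustrative, not the script's actual internals; the logic only mirrors the three rules listed above:

```python
def select_frame_ids(init_frame_id, nframes, canonical_frame_ids=None,
                     stride=None, clip_end_frame=None, default_stride=15):
    """Pick source-video frame indices using the documented priority order."""
    # 1. Canonical IDs win when long enough and no explicit stride was given.
    if canonical_frame_ids and len(canonical_frame_ids) >= nframes and stride is None:
        return canonical_frame_ids[:nframes]
    if stride is None:
        # 3. Auto stride: span / (nframes - 1), capped at the default of 15.
        span = (clip_end_frame - init_frame_id) if clip_end_frame else default_stride * (nframes - 1)
        stride = min(default_stride, span // max(nframes - 1, 1))
    # 2. (or 3.) Evenly strided frames starting at the init frame.
    return [init_frame_id + i * stride for i in range(nframes)]
```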

---

## Train / Val Split

The script infers the dataset split from the scene name:

- `Warehouse_000` through `Warehouse_014` → **train**
- `Warehouse_015` and above → **val**

This determines which subdirectory (`train/` or `val/`) is used when
constructing the download path on Hugging Face.
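A minimal sketch of that inference, using the cutoff stated above (the function name is illustrative):

```python
def infer_split(scene):
    """Map a scene name like 'Warehouse_016' to its dataset split."""
    number = int(scene.split("_")[-1])
    return "train" if number <= 14 else "val"
```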

---

## GT Request Submission

The `sot_benchmark.jsonl` file defines the canonical **8-frame** evaluation sequences with fixed frame IDs. If you use the default settings (`--nframes 8` without `--stride`), no submission is needed — the benchmark frames are already known.

If you run a **custom variant** (e.g. `--nframes 16`, `--nframes 32`, `--nframes 64`, or a custom `--stride`), the script will produce a `gt_requests.json` file in your output directory once preparation is complete. **Submit this file back to us** so we can look up and return the ground-truth annotations for your chosen frames.

Send `gt_requests.json` to: *(benchmark contact TBD)*

---

## Evaluator Integration

After preparation, point your evaluator config at the output directory:

```json
{
  "prepared_data_dir": "./SOT_prepared_8f"
}
```

---

## Troubleshooting

| Problem | Fix |
|---|---|
| `ERROR: ffmpeg not found` | Install ffmpeg and add it to `PATH`, or place it in `/usr/bin/ffmpeg`. |
| `401 Unauthorized` from Hugging Face | Check that `--hf-token` / `HF_TOKEN` is set and has read access to the repo. |
| Download retries / timeouts | The script retries up to 4 times with back-off. Check your network connection. |
| Bounding boxes not drawn | Install `opencv-python` or `Pillow`. Without either, the script copies the raw frame without annotation. |
| Sequence not found in output | Verify the `seq_id` value; use `--sequences <seq_id>` to test a single entry. |