# prepare_sot_dataset.py

Prepares an evaluator-ready Single Object Tracking (SOT) dataset from a benchmark JSON/JSONL file and source camera videos hosted on Hugging Face.
## Files

| File | Description |
|---|---|
| `sot_benchmark.jsonl` | Benchmark file defining all evaluation sequences — scene, camera, object, init bounding box, and canonical 8-frame IDs. |
| `prepare_sot_dataset.py` | Script that downloads source videos and extracts frames into an evaluator-ready directory. |
## Overview

The script:

- Reads `sot_benchmark.jsonl` (or a custom benchmark file) describing SOT sequences (scene, camera, target bounding box, frame IDs).
- Downloads the corresponding source `.mp4` files from the `nvidia/PhysicalAI-SmartSpaces` Hugging Face dataset (or a custom repo).
- Extracts the requested frames with `ffmpeg`.
- Annotates the initialization frame with a bounding-box overlay (`f00_ann.png`) and saves a cropped target thumbnail (`crop.png`).
- Writes a `sequence_meta.json` per sequence.
- Produces a `gt_requests.json` manifest. Ground-truth bounding boxes are never written to the output directory — see GT Request Submission below.
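The per-frame extraction step boils down to one `ffmpeg` invocation per requested frame. The helper below is a hypothetical sketch (not the script's actual internals) showing one way to build such a command using ffmpeg's `select` filter:

```python
def build_ffmpeg_cmd(video_path, frame_idx, out_png):
    """Command line extracting one frame by index via ffmpeg's select filter.

    The comma inside eq(n,IDX) must be escaped with a backslash so ffmpeg
    does not treat it as a filtergraph separator.
    """
    return [
        "ffmpeg", "-y", "-i", str(video_path),
        "-vf", f"select=eq(n\\,{frame_idx})",
        "-vframes", "1", str(out_png),
    ]
```

The resulting list can be passed to `subprocess.run(cmd, check=True)`; building it as a list (rather than a shell string) avoids quoting issues with paths.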
## Prerequisites

| Requirement | Notes |
|---|---|
| Python 3.8+ | |
| `ffmpeg` | Must be on `PATH` (or in a few well-known locations). |
| `huggingface_hub` | `pip install huggingface_hub` |
| `opencv-python` or Pillow | Used for bbox drawing and cropping. `opencv-python` is preferred. |
| HF access token | Read access to the source video dataset. |

```bash
pip install huggingface_hub opencv-python
```
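The "few well-known locations" lookup for `ffmpeg` can be sketched as below; the specific fallback paths here are assumptions for illustration, not the script's actual list:

```python
import shutil

# Assumed fallback locations; the real script may check different paths.
FALLBACK_PATHS = ("/usr/bin/ffmpeg", "/usr/local/bin/ffmpeg", "/opt/homebrew/bin/ffmpeg")

def find_ffmpeg():
    """Return a usable ffmpeg path from PATH or a few common install spots."""
    found = shutil.which("ffmpeg")
    if found:
        return found
    for candidate in FALLBACK_PATHS:
        if shutil.which(candidate):  # also verifies the file is executable
            return candidate
    raise SystemExit("ERROR: ffmpeg not found")
```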
## Hugging Face Authentication

A token with read access to `nvidia/PhysicalAI-SmartSpaces` is required. Pass it in one of two ways:

```bash
# Option A – command-line flag
--hf-token hf_xxxxxxxxxxxx

# Option B – environment variable (recommended for scripts/CI)
export HF_TOKEN=hf_xxxxxxxxxxxx
```
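The precedence between the two options (flag first, environment second) can be expressed in a few lines; `resolve_hf_token` is a hypothetical helper name:

```python
import os

def resolve_hf_token(cli_token=None):
    """--hf-token wins; otherwise fall back to the HF_TOKEN env variable."""
    token = cli_token or os.environ.get("HF_TOKEN")
    if not token:
        raise SystemExit("ERROR: pass --hf-token or export HF_TOKEN")
    return token
```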
## Input Formats

The script accepts four flavors of JSON / JSONL input.

### 1. Standard benchmark JSON/JSONL (most common)

```json
[
  {
    "seq_id": "Warehouse_016__Camera_11__5600__obj353",
    "scene": "Warehouse_016",
    "camera": "Camera_11",
    "object_id": "353",
    "object_type": "Robot",
    "init_frame_id": 5600,
    "init_bbox": [799.0, 601.9, 918.8, 956.5],
    "canonical_frame_ids": [5600, 5615, 5630, 5645, 5660, 5675, 5690, 5705],
    "clip_fps": 30.0
  }
]
```
- `init_bbox` — normalized coordinates in thousandths of image dimensions (`[x1, y1, x2, y2]`, where 1000 = full width/height).
- `canonical_frame_ids` — preferred source-video frame indices to extract. When provided and long enough, they take priority over the stride calculation.
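Because `init_bbox` is stored in thousandths, converting it to pixels for a given frame size is a plain scale. A minimal sketch (the helper name is ours, not the script's):

```python
def bbox_thousandths_to_pixels(bbox, width, height):
    """Scale a [x1, y1, x2, y2] bbox from thousandths to pixel coordinates."""
    x1, y1, x2, y2 = bbox
    sx, sy = width / 1000.0, height / 1000.0
    return [x1 * sx, y1 * sy, x2 * sx, y2 * sy]
```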
### 2. Benchmark JSONL with explicit GT

```json
{"seq_id": "...", "canonical_frame_ids": [...], "gt_bboxes": {"5600": [...]}}
```
### 3. Dataset JSONL (metadata + conversations)

```json
{
  "id": "...",
  "metadata": {"scene": "...", "camera": "...", "init_bbox": [...], ...},
  "conversations": [{"role": "user", "value": "..."}, {"role": "assistant", "value": "{\"5600\": [...]}"}]
}
```
### 4. Sequence-only JSON/JSONL

```json
{"id": "...", "scene": "...", "camera": "...", "source_frame_ids": [...], "init_bbox": [...]}
```
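All four flavors share one container-level property: the file is either a single JSON array or one JSON object per line (JSONL). A hedged loader sketch that normalizes both containers (per-record field handling for each flavor is left out):

```python
import json
from pathlib import Path

def load_records(path):
    """Parse a benchmark file that is either a JSON array or JSONL."""
    text = Path(path).read_text(encoding="utf-8").strip()
    if text.startswith("["):
        return json.loads(text)  # container: single JSON array
    # container: JSONL, one object per non-empty line
    return [json.loads(line) for line in text.splitlines() if line.strip()]
```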
## Output Structure

```
<output_dir>/
  gt_requests.json        # submit this to us to receive GT annotations (see below)
  <seq_id>/
    frames/
      f00.png             # initialization frame
      f00_ann.png         # initialization frame with target bbox drawn
      crop.png            # cropped target region
      f01.png
      f02.png
      ...
      f{N-1}.png
    sequence_meta.json    # per-sequence metadata
```
### sequence_meta.json fields

| Field | Description |
|---|---|
| `frame_ids` | Source-video frame indices that were extracted |
| `init_bbox` | Target bounding box (thousandths) |
| `label` | Human-readable sequence label |
| `scene` / `camera` | Source identifiers |
| `object_id` / `object_type` | Target object metadata |
| `stride` | Frame stride used during extraction |
| `nframes` | Number of frames extracted |
| `clip_fps` | Frame rate of the source video |
| `gt_available` | Always `false` (GT is private) |
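Downstream code can consume the per-sequence metadata directly; a minimal sketch using the field names listed above (the helper name is ours):

```python
import json
from pathlib import Path

def load_sequence_meta(seq_dir):
    """Read sequence_meta.json and sanity-check that GT was not shipped."""
    meta = json.loads((Path(seq_dir) / "sequence_meta.json").read_text())
    assert meta.get("gt_available") is False, "GT bboxes are never written"
    return meta
```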
## Usage

### Minimal

```bash
python scripts/tracking/prepare_sot_dataset.py \
    --benchmark sot_benchmark.jsonl \
    --output-dir ./SOT_prepared_8f
```
### Extract 32 frames per sequence

```bash
python scripts/tracking/prepare_sot_dataset.py \
    --benchmark sot_benchmark.jsonl \
    --output-dir ./SOT_prepared_32f \
    --nframes 32
```
### Custom frame stride

```bash
python scripts/tracking/prepare_sot_dataset.py \
    --benchmark sot_benchmark.jsonl \
    --output-dir ./SOT_prepared \
    --nframes 16 \
    --stride 10
```
### Process only specific sequences

```bash
python scripts/tracking/prepare_sot_dataset.py \
    --benchmark sot_benchmark.jsonl \
    --output-dir ./SOT_prepared \
    --sequences Warehouse_016__Camera_11__5600__obj353 Warehouse_016__Camera_05__704__obj352
```
### Use a custom Hugging Face cache directory

```bash
python scripts/tracking/prepare_sot_dataset.py \
    --benchmark sot_benchmark.jsonl \
    --output-dir ./SOT_prepared \
    --hf-token hf_xxxxxxxxxxxx \
    --hf-cache-dir /data/hf_cache
```
### Windows (PowerShell)

```powershell
python scripts/tracking/prepare_sot_dataset.py `
    --benchmark sot_benchmark.jsonl `
    --output-dir .\SOT_prepared_8f `
    --hf-token hf_xxxxxxxxxxxx
```
## Command-Line Reference

| Argument | Required | Default | Description |
|---|---|---|---|
| `--benchmark` | Yes | — | Path to the input benchmark JSON or JSONL file. |
| `--output-dir` | Yes | — | Directory where prepared sequences are written. Created if it does not exist. |
| `--nframes` | No | `8` | Number of frames to extract per sequence. |
| `--stride` | No | auto | Source-video frame stride. When omitted, auto-computed from `clip_end_frame` or taken directly from `canonical_frame_ids`. |
| `--hf-token` | No* | `$HF_TOKEN` | Hugging Face access token. Falls back to the `HF_TOKEN` environment variable. |
| `--hf-cache-dir` | No | HF default | Cache directory for downloaded videos. |
| `--repo-id` | No | `nvidia/PhysicalAI-SmartSpaces` | Hugging Face dataset repository. |
| `--repo-subdir` | No | `MTMC_Tracking_2025` | Subdirectory inside the repository. |
| `--sequences` | No | all | Space-separated list of `seq_id` values to prepare. All others are skipped. |

\* Required in practice unless `HF_TOKEN` is set in the environment.
## Resuming an Interrupted Run

The script checks how many frames already exist in each sequence's `frames/`
directory. If the count equals or exceeds `--nframes`, that sequence is skipped
automatically. You can safely re-run the command after an interruption; only
incomplete sequences will be processed.
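That completeness check is essentially a file count; a sketch under the assumption that only `fNN.png` frames count toward the total (while `f00_ann.png` and `crop.png` are derived extras):

```python
from pathlib import Path

def is_sequence_complete(seq_dir, nframes):
    """True when frames/ already holds at least nframes extracted frames."""
    frames_dir = Path(seq_dir) / "frames"
    if not frames_dir.is_dir():
        return False
    # Match exactly f + two digits + .png, skipping f00_ann.png and crop.png.
    return len(list(frames_dir.glob("f[0-9][0-9].png"))) >= nframes
```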
## Frame Stride Logic

Frames are selected using this priority order:

1. `canonical_frame_ids` in the benchmark — used directly when the list has at least `--nframes` entries and `--stride` is not explicitly set.
2. `--stride` — fixed stride supplied by the user.
3. Auto — computed as `min(15, (clip_end_frame - init_frame_id) / (nframes - 1))`, capped at the default stride of 15.
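The priority order above can be sketched in a few lines of Python. This is an illustrative re-implementation, not the script's actual code; the fallback when `clip_end_frame` is absent is our assumption:

```python
def select_frame_ids(entry, nframes, stride=None, default_stride=15):
    """Apply the three-step frame-selection priority order (a sketch)."""
    canonical = entry.get("canonical_frame_ids") or []
    # 1. Canonical frame IDs win when long enough and no explicit --stride.
    if stride is None and len(canonical) >= nframes:
        return canonical[:nframes]
    init = entry["init_frame_id"]
    if stride is None:
        # 3. Auto: spread frames out to clip_end_frame, capped at the default.
        end = entry.get("clip_end_frame", init + default_stride * (nframes - 1))
        stride = min(default_stride, (end - init) // max(nframes - 1, 1))
    # 2./3. Fixed user stride, or the auto value computed above.
    return [init + i * stride for i in range(nframes)]
```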
## Train / Val Split

The script infers the dataset split from the scene name:

- `Warehouse_000`–`Warehouse_014` → train
- `Warehouse_015` and above → val

This determines which subdirectory (`train/` or `val/`) is used when
constructing the download path on Hugging Face.
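The split rule amounts to a one-line check on the warehouse index. A sketch, assuming scene names always end in a numeric suffix like `Warehouse_016`:

```python
def infer_split(scene):
    """Map a scene name like 'Warehouse_016' to its dataset split."""
    index = int(scene.rsplit("_", 1)[-1])  # trailing numeric suffix
    return "train" if index <= 14 else "val"
```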
## GT Request Submission

The `sot_benchmark.jsonl` file defines the canonical 8-frame evaluation sequences with fixed frame IDs. If you use the default settings (`--nframes 8` without `--stride`), no submission is needed — the benchmark frames are already known.

If you run a custom variant (e.g. `--nframes 16`, `--nframes 32`, `--nframes 64`, or a custom `--stride`), the script will produce a `gt_requests.json` file in your output directory once preparation is complete. Submit this file back to us so we can look up and return the ground-truth annotations for your chosen frames.

Send `gt_requests.json` to: (benchmark contact TBD)
## Evaluator Integration

After preparation, point your evaluator config at the output directory:

```json
{
  "prepared_data_dir": "./SOT_prepared_8f"
}
```
## Troubleshooting

| Problem | Fix |
|---|---|
| `ERROR: ffmpeg not found` | Install ffmpeg and add it to `PATH`, or place it at `/usr/bin/ffmpeg`. |
| `401 Unauthorized` from Hugging Face | Check that `--hf-token` / `HF_TOKEN` is set and has read access to the repo. |
| Download retries / timeouts | The script retries up to 4 times with back-off. Check your network connection. |
| Bounding boxes not drawn | Install `opencv-python` or Pillow. Without either, the script copies the raw frame without annotation. |
| Sequence not found in output | Verify the `seq_id` value; use `--sequences <seq_id>` to test a single entry. |