ScenePilot-4K: A Large-Scale First-Person Dataset and Benchmark for Vision-Language Models in Autonomous Driving
Figure 1: Overview of the ScenePilot-Bench benchmark and evaluation metrics.
Introduction
ScenePilot-4K is a large-scale first-person driving dataset for safety-aware vision-language learning and evaluation in autonomous driving. Built from public online driving videos, ScenePilot-4K contains 3,847 hours of video and 27.7M front-view frames spanning 63 countries/regions and 1,210 cities. It jointly provides scene-level natural-language descriptions, risk assessment labels, key-participant annotations, ego trajectories, and camera parameters through a unified multi-stage annotation pipeline.
Building on this dataset, we establish ScenePilot-Bench, a standardized benchmark that evaluates vision-language models along four complementary axes: scene understanding, spatial perception, motion planning, and GPT-based semantic alignment. The benchmark includes fine-grained metrics and geographic generalization settings that expose model robustness under cross-region and cross-traffic domain shifts.
Baseline results on representative open-source and proprietary vision-language models show that current models remain competitive in high-level scene semantics but still exhibit substantial limitations in geometry-aware perception and planning-oriented reasoning.
Contents Overview
The released files in this repository can be grouped into the following categories.
1. Model Weight Files
- ScenePilot_2.5_3b_200k_merged.zip
- ScenePilot_2_2b_200k_merged.zip
These two compressed files contain pretrained model weights obtained by training on a 200k-scale VQA training set constructed in this work.
- ScenePilot_2.5_3b_200k_merged.zip corresponds to Qwen2.5-VL-3B
- ScenePilot_2_2b_200k_merged.zip corresponds to Qwen2-VL-2B
Both models are trained using the same dataset and unified training pipeline, and are used in the main experiments and comparison studies.
2. Annotation and Perception Data
VGGT.zip
Contains annotation data related to spatial perception and geometric reasoning, including:
- Ego-vehicle trajectory information
- Depth-related information
- Camera intrinsic and extrinsic parameters
This file is not the raw output of VGGT, but a post-processed version after trajectory cleaning.
Specifically, the annotation pipeline is as follows:
- Raw trajectory, depth, and camera parameters are first generated using VGGT.py from pipeline_code.zip
- The generated trajectories are then processed using traj_clean.py to remove physically implausible or noisy trajectories
The final annotations in this archive therefore correspond to cleaned and quality-controlled trajectory data, suitable for downstream tasks such as trajectory prediction and spatial reasoning.
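Since the archive ships camera intrinsics alongside the trajectories, a common downstream step is projecting a trajectory point into the image plane. The sketch below illustrates the standard pinhole projection; the intrinsic values and the (X, Y, Z) camera-coordinate convention are illustrative assumptions, not values taken from the release.

```python
# Sketch: projecting a trajectory point (camera coordinates, meters, Z > 0)
# into the image plane using a 3x3 intrinsics matrix K.
# NOTE: the intrinsics below are made-up example values, not from VGGT.zip.

def project_point(K, point):
    """Pinhole projection: u = fx * X / Z + cx, v = fy * Y / Z + cy."""
    X, Y, Z = point
    u = (K[0][0] * X + K[0][2] * Z) / Z
    v = (K[1][1] * Y + K[1][2] * Z) / Z
    return u, v

K = [[1000.0,    0.0, 640.0],   # fx, 0, cx
     [   0.0, 1000.0, 360.0],   # 0, fy, cy
     [   0.0,    0.0,   1.0]]

u, v = project_point(K, (1.0, 0.5, 10.0))  # a point 10 m ahead of the camera
```

A point on the optical axis, e.g. `(0.0, 0.0, 5.0)`, projects to the principal point `(cx, cy)`, which is a quick sanity check when verifying the released parameters.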
YOLO.zip
Provides 2D object detection results for major traffic participants. All detections are generated by a unified detection model and are used as perception inputs for downstream VQA and risk assessment tasks.
scene_description.zip
Contains scene description results generated from the original data, including:
- Weather conditions
- Road types
- Other environmental and semantic attributes
These descriptions are used for scene understanding and for constructing balanced dataset splits.
3. Dataset Split Definition
- split_train_test_val.zip
This file contains the original video-level dataset split, including:
- Training set
- Validation set
- Test set
All VQA datasets of different scales are constructed strictly based on this video-level split to avoid scene-level information leakage.
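A video-level split means every clip inherits the split of its parent video, so clips from the same scene can never leak across train/val/test. The sketch below shows this assignment logic; the clip-id format (`"<video_id>/<clip_id>"`) and the split mapping are illustrative assumptions, not the released schema in split_train_test_val.zip.

```python
# Sketch: enforcing a video-level split so that clips from the same video
# never appear in more than one of train/val/test.
# NOTE: clip-id format and split mapping are assumptions for illustration.

def assign_clips(clips, video_splits):
    """Map each clip to the split of its parent video.

    clips: iterable of clip ids of the form "<video_id>/<clip_id>"
    video_splits: dict video_id -> "train" | "val" | "test"
    """
    splits = {"train": [], "val": [], "test": []}
    for clip in clips:
        video_id = clip.split("/", 1)[0]
        splits[video_splits[video_id]].append(clip)
    return splits

clips = ["vidA/000", "vidA/001", "vidB/000", "vidC/000"]
video_splits = {"vidA": "train", "vidB": "val", "vidC": "test"}
splits = assign_clips(clips, video_splits)
```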
4. VQA Datasets
4.1 All-VQA
- All-VQA.zip
This archive contains all VQA data in JSON format. Files are organized according to training, validation, and test splits.
Examples include:
Deleted_2D_train_vqa_add_new.json
Deleted_2D_train_vqa_new.json
The VQA data in this archive is generated using the original VQA generation pipeline and includes a total of 22 VQA categories (Q1–Q22).
After initial generation, parts of the dataset were refined and regenerated due to:
- Data cleaning
- Format standardization
- Improved annotation consistency
To support flexible usage, we provide:
classify.py (in pipeline_code.zip)
A utility script that allows users to:
- Classify VQA samples into categories
- Select specific subsets of interest
- Combine old and newly refined VQA samples
Therefore, this archive contains a mixture of original and partially updated VQA data, and users are encouraged to use the provided tools to construct task-specific subsets.
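A category-based filter in the spirit of classify.py can be sketched as below. The `"category"` field name and the flat sample layout are assumptions for illustration; consult classify.py in pipeline_code.zip for the actual schema and supported formats.

```python
# Sketch: selecting a task-specific subset of VQA samples by question
# category, in the spirit of classify.py.
# NOTE: the "category" field and sample layout are assumed, not the
# actual schema of the released JSON files.

def select_categories(samples, wanted):
    """Keep only samples whose category is in `wanted` (e.g. {"Q1", "Q6"})."""
    return [s for s in samples if s.get("category") in wanted]

samples = [
    {"category": "Q1", "question": "What is ahead of the ego vehicle?"},
    {"category": "Q15", "question": "Predict the ego trajectory."},
    {"category": "Q6", "question": "Where is the pedestrian?"},
]
subset = select_categories(samples, {"Q1", "Q6"})
```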
4.2 Test-VQA
- Test-VQA.zip
This archive contains the 100k-scale VQA test datasets used in the experiments.
Deleted_2D_test_selected_vqa_100k_final.json
Used as the main test set in the primary experiments.
Additional test sets are provided for generalization studies:
- Files ending with europe, japan-and-korea, us, and other correspond to geographic generalization experiments.
- Files ending with left correspond to left-hand traffic country experiments.
Each test set contains 100k VQA samples.
4.3 Train-VQA
- Train-VQA.zip
This archive contains training datasets of different scales:
- 200k VQA
- 2000k VQA
Additional subsets include:
- Files ending with china are used for geographic generalization experiments.
- Files ending with right are used for right-hand traffic country experiments.
4.4 Spatial VQA
- spatial_vqa.zip
This archive contains the updated VQA dataset with explicitly grounded target objects, focusing exclusively on spatial perception tasks.
It includes the following seven question categories:
- Q1
- Q6
- Q10
- Q11
- Q20
- Q21
- Q22
These samples are designed to support more precise evaluation and training for object-grounded spatial perception in autonomous driving scenarios.
4.5 Trajectory VQA
- trajectory_vqa.zip
This archive contains a curated set of high-quality trajectory-related VQA samples obtained after trajectory filtering and cleaning.
It covers the following five trajectory-centric categories:
- Q15
- Q16
- Q17
- Q18
- Q19
These samples are intended for motion planning and trajectory reasoning tasks, with improved annotation quality after trajectory validation and filtering.
5. Pipeline Code and Utilities
- pipeline_code.zip
This archive contains the full implementation of the dataset construction pipeline. The components cover data preprocessing, perception annotation, trajectory generation, VQA construction, and post-processing.
The main scripts are listed below:
clip.py
Extracts image frames from raw videos:
- Removes fixed durations at the beginning and end
- Samples frames at a fixed rate
- Organizes outputs into structured directories
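The frame-selection logic described above can be sketched as an index computation: trim a fixed duration from both ends, then keep frames at a fixed rate. The trim duration and sampling rate below are illustrative, not the values used to build ScenePilot-4K.

```python
# Sketch: choosing which frame indices to keep, mirroring the behavior
# described for clip.py. A real implementation would then decode those
# frames (e.g. with OpenCV) and write them to structured directories.
# NOTE: trim_s and sample_fps are illustrative assumptions.

def sample_frame_indices(n_frames, fps, trim_s=5.0, sample_fps=1.0):
    start = int(trim_s * fps)               # drop the first trim_s seconds
    end = n_frames - int(trim_s * fps)      # drop the last trim_s seconds
    step = max(1, round(fps / sample_fps))  # keep sample_fps frames per second
    return list(range(start, end, step))

# A 30 s clip at 30 fps, trimming 5 s from each end, sampled at 1 fps:
idx = sample_frame_indices(n_frames=900, fps=30, trim_s=5.0, sample_fps=1.0)
```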
mask.py
Generates image masks based on 2D bounding boxes:
- Takes YOLO detection results as input
- Produces masked images for each detected object
- Supports region-based grounding in VQA tasks
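The core masking operation can be sketched as follows: given a detection box, keep pixels inside it and zero out everything else. A real implementation would operate on image arrays (e.g. with NumPy/OpenCV); plain nested lists are used here only to keep the example dependency-free, and the (x1, y1, x2, y2) box format is an assumption.

```python
# Sketch: producing a masked image from a 2D bounding box, in the spirit
# of mask.py. NOTE: the box format (x1, y1, x2, y2) and the list-based
# "image" are assumptions for illustration.

def mask_outside_box(image, box):
    """Return a copy of `image` with pixels outside `box` set to 0."""
    x1, y1, x2, y2 = box
    h, w = len(image), len(image[0])
    return [
        [image[y][x] if (x1 <= x < x2 and y1 <= y < y2) else 0
         for x in range(w)]
        for y in range(h)
    ]

img = [[1] * 4 for _ in range(3)]          # a 3x4 "image" of ones
masked = mask_outside_box(img, (1, 0, 3, 2))
```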
Old_vqa_Q1-19.py
Original VQA generation script:
- Produces the full set of 19 question categories (Q1–Q19)
- Forms the initial version of the VQA dataset
Q1-6-10-11_new.py
Updated VQA generation logic for selected categories:
- Focuses on Q1, Q6, Q10, Q11
- Replaces ambiguous object references with explicit region-based grounding
- Introduces region-id representations (e.g., Region[0])
- Each region is associated with a precise mask
Q20-21-22_new.py
Updated generation for additional spatial reasoning categories:
- Applies the same region-id grounding strategy
- Improves clarity and consistency in spatial relationship reasoning
scene_description.py
Generates scene-level descriptions:
- Operates on the 4th frame of each clip
- Produces structured descriptions including environment and context
VGGT.py
Core perception annotation module:
- Generates ego trajectories
- Produces depth information
- Outputs camera intrinsic and extrinsic parameters
traj_clean.py
Trajectory post-processing module:
- Filters out noisy or physically implausible trajectories
- Improves annotation quality for planning-related tasks
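A plausibility filter of this kind can be sketched as a speed check between consecutive trajectory points. The trajectory format (a list of (t, x, y) tuples) and the speed threshold are illustrative assumptions; see traj_clean.py in pipeline_code.zip for the actual filtering criteria.

```python
import math

# Sketch: rejecting trajectories whose implied speed between consecutive
# points is physically implausible, in the spirit of traj_clean.py.
# NOTE: the (t, x, y) format and max_speed value are assumptions.

def is_plausible(traj, max_speed=60.0):
    """traj: list of (t, x, y) in seconds/meters; max_speed in m/s (~216 km/h)."""
    for (t0, x0, y0), (t1, x1, y1) in zip(traj, traj[1:]):
        dt = t1 - t0
        if dt <= 0:
            return False                    # non-monotonic timestamps
        speed = math.hypot(x1 - x0, y1 - y0) / dt
        if speed > max_speed:
            return False                    # physically implausible jump
    return True

good = [(0.0, 0.0, 0.0), (1.0, 10.0, 0.0), (2.0, 20.0, 0.0)]  # ~10 m/s
bad = [(0.0, 0.0, 0.0), (1.0, 500.0, 0.0)]                    # 500 m/s jump
```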
classify.py
VQA classification and selection tool:
- Supports both old and new VQA formats
- Allows users to:
- Filter by question category
- Select task-specific subsets
- Construct customized training datasets
These scripts together define the complete and reproducible pipeline for building the ScenePilot-4K dataset, from raw video processing to structured multimodal annotations.
6. Video Index and Download Information
- video_name_all.xlsx
This file lists all videos used in the dataset along with their corresponding download links. It is provided to support dataset reproduction and access to the original video resources.
Citation
@misc{wang2026scenepilotbenchlargescaledatasetbenchmark,
title={ScenePilot-Bench: A Large-Scale Dataset and Benchmark for Evaluation of Vision-Language Models in Autonomous Driving},
author={Yujin Wang and Yutong Zheng and Wenxian Fan and Tianyi Wang and Hongqing Chu and Li Zhang and Bingzhao Gao and Daxin Tian and Hong Chen},
year={2026},
eprint={2601.19582},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2601.19582},
}