
VLABench Data Prep

This directory contains an independent, non-Docker data conversion workflow for turning lerobot/libero into an episode-based HDF5 format that is easier for RABench agents to consume.

Goal

The source lerobot/libero dataset is distributed as:

  • parquet tables for numeric columns
  • mp4 video shards for image streams
  • separate metadata parquet files for tasks and episode boundaries

That structure is compact, but it is awkward for an agent to discover and use inside RABench. The workflow here converts it into one HDF5 file per episode with decoded frames and aligned metadata.
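The core of that conversion is slicing the flat parquet table into per-episode row ranges. A minimal sketch of that step, assuming the episodes metadata provides an ordered list of per-episode lengths (the exact column names in the metadata parquet may differ):

```python
def episode_slices(lengths):
    """Map ordered per-episode lengths to (start, stop) row ranges
    in the flat data table, so each episode's rows can be extracted
    and written to its own HDF5 file."""
    slices = []
    start = 0
    for n in lengths:
        slices.append((start, start + n))
        start += n
    return slices
```

For example, episodes of length 3 and 2 occupy rows [0, 3) and [3, 5) of the flat table.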

Output Layout

The converter writes:

  • output/meta/tasks.json
  • output/meta/dataset_info.json
  • output/train/task_000/episode_000000.hdf5
  • output/train/task_001/episode_000123.hdf5

Each HDF5 file follows the layout described in schema.json.
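To inspect what a converted episode actually contains, one can walk the file with h5py. This is a generic sketch (it does not assume any particular group names; the authoritative layout is schema.json):

```python
import h5py  # assumed available in the data_process env


def list_episode_datasets(path):
    """Return {dataset_path: shape} for every dataset in an episode file,
    walking nested groups recursively."""
    shapes = {}

    def visit(name, obj):
        if isinstance(obj, h5py.Dataset):
            shapes[name] = obj.shape

    with h5py.File(path, "r") as f:
        f.visititems(visit)
    return shapes
```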

Source Dataset Assumptions

Expected source root:

  • /data/jiajunxu/RABench/vlabench_manipulation/vlabench_train/libero

Expected files:

  • meta/info.json
  • meta/tasks.parquet
  • meta/episodes/chunk-000/file-000.parquet
  • data/chunk-000/file-000.parquet
  • videos/observation.images.image/chunk-000/file-000.mp4
  • videos/observation.images.image2/chunk-000/file-000.mp4

Environment

This workflow is intended to run on the host machine, not in Docker.

Recommended conda env:

  • data_process

Required Python packages:

  • h5py
  • datasets
  • pyarrow
  • opencv-python
  • imageio
  • numpy

Usage

Convert a small validation subset first:

conda run -n data_process python convert_libero_to_unified.py \
  --source-root /data/jiajunxu/RABench/vlabench_manipulation/vlabench_train/libero \
  --output-root ./output \
  --max-episodes 5
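The output paths follow the zero-padded naming shown in the Output Layout section. A sketch of a path builder matching that convention (the function name is hypothetical, not an identifier from convert_libero_to_unified.py):

```python
from pathlib import Path


def episode_output_path(output_root, split, task_index, episode_index):
    """Build the output path for one converted episode, e.g.
    output/train/task_001/episode_000123.hdf5 (3-digit task,
    6-digit episode, zero-padded)."""
    return (
        Path(output_root)
        / split
        / f"task_{task_index:03d}"
        / f"episode_{episode_index:06d}.hdf5"
    )
```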

Validate generated files:

conda run -n data_process python validate_unified_dataset.py \
  --dataset-root ./output \
  --max-files 5
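One check a validator of this kind typically performs is that every per-step array in an episode shares the same leading dimension (decoded video frames must align with the numeric rows). A minimal sketch of such a check, assuming the episode's datasets have been loaded into a dict of numpy arrays (a hypothetical helper, not the actual validate_unified_dataset.py API):

```python
import numpy as np


def check_episode_lengths(arrays):
    """Verify all per-step arrays share the same leading dimension
    and return that common episode length; raise if they disagree."""
    lengths = {name: arr.shape[0] for name, arr in arrays.items()}
    if len(set(lengths.values())) != 1:
        raise ValueError(f"inconsistent episode lengths: {lengths}")
    return next(iter(lengths.values()))
```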

Notes

  • The converted format preserves LIBERO's native 2-view setup. It does not try to fake the 4-camera VLABench runtime observation contract.
  • Later, RABench can download the published converted dataset directly from your Hugging Face dataset repo during task preparation.