Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed
Error code:   DatasetGenerationError
Exception:    ArrowNotImplementedError
Message:      Cannot write struct type 'rename_map' with no child field to Parquet. Consider adding a dummy child field.
Traceback:    Traceback (most recent call last):
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1831, in _prepare_split_single
                  writer.write_table(table)
                File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 712, in write_table
                  self._build_writer(inferred_schema=pa_table.schema)
                File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 757, in _build_writer
                  self.pa_writer = pq.ParquetWriter(
                                   ^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/pyarrow/parquet/core.py", line 1070, in __init__
                  self.writer = _parquet.ParquetWriter(
                                ^^^^^^^^^^^^^^^^^^^^^^^
                File "pyarrow/_parquet.pyx", line 2363, in pyarrow._parquet.ParquetWriter.__cinit__
                File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
              pyarrow.lib.ArrowNotImplementedError: Cannot write struct type 'rename_map' with no child field to Parquet. Consider adding a dummy child field.
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1847, in _prepare_split_single
                  num_examples, num_bytes = writer.finalize()
                                            ^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 731, in finalize
                  self._build_writer(self.schema)
                File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 757, in _build_writer
                  self.pa_writer = pq.ParquetWriter(
                                   ^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/pyarrow/parquet/core.py", line 1070, in __init__
                  self.writer = _parquet.ParquetWriter(
                                ^^^^^^^^^^^^^^^^^^^^^^^
                File "pyarrow/_parquet.pyx", line 2363, in pyarrow._parquet.ParquetWriter.__cinit__
                File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
              pyarrow.lib.ArrowNotImplementedError: Cannot write struct type 'rename_map' with no child field to Parquet. Consider adding a dummy child field.
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1450, in compute_config_parquet_and_info_response
                  parquet_operations, partial, estimated_dataset_info = stream_convert_to_parquet(
                                                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 993, in stream_convert_to_parquet
                  builder._prepare_split(
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1702, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1858, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset


| Column | Type | Value (preview row) |
|---|---|---|
| dataset | dict | { "repo_id": "tinkhireeva/so101_pnp_merged", "root": null, "episodes": null, "image_transforms": { "enable": false, "max_num_transforms": 3, "random_order": false, "tfs": { "brightness": { "weight": 1, "type": "ColorJitter", "kwargs": { "brightness": [ ... |
| env | null | null |
| policy | dict | { "type": "pi05", "n_obs_steps": 1, "input_features": { "observation.state": { "type": "STATE", "shape": [ 6 ] }, "observation.images.wrist": { "type": "VISUAL", "shape": [ 3, 480, 640 ] }, "observation.images.up": { "ty... |
| output_dir | string | outputs/so101_pnp_merged_pi05 |
| job_name | string | so101_pnp_merged_pi05 |
| resume | bool | false |
| seed | int64 | 1000 |
| num_workers | int64 | 4 |
| batch_size | int64 | 64 |
| steps | int64 | 20000 |
| eval_freq | int64 | 20000 |
| log_freq | int64 | 200 |
| save_checkpoint | bool | true |
| save_freq | int64 | 1000 |
| use_policy_training_preset | bool | true |
| optimizer | dict | { "type": "adamw", "lr": 0.000025, "weight_decay": 0.01, "grad_clip_norm": 1, "betas": [ 0.9, 0.95 ], "eps": 1e-8 } |
| scheduler | dict | { "type": "cosine_decay_with_warmup", "num_warmup_steps": 1000, "num_decay_steps": 30000, "peak_lr": 0.000025, "decay_lr": 0.0000025 } |
| eval | dict | { "n_episodes": 50, "batch_size": 50, "use_async_envs": false } |
| wandb | dict | { "enable": false, "disable_artifact": false, "project": "lerobot", "entity": null, "notes": null, "run_id": null, "mode": null } |
| checkpoint_path | null | null |
| rename_map | dict | {} |

No dataset card yet

Downloads last month: 6