
OSM Planet Torrent - Geographic Shards

Part of the Monster OSM Quest project.

Dataset Info

  • Source: OpenStreetMap Planet via BitTorrent
  • Sharding: Monster Group (71×59×47)
  • Format: PBF/Parquet
  • License: ODbL (OpenStreetMap)
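The 71×59×47 grid is not arbitrary: 47, 59, and 71 are the three largest primes dividing the order of the Monster group, and their product is 196,883, the dimension of the Monster's smallest faithful complex representation. A quick check (which dimension maps to lat, lon, and size is an assumption here):

```python
# The three largest prime factors of the Monster group's order,
# used as the sharding grid dimensions. The assignment of each
# prime to lat/lon/size below is an assumption, not documented.
LAT_BLOCKS, LON_BLOCKS, SIZE_BLOCKS = 71, 59, 47

total_shards = LAT_BLOCKS * LON_BLOCKS * SIZE_BLOCKS
print(total_shards)  # 196883, the dimension of the Monster's
                     # smallest faithful complex representation
```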

Monster Symmetries

  • Input: [71, 59, 47] (Keter/Binah/Chokmah)
  • Output: [17, 23, 59] (Cusp/Consciousness/Memory)
  • Invariants: geographic, torrent, Monster-Group, OSM

Usage

from datasets import load_dataset

# Downloads and caches the full dataset locally
dataset = load_dataset("introspector/osm-planet-geo_shards")

Parent Project

https://github.com/meta-introspector/osm-planet-torrent
