ESA BepiColombo Observations
Credit: NASA/Johns Hopkins APL/Carnegie Institution of Washington
Part of a dataset collection on Hugging Face.
Dataset description
Complete observation metadata catalog from the ESA/JAXA BepiColombo mission to Mercury.
BepiColombo is a joint ESA/JAXA mission to Mercury, launched in October 2018. It consists of two orbiters: the Mercury Planetary Orbiter (MPO) and the Mercury Magnetospheric Orbiter (Mio/MMO). After a roughly seven-year cruise with gravity assists at Earth, Venus (twice), and Mercury (six times), it will enter Mercury orbit in late 2025. During the cruise, the instruments have been collecting calibration and flyby science data.
Key instruments include MORE (radio science for gravity), MPO-MAG (magnetometer), SIXS (solar X-ray/particle spectrometer), BERM (radiation monitor), MIXS (X-ray spectrometer), SERENA (neutral/ion analyzer), PHEBUS (UV spectrometer), MCAM (monitoring cameras), MGNS (gamma/neutron spectrometer), MERTIS (thermal IR), and BELA (laser altimeter).
The cruise-phase data is scientifically valuable in its own right. MPO-MAG has mapped the interplanetary magnetic field along the spacecraft's trajectory, including measurements during the Venus and Mercury flybys that provide unique geometry for studying planetary magnetospheres. MORE has conducted superior solar conjunction experiments to test general relativity. SIXS and BERM have monitored the solar particle environment, building a multi-year record of solar energetic particle events and cosmic-ray flux variations along the spacecraft's inner-heliosphere trajectory. The Mercury flyby observations from MERTIS, MIXS, and MGNS provide early science returns and instrument calibration data ahead of orbital operations.
Each row represents one observation or data granule from the ESA Planetary Science Archive (PSA), conforming to the EPN-TAP standard, with timing, spatial coverage, instrument parameters, and access URLs.
Instruments
- MORE: 89,038 observations
- MPO-MAG: 40,804 observations
- SIXS: 9,642 observations
- BERM: 9,063 observations
- MIXS: 6,905 observations
- SERENA: 5,775 observations
- PHEBUS: 5,553 observations
- MCAM: 5,428 observations
- MGNS: 3,588 observations
- MERTIS: 838 observations
- BELA: 36 observations
Quick stats
- 176,670 total observations
- 11 instruments
- 3 distinct targets
- Time span: begins at JD 2458411.5 (20 October 2018, launch day); the reported upper bound of 0.0 reflects granules whose time_max is unset
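The time_min and time_max columns are Julian Dates (stored as float64). A minimal sketch of converting a Julian Date to a UTC calendar date in pure Python, using the Unix-epoch Julian Date 2440587.5 as the reference point (leap seconds are ignored):

```python
from datetime import datetime, timedelta, timezone

# Julian Date of the Unix epoch, 1970-01-01 00:00 UTC
UNIX_EPOCH_JD = 2440587.5

def jd_to_datetime(jd: float) -> datetime:
    """Convert a Julian Date to a UTC datetime (leap seconds ignored)."""
    return datetime(1970, 1, 1, tzinfo=timezone.utc) + timedelta(days=jd - UNIX_EPOCH_JD)

# The catalog's earliest time_min, JD 2458411.5, falls on launch day:
print(jd_to_datetime(2458411.5))  # 2018-10-20 00:00:00+00:00
```

Granules with time_min or time_max equal to 0.0 should be treated as missing rather than converted, since JD 0.0 corresponds to 4713 BC.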
Usage
from datasets import load_dataset

ds = load_dataset("juliensimon/esa-bepicolombo-observations", split="train")
df = ds.to_pandas()

# Observations per instrument
print(df["instrument_name"].value_counts())

# MORE radio science observations
more = df[df["instrument_name"] == "MORE"]
print(f"{len(more):,} MORE observations")

# Timeline of observations (time_min is a Julian Date; drop unset 0.0 values
# before deriving the year, or they map to nonsense dates)
import matplotlib.pyplot as plt

valid = df[df["time_min"] > 0].copy()
valid["year"] = ((valid["time_min"] - 2451545.0) / 365.25 + 2000).astype(int)
valid.groupby(["year", "instrument_name"]).size().unstack().plot(kind="bar", stacked=True)
plt.title("BepiColombo observations per year")
plt.ylabel("Count")
plt.show()
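For precise timestamps rather than approximate years, the Julian Date columns can be converted to pandas datetimes via the Unix-epoch offset. A sketch on synthetic rows that mimic the catalog's schema (the values below are illustrative, not real granules):

```python
import pandas as pd

# Synthetic rows mimicking the catalog schema (illustrative values only)
df = pd.DataFrame({
    "instrument_name": ["MORE", "MPO-MAG", "SIXS"],
    "time_min": [2458411.5, 2459000.0, 0.0],   # Julian Dates; 0.0 = unset
    "time_max": [2458412.5, 2459001.0, 0.0],
})

# Keep only granules with a usable start time
valid = df[df["time_min"] > 0].copy()

# Julian Date -> pandas Timestamp via the Unix epoch (JD 2440587.5)
for col in ("time_min", "time_max"):
    valid[col + "_utc"] = pd.to_datetime(
        (valid[col] - 2440587.5) * 86400, unit="s", utc=True
    )

print(valid[["instrument_name", "time_min_utc", "time_max_utc"]])
```

The same conversion applies unchanged to the real DataFrame loaded in the Usage snippet above.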
Data source
ESA Planetary Science Archive -- EPN-TAP service at
https://psa.esa.int/psa-tap/tap/sync.
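Since the upstream source is a standard TAP service, it can also be queried live with pyvo. This is a sketch, not the dataset's build pipeline; the table name "bc.epn_core" is an assumption (EPN-TAP services conventionally expose one epn_core table per mission), so confirm it against the service's table listing first:

```python
# ADQL query against the PSA EPN-TAP service. "bc.epn_core" is an
# assumed table name -- verify it via the service's /tables endpoint.
ADQL = (
    "SELECT TOP 10 granule_uid, instrument_name, time_min, time_max "
    "FROM bc.epn_core "
    "WHERE instrument_name = 'MORE'"
)

def main():
    import pyvo  # pip install pyvo
    service = pyvo.dal.TAPService("https://psa.esa.int/psa-tap/tap")
    results = service.search(ADQL)  # synchronous TAP query
    print(results.to_table())

if __name__ == "__main__":
    main()  # requires network access
```

A live query like this returns the freshest records, whereas the Hugging Face snapshot is refreshed on the weekly schedule noted below.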
Update schedule
Weekly (Monday at 09:30 UTC)
Related datasets
- juliensimon/esa-mars-express-observations
- juliensimon/esa-exomars-tgo-observations
- juliensimon/esa-venus-express-observations
Citation
@dataset{esa_bepicolombo_observations,
title = {ESA BepiColombo Observations},
author = {juliensimon},
year = {2026},
url = {https://huggingface.co/datasets/juliensimon/esa-bepicolombo-observations},
publisher = {Hugging Face}
}
License