
SleepTransformer-Phan Embeddings

Pre-extracted contextualized per-epoch embeddings from SleepTransformer (Phan et al. 2022), trained on SHHS via PhysioEx.

Each subject directory contains:

  • embeddings.npy: (n_epochs, 128) contextualized epoch embeddings (bfloat16)
  • labels.npy: (n_epochs,) AASM sleep stage labels

Usage

from physioex.models import load_embeddings
path = load_embeddings("sleeptransformer-phan", "hmc", verbose=True)
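Once downloaded, each subject directory holds a matched pair of arrays as described above. A minimal sketch of consuming them — using synthetic float32 stand-ins written to a temporary directory, since reading the actual bfloat16 .npy files additionally requires the ml_dtypes package:

```python
import os
import tempfile
import numpy as np

# Synthetic stand-in for one subject directory: 100 epochs, 128-dim embeddings.
subject_dir = tempfile.mkdtemp()
np.save(os.path.join(subject_dir, "embeddings.npy"),
        np.random.rand(100, 128).astype(np.float32))
np.save(os.path.join(subject_dir, "labels.npy"),
        np.random.randint(0, 5, size=100))

emb = np.load(os.path.join(subject_dir, "embeddings.npy"))    # (n_epochs, 128)
labels = np.load(os.path.join(subject_dir, "labels.npy"))     # (n_epochs,)
assert emb.shape[0] == labels.shape[0]                        # one label per epoch
```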

Linear Probe Results (5-fold subject-wise CV)

Dataset Subjects ACC MF1 κ F1-W F1-N1 F1-N2 F1-N3 F1-REM
shhs_visit1 5793 0.8861 0.8197 0.8380 0.92 0.51 0.90 0.85 0.91
shhs_visit2 2651 0.8824 0.8068 0.8336 0.92 0.47 0.90 0.85 0.90
dcsm 255 0.8767 0.7855 0.8202 0.95 0.45 0.84 0.83 0.85
mass_ss05 26 0.8750 0.8138 0.8183 0.85 0.55 0.90 0.87 0.89
mass_ss02 19 0.8714 0.8065 0.8111 0.83 0.55 0.92 0.87 0.87
mass_ss04 40 0.8638 0.8124 0.8048 0.85 0.58 0.91 0.85 0.88
mass_ss03 62 0.8596 0.8063 0.7918 0.86 0.56 0.90 0.82 0.89
sleepedf 153 0.8394 0.7828 0.7770 0.93 0.50 0.86 0.81 0.82
mass_ss01 53 0.8234 0.7741 0.7503 0.91 0.53 0.87 0.70 0.87
stages_GSDV 232 0.8167 0.6701 0.7015 0.84 0.24 0.87 0.58 0.82
hpap 247 0.8150 0.7755 0.7476 0.88 0.48 0.84 0.80 0.87
stages_GSBB 30 0.8124 0.7168 0.7149 0.90 0.37 0.85 0.66 0.81
stages_MSMI 63 0.7974 0.7276 0.6967 0.83 0.37 0.85 0.75 0.84
stages_GSSW 105 0.7908 0.6312 0.6617 0.79 0.18 0.86 0.50 0.83
stages_GSLH 45 0.7902 0.6714 0.6688 0.85 0.36 0.85 0.54 0.77
stages_MSQW 153 0.7839 0.7064 0.6817 0.82 0.47 0.85 0.56 0.83
hmc 151 0.7836 0.7532 0.7141 0.84 0.46 0.80 0.83 0.83
mesa 2056 0.7772 0.6932 0.6810 0.84 0.40 0.82 0.61 0.79
stages_GSSA 26 0.7767 0.5691 0.6179 0.75 0.10 0.84 0.38 0.77
stages_STLK 158 0.7737 0.6601 0.6563 0.80 0.33 0.83 0.52 0.82
stages_STNF 460 0.6409 0.5803 0.5182 0.71 0.18 0.65 0.74 0.61
stages_MSTR 285 0.5862 0.4573 0.3601 0.51 0.05 0.69 0.52 0.51
stages_MSNF 38 0.5448 0.4035 0.2983 0.55 0.10 0.66 0.38 0.33
stages_BOGN 85 0.5311 0.4027 0.2819 0.49 0.14 0.64 0.34 0.39
stages_MSTH 31 0.4917 0.2057 0.0492 0.22 0.00 0.66 0.09 0.06
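The ACC, MF1, and κ columns above can be recomputed from the confusion matrices stored alongside the results. A sketch of the standard formulas (rows taken as true class, columns as predicted class — an assumption about the stored matrix orientation), using numpy only:

```python
import numpy as np

def metrics_from_confusion(cm):
    """Accuracy, macro-F1, and Cohen's kappa from a (C, C) confusion matrix
    with rows = true class, columns = predicted class."""
    cm = np.asarray(cm, dtype=np.float64)
    n = cm.sum()
    tp = np.diag(cm)
    accuracy = tp.sum() / n
    # Per-class precision/recall, guarding against empty classes.
    col = cm.sum(axis=0)
    row = cm.sum(axis=1)
    precision = np.divide(tp, col, out=np.zeros_like(tp), where=col > 0)
    recall = np.divide(tp, row, out=np.zeros_like(tp), where=row > 0)
    f1 = np.divide(2 * precision * recall, precision + recall,
                   out=np.zeros_like(tp), where=(precision + recall) > 0)
    macro_f1 = f1.mean()
    # Cohen's kappa: observed agreement corrected for chance agreement.
    pe = (col * row).sum() / n ** 2
    kappa = (accuracy - pe) / (1 - pe)
    return accuracy, macro_f1, kappa
```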

Model Details

  • Architecture: SleepTransformer (Phan et al. 2022) — epoch transformer + sequence transformer
  • Training data: SHHS visit 1 (5793 subjects, single EEG channel)
  • Pipeline: seqsleepnet (STFT spectrogram, T=29, F=129)
  • Sequence length: L=21 epochs
  • Embedding dim: 128
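A linear probe on these embeddings reduces, per fold, to fitting a multinomial logistic regression on the 128-d vectors against the 5 stage labels. A sketch on synthetic data — scikit-learn here is a stand-in, not the actual training setup (the stored probe_config suggests a mini-batch gradient-descent probe):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_classes, dim = 5, 128  # 5 AASM stages, 128-d embeddings

# Synthetic embeddings: each class clustered around its own random center.
centers = rng.normal(size=(n_classes, dim))
y = rng.integers(0, n_classes, size=500)
X = centers[y] + 0.1 * rng.normal(size=(500, dim))

probe = LogisticRegression(max_iter=1000).fit(X, y)
train_acc = probe.score(X, y)  # near-perfect on well-separated clusters
```

In the real evaluation, folds are split by subject (all epochs of a subject stay in one fold), which is what the subject-wise CV in the table refers to.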


Citations

@article{gagliardi2025physioex,
    author={Gagliardi, Guido and Alfeo, Luca and Cimino, Mario G C A and Valenza, Gaetano and De Vos, Maarten},
    title={PhysioEx, a new Python library for explainable sleep staging through deep learning},
    journal={Physiological Measurement},
    url={http://iopscience.iop.org/article/10.1088/1361-6579/adaf73},
    year={2025},
}

@article{phan2022sleeptransformer,
    title={SleepTransformer: Automatic Sleep Staging with Interpretability and Uncertainty Quantification},
    author={Phan, Huy and Mikkelsen, Kaare and Ch\'en, Oliver Y and Koch, Philipp and Mertins, Alfred and De Vos, Maarten},
    journal={IEEE Transactions on Biomedical Engineering},
    year={2022},
}