# SpikeProphecy / Steinmetz 2019 (processed)
Processed spike-count tensors, binned at 50 ms, derived from the publicly released Steinmetz 2019 dataset and used as the primary substrate of the SpikeProphecy benchmark (NeurIPS 2026 Datasets & Benchmarks track).
This dataset is a deterministic preprocessing of the publicly released Steinmetz 2019 recordings. The source data (raw spike times, behavioral covariates, NWB files) remain at their canonical Figshare home and are not redistributed here. We share only the binned integer-count tensors plus the session metadata that the benchmark depends on.
## Files
| File | Shape | Dtype | Notes |
|---|---|---|---|
| `session_NNN.npy` (×39) | `[n_units, n_bins]` | `uint8` | One file per session. `n_units` ranges 228–840; `n_bins` depends on recording length at 50 ms bin width. Counts are clipped to 255 before storage; the maximum observed value across all sessions is 43, so no clipping occurs in practice. |
| `metadata.json` | — | — | Per-session metadata: `num_units`, `num_bins`, `duration_s`, `split_boundaries` (`train_end`, `val_end`), and per-unit `brain_regions` (Allen CCF acronyms). Top-level: `m_max=1240`, `bin_width_ms=50`, `history_bins=10`. |
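For orientation, here is a minimal sketch of reading the per-unit region labels. It assumes `metadata.json` exposes a `sessions` list with the fields named in the table (the same indexing the Use snippet below relies on); it is illustrative, not part of the released tooling.

```python
import json
from collections import Counter
from pathlib import Path

# Assumes metadata.json is in the working directory (or substitute the
# snapshot path from the Use section below).
meta = json.loads(Path("metadata.json").read_text())
regions = meta["sessions"][0]["brain_regions"]  # one Allen CCF acronym per unit
print(Counter(regions).most_common(5))          # five most frequent regions
```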
## Provenance
Source data: Steinmetz et al. (2019), *Distributed coding of choice, action and engagement across the mouse brain*, Nature 576:266–273. Licensed CC-BY-4.0. https://doi.org/10.6084/m9.figshare.9598406.v2
This dataset is the output of running the SpikeProphecy preprocessing pipeline (the `scripts/run_ibl_cache.py` equivalent for Steinmetz NWB files; see the code repo) over the public NWB files. The pipeline is deterministic and seedless (a minimal sketch follows the list):

- Read NWB spike times.
- Bin at Δt = 50 ms into integer count vectors (cast to `uint8`).
- Drop near-silent units with a mean firing rate below 0.1 Hz.
- Emit per-session `[n_units, n_bins]` tensors and 70/15/15 train/val/test temporal-split boundaries in the metadata.
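For concreteness, here is a minimal sketch of the binning and filtering steps; it is not the released pipeline, and the function and argument names are ours.

```python
import numpy as np

def bin_session(spike_times_per_unit, duration_s,
                bin_width_ms=50.0, min_rate_hz=0.1):
    """Illustrative binning/filtering, mirroring the steps listed above.

    spike_times_per_unit: one 1-D array of spike times (seconds) per unit.
    """
    n_bins = int(duration_s * 1000.0 / bin_width_ms)
    edges = np.arange(n_bins + 1) * (bin_width_ms / 1000.0)
    # Count spikes per 50 ms bin for each unit -> [n_units, n_bins].
    counts = np.stack([np.histogram(t, bins=edges)[0]
                       for t in spike_times_per_unit])
    # Drop near-silent units below the mean-rate threshold.
    rates = counts.sum(axis=1) / duration_s
    counts = counts[rates >= min_rate_hz]
    # Clip to the uint8 range before casting (a no-op in practice,
    # since the max observed count is 43).
    return np.clip(counts, 0, 255).astype(np.uint8)
```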
The processing code (and a leakage-audit suite verifying the splits) is released in the SpikeProphecy benchmark code repo; see the paper for the URL.
## Splits
We use a 70/15/15 train/val/test temporal split within each session: ordered first/middle/last in time. The split boundaries are encoded in `metadata.json` under each session's `split_boundaries` key as raw bin indices, not pre-sliced arrays. Random or interleaved splits would introduce trivial leakage for autoregressive forecasting, because adjacent 50 ms bins are temporally correlated.
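The boundary arithmetic is simple enough to restate. A hedged sketch follows; the exact rounding is an implementation detail of the released pipeline, so always prefer the values stored in `metadata.json`.

```python
def temporal_split_boundaries(n_bins, train_frac=0.70, val_frac=0.15):
    # Illustrative 70/15/15 boundary computation; the authoritative values
    # are the ones stored under split_boundaries in metadata.json.
    train_end = int(n_bins * train_frac)
    val_end = int(n_bins * (train_frac + val_frac))
    return {"train_end": train_end, "val_end": val_end}
```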
A 14-test automated leakage-audit suite (released with the code) verifies five concrete leakage vectors against this split: train/test bin overlap, sliding-window boundary crossing, cross-session spillover, normalization using future statistics, and history-feature leakage.
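As a flavor of what the suite checks, here is a minimal sketch of the simplest vector (train/test bin overlap); the released tests are more thorough, and this helper is ours, not theirs.

```python
def audit_split_partition(meta):
    # Each session's boundaries must strictly partition the bin axis,
    # so the train/val/test index ranges cannot overlap.
    for s in meta["sessions"]:
        sb = s["split_boundaries"]
        assert 0 < sb["train_end"] < sb["val_end"] < s["num_bins"]
```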
## Use
```python
import json
from pathlib import Path

import numpy as np
from huggingface_hub import snapshot_download

local = Path(snapshot_download(repo_id="mysteriousauthor/spikeprophecy-steinmetz",
                               repo_type="dataset"))
meta = json.loads((local / "metadata.json").read_text())
print(meta["num_sessions"], "sessions, M_max =", meta["m_max"])

# Session 4 (n=703 units, used as the canonical near-median session
# in the paper's Figure 1 small multiples)
counts = np.load(local / "session_004.npy")  # [703, 60887], uint8
sb = meta["sessions"][4]["split_boundaries"]
train = counts[:, :sb["train_end"]]
val   = counts[:, sb["train_end"]:sb["val_end"]]
test  = counts[:, sb["val_end"]:]
```
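Since sessions are plain `.npy` files, you can also memory-map them so that slicing out a split does not load the full tensor. This is standard NumPy behavior, not benchmark-specific:

```python
counts = np.load(local / "session_004.npy", mmap_mode="r")  # lazy, read-only view
test = np.array(counts[:, sb["val_end"]:])                  # copy only the test split into RAM
```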
## Citation
If you use this processed dataset, please cite both the original Steinmetz paper and the SpikeProphecy benchmark:
```bibtex
@article{steinmetz2019distributed,
  title   = {Distributed coding of choice, action and engagement
             across the mouse brain},
  author  = {Steinmetz, Nicholas A. and Zatka-Haas, Peter and
             Carandini, Matteo and Harris, Kenneth D.},
  journal = {Nature},
  volume  = {576},
  pages   = {266--273},
  year    = {2019},
  doi     = {10.1038/s41586-019-1787-x}
}

@inproceedings{spikeprophecy2026,
  title     = {SpikeProphecy: A Large-Scale Benchmark for
               Autoregressive Neural Population Forecasting},
  author    = {Anonymous},
  booktitle = {NeurIPS 2026 Datasets and Benchmarks Track},
  year      = {2026}
}
```
## Croissant metadata
A Croissant 1.0 JSON-LD metadata file is included as `croissant.json` at the root of this dataset. It declares the file inventory, the `sessions` RecordSet (per-session metadata extracted from `metadata.json` via JSONPath), the dataset license, and Responsible-AI fields (`rai:dataCollection`, `rai:dataPreprocessingProtocol`, `rai:dataAnnotationProtocol`, `rai:dataReleaseMaintenancePlan`, `rai:personalSensitiveInformation`). The file validates with the `mlcroissant` reference implementation and supports live record iteration:
```python
import mlcroissant as mlc

ds = mlc.Dataset(jsonld="https://huggingface.co/datasets/"
                        "mysteriousauthor/spikeprophecy-steinmetz/"
                        "resolve/main/croissant.json")
for record in ds.records("sessions"):
    print(record["sessions/index"], record["sessions/num_units"])
```
## License
CC-BY-4.0 (matching the source dataset's license).