SPE-1 — Paired patch-clamp + Neuropixels recordings (Arrow conversion)
This dataset is an Arrow / 🤗 Datasets re-packaging of the SPE-1 ground-truth electrophysiology dataset by Marques-Smith et al. (2018), preprocessed and split into per-cell configurations for use with the spike-localization benchmarks of Zhao et al. (2026).
The original raw recordings (~270 GB of int16 binaries) are not included here, but can be downloaded using the included scripts if needed.
License
The original SPE-1 release is distributed under CC-BY-SA 4.0 (see Licence CC BYSA 4.0.pdf). This Arrow conversion inherits the same license.
Original sources:
- CRCNS: https://crcns.org/data-sets/methods/spe-1
- Google Drive mirror: https://drive.google.com/drive/folders/13GCOuWN4QMW6vQmlNIolUrxPy-4Wv1BC
Citation
If you use this dataset, please cite the original recordings and the benchmark methodology:
- The original publication describing the SPE-1 dataset: Marques-Smith, A., Neto, J.P., Lopes, G., Nogueira, J., Calcaterra, L., FrazΓ£o, J., Kim, D., Phillips, M., Dimitriadis, G., Kampff, A.R. (2018). Recording from the same neuron with high-density CMOS probes and patch-clamp: a ground-truth dataset and an experiment in collaboration. bioRxiv 370080; doi: https://doi.org/10.1101/370080
- The SPE-1 dataset itself: AndrΓ© Marques-Smith, Joana P. Neto, GonΓ§alo Lopes, Joana Nogueira, Lorenza Calcaterra, JoΓ£o FrazΓ£o, Danbee Kim, Matthew G. Phillips, George Dimitriadis and Adam R. Kampff (2018); Simultaneous patch-clamp and dense CMOS probe extracellular recordings from the same cortical neuron in anaesthetized rats. CRCNS.org http://dx.doi.org/10.6080/K0J67F4T
- The spike-localization benchmark: Zhao, H., Zhang, X., et al. (2026). Benchmarking spike source localization algorithms in high density probes. https://doi.org/10.1371/journal.pcbi.1014059
Cells included
12 cells from the SPE-1 release are converted:
| Cell | Notes |
|---|---|
| c5 | Longest WC-IC recording — used as the whole-cell V(t) reference |
| c14, c15, c16, c19, c24, c26, c28, c29, c37, c45, c46 | The 11 juxtacellular cells selected by Zhao et al. 2026 |
For each cell, the first 10 s (settling period) are skipped and 300 s of recording are kept.
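Together with the contiguous 3-s segments and 80/20 train/test split described below, this fixes the expected number of Arrow rows per cell; a quick back-of-the-envelope check (constants mirror values from `conversion_metadata.json`):

```python
# Expected row counts per cell, from the parameters stated in this card:
# 300 s kept per cell, contiguous 3-s segments, 80/20 train/test split.
DURATION_S = 300.0
SEGMENT_S = 3.0
TRAIN_FRACTION = 0.8

n_segments = int(DURATION_S // SEGMENT_S)   # 100 rows total
n_train = int(n_segments * TRAIN_FRACTION)  # 80 rows in train/
n_test = n_segments - n_train               # 20 rows in test/
print(n_segments, n_train, n_test)          # 100 80 20
```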
Repository layout
```
frthjf/spe1-zhao2026-benchmark
├── README.md                     (this file)
├── chanMap.mat                   2-D probe geometry (384 × 2 µm, x and y)
├── Data Summary.xlsx             per-cell ground-truth electrode (chan_predicted)
├── Data Summary.csv              (same, converted to plain-text CSV)
├── Recording Catalogue.pdf       original SPE-1 catalogue
├── Licence CC BYSA 4.0.pdf       original SPE-1 license
├── localization_benchmark.json   COM / MT / GC template-level numbers
│                                 produced by scripts/reproduce_zhao2026.py
├── scripts/                      self-contained reproduction pipeline
│   ├── prepare.sh                download (CRCNS) + run convert_to_arrow
│   ├── convert_to_arrow.py       SPE-1 raw → Arrow conversion
│   └── reproduce_zhao2026.py     COM / MT / GC baselines (Zhao et al. 2026)
└── c{ID}/                        one HF DatasetDict per cell
    ├── dataset_dict.json
    ├── conversion_metadata.json  (see schema below)
    ├── dataset_stats.json        per-channel median / IQR / abs-IQR
    ├── templates.npz             peak-aligned mean template + probe geometry
    ├── patch.npz                 ground-truth patch trace + spike times
    ├── patch_quicklook.png       sanity-check plot of patch.npz (where shipped)
    ├── train/
    │   ├── dataset_info.json
    │   ├── state.json
    │   └── data-*.arrow          ~138 MB per 3-s segment
    └── test/
        └── data-*.arrow
```
Per-cell file schemas
Arrow segments (train/, test/)
Each row = one ~3-second segment of preprocessed Neuropixels recording (bandpass 300–3000 Hz + global common-median reference, gain applied so traces are in µV).
80 % of segments go to `train`, 20 % to `test` (segments stay in temporal order, so concatenating train + test reconstructs the original recording).
| Column | Type | Shape / units | Description |
|---|---|---|---|
| `sample_id` | `string` | scalar | Zero-padded NPX sample index of the segment start |
| `cp` | `list<list<float32>>` | `[384][T]` (T ≈ 90 000 samples ≈ 3 s) | Per-channel preprocessed voltage traces (µV) |
| `cit` | `list<list<int32>>` | `[384][n_spikes_on_channel]` | Per-channel unit indices for each ground-truth spike (always 0, since there is a single patched neuron) |
| `ctt` | `list<list<float64>>` | `[384][n_spikes_on_channel]` (ms) | Spike times within the segment, in milliseconds |
Spike events are only assigned to the unit's peak channel (`unit_peak_channel`); the other 383 channels carry empty `cit`/`ctt` lists for that segment. Sampling frequency is 30 kHz.
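Given that convention, pulling the ground-truth spikes out of a row reduces to finding the one channel with a non-empty `ctt` list. A minimal sketch (`row` here is a hypothetical dict shaped like one Arrow row, not a real loaded segment):

```python
def extract_gt_spikes(row):
    """Return (peak_channel, spike_times_ms) for one ~3-s segment row.

    Only the unit's peak channel carries spikes; all other channels
    have empty cit/ctt lists, so we look for the non-empty one.
    """
    for chan, times_ms in enumerate(row["ctt"]):
        if times_ms:  # non-empty list -> this is the peak channel
            return chan, times_ms
    return None, []  # segment with no ground-truth spikes

# Toy row mimicking the schema: spikes on channel 2 only.
row = {"ctt": [[], [], [12.5, 801.0], []]}
chan, times = extract_gt_spikes(row)
print(chan, times)  # 2 [12.5, 801.0]
```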
templates.npz
Mean spike template extracted with the Zhao et al. 2026 method (1 ms before + 2 ms after the patch-clamp peak; raw mean over snippets).
```python
import numpy as np

tpl = np.load("c14/templates.npz")
tpl["templates"]               # float32 [1, 90, 384] (units x time x channels)
tpl["sampling_frequency"]      # 30000.0
tpl["probe_positions"]         # float32 [384, 2] (x, y in µm)
tpl["nbefore"], tpl["nafter"]  # 30, 60
```
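The `templates` and `probe_positions` arrays are all a center-of-mass (COM) localizer needs. A minimal sketch of the idea (not the shipped `reproduce_zhao2026.py` implementation; it expects a `[time, channels]` template such as `tpl["templates"][0]`):

```python
import numpy as np

def center_of_mass(template, probe_positions, n_channels=10):
    """Minimal center-of-mass localizer over a [time, channels] template.

    Uses peak-to-peak amplitude per channel as the weight and keeps only
    the n_channels largest-amplitude channels.
    """
    amps = template.max(axis=0) - template.min(axis=0)  # ptp per channel
    keep = np.argsort(amps)[-n_channels:]               # strongest channels
    w = amps[keep]
    return (probe_positions[keep] * w[:, None]).sum(axis=0) / w.sum()

# Toy example: 3 channels at known x/y, all signal on channel 1.
tpl = np.zeros((90, 3), dtype=np.float32)
tpl[30, 1] = -100.0  # negative spike peak on channel 1
pos = np.array([[0.0, 0.0], [10.0, 20.0], [20.0, 40.0]])
print(center_of_mass(tpl, pos, n_channels=3))  # [10. 20.]
```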
patch.npz
Authoritative patch-clamp recording for the same window used by the Arrow splits.
```python
import numpy as np

patch = np.load("c5/patch.npz", allow_pickle=False)
patch["patch_v"]        # float32 [N] patch-clamp voltage (mV)
patch["t_patch_s"]      # float64 [N] timestamps (s, NPX-aligned)
patch["spike_times_s"]  # float64 [K] GT spike times (s, NPX-aligned)
patch["patch_type"]     # 0-d str "wc-ic" / "juxta" / etc.
patch["cell_id"]        # 0-d str e.g. "c14"
```
conversion_metadata.json
```json
{
  "source_metadata": {
    "mode": "spe1_paired_recording",
    "dataset": "Marques-Smith et al. (2018) \u2014 CRCNS spe-1",
    "cell_id": "c14",
    "skip_seconds": 10.0,
    "duration_s": 300.0,
    "patch_sample_rate_hz": 50023.0,
    "preprocessing": "bandpass_300_3000Hz + global_CMR",
    "spike_source": "wc_spike_samples.npy (authoritative GT)"
  },
  "template_metadata": {
    "extraction_method": "hao2026_raw_mean_template",
    "ms_before": 1.0,
    "ms_after": 2.0,
    "nbefore": 30,
    "nafter": 60,
    "n_spikes_total": 814,
    "n_spikes_used": 814,
    "n_out_of_bounds_discarded": 0
  },
  "sampling_frequency": 30000.0,
  "num_units": 1,
  "num_channels": 384,
  "unit_peak_channel": {
    "0": 161
  },
  "unit_locations_um": {
    "0": [
      27.0,
      1600.0,
      0.0
    ]
  },
  "gt_electrode_chan_idx": 159,
  "bad_channel_ids": [
    36,
    75,
    112,
    151,
    188,
    303
  ],
  "n_spikes_total": 814,
  "n_spikes_used_for_templates": 814,
  "segment_duration_s": 3.0,
  "train_fraction": 0.8
}
```
`unit_locations_um["0"]` is the (x, y, z = 0) µm position of the ground-truth electrode (`chan_predicted` from `Data Summary.xlsx`) and serves as the regression target for all localization metrics. Channel indices listed in `bad_channel_ids` were flagged by SpikeInterface's `detect_bad_channels` on the bandpass-filtered (pre-CMR) recording and should be masked before running amplitude-weighted localizers (COM / MT / GC).
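Building that mask from the metadata is a one-liner; a sketch using a toy stand-in dict (in practice you would `json.load` the per-cell `conversion_metadata.json`):

```python
import numpy as np

# Toy stand-in for conversion_metadata.json (see schema above).
meta = {
    "num_channels": 384,
    "bad_channel_ids": [36, 75, 112, 151, 188, 303],
    "unit_locations_um": {"0": [27.0, 1600.0, 0.0]},
}

good = np.ones(meta["num_channels"], dtype=bool)
good[meta["bad_channel_ids"]] = False  # mask flagged channels

# Amplitude-weighted localizers should only see the good channels:
amps = np.random.default_rng(0).random(meta["num_channels"])
amps_masked = amps[good]

# (x, y) regression target for all localization metrics:
target_xy = np.asarray(meta["unit_locations_um"]["0"][:2])
print(good.sum(), amps_masked.size, target_xy)
```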
Quick start
Load one cell
```python
from datasets import load_dataset

ds = load_dataset("path/to/spe1", name="c14")  # -> {'train', 'test'}
row = ds["train"][0]
print(row["sample_id"], len(row["cp"]), len(row["cp"][0]))  # 384 channels x ~90k samples
```
Or directly from disk after a clone / huggingface-cli download:
```python
from datasets import load_from_disk

dsd = load_from_disk("spe1/c14")
```
Reproduce Zhao et al. 2026 baselines (template-level localization)
The self-contained `scripts/reproduce_zhao2026.py` runs Center-of-Mass, Monopolar-Triangulation, and Grid-Convolution directly on the shipped `templates.npz` files:
```bash
python scripts/reproduce_zhao2026.py                  # all 11 Zhao cells
python scripts/reproduce_zhao2026.py --cells c14 c45  # subset
python scripts/reproduce_zhao2026.py --with-spikes    # also spike-level Acc/MAE (slower)
```
Results are printed and written to `localization_benchmark.json`. Bad channels (`bad_channel_ids` in `conversion_metadata.json`) are masked automatically before localization, exactly as in the published baseline.
Reproducing the Arrow conversion from scratch
Only needed if you want to re-derive the Arrow files from the ~270 GB raw binaries (e.g. to change `duration_s`, `segment_duration_s`, or the preprocessing pipeline). The dataset ships a self-contained pipeline in `scripts/` that depends only on standard PyPI packages (`requests`, `scipy`, `pandas`, `openpyxl`, `spikeinterface`, `datasets`, `numpy`):
```bash
# 1. Register a free CRCNS account at https://crcns.org/register
# 2. From the dataset root:
CRCNS_USERNAME=<user> CRCNS_PASSWORD=<pass> bash scripts/prepare.sh
```
`prepare.sh` will:
- Download `chanMap.mat` and `Data Summary.xlsx` from CRCNS into `.raw/` (and mirror them at the dataset root so consumers don't need `.raw/`).
- Download and extract `c{ID}.tar.gz` for each requested cell into `.raw/Recordings/c{ID}/`.
- Run `scripts/convert_to_arrow.py` once per cell to produce the per-cell Arrow `train/`/`test/` splits + `templates.npz` + `conversion_metadata.json`.
Environment overrides (see prepare.sh header for full list):
| Variable | Default | Description |
|---|---|---|
| `SPE1_RAW_DIR` | `./.raw` | Where raw downloads land |
| `SPE1_OUT_DIR` | `.` (dataset root) | Where Arrow datasets are written |
| `SPE1_CELLS` | All 12 cells | Space-separated cell IDs |
| `SPE1_DURATION` | `300` | Seconds of recording per cell |
| `PYTHON` | `python3` | Python interpreter |
Provenance & preprocessing summary
- Probe: Neuropixels 1.0, 384 channels, 30 kHz, int16 raw → µV (gain 2.34375 µV/bit), 2-D probe geometry from `chanMap.mat`.
- Filtering: causal bandpass 300–3000 Hz (Butterworth) followed by global common-median reference, applied via SpikeInterface (`spre.bandpass_filter` + `spre.common_reference`).
- Bad-channel detection: `spre.detect_bad_channels` on the bandpass output (before CMR); channels listed in `bad_channel_ids` should be masked before running amplitude-weighted localizers.
- Ground-truth spikes: from `wc_spike_samples.npy` (authoritative GT shipped by SPE-1) when available, otherwise threshold detection on the patch trace (12 × MAD below median, 1 ms refractory).
- Templates: peak-aligned mean over all GT snippets, 1 ms before + 2 ms after peak, no per-spike realignment.
- Splits: contiguous 3-s segments, first 80 % → `train`, last 20 % → `test`.
Total size
~145 GB across 12 cells (≈ 13 GB / cell; c37 is ~5 GB and c29 ~12 GB because of shorter usable recording windows).