Philosophers-Stone: Brain-Health Inference from Single-Channel Sleep EEG

Philosophers-Stone is a lightweight inference tool that converts a single-channel overnight sleep EEG into a quantitative index of brain health.
It applies a validated multi-cohort deep-learning model trained on 36,000 sleep recordings to estimate cognitive performance, disease likelihoods, and mortality-related physiological patterns.
The tool runs in seconds and outputs both a single Brain Health Score and a 1024-dimensional latent embedding suitable for research and biomarker discovery.

Code repository: https://github.com/wgbrain/Philosophers-Stone

Scientific study

If you use or reference this tool, please cite the peer-reviewed study:

Ganglberger, W., Sun, H., Turley, N., et al., and Westover, M.B. (2026). "Brain Health from Sleep EEG: A Multicohort, Deep Learning Biomarker for Cognition, Disease, and Mortality." NEJM AI, 3(3). DOI: 10.1056/AIoa2500487.



Intended use

This model is intended for research use in sleep science, neurology, aging, and biomarker discovery.
It is not intended for clinical diagnosis, treatment decisions, or emergency use.

Who is this for?

  • Sleep scientists
  • Neurologists and dementia researchers
  • Aging and cognitive-decline investigators
  • Psychiatry researchers
  • Data scientists working with physiological signals
  • Clinical-trial teams exploring EEG-based biomarkers

What you get

  • Brain Health Score (single interpretable metric)
  • 1ร—1024 latent brain-health embedding (AI-derived sleep features)
  • Predictions for cognition, disease risk, and mortality-related physiology
  • Optional outputs: spectrograms and per-recording JSON summaries

Model provenance

This checkpoint implements the multi-task deep-learning framework described in:

Ganglberger, W., Sun, H., Turley, N., et al., and Westover, M.B. (2026). "Brain Health from Sleep EEG: A Multicohort, Deep Learning Biomarker for Cognition, Disease, and Mortality." NEJM AI, 3(3). DOI: 10.1056/AIoa2500487.


Requirements

  • Python >= 3.10
  • PyTorch 2.x (CUDA recommended)
  • pandas, numpy, mne (for EDF), h5py, matplotlib, tqdm, psutil

Install dependencies:

pip install torch pandas numpy mne h5py matplotlib tqdm psutil
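
Before running inference, it can help to confirm that all dependencies resolve in your environment. This is a minimal stdlib-only check (not part of the Philosophers-Stone codebase) that reports any packages still missing:

```python
# Check that each required package can be found without importing it.
import importlib.util

required = ["torch", "pandas", "numpy", "mne", "h5py", "matplotlib", "tqdm", "psutil"]
missing = [pkg for pkg in required if importlib.util.find_spec(pkg) is None]

if missing:
    print("Missing packages:", ", ".join(missing))
else:
    print("All dependencies found.")
```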

Model file

This Hugging Face repository hosts the checkpoint file used by the Philosophers-Stone codebase.
The current GitHub project auto-downloads it from this model repo when the local file is missing.


Inputs

Manifest CSV

A CSV with columns:

  • filepath
  • age (years)
  • sex (0=female, 1=male)
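
A manifest with these three columns can be written with the stdlib csv module; the file paths and demographic values below are placeholders, but the column names and the sex encoding (0=female, 1=male) follow the spec above:

```python
# Build a minimal manifest CSV for Philosophers-Stone.
import csv

rows = [
    {"filepath": "/data/eeg/subject01.h5", "age": 63, "sex": 0},   # 0 = female
    {"filepath": "/data/eeg/subject02.edf", "age": 71, "sex": 1},  # 1 = male
]

with open("phi_manifest.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["filepath", "age", "sex"])
    writer.writeheader()
    writer.writerows(rows)
```

Note that for HDF5 inputs the manifest must use absolute paths (see the format requirements below).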

EEG file requirements

Philosophers-Stone accepts single-channel overnight EEG in HDF5 (.h5) or EDF (.edf) format.
Preferred channel: C4-M1.

Format requirements

HDF5 (.h5)

  • Dataset: signals/c4-m1 (1-D float array, full night)
  • Attributes: sampling_rate=200, unit_voltage="uV"
  • Extra channels and annotations are ignored
  • Manifest uses absolute paths

EDF (.edf)

  • Must contain a C4-M1 channel (label variants allowed)
  • Any sampling rate is accepted; signals are auto-resampled to 200 Hz with anti-aliasing
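
For recordings you package yourself, the HDF5 layout above can be produced with h5py. This sketch uses a synthetic placeholder signal; substitute your own C4-M1 trace in microvolts:

```python
# Write a single-channel recording in the expected HDF5 layout:
# dataset "signals/c4-m1" with sampling_rate and unit_voltage attributes.
import h5py
import numpy as np

fs = 200  # required sampling rate in Hz
# Placeholder: ~8 hours of synthetic "EEG" in microvolts.
signal = np.random.randn(fs * 60 * 60 * 8).astype(np.float32)

with h5py.File("subject01.h5", "w") as f:
    ds = f.create_dataset("signals/c4-m1", data=signal)
    ds.attrs["sampling_rate"] = fs
    ds.attrs["unit_voltage"] = "uV"
```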

Sample full-night EEG data is included in the GitHub repository under ./sample-data/.


Quick start

python philosopher.py \
  --manifest_csv phi_manifest.csv

Outputs

  • Summary CSV (phi_out/phi_results.csv)
  • Brain Health Score
  • Disease and cognition-related outputs
  • Latent embedding (lhl_1 to lhl_1024)
  • Optional JSON files under phi_out/json/
  • Optional spectrograms under phi_out/figures/

Limitations

  • The model is designed for single-channel overnight sleep EEG, primarily C4-M1.
  • Performance may degrade when acquisition hardware, montages, preprocessing, or cohort characteristics differ from the development setting.
  • Outputs should be interpreted as research biomarkers, not as standalone medical conclusions.

Performance tips

  • Use a GPU if available
  • Keep batch_size=1

License

CC BY-NC 4.0 (Attribution-NonCommercial 4.0 International).
See the GitHub repository LICENSE file for details.
