Cognitive Fatigue Detector

Two-stage transfer learning pipeline for real-time fatigue detection from eye-tracking signals.

Architecture

SharedEncoder: Linear(in→128) → LayerNorm → GELU → Dropout → Linear(128→128) → LayerNorm → GELU → Dropout → Linear(128→64) → LayerNorm

  • Stage 1 – GazeBase pretraining: regression on a continuous fatigue score (881 subjects, 12,334 recordings, 1000 Hz)
  • Stage 2 – SEED-VIG LOSO: binary alert/drowsy classification (12 subjects, 4,566 EEG windows, 200 Hz)
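The encoder described above can be sketched in PyTorch as follows. The layer sequence matches the model card; the dropout rate and the shapes of the two task heads are assumptions for illustration, not the repo's exact code.

```python
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Encoder matching the layer sequence in the card (dropout rate assumed)."""
    def __init__(self, in_dim: int = 16, p_drop: float = 0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.LayerNorm(128), nn.GELU(), nn.Dropout(p_drop),
            nn.Linear(128, 128),    nn.LayerNorm(128), nn.GELU(), nn.Dropout(p_drop),
            nn.Linear(128, 64),     nn.LayerNorm(64),
        )

    def forward(self, x):
        return self.net(x)

# Hypothetical task heads for the two transfer stages:
regression_head = nn.Linear(64, 1)   # Stage 1: continuous fatigue score
classifier_head = nn.Linear(64, 2)   # Stage 2: alert vs. drowsy

encoder = SharedEncoder(in_dim=16)
z = encoder(torch.randn(8, 16))      # batch of 8 feature vectors -> (8, 64)
```

In a two-stage setup like this, the encoder weights are typically pretrained with the regression head in Stage 1, then fine-tuned with the classification head in Stage 2.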

Features (16 clean oculomotor biomarkers)

Group             Features
Blink             blink_rate_pm, blink_count, mean_blink_ms
Fixation          num_fixations, mean_fix_ms, std_fix_ms
Saccade           num_saccades, mean_sac_ms, mean_sac_vel, peak_sac_vel, vel_std
Pupil             mean_pupil, std_pupil, pupil_range
Gaze dispersion   gaze_x_std, gaze_y_std
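As an illustration of the pupil and gaze-dispersion groups, the features below are straightforward summary statistics over a recording window. The function name and exact definitions are assumptions for this sketch, not the repo's feature-extraction code.

```python
import numpy as np

def pupil_and_gaze_features(pupil: np.ndarray,
                            gaze_x: np.ndarray,
                            gaze_y: np.ndarray) -> dict:
    """Illustrative pupil and gaze-dispersion biomarkers (definitions assumed)."""
    return {
        "mean_pupil":  float(np.mean(pupil)),
        "std_pupil":   float(np.std(pupil)),
        "pupil_range": float(np.max(pupil) - np.min(pupil)),
        "gaze_x_std":  float(np.std(gaze_x)),   # horizontal gaze dispersion
        "gaze_y_std":  float(np.std(gaze_y)),   # vertical gaze dispersion
    }

rng = np.random.default_rng(0)
feats = pupil_and_gaze_features(rng.normal(3.5, 0.2, 1000),   # pupil diameter (mm)
                                rng.normal(0.0, 50.0, 1000),  # gaze x (px)
                                rng.normal(0.0, 30.0, 1000))  # gaze y (px)
```

The blink, fixation, and saccade groups additionally require event detection (e.g. velocity-threshold saccade segmentation) before similar statistics can be computed.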

Evaluation

Metric        All subjects (n=12)   Excl. S6 outlier (n=11)
AUC-ROC       0.866 ± 0.137         0.902
F1-macro      0.833 ± 0.110         –
GazeBase R²   -0.225                –

Validated with Leave-One-Subject-Out cross-validation (12 folds).
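The Leave-One-Subject-Out protocol can be reproduced with scikit-learn's `LeaveOneGroupOut`, where each subject forms one group. The classifier and synthetic data below are placeholders to show the fold structure, not the card's actual model or data.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(240, 16))           # 16 oculomotor features per window
groups = np.repeat(np.arange(12), 20)    # 12 subjects, 20 windows each
y = (X[:, 0] + rng.normal(scale=0.5, size=240) > 0).astype(int)  # toy labels

aucs = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    # Train on 11 subjects, evaluate on the single held-out subject
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    aucs.append(roc_auc_score(y[test_idx], clf.predict_proba(X[test_idx])[:, 1]))
# 12 folds total, one per held-out subject
```

Grouping by subject prevents windows from the same person appearing in both train and test splits, which would otherwise inflate the reported AUC.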

Usage

import torch
from huggingface_hub import hf_hub_download

# Download the pretrained checkpoint from the Hugging Face Hub
ck_path = hf_hub_download(
    repo_id="tdnathmlenthusiast/cognitive-fatigue-detector",
    filename="gaze_clean_checkpoint.pt",
)
ck = torch.load(ck_path, map_location="cpu")

# GazeModel is the encoder architecture described above, defined in this repo
model = GazeModel(ck["model_config"]["in_dim"])
model.load_state_dict(ck["model_state_dict"])
model.eval()
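Once loaded, inference reduces to a forward pass over a 16-dimensional feature vector. The snippet below uses a stand-in module in place of `GazeModel` (which is defined in the repo, not here); the 2-class output and the convention that class 1 means "drowsy" are assumptions.

```python
import torch
import torch.nn as nn

# Stand-in for GazeModel, used here only so the sketch is self-contained;
# input width (16 features) and 2-class output follow the model card.
model = nn.Sequential(nn.Linear(16, 64), nn.GELU(), nn.Linear(64, 2))
model.eval()

features = torch.randn(1, 16)            # one window of the 16 oculomotor features
with torch.no_grad():
    probs = torch.softmax(model(features), dim=-1)
drowsy = bool(probs[0, 1] > 0.5)         # class 1 assumed to be "drowsy"
```

In practice the feature vector should be normalized the same way as during training before being passed to the model.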

Datasets

  • GazeBase: 881 subjects, 12,334 recordings at 1000 Hz (Stage 1 pretraining)
  • SEED-VIG: 12 subjects, 4,566 windows at 200 Hz (Stage 2 fine-tuning)