Abdelrahman Almatrooshi

# notebooks

Jupyter notebooks for interactive model training and evaluation. They mirror the CLI training scripts, adding inline visualisations of loss curves, LOPO results, and confusion matrices.

## Notebooks

| Notebook | Purpose |
| --- | --- |
| `mlp.ipynb` | MLP training with per-epoch loss/accuracy plots, LOPO evaluation, confusion matrix, and ROC curve generation |
| `xgboost.ipynb` | XGBoost training with feature importance visualisation, LOPO evaluation, per-person metrics table, and ROC curve generation |

## When to use

Use the notebooks for exploratory work and visualisation during development. For reproducible training runs with ClearML logging and checkpoint saving, use the CLI scripts (`python -m models.mlp.train` and `python -m models.xgboost.train`).

## Data dependency

Both notebooks call into `data_preparation.prepare_dataset` to load and split the dataset. Collected `.npz` files must be present under `data/collected_<participant>/`. See `data_preparation/README.md` for details on the data format and collection protocol.
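As a quick sanity check that collected files are readable, an `.npz` archive can be inspected directly with NumPy. This is a minimal sketch with a synthetic file; the array names (`features`, `labels`) are illustrative assumptions, so consult `data_preparation/README.md` for the actual schema:

```python
import numpy as np

# Create a synthetic stand-in for a collected .npz file.
# Field names here are hypothetical, not the project's real schema.
features = np.random.rand(100, 8).astype(np.float32)
labels = np.random.randint(0, 2, size=100)
np.savez("collected_example.npz", features=features, labels=labels)

# Inspect the archive: list stored arrays and check shapes.
with np.load("collected_example.npz") as data:
    print(sorted(data.files))
    print(data["features"].shape)
```

`np.load` on an `.npz` returns a lazy archive object, so individual arrays are only read when accessed.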

## Key results reproduced

The notebooks produce the same core evaluation results as the CLI scripts:

### Pooled split (70/15/15)

| Model | Accuracy | F1 | ROC-AUC |
| --- | --- | --- | --- |
| XGBoost | 95.87% | 0.959 | 0.991 |
| MLP | 92.92% | 0.929 | 0.971 |
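The three pooled-split metrics can be computed with standard scikit-learn calls. This sketch uses synthetic labels and probabilities in place of real model outputs on the 15% test split, purely to show which functions produce each column of the table:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

# Synthetic stand-in for held-out test labels and predicted probabilities;
# the notebooks evaluate real XGBoost/MLP outputs here instead.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)
y_prob = np.clip(y_true * 0.5 + rng.random(200) * 0.6, 0.0, 1.0)
y_pred = (y_prob >= 0.5).astype(int)  # hard predictions at the 0.5 cut

print(f"Accuracy: {accuracy_score(y_true, y_pred):.2%}")
print(f"F1:       {f1_score(y_true, y_pred):.3f}")
print(f"ROC-AUC:  {roc_auc_score(y_true, y_prob):.3f}")
```

Note that accuracy and F1 depend on the hard threshold, while ROC-AUC is computed from the raw probabilities.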

### LOPO (9 participants)

| Model | LOPO AUC | Optimal threshold | F1 at optimal |
| --- | --- | --- | --- |
| MLP | 0.862 | 0.228 | 0.858 |
| XGBoost | 0.870 | 0.280 | 0.855 |
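One common way to derive an "optimal threshold" and its F1 is to sweep the candidate thresholds from the ROC curve and keep the F1-maximising one. The sketch below uses synthetic pooled probabilities, and F1-maximisation is only an assumed criterion; the notebooks' exact selection rule may differ:

```python
import numpy as np
from sklearn.metrics import roc_curve, f1_score

# Synthetic stand-in for pooled out-of-fold LOPO probabilities;
# the real notebooks pool predictions from the 9 held-out participants.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=300)
y_prob = np.clip(y_true * 0.4 + rng.random(300) * 0.6, 0.0, 1.0)

# Candidate thresholds come from the ROC curve (drop the sentinel
# above-max entry); pick the one that maximises F1.
_, _, thresholds = roc_curve(y_true, y_prob)
candidates = [t for t in thresholds if np.isfinite(t) and 0.0 <= t <= 1.0]
f1s = [f1_score(y_true, (y_prob >= t).astype(int)) for t in candidates]
best = int(np.argmax(f1s))
print(f"optimal threshold: {candidates[best]:.3f}, F1 at optimal: {f1s[best]:.3f}")
```

Because the class mix is imbalanced-friendly in neither direction, the F1-optimal threshold often lands well below 0.5, consistent with the ~0.23 and ~0.28 values in the table.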