Abdelrahman Almatrooshi committed on
Commit afda79c · 1 Parent(s): e707e31

docs: README updates in subfolders

data_preparation/README.md CHANGED
@@ -1,9 +1,9 @@
 # data_preparation/
 
-Load and split the .npz data. Used by all training code and notebooks.
+Load, clean, split `.npz` data for training/notebooks. **Important:** recompute **head_deviation** from **clipped** yaw/pitch (see `prepare_dataset.py`). **10** features for `face_orientation`: `head_deviation`, `s_face`, `s_eye`, `h_gaze`, `pitch`, `ear_left`, `ear_avg`, `ear_right`, `gaze_offset`, `perclos`.
 
 **prepare_dataset.py:** `load_all_pooled()`, `load_per_person()` for LOPO, `get_numpy_splits()` (XGBoost), `get_dataloaders()` (MLP). Cleans yaw/pitch/roll and EAR to fixed ranges. Face_orientation uses 10 features: head_deviation, s_face, s_eye, h_gaze, pitch, ear_left, ear_avg, ear_right, gaze_offset, perclos.
 
 **data_exploration.ipynb:** EDA — stats, class balance, histograms, correlations.
 
-You don’t run prepare_dataset directly; import it from `models.mlp.train`, `models.xgboost.train`, or the notebooks.
+Import from `models.mlp.train` / `models.xgboost.train` / notebooks — don’t run this module standalone.
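The "recompute head_deviation from clipped yaw/pitch" note above can be sketched roughly like this; the clip ranges and the Euclidean definition of `head_deviation` are assumptions for illustration, and the real bounds live in `prepare_dataset.py`:

```python
import numpy as np

# Hypothetical ranges; the actual bounds are defined in prepare_dataset.py.
YAW_RANGE = (-90.0, 90.0)
PITCH_RANGE = (-90.0, 90.0)

def clean_angles(yaw, pitch):
    """Clip yaw/pitch to fixed ranges, then recompute head_deviation
    from the *clipped* values so the feature stays consistent."""
    yaw = np.clip(yaw, *YAW_RANGE)
    pitch = np.clip(pitch, *PITCH_RANGE)
    # Assumed definition: Euclidean deviation of the head pose from frontal.
    head_deviation = np.sqrt(yaw ** 2 + pitch ** 2)
    return yaw, pitch, head_deviation

yaw, pitch, dev = clean_angles(np.array([120.0, -10.0]), np.array([0.0, 95.0]))
```

The point of the ordering: clipping *after* computing `head_deviation` would leave the derived feature inconsistent with its inputs.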
evaluation/README.md CHANGED
@@ -1,6 +1,6 @@
 # evaluation/
 
-Training logs, threshold/weight analysis, and metrics.
+Training logs, threshold/weight analysis, metrics. **LOPO** (9 folds) + **Youden’s J** + weight grid search — see `justify_thresholds.py`.
 
 **Contents:** `logs/` (JSON from training runs), `plots/` (ROC, weight search, EAR/MAR), `justify_thresholds.py`, `feature_importance.py`, and the generated markdown reports.
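For reference, Youden's J threshold selection mentioned above works by maximizing J = TPR - FPR over candidate cut-offs. A minimal self-contained sketch (not the project's `justify_thresholds.py` implementation):

```python
import numpy as np

def youden_threshold(scores, labels):
    """Pick the score threshold maximizing Youden's J = TPR - FPR,
    scanning every distinct score as a candidate cut-off."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = (labels == 1).sum()
    neg = (labels == 0).sum()
    best_t, best_j = None, -np.inf
    for t in np.unique(scores):
        pred = scores >= t
        tpr = (pred & (labels == 1)).sum() / pos
        fpr = (pred & (labels == 0)).sum() / neg
        if tpr - fpr > best_j:
            best_j, best_t = tpr - fpr, t
    return best_t, best_j

t, j = youden_threshold([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1])
```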
 
models/L2CS-Net/README.md CHANGED
@@ -1,5 +1,4 @@
-
-
+Bundled for FocusGuard (optional gaze veto, Gaze360 weights). See repo root `README.md`.
 
 <p align="center">
 <img src="https://github.com/Ahmednull/Storage/blob/main/gaze.gif" alt="animated" />
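The "optional gaze veto" could work roughly like the sketch below; `GAZE_LIMIT_DEG`, `HOLD_FRAMES`, and `gaze_yaw_deg` are all illustrative stand-ins, not the project's tuned parameters or the actual L2CS output names:

```python
# Illustrative values only, not FocusGuard's tuned parameters.
GAZE_LIMIT_DEG = 30.0
HOLD_FRAMES = 5

class GazeVeto:
    """Overrides a 'focused' verdict once gaze has pointed off-screen
    for HOLD_FRAMES consecutive frames (debounces single-frame noise)."""
    def __init__(self):
        self.off_count = 0

    def apply(self, focused, gaze_yaw_deg):
        if abs(gaze_yaw_deg) > GAZE_LIMIT_DEG:
            self.off_count += 1
        else:
            self.off_count = 0
        return focused and self.off_count < HOLD_FRAMES

veto = GazeVeto()
results = [veto.apply(True, 45.0) for _ in range(6)]
```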
models/L2CS-Net/models/README.md CHANGED
@@ -1 +1 @@
-# Path to pre-trained models
+# Path to pre-trained weights
models/README.md CHANGED
@@ -1,6 +1,6 @@
 # models/
 
-Feature extraction + model training scripts for FocusGuard.
+Feature extraction + training for FocusGuard (17 features → 10 for MLP/XGB; geometric/hybrid/L2CS paths — see root `README.md`).
 
 ## What is here
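The 17 → 10 feature reduction above amounts to slicing named columns. A sketch, where `SELECTED` is the 10-feature list from `data_preparation/README.md` and the extra 7 names filling out `all_names` are purely hypothetical:

```python
import numpy as np

# The 10 face_orientation features named in data_preparation/README.md.
SELECTED = ["head_deviation", "s_face", "s_eye", "h_gaze", "pitch",
            "ear_left", "ear_avg", "ear_right", "gaze_offset", "perclos"]

def select_features(X, all_names):
    """Slice a 17-column feature matrix down to the 10 model inputs."""
    idx = [all_names.index(n) for n in SELECTED]
    return X[:, idx]

# The 7 extra names below are made up for illustration.
all_names = SELECTED + ["yaw", "roll", "mar", "blink_rate",
                        "s_mouth", "v_gaze", "ear_var"]
X10 = select_features(np.zeros((4, 17)), all_names)
```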
 
notebooks/README.md CHANGED
@@ -1,7 +1,5 @@
 # notebooks/
 
-MLP and XGBoost training + LOPO evaluation.
+`mlp.ipynb`, `xgboost.ipynb`: same flow as scripted trainers (config → `prepare_dataset` → 70/15/15 → metrics → checkpoint + log → **LOPO** over 9 people).
 
-**Files:** `mlp.ipynb`, `xgboost.ipynb`. Same flow: config → data from prepare_dataset → 70/15/15 train → loss curves → test metrics → save checkpoint + JSON log → LOPO over 9 participants.
-
-Run in Jupyter with the project venv; set kernel cwd to repo root or `notebooks/`.
+Jupyter + project venv; cwd = repo root or `notebooks/`.
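The LOPO scheme referenced above (one fold per held-out participant) can be sketched without any library, assuming per-sample person IDs:

```python
def lopo_folds(person_ids):
    """Leave-One-Person-Out: one fold per participant, with that
    participant's samples held out as the test set."""
    people = sorted(set(person_ids))
    for held_out in people:
        train = [i for i, p in enumerate(person_ids) if p != held_out]
        test = [i for i, p in enumerate(person_ids) if p == held_out]
        yield held_out, train, test

# Toy IDs; the real data has 9 participants, i.e. 9 folds.
ids = ["p1", "p1", "p2", "p3"]
folds = list(lopo_folds(ids))
```

Grouping by person (rather than a random 70/15/15 split) is what makes the LOPO numbers reflect generalization to unseen people.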
 
 
ui/README.md CHANGED
@@ -1,10 +1,10 @@
 # ui/
 
-Live OpenCV demo and inference pipelines used by the app.
+OpenCV demo + inference pipelines used by **`main.py`** (FastAPI WebSocket `/ws/video`, latest-frame buffer; React UI + SQLite sessions).
 
-**Files:** `pipeline.py` (FaceMesh, MLP, XGBoost, Hybrid pipelines), `live_demo.py` (webcam window with mesh + focus label).
+**Files:** `pipeline.py` (FaceMesh, MLP, XGBoost, Hybrid), `live_demo.py` (webcam + mesh + label).
 
-**Pipelines:** FaceMesh = rule-based head/eye; MLP = 10 features → PyTorch MLP (checkpoints/mlp_best.pt + scaler); XGBoost = same 10 features → xgboost_face_orientation_best.json. Hybrid combines ML/XGB with geometric scores.
+**Pipelines:** Geometric / MLP (`mlp_best.pt` + scaler) / XGBoost (`xgboost_face_orientation_best.json`) / hybrid + optional L2CS veto.
 
 **Run demo:**
 
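The "latest-frame buffer" noted above is a standard pattern for keeping inference from falling behind the camera: a single slot where writers overwrite and stale frames are dropped. A minimal sketch (names are illustrative, not `main.py`'s actual API):

```python
import threading

class LatestFrameBuffer:
    """Single-slot buffer: put() overwrites, take() always returns the
    newest frame; frames never read are silently dropped."""
    def __init__(self):
        self._lock = threading.Lock()
        self._frame = None

    def put(self, frame):
        with self._lock:
            self._frame = frame

    def take(self):
        with self._lock:
            frame, self._frame = self._frame, None
            return frame

buf = LatestFrameBuffer()
buf.put("frame-1")
buf.put("frame-2")  # overwrites frame-1 before anyone reads it
latest = buf.take()
```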