Abdelrahman Almatrooshi committed
Commit · afda79c · Parent(s): e707e31

docs: README updates in subfolders
- data_preparation/README.md +2 -2
- evaluation/README.md +1 -1
- models/L2CS-Net/README.md +1 -2
- models/L2CS-Net/models/README.md +1 -1
- models/README.md +1 -1
- notebooks/README.md +2 -4
- ui/README.md +3 -3
data_preparation/README.md
CHANGED
@@ -1,9 +1,9 @@
 # data_preparation/
 
-Load
+Load, clean, split `.npz` data for training/notebooks. **Important:** recompute **head_deviation** from **clipped** yaw/pitch (see `prepare_dataset.py`). **10** features for `face_orientation`: `head_deviation`, `s_face`, `s_eye`, `h_gaze`, `pitch`, `ear_left`, `ear_avg`, `ear_right`, `gaze_offset`, `perclos`.
 
 **prepare_dataset.py:** `load_all_pooled()`, `load_per_person()` for LOPO, `get_numpy_splits()` (XGBoost), `get_dataloaders()` (MLP). Cleans yaw/pitch/roll and EAR to fixed ranges. Face_orientation uses 10 features: head_deviation, s_face, s_eye, h_gaze, pitch, ear_left, ear_avg, ear_right, gaze_offset, perclos.
 
 **data_exploration.ipynb:** EDA → stats, class balance, histograms, correlations.
 
-
+Import from `models.mlp.train` / `models.xgboost.train` / notebooks; don't run this module standalone.
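The "recompute head_deviation from clipped yaw/pitch" note above could look roughly like the following. This is a minimal sketch, not the code in `prepare_dataset.py`: the clipping bounds and the Euclidean definition of head deviation are assumptions.

```python
import numpy as np

# Hypothetical clipping ranges; the real fixed ranges live in prepare_dataset.py.
YAW_RANGE = (-90.0, 90.0)
PITCH_RANGE = (-90.0, 90.0)

def recompute_head_deviation(yaw: np.ndarray, pitch: np.ndarray) -> np.ndarray:
    """Clip yaw/pitch first, then derive head_deviation from the clipped values.

    Clipping before deriving matters: an outlier yaw of 120 degrees would
    otherwise leak into the derived feature unclipped.
    """
    yaw_c = np.clip(yaw, *YAW_RANGE)
    pitch_c = np.clip(pitch, *PITCH_RANGE)
    # Assumed definition: Euclidean deviation of the head from the frontal pose.
    return np.sqrt(yaw_c ** 2 + pitch_c ** 2)
```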
evaluation/README.md
CHANGED
@@ -1,6 +1,6 @@
 # evaluation/
 
-Training logs, threshold/weight analysis,
+Training logs, threshold/weight analysis, metrics. **LOPO** (9 folds) + **Youden's J** + weight grid search; see `justify_thresholds.py`.
 
 **Contents:** `logs/` (JSON from training runs), `plots/` (ROC, weight search, EAR/MAR), `justify_thresholds.py`, `feature_importance.py`, and the generated markdown reports.
 
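Youden's J picks the ROC operating point that maximizes TPR − FPR. A minimal sketch of that selection, assuming scores in [0, 1] and binary labels (the function name and signature are illustrative, not taken from `justify_thresholds.py`):

```python
def youdens_j_threshold(scores, labels, thresholds):
    """Return (threshold, J) maximizing Youden's J = TPR - FPR.

    scores: per-sample classifier scores; labels: 0/1 ground truth;
    thresholds: candidate cutoffs (a sample is positive when score >= t).
    """
    pos = sum(labels)
    neg = len(labels) - pos
    best_t, best_j = None, float("-inf")
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        j = tp / pos - fp / neg  # TPR - FPR at this cutoff
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j
```

The weight grid search mentioned above would wrap a loop like this, re-scoring a weighted combination of model outputs at each grid point.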
models/L2CS-Net/README.md
CHANGED
@@ -1,5 +1,4 @@
-
-
+Bundled for FocusGuard (optional gaze veto, Gaze360 weights). See repo root `README.md`.
 
 <p align="center">
 <img src="https://github.com/Ahmednull/Storage/blob/main/gaze.gif" alt="animated" />
models/L2CS-Net/models/README.md
CHANGED
@@ -1 +1 @@
-# Path to pre-trained
+# Path to pre-trained weights
models/README.md
CHANGED
@@ -1,6 +1,6 @@
 # models/
 
-Feature extraction +
+Feature extraction + training for FocusGuard (17 features → 10 for MLP/XGB; geometric/hybrid/L2CS paths; see root `README.md`).
 
 ## What is here
 
notebooks/README.md
CHANGED
@@ -1,7 +1,5 @@
 # notebooks/
 
-
+`mlp.ipynb`, `xgboost.ipynb`: same flow as the scripted trainers (config → `prepare_dataset` → 70/15/15 → metrics → checkpoint + log → **LOPO** over 9 people).
 
-
-
-Run in Jupyter with the project venv; set kernel cwd to repo root or `notebooks/`.
+Jupyter + project venv; cwd = repo root or `notebooks/`.
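The LOPO evaluation described above (9 people → 9 folds, each person held out once) reduces to a small split generator. A sketch under the assumption that subjects are identified by string IDs; how `load_per_person()` actually labels people is not shown here:

```python
def lopo_folds(person_ids):
    """Leave-one-person-out: yield (train_ids, held_out_id), one fold per person."""
    for held_out in person_ids:
        train = [p for p in person_ids if p != held_out]
        yield train, held_out

# Hypothetical IDs: 9 people gives 9 folds, matching the evaluation setup.
people = [f"p{i}" for i in range(1, 10)]
folds = list(lopo_folds(people))
```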
ui/README.md
CHANGED
@@ -1,10 +1,10 @@
 # ui/
 
-
+OpenCV demo + inference pipelines used by **`main.py`** (FastAPI WebSocket `/ws/video`, latest-frame buffer; React UI + SQLite sessions).
 
-**Files:** `pipeline.py` (FaceMesh, MLP, XGBoost, Hybrid
+**Files:** `pipeline.py` (FaceMesh, MLP, XGBoost, Hybrid), `live_demo.py` (webcam + mesh + label).
 
-**Pipelines:**
+**Pipelines:** Geometric / MLP (`mlp_best.pt` + scaler) / XGBoost (`xgboost_face_orientation_best.json`) / hybrid + optional L2CS veto.
 
 **Run demo:**
 
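The "latest-frame buffer" behind `/ws/video` keeps only the newest frame, so inference never queues up behind a fast camera. A minimal sketch of that idea using a size-1 `deque` (the class name is illustrative; `main.py` may implement this differently, e.g. with a lock or an asyncio primitive):

```python
from collections import deque

class LatestFrameBuffer:
    """Size-1 buffer: a new frame silently replaces any unprocessed one."""

    def __init__(self):
        self._buf = deque(maxlen=1)  # maxlen=1 drops the stale frame on append

    def push(self, frame):
        self._buf.append(frame)

    def pop(self):
        """Return the newest frame, or None if nothing arrived since last pop."""
        return self._buf.popleft() if self._buf else None
```

The design choice this encodes: for live video, processing a stale frame is worse than skipping it, so backpressure is handled by dropping rather than queueing.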