docs(plan): swap OASIS branch for EEG stub plan; soften MRI prerequisite
User clarified intent:
1. MRI training is final — best_model.pt is the artifact, just load+predict.
Removed the ambiguous 'state_dict vs full model' blocker framing; tests
already use a synthetic dummy resnet18 so TDD works before the real .pt
lands. Real-artifact sanity test auto-skips when absent.
2. EEG pretrained model assumed present for the hackathon demo. Replaced
the OASIS-tabular fork with a stub-able EEG plan: src/models/eeg_model.py
loads any sklearn predict_proba classifier from joblib, with a synthetic
stub fixture so tests pass today. Real artifact swaps in at
data/processed/eeg_clf.joblib with zero code changes.
Roadmap, MRI plan, and the new EEG plan all preserve independence
guarantees and align on the same loader+stub pattern.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
| 1 |
+
# EEG Pretrained Classifier — Stub Integration Plan
|
| 2 |
+
|
| 3 |
+
> **For agentic workers:** REQUIRED SUB-SKILL: `superpowers:subagent-driven-development`. TDD throughout.
|
| 4 |
+
|
| 5 |
+
**Goal.** Add an EEG classifier to the decision layer that flows into the fusion engine as the `eeg` modality. The real pretrained artifact will arrive later; for the hackathon demo we ship a stub-able contract so the entire flow (Streamlit → API → fusion) works **today**, and swapping in the real `.joblib` later is a one-file drop with **zero** code changes.
|
| 6 |
+
|
| 7 |
+
**Architecture.** New module `src/models/eeg_model.py` parallel to `src/models/mri_dl_2d.py`. Loads a sklearn-style classifier from `joblib`, runs `predict_proba` on a feature row produced by the existing `src/pipelines/eeg_pipeline.py` (which already extracts band-power features). Output dict shape mirrors the other model surfaces, so the API and fusion engine consume it without dispatch logic. A new route `POST /predict/eeg` exposes it. The fusion engine already accepts an `eeg` `ModalityPrediction` — no fusion code changes.
|
| 8 |
+
|
| 9 |
+
**Contract for the eventual real artifact.**
|
| 10 |
+
|
| 11 |
+
```
|
| 12 |
+
- Path: data/processed/eeg_clf.joblib (override via EEG_CLF_ARTIFACT env)
|
| 13 |
+
- Type: any object with sklearn's predict_proba interface (e.g. RandomForest,
|
| 14 |
+
SVC with probability=True, MLPClassifier, or a thin wrapper around
|
| 15 |
+
a torch model)
|
| 16 |
+
- Input: numpy array of shape (1, n_features) where n_features matches the
|
| 17 |
+
column count of eeg_pipeline.py's parquet output
|
| 18 |
+
- Output: probability vector of length len(EEG_CLF_LABELS); default labels
|
| 19 |
+
are ("control", "alzheimers")
|
| 20 |
+
```
|
| 21 |
+
|
| 22 |
+
The stub fixture (`tests/fixtures/build_dummy_eeg_clf.py`) writes a `RandomForestClassifier` with the same interface, so the entire pipeline is testable before the real model arrives.
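A candidate artifact can be pre-flighted against this contract before it lands at the artifact path. A minimal sketch, assuming sklearn's usual fitted-estimator attributes; `check_contract` is an illustrative helper, not one of the plan's required files:

```python
# Hypothetical pre-flight check for a candidate EEG artifact against the
# contract above. Not part of the plan's file structure.
from pathlib import Path

import joblib
import numpy as np


def check_contract(path: Path, n_labels: int = 2) -> int:
    """Load a candidate artifact and verify the predict_proba contract.

    Returns the feature count the model reports, so the caller can compare
    it against the eeg_pipeline parquet column count.
    """
    clf = joblib.load(str(path))
    if not hasattr(clf, "predict_proba"):
        raise TypeError("artifact must expose sklearn's predict_proba")
    n_features = int(getattr(clf, "n_features_in_", 0))
    if n_features <= 0:
        raise ValueError("artifact does not report n_features_in_")
    proba = np.asarray(clf.predict_proba(np.zeros((1, n_features))))
    if proba.shape != (1, n_labels):
        raise ValueError(f"expected (1, {n_labels}) probabilities, got {proba.shape}")
    if abs(float(proba.sum()) - 1.0) > 1e-5:
        raise ValueError("probabilities must sum to 1")
    return n_features
```

If `check_contract` raises, the artifact violates the contract; the returned feature count should equal the `eeg_pipeline` parquet column count.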
**Tech stack.** scikit-learn (already in deps), joblib, numpy, pandas. No new dependencies.

---

## Asset note

For the demo we **assume the real EEG model exists**. Tests use the stub fixture so they pass regardless. When the real artifact arrives:

1. Save it to `data/processed/eeg_clf.joblib`.
2. If its label order isn't `("control", "alzheimers")`, set the `EEG_CLF_LABELS=label0,label1,...` env var (comma-separated). The fusion engine's `signal_for_disease` already matches labels case-insensitively, so as long as one of them is `"alzheimers"` (or `"parkinsons"`), it flows.
3. If `n_features` doesn't match the pipeline's parquet output, update the EEG pipeline's feature contract — out of scope for this plan; a separate sub-plan if needed.
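The swap-in itself is nothing more than a joblib dump to the contract path. A hedged sketch of the offline step, with `LogisticRegression` standing in for whatever the real EEG classifier turns out to be (any `predict_proba` estimator satisfies the contract; `save_real_artifact` is illustrative):

```python
# Illustrative swap-in: fit the real EEG classifier offline, then dump it to
# the contract path. Nothing in src/ changes. LogisticRegression is only a
# stand-in here.
from pathlib import Path

import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression

ARTIFACT = Path("data/processed/eeg_clf.joblib")


def save_real_artifact(X: np.ndarray, y: np.ndarray, path: Path = ARTIFACT) -> Path:
    """Fit a stand-in estimator and dump it where eeg_model.load expects it.

    X: (n_samples, n_features) rows from eeg_pipeline.py's parquet output.
    y: 0 = control, 1 = alzheimers (matching DEFAULT_LABELS order).
    """
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    path.parent.mkdir(parents=True, exist_ok=True)
    joblib.dump(clf, str(path))
    return path
```

After this runs, `/predict/eeg` and the real-artifact sanity test pick the file up with no code changes.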

---

## File structure

| Path | Responsibility |
|---|---|
| Create `src/models/eeg_model.py` | sklearn-style classifier loader + `predict_features()` |
| Modify `src/api/routes.py` | new route `POST /predict/eeg` |
| Modify `src/api/schemas.py` | `EEGPredictRequest` / `EEGPredictResponse` |
| Create `tests/fixtures/build_dummy_eeg_clf.py` | stub joblib-pickled RF for tests |
| Create `tests/models/test_eeg_model.py` | unit tests for loader + predict |
| Create `tests/api/test_eeg_predict_route.py` | integration test through `POST /predict/eeg` |
| Create `tests/fusion/test_eeg_modality_flow.py` | confirms an EEG prediction flows into fusion as the `eeg` modality |
| Create `tests/models/test_eeg_model_real.py` | real-artifact sanity (skips when absent — same pattern as MRI Task 4) |
| Modify `README.md` | document the contract + how to swap the real artifact in |

---

## Tasks

### Task 1: EEG model module + dummy fixture

**Files:**
- Create: `src/models/eeg_model.py`
- Create: `tests/fixtures/build_dummy_eeg_clf.py`
- Create: `tests/models/test_eeg_model.py`

- [ ] **Step 1: Dummy fixture.**

`tests/fixtures/build_dummy_eeg_clf.py`:

```python
"""Build a stub EEG classifier (sklearn RF) for tests.

Demo-time placeholder — produces a 2-class probability output matching the
eeg_model.predict_features contract. Replace with the real artifact when
the user provides it; tests don't change.
"""
from __future__ import annotations

from pathlib import Path

import joblib
import numpy as np
from sklearn.ensemble import RandomForestClassifier


def build(path: Path, n_features: int = 16, seed: int = 0) -> Path:
    """Save a fitted RandomForestClassifier at `path` and return the path."""
    path = Path(path)
    if path.exists():
        return path
    path.parent.mkdir(parents=True, exist_ok=True)

    rng = np.random.default_rng(seed)
    n = 200
    n_alz = n // 2
    # Synthetic separable features: alzheimers half has higher mean.
    X_ctrl = rng.normal(0.0, 1.0, size=(n - n_alz, n_features))
    X_alz = rng.normal(2.0, 1.0, size=(n_alz, n_features))
    X = np.vstack([X_ctrl, X_alz])
    y = np.array([0] * (n - n_alz) + [1] * n_alz)

    clf = RandomForestClassifier(n_estimators=12, max_depth=6, random_state=seed)
    clf.fit(X, y)
    joblib.dump(clf, str(path))
    return path
```

- [ ] **Step 2: Failing test.**

`tests/models/test_eeg_model.py`:

```python
"""Tests for src.models.eeg_model."""
from __future__ import annotations

from pathlib import Path

import numpy as np
import pytest

from src.models import eeg_model
from tests.fixtures.build_dummy_eeg_clf import build as build_dummy_eeg


class TestEEGModel:
    def test_load_missing_artifact_raises(self, tmp_path: Path) -> None:
        with pytest.raises(FileNotFoundError, match="EEG classifier artifact not found"):
            eeg_model.load(tmp_path / "nope.joblib")

    def test_predict_returns_full_dict(self, tmp_path: Path) -> None:
        ckpt = build_dummy_eeg(tmp_path / "eeg.joblib", n_features=16)
        clf = eeg_model.load(ckpt)
        features = np.zeros((16,), dtype=np.float32)

        out = eeg_model.predict_features(clf, features)

        assert set(out) == {"label", "label_text", "confidence", "probabilities"}
        assert out["label"] in {0, 1}
        assert out["label_text"] in eeg_model.DEFAULT_LABELS
        assert 0.0 <= out["confidence"] <= 1.0
        probs = out["probabilities"]
        assert len(probs) == 2
        assert abs(sum(p["probability"] for p in probs) - 1.0) < 1e-5

    def test_alzheimers_separation_with_synthetic_features(self, tmp_path: Path) -> None:
        # Synthetic stub clusters alzheimers around mean=2.0, control around 0.0.
        ckpt = build_dummy_eeg(tmp_path / "eeg.joblib", n_features=16)
        clf = eeg_model.load(ckpt)
        alz_features = np.full((16,), 2.0, dtype=np.float32)
        ctrl_features = np.zeros((16,), dtype=np.float32)

        alz_pred = eeg_model.predict_features(clf, alz_features)
        ctrl_pred = eeg_model.predict_features(clf, ctrl_features)

        assert alz_pred["label_text"] == "alzheimers"
        assert ctrl_pred["label_text"] == "control"

    def test_label_override_via_env(self, tmp_path: Path, monkeypatch) -> None:
        monkeypatch.setenv("EEG_CLF_LABELS", "no_disease,alzheimers")
        ckpt = build_dummy_eeg(tmp_path / "eeg.joblib", n_features=16)
        clf = eeg_model.load(ckpt)
        out = eeg_model.predict_features(clf, np.zeros((16,), dtype=np.float32))
        assert out["label_text"] in {"no_disease", "alzheimers"}

    def test_feature_count_mismatch_raises(self, tmp_path: Path) -> None:
        ckpt = build_dummy_eeg(tmp_path / "eeg.joblib", n_features=16)
        clf = eeg_model.load(ckpt)
        with pytest.raises(ValueError, match="feature count"):
            eeg_model.predict_features(clf, np.zeros((8,), dtype=np.float32))
```

Run → `ModuleNotFoundError: No module named 'src.models.eeg_model'`.

- [ ] **Step 3: Minimal impl.**

`src/models/eeg_model.py`:

```python
"""EEG classifier inference utilities.

Loads any sklearn-style classifier (object with `predict_proba`) from joblib
and emits the same dict shape as src.models.mri_model.predict_with_proba so
the API surface and fusion engine treat MRI and EEG predictions identically.

The real pretrained artifact swaps in at data/processed/eeg_clf.joblib (or
override via EEG_CLF_ARTIFACT env). Tests use a stub fixture; the real model
drops in without code changes.
"""
from __future__ import annotations

import os
from pathlib import Path
from typing import Any, Sequence

import joblib
import numpy as np

from src.core.logger import get_logger

logger = get_logger(__name__)

DEFAULT_LABELS: tuple[str, ...] = ("control", "alzheimers")


def _resolve_labels() -> tuple[str, ...]:
    raw = os.environ.get("EEG_CLF_LABELS")
    if not raw:
        return DEFAULT_LABELS
    return tuple(s.strip() for s in raw.split(",") if s.strip())


def load(path: Path) -> Any:
    path = Path(path)
    if not path.exists():
        raise FileNotFoundError(f"EEG classifier artifact not found: {path}")
    return joblib.load(str(path))


def predict_features(
    model: Any,
    features: np.ndarray,
    labels: Sequence[str] | None = None,
) -> dict[str, Any]:
    """Run inference on one row of EEG features.

    Args:
        model: sklearn-style classifier (must expose `predict_proba`).
        features: 1-D numpy array of shape (n_features,) matching the
            classifier's training-time feature count.
        labels: optional label tuple. Defaults to env-derived or ("control",
            "alzheimers").
    """
    arr = np.asarray(features, dtype=np.float32).reshape(-1)
    expected = int(getattr(model, "n_features_in_", arr.size))
    if arr.size != expected:
        raise ValueError(
            f"EEG feature count mismatch: model expects {expected}, got {arr.size}"
        )

    proba = np.asarray(model.predict_proba(arr.reshape(1, -1))[0], dtype=np.float32)
    label_names = tuple(labels or _resolve_labels())
    if len(label_names) != proba.shape[0]:
        logger.warning(
            "EEG label count (%d) != model output dim (%d); falling back to class_0..N",
            len(label_names), proba.shape[0],
        )
        label_names = tuple(f"class_{i}" for i in range(proba.shape[0]))

    label_idx = int(np.argmax(proba))
    return {
        "label": label_idx,
        "label_text": label_names[label_idx],
        "confidence": float(proba[label_idx]),
        "probabilities": [
            {"label": i, "label_text": label_names[i], "probability": float(p)}
            for i, p in enumerate(proba)
        ],
    }
```

Run tests → expect 5 passed.

- [ ] **Step 4:** `pytest -q` → no regressions.

- [ ] **Step 5:** commit:

```bash
git add src/models/eeg_model.py tests/fixtures/build_dummy_eeg_clf.py tests/models/test_eeg_model.py
git commit -m "feat(models): EEG classifier loader + predict (stub-able for hackathon demo)"
```

---

### Task 2: `POST /predict/eeg` route

**Files:**
- Modify: `src/api/schemas.py` (add `EEGPredictRequest` / `EEGPredictResponse`)
- Modify: `src/api/routes.py`
- Create: `tests/api/test_eeg_predict_route.py`

- [ ] **Step 1: Failing test.**

`tests/api/test_eeg_predict_route.py`:

```python
"""Integration: POST /predict/eeg."""
from __future__ import annotations

import pytest
from fastapi.testclient import TestClient

from src.api.main import app
from tests.fixtures.build_dummy_eeg_clf import build as build_dummy_eeg


@pytest.fixture()
def client(monkeypatch, tmp_path):
    artifact = build_dummy_eeg(tmp_path / "eeg.joblib", n_features=16)
    monkeypatch.setenv("EEG_CLF_ARTIFACT", str(artifact))
    return TestClient(app)


def test_predict_eeg_happy_path(client):
    body = {"features": [0.0] * 16}
    r = client.post("/predict/eeg", json=body)
    assert r.status_code == 200, r.text
    data = r.json()
    assert data["label_text"] in {"control", "alzheimers"}
    assert 0.0 <= data["confidence"] <= 1.0
    assert len(data["probabilities"]) == 2


def test_predict_eeg_alzheimers_profile(client):
    body = {"features": [2.0] * 16}
    r = client.post("/predict/eeg", json=body)
    assert r.status_code == 200, r.text
    data = r.json()
    assert data["label_text"] == "alzheimers"


def test_predict_eeg_feature_mismatch_returns_error(client):
    # Stub was trained on 16 features; sending 8 must surface as a 400 or 500.
    body = {"features": [0.0] * 8}
    r = client.post("/predict/eeg", json=body)
    assert r.status_code in {400, 500}
```

- [ ] **Step 2: Schemas.**

In `src/api/schemas.py`, append (before the fusion re-export block):

```python
class EEGPredictRequest(BaseModel):
    features: list[float] = Field(
        ..., min_length=1,
        description="EEG features matching the classifier's training-time feature count.",
    )


class EEGClassProbability(BaseModel):
    label: int
    label_text: str
    probability: float


class EEGPredictResponse(BaseModel):
    label: int
    label_text: str
    confidence: float
    probabilities: list[EEGClassProbability]
```

- [ ] **Step 3: Route.**

In `src/api/routes.py`, add near the existing predict routes:

```python
@predict_router.post("/eeg", response_model=EEGPredictResponse)
def predict_eeg(req: EEGPredictRequest) -> EEGPredictResponse:
    import os
    from pathlib import Path

    import numpy as np

    from src.models import eeg_model

    artifact = Path(os.environ.get("EEG_CLF_ARTIFACT", "data/processed/eeg_clf.joblib"))
    clf = eeg_model.load(artifact)
    features = np.asarray(req.features, dtype=np.float32)
    out = eeg_model.predict_features(clf, features)
    return EEGPredictResponse(**out)
```

Add `EEGPredictRequest` and `EEGPredictResponse` to the schema imports at the top of `routes.py`.

- [ ] **Step 4:** `pytest tests/api/test_eeg_predict_route.py -v` → 3 passed.

- [ ] **Step 5:** commit: `feat(api): add POST /predict/eeg route (stub-able for demo)`.

---

### Task 3: End-to-end fusion flow with EEG

**Files:**
- Create: `tests/fusion/test_eeg_modality_flow.py`

This task validates that an EEG prediction (via `predict_features` or via `/predict/eeg`) plugs into the fusion engine's `eeg` modality without any code change in the engine.

- [ ] **Step 1: Test.**

`tests/fusion/test_eeg_modality_flow.py`:

```python
"""End-to-end: EEG classifier output flows into fusion as the `eeg` modality."""
from __future__ import annotations

from pathlib import Path

import numpy as np

from src.fusion import engine
from src.fusion.types import (
    FusionInput,
    ModalityClassProb,
    ModalityPrediction,
)
from src.models import eeg_model
from tests.fixtures.build_dummy_eeg_clf import build as build_dummy_eeg


def _eeg_pred_from_features(model, features: np.ndarray) -> ModalityPrediction:
    raw = eeg_model.predict_features(model, features)
    return ModalityPrediction(
        label_text=raw["label_text"],
        label=raw["label"],
        confidence=raw["confidence"],
        probabilities=[
            ModalityClassProb(label_text=p["label_text"], probability=p["probability"])
            for p in raw["probabilities"]
        ],
    )


class TestEEGFusionFlow:
    def test_alzheimers_eeg_lifts_alzheimers_disease_score(self, tmp_path: Path) -> None:
        ckpt = build_dummy_eeg(tmp_path / "eeg.joblib", n_features=16)
        model = eeg_model.load(ckpt)
        eeg_pred = _eeg_pred_from_features(model, np.full((16,), 2.0, dtype=np.float32))

        out = engine.fuse(FusionInput(eeg=eeg_pred))

        alz = next(d for d in out.diseases if d.disease == "alzheimers")
        assert alz.probability > 0.5
        assert any(c.modality == "eeg" for c in alz.contributions)
        # Missing-input list should mention mri (it wasn't supplied).
        assert "mri" in out.missing_inputs

    def test_control_eeg_does_not_inflate_alzheimers(self, tmp_path: Path) -> None:
        ckpt = build_dummy_eeg(tmp_path / "eeg.joblib", n_features=16)
        model = eeg_model.load(ckpt)
        eeg_pred = _eeg_pred_from_features(model, np.zeros((16,), dtype=np.float32))

        out = engine.fuse(FusionInput(eeg=eeg_pred))

        alz = next(d for d in out.diseases if d.disease == "alzheimers")
        assert alz.probability < 0.5
```

- [ ] **Step 2:** Run → 2 passed (engine and types unchanged; the test exercises the existing `eeg` modality path with real predictions instead of hand-built fakes).

- [ ] **Step 3:** commit: `test(fusion): EEG classifier output flows into fusion modality end-to-end`.

---

### Task 4: Real-artifact sanity (skips when absent)

**Files:**
- Create: `tests/models/test_eeg_model_real.py`

- [ ] **Step 1: Test.**

```python
"""Real-artifact EEG sanity. Skipped unless data/processed/eeg_clf.joblib exists."""
from __future__ import annotations

from pathlib import Path

import numpy as np
import pytest

from src.models import eeg_model

REAL_CKPT = Path("data/processed/eeg_clf.joblib")


@pytest.mark.skipif(not REAL_CKPT.exists(), reason="real EEG checkpoint not present")
def test_real_eeg_checkpoint_loads_and_predicts():
    model = eeg_model.load(REAL_CKPT)
    n_features = int(getattr(model, "n_features_in_", 16))
    features = np.zeros((n_features,), dtype=np.float32)
    out = eeg_model.predict_features(model, features)
    s = sum(p["probability"] for p in out["probabilities"])
    assert abs(s - 1.0) < 1e-5
    assert out["label_text"]  # not empty
```

- [ ] **Step 2:** `pytest tests/models/test_eeg_model_real.py -v` → **skipped** today (expected). Will run automatically once the user drops the real artifact in.

- [ ] **Step 3:** commit: `test(models): EEG real-artifact sanity (skips when absent)`.

---

### Task 5: Streamlit form + README

**Files:**
- Modify: `src/frontend/app.py` (add an EEG features input — number array or file upload of a parquet row from `eeg_pipeline`)
- Modify: `README.md`

- [ ] **Step 1:** Streamlit. The simplest demo path: a `st.text_area` accepting comma-separated floats, parsed and POSTed to `/predict/eeg`. Place it in the doctor-view tab next to the existing MRI predict form.

```python
eeg_csv = st.text_area("EEG features (comma-separated)", placeholder="0.1,0.2,...")
if st.button("Predict (EEG)"):
    try:
        features = [float(x.strip()) for x in eeg_csv.split(",") if x.strip()]
    except ValueError:
        st.error("EEG features must be numeric.")
    else:
        r = httpx.post(f"{API_BASE}/predict/eeg", json={"features": features}, timeout=10.0)
        st.json(r.json())
```

(Integrate with the existing `httpx`/`requests` style the file already uses. If the file uses `requests`, follow that.)

- [ ] **Step 2:** README. Append:

```markdown
### EEG Pretrained Classifier

`POST /predict/eeg` runs an sklearn-style classifier (any `predict_proba` interface) on a feature vector and returns probability + attribution. The artifact loads from `data/processed/eeg_clf.joblib` (override via `EEG_CLF_ARTIFACT` env). Default labels are `("control", "alzheimers")` — override via `EEG_CLF_LABELS=label0,label1,...`.

For the hackathon demo a synthetic stub (`tests/fixtures/build_dummy_eeg_clf.py`) is used — drop the real `.joblib` at the artifact path to swap in production weights. The fusion engine consumes this prediction as the `eeg` modality automatically; no fusion-side code changes.
```

- [ ] **Step 3:** `pytest -q` → no regressions.

- [ ] **Step 4:** commit: `feat(frontend,docs): EEG predict form + README contract`.

---

## Self-review checklist

1. **Spec coverage.** User asked: "for the demo assume the EEG pretrained model exists; we can find it and put it into the project later." This plan ships a working demo today (stub fixture) and a documented swap-in path for the real artifact. ✓
2. **Independence.** EEG plumbing uses only `src.core.logger`, sklearn (already in deps), joblib, numpy. No coupling to MRI / BBB / fusion-internal code. The fusion engine consumes the EEG `ModalityPrediction` through its existing public API only. ✓
3. **No re-training.** The plan loads a classifier and runs `predict_proba` — never trains anything at runtime. ✓
4. **Demo ready without the real artifact.** Tests pass green using only the stub fixture; the real-artifact sanity test auto-skips. ✓
5. **No placeholders.** Every step has full code blocks. ✓

---

## Execution handoff

Save and choose: subagent-driven (recommended) or inline executing-plans.
@@ -8,7 +8,7 @@
|
|
| 8 |
|---|---|---|
|
| 9 |
| Pretrained MRI 2D classifier | PyTorch resnet18 trained on Kaggle's 4-class Alzheimer's MRI dataset (`MildDemented` / `ModerateDemented` / `NonDemented` / `VeryMildDemented`) | The dummy ONNX model in `tests/fixtures/build_dummy_mri_onnx.py`; the placeholder behaviour in `src/models/mri_model.py` |
|
| 10 |
| TF-IDF RAG corpus | 14 medical PDFs (Alzheimer + Parkinson + lifestyle/nutrition/exercise) with a pre-built TF-IDF index and Turkish query expansion | The existing FAISS+fastembed RAG in `src/rag/` (or runs alongside it) |
|
| 11 |
-
|
|
| 12 |
|
| 13 |
---
|
| 14 |
|
|
@@ -18,7 +18,7 @@
|
|
| 18 |
|---|---|---|---|---|
|
| 19 |
| 1 | `2026-05-02-mri-dl-2d-integration.md` | Real MRI deep-learning model in production path | — (parallel to fusion) | yes (Streamlit + curl) |
|
| 20 |
| 2 | `2026-05-02-tfidf-rag-integration.md` | Lifestyle / clinical-paper RAG with Turkish support | — | yes (CLI + agent tool) |
|
| 21 |
-
| 3 | `2026-05-02-
|
| 22 |
|
| 23 |
---
|
| 24 |
|
|
@@ -33,13 +33,13 @@
|
|
| 33 |
└──────────────┬───────────────────┘
|
| 34 |
│
|
| 35 |
┌──────▼─────────┐
|
| 36 |
-
│ #3
|
| 37 |
-
│
|
| 38 |
-
│
|
| 39 |
└────────────────┘
|
| 40 |
```
|
| 41 |
|
| 42 |
-
|
| 43 |
|
| 44 |
---
|
| 45 |
|
|
@@ -47,9 +47,9 @@
|
|
| 47 |
|
| 48 |
These are **not** dev gaps — they are inputs we need from outside this codebase. Each sub-plan calls them out explicitly in its preamble, but listing here so they are in one place.
|
| 49 |
|
| 50 |
-
### A. MRI checkpoint
|
| 51 |
|
| 52 |
-
The
|
| 53 |
|
| 54 |
```python
|
| 55 |
CLASS_TO_IDX = {
|
|
@@ -60,18 +60,11 @@ CLASS_TO_IDX = {
|
|
| 60 |
}
|
| 61 |
```
|
| 62 |
|
| 63 |
-
|
| 64 |
|
| 65 |
-
### B.
|
| 66 |
|
| 67 |
-
|
| 68 |
-
|
| 69 |
-
Sub-plan #3 has **two branches**:
|
| 70 |
-
|
| 71 |
-
- **Branch 3a (default).** Treat the OASIS biomarker model as a clinical-tests extension to the fusion engine (already accepts MMSE etc. as features — this just adds eTIV/nWBV/ASF and re-runs the trained sklearn model in-process).
|
| 72 |
-
- **Branch 3b.** If the user has a real EEG model elsewhere (a checkpoint file that consumes raw FIF / EDF data and emits class probabilities), the user must point us to it and we re-scope sub-plan #3 around that artifact.
|
| 73 |
-
|
| 74 |
-
The user must pick the branch before sub-plan #3 starts.
|
| 75 |
|
| 76 |
### C. RAG corpus location
|
| 77 |
|
|
|
|
| 8 |
|---|---|---|
|
| 9 |
| Pretrained MRI 2D classifier | PyTorch resnet18 trained on Kaggle's 4-class Alzheimer's MRI dataset (`MildDemented` / `ModerateDemented` / `NonDemented` / `VeryMildDemented`) | The dummy ONNX model in `tests/fixtures/build_dummy_mri_onnx.py`; the placeholder behaviour in `src/models/mri_model.py` |
|
| 10 |
| TF-IDF RAG corpus | 14 medical PDFs (Alzheimer + Parkinson + lifestyle/nutrition/exercise) with a pre-built TF-IDF index and Turkish query expansion | The existing FAISS+fastembed RAG in `src/rag/` (or runs alongside it) |
|
| 11 |
+
| EEG pretrained classifier (assumed for demo) | Any classifier with a `predict_proba` interface that emits Alzheimer's-related class probabilities. **The real artifact is not in the repo yet** — for the hackathon demo we ship a stub-able contract; swap the real `.joblib` (or `.onnx` / `.pt`) in later. | The current EEG path which is signal-processing only (no classifier yet in `src/models/`) |
|
| 12 |
|
| 13 |
---
|
| 14 |
|
|
|
|
| 18 |
|---|---|---|---|---|
|
| 19 |
| 1 | `2026-05-02-mri-dl-2d-integration.md` | Real MRI deep-learning model in production path | — (parallel to fusion) | yes (Streamlit + curl) |
|
| 20 |
| 2 | `2026-05-02-tfidf-rag-integration.md` | Lifestyle / clinical-paper RAG with Turkish support | — | yes (CLI + agent tool) |
|
| 21 |
+
| 3 | `2026-05-02-eeg-stub-integration.md` | Stub-able EEG classifier contract that flows into fusion as the `eeg` modality. Real artifact swaps in later without code changes. | fusion engine (already shipped) | yes (POST /predict/eeg + fusion) |
|
| 22 |
|
| 23 |
---
|
| 24 |
|
|
|
|
| 33 |
└──────────────┬───────────────────┘
|
| 34 |
│
|
| 35 |
┌──────▼─────────┐
|
| 36 |
+
│ #3 EEG stub │
|
| 37 |
+
│ (real artifact │
|
| 38 |
+
│ drops in later)│
|
| 39 |
└────────────────┘
|
| 40 |
```
|
| 41 |
|
| 42 |
+
All three sub-plans are independent at the file level — they can be built in parallel by different subagents. The diagram shows demo-flow priority, not a build dependency.
|
| 43 |
|
| 44 |
---
|
| 45 |
|
|
|
|
| 47 |
|
| 48 |
These are **not** dev gaps — they are inputs we need from outside this codebase. Each sub-plan calls them out explicitly in its preamble, but they are also listed here so everything is in one place.
|
| 49 |
|
| 50 |
+
### A. MRI checkpoint drop-in
|
| 51 |
|
| 52 |
+
The artifact lives at `outputs\checkpoints\best_model.pt` on the trainer machine. Drop it at `data/processed/mri_dl_2d/best_model.pt` in this repo (gitignored — never commit a model binary). The user's BEST_PARAMS are final: `image_size=160`, `model_name=resnet18`, 4-class head with the index order below. The integration code does not retrain or second-guess; it loads and predicts.
|
| 53 |
|
| 54 |
```python
|
| 55 |
CLASS_TO_IDX = {
|
|
|
|
| 60 |
}
|
| 61 |
```
|
| 62 |
|
| 63 |
+
Sub-plan #1 ships a real-artifact sanity test that runs only when the file is present (auto-skipped otherwise), catching any class-order or input-shape drift the trainer might introduce later.
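The auto-skip pattern can be sketched as a pytest `skipif` guard. A minimal sketch: the artifact path and 4-class head come from this plan; the `fc.weight` key is an assumption about a standard torchvision resnet18 `state_dict` layout.

```python
# Hedged sketch of the real-artifact sanity test: auto-skips until the user
# drops best_model.pt in. The "fc.weight" key assumes a torchvision resnet18
# state_dict; adjust if the trainer saved a different layout.
from pathlib import Path

import pytest

ARTIFACT = Path("data/processed/mri_dl_2d/best_model.pt")


@pytest.mark.skipif(not ARTIFACT.exists(), reason="real MRI checkpoint not dropped in yet")
def test_real_checkpoint_has_four_class_head() -> None:
    import torch  # imported lazily so collection works without the checkpoint

    obj = torch.load(ARTIFACT, map_location="cpu")
    # Accept either a raw state_dict or a pickled nn.Module.
    state = obj.state_dict() if hasattr(obj, "state_dict") else obj
    assert state["fc.weight"].shape[0] == 4  # matches the CLASS_TO_IDX order
```

Because the mark is evaluated at collection time, the suite stays green on machines that never receive the checkpoint.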
|
| 64 |
|
| 65 |
+
### B. EEG artifact is intentionally a stub for the demo
|
| 66 |
|
| 67 |
+
The real EEG checkpoint will land later. For the hackathon, sub-plan #3 ships a stub artifact (`tests/fixtures/build_dummy_eeg_clf.py` produces a synthetic joblib-pickled `RandomForestClassifier`) and a clear contract: **input** = numpy array of shape `(n_features,)` matching the existing `eeg_pipeline.py` feature output; **output** = class probabilities for `("control", "alzheimers")`. Swapping in the real artifact later requires zero code changes — just drop the file at `data/processed/eeg_clf.joblib` and, if the real class names differ, update the labels via environment variable.
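The stub builder and loader side of that contract can be sketched as below. This is a minimal sketch under stated assumptions: the file names come from this plan, while `N_FEATURES` and the function names (`build_stub`, `predict_eeg`) are illustrative — the real feature count must match `eeg_pipeline.py`'s output.

```python
# Hedged sketch of the stub-able EEG classifier contract: any joblib-pickled
# predict_proba classifier satisfies it, so the real artifact drops in later.
from pathlib import Path

import joblib
import numpy as np
from sklearn.ensemble import RandomForestClassifier

LABELS = ("control", "alzheimers")
N_FEATURES = 16  # assumption: must match eeg_pipeline.py's feature vector length


def build_stub(path: Path, seed: int = 0) -> Path:
    """Train a tiny RandomForest on synthetic, clearly non-clinical data."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(64, N_FEATURES))
    y = rng.integers(0, 2, size=64)
    clf = RandomForestClassifier(n_estimators=8, random_state=seed).fit(X, y)
    path.parent.mkdir(parents=True, exist_ok=True)
    joblib.dump(clf, path)
    return path


def predict_eeg(artifact: Path, features: np.ndarray) -> dict[str, float]:
    """Same call path for stub and real artifact: load, predict_proba, map labels."""
    clf = joblib.load(artifact)
    probs = clf.predict_proba(features.reshape(1, -1))[0]
    return dict(zip(LABELS, map(float, probs)))
```

Because `predict_eeg` only relies on the `predict_proba` interface, replacing `eeg_clf.joblib` with the real classifier leaves this code untouched.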
|
| 68 |
|
| 69 |
### C. RAG corpus location
|
| 70 |
|
|
@@ -32,18 +32,13 @@ CLASS_TO_IDX = {
|
|
| 32 |
|
| 33 |
---
|
| 34 |
|
| 35 |
-
##
|
| 36 |
|
| 37 |
-
The artifact `best_model.pt` is **not**
|
| 38 |
|
| 39 |
-
|
| 40 |
-
2. Confirm with `python -c "import torch; sd = torch.load('data/processed/mri_dl_2d/best_model.pt', map_location='cpu'); print(type(sd), list(sd.keys())[:5] if isinstance(sd, dict) else sd)"`. Two possible structures:
|
| 41 |
-
- **`state_dict` only** (most common): `dict[str, Tensor]`. Task 1 builds the resnet18 architecture and `load_state_dict`s.
|
| 42 |
-
- **Full model** (`torch.save(model, ...)`): a pickled `nn.Module`. Task 1 just calls `torch.load(...)`.
|
| 43 |
-
- The plan defaults to **state_dict** (more portable). If the file turns out to be a full model, Task 1 has a fallback branch.
|
| 44 |
-
3. Add the artifact path to `.gitignore` if it isn't already covered (`data/processed/` should already be ignored — verify).
|
| 45 |
|
| 46 |
-
|
| 47 |
|
| 48 |
---
|
| 49 |
|
|
|
|
| 32 |
|
| 33 |
---
|
| 34 |
|
| 35 |
+
## Asset note
|
| 36 |
|
| 37 |
+
The user's trained checkpoint (`outputs\checkpoints\best_model.pt` on the trainer machine) is the artifact this plan loads. Drop it at `data/processed/mri_dl_2d/best_model.pt`. The training is finished — this plan does **not** retrain or re-tune; it loads the user's exact `state_dict` (or pickled `nn.Module`) and runs inference with the documented preprocessing contract (resize to `image_size=160`, ImageNet normalisation, 4-class head).
|
| 38 |
|
| 39 |
+
`data/processed/` is already gitignored — never commit the binary.
|
| 40 |
|
| 41 |
+
The plan's tests use a synthetic dummy resnet18 (built on demand in `tests/fixtures/build_dummy_resnet18_2d.py`), so every TDD step runs green even before the real artifact arrives. Task 4 adds a real-artifact sanity test that auto-skips when the checkpoint is absent and runs once the user drops it in.
|
| 42 |
|
| 43 |
---
|
| 44 |
|
|
@@ -1,641 +0,0 @@
|
|
| 1 |
-
# OASIS Tabular Classifier — Fusion Integration Plan
|
| 2 |
-
|
| 3 |
-
> **For agentic workers:** REQUIRED SUB-SKILL: `superpowers:subagent-driven-development`. TDD throughout.
|
| 4 |
-
|
| 5 |
-
## ⚠️ Important context — read before executing
|
| 6 |
-
|
| 7 |
-
The user said "I have the pretrained model for eeg, integrate it into the eeg pipeline. its the ipynb file named detecting-early-alzheimers...".
|
| 8 |
-
|
| 9 |
-
The notebook (`/Users/mertgungor/Downloads/rag/detecting-early-alzheimer-s (1).ipynb`) is **NOT an EEG model**. It is an sklearn ensemble (LogReg / SVM / DT / RF / AdaBoost) trained on the OASIS longitudinal **tabular** dataset — features are MMSE, eTIV, nWBV, ASF, EDUC, SES, M/F, Age. Zero EEG signal processing. Zero saved model artifact (the notebook trains in-memory only).
|
| 10 |
-
|
| 11 |
-
This plan therefore has **two branches**. Pick one with the user before executing.
|
| 12 |
-
|
| 13 |
-
### Branch 3a — Train + integrate the OASIS *tabular* classifier as a fusion feature
|
| 14 |
-
|
| 15 |
-
We re-train the best variant (Random Forest, AUC 84.4 % per the notebook) from the OASIS CSV, save a `joblib` artifact, and expose it as a fusion-engine modality named `tabular_oasis`. The fusion engine already handles arbitrary modality keys; this plugs in cleanly.
|
| 16 |
-
|
| 17 |
-
**Demo value:** When a doctor has only OASIS-style biomarkers (MMSE / eTIV / nWBV / ASF / Age / EDUC / SES / M/F) but no MRI image, the fusion engine still produces an Alzheimer's confidence with attribution.
|
| 18 |
-
|
| 19 |
-
### Branch 3b — User has a real EEG model elsewhere
|
| 20 |
-
|
| 21 |
-
If the user can point us to a checkpoint that consumes raw FIF / EDF EEG data (e.g., a `.pt`, `.pth`, `.h5`, `.onnx`, or `.joblib` file) and emits Alzheimer's class probabilities, this plan is rewritten around that artifact: signature, expected input shape, label order. We replace `src/models/eeg_model.py` (currently absent — `eeg_pipeline.py` only does signal processing) with a new module similar to `mri_dl_2d.py`.
|
| 22 |
-
|
| 23 |
-
**The user must pick a branch** before any task starts. The default below is **Branch 3a**, because the notebook is what's actually on disk.
|
| 24 |
-
|
| 25 |
-
---
|
| 26 |
-
|
| 27 |
-
## Branch 3a (default): OASIS tabular classifier as fusion modality
|
| 28 |
-
|
| 29 |
-
**Goal.** Save a Random Forest trained on OASIS biomarkers; wire it into the fusion engine as a new modality `tabular_oasis`. The doctor enters MMSE/eTIV/nWBV/ASF (fusion already takes MMSE; this extends to the other three) and gets an Alzheimer's signal that flows through the existing logit/sigmoid combiner.
|
| 30 |
-
|
| 31 |
-
**Architecture.** New module `src/models/tabular_oasis.py` trains-or-loads a `joblib`-pickled `Pipeline(scaler -> RandomForestClassifier)`. The fusion engine grows one entry in `_CLINICAL_FNS` (or, more cleanly, a sibling `_TABULAR_FNS`) so the model's class probability for `Demented=1` becomes a signed signal. New API route `POST /predict/tabular_oasis` lets the frontend call it directly. All optional — if the OASIS CSV is absent, the module degrades gracefully and fusion ignores the modality.
|
| 32 |
-
|
| 33 |
-
**Tech stack.** scikit-learn (already in deps), pandas, joblib (likely in deps via sklearn).
|
| 34 |
-
|
| 35 |
-
---
|
| 36 |
-
|
| 37 |
-
## Prerequisite (controller blocker)
|
| 38 |
-
|
| 39 |
-
The OASIS dataset is not in this repo. Two acquisition options:
|
| 40 |
-
|
| 41 |
-
1. **Download from Kaggle** (https://www.kaggle.com/datasets/jboysen/mri-and-alzheimers, file `oasis_longitudinal.csv`). Save to `data/external/oasis_longitudinal.csv`. Gitignore (already covered by `data/external_rag/` if you broaden it; otherwise add `data/external/`).
|
| 42 |
-
|
| 43 |
-
2. **Use a local copy** if the user already downloaded it for the notebook. Same destination.
|
| 44 |
-
|
| 45 |
-
If the dataset is unavailable, **stop and surface to the user**. The classifier cannot be trained without it; we will not fabricate synthetic OASIS-shaped data for a clinical demo.
|
| 46 |
-
|
| 47 |
-
---
|
| 48 |
-
|
| 49 |
-
## File structure
|
| 50 |
-
|
| 51 |
-
| Path | Responsibility |
|
| 52 |
-
|---|---|
|
| 53 |
-
| Modify `requirements.txt` | confirm `joblib` (sklearn pulls it in transitively, but pinning it explicitly is safer) |
|
| 54 |
-
| Modify `.gitignore` | ensure `data/external/` is ignored |
|
| 55 |
-
| Create `src/models/tabular_oasis.py` | train + persist + load + predict the OASIS RF classifier |
|
| 56 |
-
| Create `scripts/train_oasis.py` | one-shot CLI: trains and saves the model artifact |
|
| 57 |
-
| Modify `src/fusion/types.py` | extend `ClinicalScores` with `etiv`, `nwbv`, `asf`, `educ`, `ses`, `is_male` |
|
| 58 |
-
| Modify `src/fusion/weights.py` | add `tabular_oasis` weight key for `alzheimers` |
|
| 59 |
-
| Modify `src/fusion/engine.py` | add `tabular_oasis` to the modality dispatch |
|
| 60 |
-
| Modify `src/api/routes.py` | new route `POST /predict/tabular_oasis` |
|
| 61 |
-
| Modify `src/api/schemas.py` | request/response for the new route |
|
| 62 |
-
| Create `tests/models/test_tabular_oasis.py` | training + persistence + prediction tests |
|
| 63 |
-
| Create `tests/fixtures/build_synthetic_oasis.py` | synthetic OASIS-shaped CSV for tests (clearly labelled non-clinical) |
|
| 64 |
-
| Create `tests/fusion/test_tabular_oasis_modality.py` | fusion-side integration |
|
| 65 |
-
| Create `tests/api/test_tabular_oasis_route.py` | API integration |
|
| 66 |
-
| Modify `README.md` | document the modality + how to acquire the OASIS CSV |
|
| 67 |
-
|
| 68 |
-
---
|
| 69 |
-
|
| 70 |
-
## Tasks
|
| 71 |
-
|
| 72 |
-
### Task 0: Deps + ignore
|
| 73 |
-
|
| 74 |
-
**Files:** `requirements.txt`, `.gitignore`
|
| 75 |
-
|
| 76 |
-
- [ ] **Step 1:** verify `joblib` and `pandas` are in `requirements.txt`. `pandas` already is (used by every pipeline). Add `joblib>=1.3,<2.0` if not pinned.
|
| 77 |
-
|
| 78 |
-
- [ ] **Step 2:** `.gitignore` should cover `data/external/`. Add it if needed.
|
| 79 |
-
|
| 80 |
-
- [ ] **Step 3:** `pytest -q` baseline. Commit: `chore(oasis): pin joblib; gitignore external dataset dir`.
|
| 81 |
-
|
| 82 |
-
---
|
| 83 |
-
|
| 84 |
-
### Task 1: Training + persistence module
|
| 85 |
-
|
| 86 |
-
**Files:**
|
| 87 |
-
- Create: `src/models/tabular_oasis.py`
|
| 88 |
-
- Create: `scripts/train_oasis.py`
|
| 89 |
-
- Create: `tests/fixtures/build_synthetic_oasis.py`
|
| 90 |
-
- Create: `tests/models/test_tabular_oasis.py`
|
| 91 |
-
|
| 92 |
-
- [ ] **Step 1: Synthetic-fixture helper** (clearly synthetic — never confused with real clinical data):
|
| 93 |
-
|
| 94 |
-
`tests/fixtures/build_synthetic_oasis.py`:
|
| 95 |
-
|
| 96 |
-
```python
|
| 97 |
-
"""Build a synthetic OASIS-shaped CSV for tests. NON-CLINICAL data."""
|
| 98 |
-
from __future__ import annotations
|
| 99 |
-
|
| 100 |
-
from pathlib import Path
|
| 101 |
-
|
| 102 |
-
import numpy as np
|
| 103 |
-
import pandas as pd
|
| 104 |
-
|
| 105 |
-
|
| 106 |
-
def build(path: Path, n: int = 200, seed: int = 42) -> Path:
|
| 107 |
-
"""Save a synthetic CSV at `path` with the columns the trainer expects."""
|
| 108 |
-
path = Path(path)
|
| 109 |
-
if path.exists():
|
| 110 |
-
return path
|
| 111 |
-
rng = np.random.default_rng(seed)
|
| 112 |
-
n_dem = n // 2
|
| 113 |
-
|
| 114 |
-
# Demented half — lower MMSE, higher CDR, smaller nWBV.
|
| 115 |
-
dem = pd.DataFrame({
|
| 116 |
-
"Group": ["Demented"] * n_dem,
|
| 117 |
-
"M/F": rng.choice(["M", "F"], n_dem),
|
| 118 |
-
"Age": rng.integers(70, 95, n_dem),
|
| 119 |
-
"EDUC": rng.integers(8, 18, n_dem),
|
| 120 |
-
"SES": rng.integers(1, 5, n_dem),
|
| 121 |
-
"MMSE": rng.integers(15, 26, n_dem),
|
| 122 |
-
"CDR": rng.choice([0.5, 1.0], n_dem),
|
| 123 |
-
"eTIV": rng.integers(1200, 1700, n_dem),
|
| 124 |
-
"nWBV": rng.uniform(0.65, 0.74, n_dem),
|
| 125 |
-
"ASF": rng.uniform(1.0, 1.4, n_dem),
|
| 126 |
-
"Visit": 1,
|
| 127 |
-
"Hand": "R",
|
| 128 |
-
})
|
| 129 |
-
nondem = pd.DataFrame({
|
| 130 |
-
"Group": ["Nondemented"] * (n - n_dem),
|
| 131 |
-
"M/F": rng.choice(["M", "F"], n - n_dem),
|
| 132 |
-
"Age": rng.integers(60, 90, n - n_dem),
|
| 133 |
-
"EDUC": rng.integers(10, 22, n - n_dem),
|
| 134 |
-
"SES": rng.integers(1, 5, n - n_dem),
|
| 135 |
-
"MMSE": rng.integers(26, 31, n - n_dem),
|
| 136 |
-
"CDR": rng.choice([0.0], n - n_dem),
|
| 137 |
-
"eTIV": rng.integers(1300, 1900, n - n_dem),
|
| 138 |
-
"nWBV": rng.uniform(0.70, 0.83, n - n_dem),
|
| 139 |
-
"ASF": rng.uniform(0.9, 1.5, n - n_dem),
|
| 140 |
-
"Visit": 1,
|
| 141 |
-
"Hand": "R",
|
| 142 |
-
})
|
| 143 |
-
|
| 144 |
-
pd.concat([dem, nondem], ignore_index=True).to_csv(path, index=False)
|
| 145 |
-
return path
|
| 146 |
-
```
|
| 147 |
-
|
| 148 |
-
- [ ] **Step 2: Failing test.**
|
| 149 |
-
|
| 150 |
-
`tests/models/test_tabular_oasis.py`:
|
| 151 |
-
|
| 152 |
-
```python
|
| 153 |
-
"""Tests for src.models.tabular_oasis."""
|
| 154 |
-
from __future__ import annotations
|
| 155 |
-
|
| 156 |
-
from pathlib import Path
|
| 157 |
-
|
| 158 |
-
import pytest
|
| 159 |
-
|
| 160 |
-
from src.models import tabular_oasis
|
| 161 |
-
from tests.fixtures.build_synthetic_oasis import build as build_synth
|
| 162 |
-
|
| 163 |
-
|
| 164 |
-
class TestTrainAndPredict:
|
| 165 |
-
def test_train_persists_loadable_artifact(self, tmp_path: Path) -> None:
|
| 166 |
-
csv = build_synth(tmp_path / "oasis.csv")
|
| 167 |
-
artifact = tabular_oasis.train_from_csv(csv, tmp_path / "rf.joblib")
|
| 168 |
-
assert artifact.exists()
|
| 169 |
-
loaded = tabular_oasis.load(artifact)
|
| 170 |
-
assert hasattr(loaded, "predict_proba")
|
| 171 |
-
|
| 172 |
-
def test_predict_returns_full_dict(self, tmp_path: Path) -> None:
|
| 173 |
-
csv = build_synth(tmp_path / "oasis.csv")
|
| 174 |
-
artifact = tabular_oasis.train_from_csv(csv, tmp_path / "rf.joblib")
|
| 175 |
-
model = tabular_oasis.load(artifact)
|
| 176 |
-
out = tabular_oasis.predict_one(model, {
|
| 177 |
-
"is_male": 1, "age": 80, "educ": 10, "ses": 3.0,
|
| 178 |
-
"mmse": 18.0, "etiv": 1500.0, "nwbv": 0.68, "asf": 1.2,
|
| 179 |
-
})
|
| 180 |
-
assert set(out) == {"label", "label_text", "confidence", "probabilities"}
|
| 181 |
-
assert out["label"] in {0, 1}
|
| 182 |
-
assert out["label_text"] in {"Nondemented", "Demented"}
|
| 183 |
-
assert 0.0 <= out["confidence"] <= 1.0
|
| 184 |
-
probs = out["probabilities"]
|
| 185 |
-
assert len(probs) == 2
|
| 186 |
-
assert abs(sum(p["probability"] for p in probs) - 1.0) < 1e-5
|
| 187 |
-
|
| 188 |
-
def test_predict_with_synthetic_demented_profile_yields_demented_label(self, tmp_path: Path) -> None:
|
| 189 |
-
# The synthetic data has clean separation, so a clearly-demented profile
|
| 190 |
-
# (MMSE=15, low nWBV, age 88) should classify as Demented.
|
| 191 |
-
csv = build_synth(tmp_path / "oasis.csv")
|
| 192 |
-
artifact = tabular_oasis.train_from_csv(csv, tmp_path / "rf.joblib")
|
| 193 |
-
model = tabular_oasis.load(artifact)
|
| 194 |
-
out = tabular_oasis.predict_one(model, {
|
| 195 |
-
"is_male": 1, "age": 88, "educ": 8, "ses": 3.0,
|
| 196 |
-
"mmse": 15.0, "etiv": 1300.0, "nwbv": 0.66, "asf": 1.3,
|
| 197 |
-
})
|
| 198 |
-
assert out["label_text"] == "Demented"
|
| 199 |
-
|
| 200 |
-
def test_load_missing_artifact_raises(self, tmp_path: Path) -> None:
|
| 201 |
-
with pytest.raises(FileNotFoundError, match="OASIS classifier artifact not found"):
|
| 202 |
-
tabular_oasis.load(tmp_path / "missing.joblib")
|
| 203 |
-
```
|
| 204 |
-
|
| 205 |
-
Run → ImportError.
|
| 206 |
-
|
| 207 |
-
- [ ] **Step 3: Minimal impl.**
|
| 208 |
-
|
| 209 |
-
`src/models/tabular_oasis.py`:
|
| 210 |
-
|
| 211 |
-
```python
|
| 212 |
-
"""OASIS tabular Alzheimer's classifier — Random Forest with full pipeline."""
|
| 213 |
-
from __future__ import annotations
|
| 214 |
-
|
| 215 |
-
from pathlib import Path
|
| 216 |
-
from typing import Any
|
| 217 |
-
|
| 218 |
-
import joblib
|
| 219 |
-
import numpy as np
|
| 220 |
-
import pandas as pd
|
| 221 |
-
from sklearn.ensemble import RandomForestClassifier
|
| 222 |
-
from sklearn.pipeline import Pipeline
|
| 223 |
-
from sklearn.preprocessing import MinMaxScaler
|
| 224 |
-
|
| 225 |
-
from src.core.logger import get_logger
|
| 226 |
-
|
| 227 |
-
logger = get_logger(__name__)
|
| 228 |
-
|
| 229 |
-
FEATURE_ORDER: tuple[str, ...] = (
|
| 230 |
-
"is_male", "age", "educ", "ses", "mmse", "etiv", "nwbv", "asf",
|
| 231 |
-
)
|
| 232 |
-
LABEL_NAMES: tuple[str, ...] = ("Nondemented", "Demented")
|
| 233 |
-
|
| 234 |
-
|
| 235 |
-
def _df_from_oasis_csv(csv_path: Path) -> tuple[pd.DataFrame, pd.Series]:
|
| 236 |
-
"""Replicate the notebook's preprocessing: first visit only, M/F encoded,
|
| 237 |
-
Converted-as-Demented, drop unused columns, median-impute SES within EDUC groups."""
|
| 238 |
-
df = pd.read_csv(csv_path)
|
| 239 |
-
df = df.loc[df["Visit"] == 1].reset_index(drop=True)
|
| 240 |
-
df["M/F"] = df["M/F"].replace({"F": 0, "M": 1})
|
| 241 |
-
df["Group"] = df["Group"].replace({"Converted": "Demented"}).replace(
|
| 242 |
-
{"Demented": 1, "Nondemented": 0}
|
| 243 |
-
)
|
| 244 |
-
df = df.drop(columns=[c for c in ("MRI ID", "Visit", "Hand") if c in df.columns])
|
| 245 |
-
df["SES"] = df["SES"].fillna(df.groupby("EDUC")["SES"].transform("median"))
|
| 246 |
-
|
| 247 |
-
feature_df = pd.DataFrame({
|
| 248 |
-
"is_male": df["M/F"].astype(float),
|
| 249 |
-
"age": df["Age"].astype(float),
|
| 250 |
-
"educ": df["EDUC"].astype(float),
|
| 251 |
-
"ses": df["SES"].astype(float),
|
| 252 |
-
"mmse": df["MMSE"].astype(float),
|
| 253 |
-
"etiv": df["eTIV"].astype(float),
|
| 254 |
-
"nwbv": df["nWBV"].astype(float),
|
| 255 |
-
"asf": df["ASF"].astype(float),
|
| 256 |
-
})[list(FEATURE_ORDER)]
|
| 257 |
-
return feature_df, df["Group"].astype(int)
|
| 258 |
-
|
| 259 |
-
|
| 260 |
-
def train_from_csv(csv_path: Path, artifact_path: Path) -> Path:
|
| 261 |
-
"""Train and persist a MinMaxScaler→RandomForest pipeline. Returns artifact path."""
|
| 262 |
-
csv_path = Path(csv_path)
|
| 263 |
-
artifact_path = Path(artifact_path)
|
| 264 |
-
if not csv_path.exists():
|
| 265 |
-
raise FileNotFoundError(f"OASIS CSV not found: {csv_path}")
|
| 266 |
-
|
| 267 |
-
X, y = _df_from_oasis_csv(csv_path)
|
| 268 |
-
pipeline = Pipeline([
|
| 269 |
-
("scaler", MinMaxScaler()),
|
| 270 |
-
("rf", RandomForestClassifier(
|
| 271 |
-
n_estimators=12, max_depth=8, max_features=8,
|
| 272 |
-
n_jobs=4, random_state=0,
|
| 273 |
-
)),
|
| 274 |
-
])
|
| 275 |
-
pipeline.fit(X, y)
|
| 276 |
-
artifact_path.parent.mkdir(parents=True, exist_ok=True)
|
| 277 |
-
joblib.dump(pipeline, artifact_path)
|
| 278 |
-
logger.info("trained OASIS RF: n=%d, artifact=%s", len(X), artifact_path)
|
| 279 |
-
return artifact_path
|
| 280 |
-
|
| 281 |
-
|
| 282 |
-
def load(artifact_path: Path) -> Pipeline:
|
| 283 |
-
p = Path(artifact_path)
|
| 284 |
-
if not p.exists():
|
| 285 |
-
raise FileNotFoundError(f"OASIS classifier artifact not found: {p}")
|
| 286 |
-
return joblib.load(p)
|
| 287 |
-
|
| 288 |
-
|
| 289 |
-
def predict_one(model: Pipeline, features: dict[str, float]) -> dict[str, Any]:
|
| 290 |
-
"""Predict for a single subject. `features` must have all FEATURE_ORDER keys."""
|
| 291 |
-
missing = [k for k in FEATURE_ORDER if k not in features]
|
| 292 |
-
if missing:
|
| 293 |
-
raise ValueError(f"OASIS prediction missing features: {missing}")
|
| 294 |
-
row = pd.DataFrame([{k: float(features[k]) for k in FEATURE_ORDER}])
|
| 295 |
-
probs = np.asarray(model.predict_proba(row))[0]
|
| 296 |
-
label_idx = int(np.argmax(probs))
|
| 297 |
-
return {
|
| 298 |
-
"label": label_idx,
|
| 299 |
-
"label_text": LABEL_NAMES[label_idx],
|
| 300 |
-
"confidence": float(probs[label_idx]),
|
| 301 |
-
"probabilities": [
|
| 302 |
-
{"label": i, "label_text": LABEL_NAMES[i], "probability": float(p)}
|
| 303 |
-
for i, p in enumerate(probs)
|
| 304 |
-
],
|
| 305 |
-
}
|
| 306 |
-
```
|
| 307 |
-
|
| 308 |
-
`scripts/train_oasis.py`:
|
| 309 |
-
|
| 310 |
-
```python
|
| 311 |
-
"""CLI: train the OASIS RF classifier and save it.
|
| 312 |
-
|
| 313 |
-
Usage:
|
| 314 |
-
python scripts/train_oasis.py data/external/oasis_longitudinal.csv data/processed/oasis_rf.joblib
|
| 315 |
-
"""
|
| 316 |
-
from __future__ import annotations
|
| 317 |
-
|
| 318 |
-
import sys
|
| 319 |
-
from pathlib import Path
|
| 320 |
-
|
| 321 |
-
from src.models.tabular_oasis import train_from_csv
|
| 322 |
-
|
| 323 |
-
|
| 324 |
-
def main() -> None:
|
| 325 |
-
if len(sys.argv) != 3:
|
| 326 |
-
print(__doc__)
|
| 327 |
-
sys.exit(1)
|
| 328 |
-
csv = Path(sys.argv[1])
|
| 329 |
-
out = Path(sys.argv[2])
|
| 330 |
-
train_from_csv(csv, out)
|
| 331 |
-
print(f"saved: {out}")
|
| 332 |
-
|
| 333 |
-
|
| 334 |
-
if __name__ == "__main__":
|
| 335 |
-
main()
|
| 336 |
-
```
|
| 337 |
-
|
| 338 |
-
Run tests → 4 passed.
|
| 339 |
-
|
| 340 |
-
- [ ] **Step 4:** commit: `feat(models): OASIS tabular Alzheimer's RF classifier (joblib + train CLI)`.
|
| 341 |
-
|
| 342 |
-
---
|
| 343 |
-
|
| 344 |
-
### Task 2: Extend fusion's clinical inputs
|
| 345 |
-
|
| 346 |
-
**Files:**
|
| 347 |
-
- Modify: `src/fusion/types.py` (extend `ClinicalScores`)
|
| 348 |
-
- Modify: `src/fusion/clinical.py` (add normalisers for the new fields)
|
| 349 |
-
- Modify: `tests/fusion/test_types.py` (loosen / extend bound tests)
|
| 350 |
-
- Modify: `tests/fusion/test_clinical.py` (add new normaliser tests)
|
| 351 |
-
|
| 352 |
-
- [ ] **Step 1: Failing test for new ClinicalScores fields.**
|
| 353 |
-
|
| 354 |
-
In `tests/fusion/test_types.py`, append:
|
| 355 |
-
|
| 356 |
-
```python
|
| 357 |
-
class TestExtendedClinicalScores:
|
| 358 |
-
def test_etiv_in_range(self) -> None:
|
| 359 |
-
s = ClinicalScores(etiv=1500.0)
|
| 360 |
-
assert s.etiv == pytest.approx(1500.0)
|
| 361 |
-
|
| 362 |
-
def test_etiv_out_of_range_rejected(self) -> None:
|
| 363 |
-
with pytest.raises(ValidationError):
|
| 364 |
-
ClinicalScores(etiv=5000.0)
|
| 365 |
-
|
| 366 |
-
def test_nwbv_in_range(self) -> None:
|
| 367 |
-
s = ClinicalScores(nwbv=0.72)
|
| 368 |
-
assert s.nwbv == pytest.approx(0.72)
|
| 369 |
-
```
|
| 370 |
-
|
| 371 |
-
- [ ] **Step 2: Update `src/fusion/types.py` ClinicalScores.**
|
| 372 |
-
|
| 373 |
-
Add fields (preserve existing ones):
|
| 374 |
-
|
| 375 |
-
```python
|
| 376 |
-
class ClinicalScores(BaseModel):
|
| 377 |
-
mmse: Annotated[float, Field(ge=0.0, le=30.0)] | None = None
|
| 378 |
-
moca: Annotated[float, Field(ge=0.0, le=30.0)] | None = None
|
| 379 |
-
updrs: Annotated[float, Field(ge=0.0, le=199.0)] | None = None
|
| 380 |
-
gait_speed_m_s: Annotated[float, Field(ge=0.0, le=2.5)] | None = None
|
| 381 |
-
age_years: Annotated[float, Field(ge=0.0, le=120.0)] | None = None
|
| 382 |
-
# OASIS biomarkers — used by the tabular_oasis modality.
|
| 383 |
-
etiv: Annotated[float, Field(ge=900.0, le=2200.0)] | None = None
|
| 384 |
-
nwbv: Annotated[float, Field(ge=0.5, le=0.95)] | None = None
|
| 385 |
-
asf: Annotated[float, Field(ge=0.5, le=2.0)] | None = None
|
| 386 |
-
educ: Annotated[float, Field(ge=0.0, le=30.0)] | None = None
|
| 387 |
-
ses: Annotated[float, Field(ge=1.0, le=5.0)] | None = None
|
| 388 |
-
is_male: Annotated[int, Field(ge=0, le=1)] | None = None
|
| 389 |
-
```
|
| 390 |
-
|
| 391 |
-
- [ ] **Step 3:** the tests should pass after the type change. `pytest tests/fusion/test_types.py -v`.
|
| 392 |
-
|
| 393 |
-
- [ ] **Step 4:** commit: `feat(fusion): extend ClinicalScores with OASIS biomarker fields`.
|
| 394 |
-
|
| 395 |
-
---
|
| 396 |
-
|
| 397 |
-
### Task 3: Wire `tabular_oasis` modality into the fusion engine
|
| 398 |
-
|
| 399 |
-
**Files:**
|
| 400 |
-
- Modify: `src/fusion/weights.py`
|
| 401 |
-
- Modify: `src/fusion/engine.py`
|
| 402 |
-
- Create: `tests/fusion/test_tabular_oasis_modality.py`
|
| 403 |
-
|
| 404 |
-
- [ ] **Step 1: Update weights.**
|
| 405 |
-
|
| 406 |
-
`src/fusion/weights.py`, in the `alzheimers` table:
|
| 407 |
-
|
| 408 |
-
```python
|
| 409 |
-
"alzheimers": {
|
| 410 |
-
"mri": 0.25, # was 0.35
|
| 411 |
-
"eeg": 0.15, # was 0.20
|
| 412 |
-
"tabular_oasis": 0.20, # new
|
| 413 |
-
"clinical_mmse": 0.20,
|
| 414 |
-
"clinical_moca": 0.10, # was 0.15
|
| 415 |
-
"clinical_age": 0.10,
|
| 416 |
-
},
|
| 417 |
-
```
|
| 418 |
-
|
| 419 |
-
Re-balance so the table still sums to 1.0. Note in a code comment that the re-balancing may shift the existing tests' tolerances — verify which tests need updating.
|
| 420 |
-
|
| 421 |
-
- [ ] **Step 2: Failing fusion-modality test.**
|
| 422 |
-
|
| 423 |
-
`tests/fusion/test_tabular_oasis_modality.py`:
|
| 424 |
-
|
| 425 |
-
```python
|
| 426 |
-
"""Tests: tabular_oasis modality contributes to alzheimers fusion score."""
|
| 427 |
-
from __future__ import annotations
|
| 428 |
-
|
| 429 |
-
import os
|
| 430 |
-
from pathlib import Path
|
| 431 |
-
|
| 432 |
-
import pytest
|
| 433 |
-
|
| 434 |
-
from src.fusion import engine
|
| 435 |
-
from src.fusion.types import ClinicalScores, FusionInput
|
| 436 |
-
from src.models.tabular_oasis import train_from_csv
|
| 437 |
-
from tests.fixtures.build_synthetic_oasis import build as build_synth
|
| 438 |
-
|
| 439 |
-
|
| 440 |
-
@pytest.fixture()
|
| 441 |
-
def trained_artifact(tmp_path: Path, monkeypatch) -> Path:
|
| 442 |
-
csv = build_synth(tmp_path / "oasis.csv")
|
| 443 |
-
art = train_from_csv(csv, tmp_path / "rf.joblib")
|
| 444 |
-
monkeypatch.setenv("OASIS_RF_ARTIFACT", str(art))
|
| 445 |
-
return art
|
| 446 |
-
|
| 447 |
-
|
| 448 |
-
class TestTabularOasisModality:
|
| 449 |
-
def test_demented_profile_raises_alzheimers(self, trained_artifact: Path) -> None:
|
| 450 |
-
out = engine.fuse(FusionInput(clinical=ClinicalScores(
|
| 451 |
-
is_male=1, age_years=88, educ=8, ses=3.0,
|
| 452 |
-
mmse=15.0, etiv=1300.0, nwbv=0.66, asf=1.3,
|
| 453 |
-
)))
|
| 454 |
-
alz = next(d for d in out.diseases if d.disease == "alzheimers")
|
| 455 |
-
assert alz.probability > 0.6
|
| 456 |
-
assert any(c.modality == "tabular_oasis" for c in alz.contributions)
|
| 457 |
-
|
| 458 |
-
def test_missing_oasis_inputs_skips_modality(self, trained_artifact: Path) -> None:
|
| 459 |
-
# MMSE alone but no etiv/nwbv → tabular_oasis should be skipped, not error.
|
| 460 |
-
out = engine.fuse(FusionInput(clinical=ClinicalScores(mmse=12.0)))
|
| 461 |
-
alz = next(d for d in out.diseases if d.disease == "alzheimers")
|
| 462 |
-
names = {c.modality for c in alz.contributions}
|
| 463 |
-
assert "tabular_oasis" not in names
|
| 464 |
-
```
|
| 465 |
-
|
| 466 |
-
- [ ] **Step 3: Update the engine.**
|
| 467 |
-
|
| 468 |
-
In `src/fusion/engine.py`, add a tabular-modality dispatcher that lazy-loads the joblib artifact once and treats the OASIS classifier's `P(Demented)` as the alzheimers signal `2*P-1`:
|
| 469 |
-
|
| 470 |
-
```python
|
| 471 |
-
import os
|
| 472 |
-
|
| 473 |
-
_oasis_cache: dict[str, Any] = {}
|
| 474 |
-
|
| 475 |
-
|
| 476 |
-
def _signal_for_tabular_oasis(disease: str, clinical: ClinicalScores) -> float | None:
|
| 477 |
-
if disease != "alzheimers":
|
| 478 |
-
return None
|
| 479 |
-
required = ("is_male", "age_years", "educ", "ses", "mmse", "etiv", "nwbv", "asf")
|
| 480 |
-
if any(getattr(clinical, k, None) is None for k in required):
|
| 481 |
-
return None
|
| 482 |
-
artifact = os.environ.get("OASIS_RF_ARTIFACT", "data/processed/oasis_rf.joblib")
|
| 483 |
-
artifact_path = Path(artifact)
|
| 484 |
-
if not artifact_path.exists():
|
| 485 |
-
logger.warning("tabular_oasis artifact missing at %s; skipping modality", artifact_path)
|
| 486 |
-
return None
|
| 487 |
-
if "model" not in _oasis_cache:
|
| 488 |
-
from src.models.tabular_oasis import load
|
| 489 |
-
_oasis_cache["model"] = load(artifact_path)
|
| 490 |
-
from src.models.tabular_oasis import predict_one
|
| 491 |
-
feats = {
|
| 492 |
-
"is_male": int(clinical.is_male),
|
| 493 |
-
"age": float(clinical.age_years),
|
| 494 |
-
"educ": float(clinical.educ),
|
| 495 |
-
"ses": float(clinical.ses),
|
| 496 |
-
"mmse": float(clinical.mmse),
|
| 497 |
-
"etiv": float(clinical.etiv),
|
| 498 |
-
"nwbv": float(clinical.nwbv),
|
| 499 |
-
"asf": float(clinical.asf),
|
| 500 |
-
}
|
| 501 |
-
pred = predict_one(_oasis_cache["model"], feats)
|
| 502 |
-
p_dem = next(p["probability"] for p in pred["probabilities"] if p["label_text"] == "Demented")
|
| 503 |
-
return 2.0 * p_dem - 1.0
|
| 504 |
-
```
|
| 505 |
-
|
| 506 |
-
In `_signal_for_modality`, add the dispatch:
|
| 507 |
-
|
| 508 |
-
```python
|
| 509 |
-
if modality_key == "tabular_oasis":
|
| 510 |
-
return _signal_for_tabular_oasis(disease, clinical)
|
| 511 |
-
```
|
| 512 |
-
|
| 513 |
-
- [ ] **Step 4:** `pytest tests/fusion/ -v` — expect the re-balancing to perturb a couple of existing thresholds. Either adjust the thresholds in the affected tests (e.g., the disagreement test) so they still hold with the new weights, or adjust the new weights so the existing tests still pass within tolerance. Prefer the latter — the existing thresholds were chosen carefully.

- [ ] **Step 5:** commit: `feat(fusion): add tabular_oasis modality with lazy joblib load`.

---

### Task 4: API + Streamlit + README

**Files:**
- Modify: `src/api/routes.py` — add `POST /predict/tabular_oasis`
- Modify: `src/api/schemas.py` — request/response schemas
- Modify: `src/frontend/app.py` — extend the Doctor view's clinical-input form with eTIV / nWBV / ASF / EDUC / SES
- Modify: `README.md` — describe the new modality and the OASIS dataset path

- [ ] **Step 1: New schemas.**

`src/api/schemas.py`:

```python
class TabularOasisRequest(BaseModel):
    is_male: int = Field(..., ge=0, le=1)
    age: float = Field(..., ge=0.0, le=120.0)
    educ: float = Field(..., ge=0.0, le=30.0)
    ses: float = Field(..., ge=1.0, le=5.0)
    mmse: float = Field(..., ge=0.0, le=30.0)
    etiv: float = Field(..., ge=900.0, le=2200.0)
    nwbv: float = Field(..., ge=0.5, le=0.95)
    asf: float = Field(..., ge=0.5, le=2.0)


class TabularOasisProbability(BaseModel):
    label: int
    label_text: str
    probability: float


class TabularOasisResponse(BaseModel):
    label: int
    label_text: str
    confidence: float
    probabilities: list[TabularOasisProbability]
```
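The `ge`/`le` bounds mean malformed inputs are rejected at the schema layer before the model ever runs. A quick illustration with a trimmed two-field copy of the request model (assuming Pydantic v2, which FastAPI uses here):

```python
from pydantic import BaseModel, Field, ValidationError


class TabularOasisRequest(BaseModel):
    """Trimmed two-field copy of the real schema, for illustration only."""

    ses: float = Field(..., ge=1.0, le=5.0)
    mmse: float = Field(..., ge=0.0, le=30.0)


ok = TabularOasisRequest(ses=3.0, mmse=15.0)  # in range: accepted

try:
    TabularOasisRequest(ses=0.0, mmse=15.0)  # SES below the 1.0 floor
except ValidationError as exc:
    print(exc.error_count(), "field failed validation")  # 1 field failed validation
```

FastAPI surfaces the same failure as a 422 response, so the route in Step 2 never sees out-of-range values.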

- [ ] **Step 2: Route.**

`src/api/routes.py`:

```python
@predict_router.post("/tabular_oasis", response_model=TabularOasisResponse)
def predict_tabular_oasis(req: TabularOasisRequest) -> TabularOasisResponse:
    from src.models.tabular_oasis import load, predict_one

    artifact = Path(os.environ.get("OASIS_RF_ARTIFACT", "data/processed/oasis_rf.joblib"))
    model = load(artifact)
    out = predict_one(model, req.model_dump())
    return TabularOasisResponse(**out)
```
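As written, the route calls `load(artifact)` on every request, re-reading the joblib file each time. If that shows up in profiling, a one-line cache fixes it. A sketch with a stand-in loader — the cache layer is a suggestion, not part of the plan; the real `load` lives in `src.models.tabular_oasis`:

```python
from functools import lru_cache
from pathlib import Path

calls = 0


def load(path: Path):
    """Stand-in for the real joblib-backed loader; counts invocations."""
    global calls
    calls += 1
    return {"model": "rf", "path": str(path)}


@lru_cache(maxsize=1)
def load_cached(path: str):
    # lru_cache keys on its arguments, so pass the path as a plain string
    return load(Path(path))


m1 = load_cached("data/processed/oasis_rf.joblib")
m2 = load_cached("data/processed/oasis_rf.joblib")
print(calls, m1 is m2)  # 1 True — the second call is served from the cache
```

Swapping `load` for `load_cached` in the route keeps the lazy-load behaviour (nothing is read until the first request) while avoiding repeated disk reads.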

- [ ] **Step 3: Test (`tests/api/test_tabular_oasis_route.py`).**

```python
"""Integration: POST /predict/tabular_oasis."""
from __future__ import annotations

from pathlib import Path

import pytest
from fastapi.testclient import TestClient

from src.api.main import app
from src.models.tabular_oasis import train_from_csv
from tests.fixtures.build_synthetic_oasis import build as build_synth


@pytest.fixture()
def client(monkeypatch, tmp_path):
    csv = build_synth(tmp_path / "oasis.csv")
    artifact = train_from_csv(csv, tmp_path / "rf.joblib")
    monkeypatch.setenv("OASIS_RF_ARTIFACT", str(artifact))
    return TestClient(app)


def test_predict_tabular_oasis_demented_profile(client):
    body = {
        "is_male": 1, "age": 88, "educ": 8, "ses": 3.0,
        "mmse": 15.0, "etiv": 1300.0, "nwbv": 0.66, "asf": 1.3,
    }
    r = client.post("/predict/tabular_oasis", json=body)
    assert r.status_code == 200, r.text
    data = r.json()
    assert data["label_text"] == "Demented"
```

- [ ] **Step 4:** Streamlit form extension. In `src/frontend/app.py`, find the clinical-inputs section the Doctor view exposes (likely under a "Clinical scores" expander; if absent, add one under the fusion tab). Add input widgets — `number_input` for the continuous fields, a checkbox or selectbox for `is_male` — covering the seven new fields (`is_male`, `age`, `educ`, `ses`, `etiv`, `nwbv`, `asf`), and feed their values into the existing `/fusion/predict` payload's `clinical` block.
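The payload shape can be checked without running Streamlit at all. A sketch of merging the new fields into the `clinical` block — the surrounding payload keys are assumptions for illustration; only the seven field names come from this plan:

```python
# Assumed shape of the existing POST /fusion/predict body, for illustration.
payload = {"disease": "alzheimers", "clinical": {"mmse": 15.0}}

# Values as they would arrive from the new form widgets.
new_fields = {
    "is_male": 1, "age": 88.0, "educ": 8.0, "ses": 3.0,
    "etiv": 1300.0, "nwbv": 0.66, "asf": 1.3,
}
payload["clinical"].update(new_fields)

# All seven new keys land next to the existing clinical scores.
assert set(new_fields) <= set(payload["clinical"])
assert payload["clinical"]["mmse"] == 15.0  # existing fields untouched
```

Because the new fields ride inside the existing `clinical` block, the fusion endpoint's signature does not change.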

- [ ] **Step 5:** README. Append:

````markdown
### OASIS Tabular Alzheimer's Classifier

A scikit-learn Random Forest trained on the OASIS longitudinal dataset (https://www.oasis-brains.org/) classifies Demented vs Nondemented from 8 biomarkers (sex, age, education, SES, MMSE, eTIV, nWBV, ASF). It contributes to the fusion engine as modality `tabular_oasis` (weight 0.20 for Alzheimer's).

To use: download `oasis_longitudinal.csv` from Kaggle, save it to `data/external/oasis_longitudinal.csv`, then:

```bash
python scripts/train_oasis.py data/external/oasis_longitudinal.csv data/processed/oasis_rf.joblib
export OASIS_RF_ARTIFACT=data/processed/oasis_rf.joblib
```

The fusion engine and `POST /predict/tabular_oasis` will pick it up. If the artifact is missing, the modality is skipped — fusion still works.
````

(The outer fence uses four backticks so the embedded ```bash block doesn't terminate it.)

- [ ] **Step 6:** commit: `feat(oasis): /predict/tabular_oasis route + Streamlit form + README`.

---

## Self-review checklist

1. **Independence.** The OASIS classifier and fusion remain decoupled when the artifact is absent (`OASIS_RF_ARTIFACT` unset → modality skipped). ✓
2. **No real-data fabrication.** Tests use a clearly labelled synthetic CSV. The real OASIS dataset is never committed. ✓
3. **Backward compatibility.** Existing `ClinicalScores` fields untouched. New fields are all `Optional`. ✓
4. **Branch 3a vs 3b.** This plan is Branch 3a. If the user picks Branch 3b, this plan is replaced wholesale.

---

## Execution handoff

Save this plan, then choose an execution mode: subagent-driven (recommended) or inline executing-plans.

**Reminder to controller:** before starting any task, confirm with the user: "Do you have a real EEG checkpoint I'm missing, or shall I proceed with Branch 3a (OASIS tabular Alzheimer's classifier)?"