---
title: ICH Detection Pipeline
emoji: 🏥
colorFrom: blue
colorTo: pink
sdk: docker
pinned: false
---
# AI Medical Intelligence Pipeline for CT Scan Analysis
An AI medical intelligence pipeline for intracranial hemorrhage (ICH) analysis of head CT (DICOM) images.
This project provides a Flask web interface for:
- uploading single or batch DICOM scans,
- running model inference,
- viewing Grad-CAM visualizations,
- browsing past reports and logs,
- reviewing calibration and evaluation summaries.
## Project Overview
Intracranial hemorrhage is a time-critical emergency finding in neuroimaging. This repository focuses on a practical intelligence pipeline with explainability and structured report output.
The system is built for decision support and triage assistance, not standalone diagnosis.
## Model and Artifacts
Model weights and related inference artifacts are hosted on Hugging Face:
- [Hugging Face Model Repository](https://huggingface.co/HarshCode/eff_b4_brain)
When model files are not present locally (for example on Render), the app can
download required artifacts from this Hugging Face repository at runtime.
## Detailed Performance Report
Detailed performance and B4-specific analysis are documented separately in:
- [B4_Performance_Report.md](B4_Performance_Report.md)
## GitHub Pages Setup
For step-by-step GitHub Pages setup (project site and username.github.io site), see:
- [GITHUB_PAGES_DOCUMENT.md](GITHUB_PAGES_DOCUMENT.md)
## Repository Structure
- `app.py`: Flask application entry point
- `run_interface.py`: adapter layer between app and inference implementation
- `download_imp/`: inference code and local artifact layout
- `templates/`: HTML templates (Jinja2)
- `static/`: styles and static assets
- `docs/`: GitHub Pages content
## Requirements
- Python 3.10+ (3.12 works)
- pip
- virtual environment (recommended)
Install dependencies:
```bash
pip install -r requirements.txt
```
## Environment Setup
Create local environment file from template:
```bash
cp .env.example .env
```
Important variables in `.env`:
- `ICH_APP_DEBUG`: run Flask in debug mode (`1` or `0`)
- `ICH_APP_PORT`: app port (default `7860`)
- `ICH_SECRET_KEY`: Flask secret key
- `ICH_MAX_UPLOAD_MB`: max upload size in MB
- `ICH_FOLD_SELECTION`: `ensemble`, `best`, or fold id (`0` to `4`)
- `ICH_LOCAL_MODE`: enables local directory scanning mode
- `ICH_LOG_LEVEL`: `DEBUG`, `INFO`, `WARNING`, `ERROR`
- `ICH_HF_MODEL_REPO`: Hugging Face model repo used for runtime artifact download
- `ICH_HF_TOKEN`: optional token (required only if the Hugging Face repo is private)
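A minimal sketch of how these variables might be read with sensible fallbacks (the variable names come from the list above; every default except the documented port `7860` and fold selection `ensemble` is an illustrative guess, not the app's actual behavior):

```python
import os


def load_config() -> dict:
    """Read ICH_* settings from the environment with fallback defaults."""
    return {
        "debug": os.getenv("ICH_APP_DEBUG", "0") == "1",
        "port": int(os.getenv("ICH_APP_PORT", "7860")),
        "secret_key": os.getenv("ICH_SECRET_KEY", "change-me"),
        "max_upload_mb": int(os.getenv("ICH_MAX_UPLOAD_MB", "100")),
        "fold_selection": os.getenv("ICH_FOLD_SELECTION", "ensemble"),
        "local_mode": os.getenv("ICH_LOCAL_MODE", "0") == "1",
        "log_level": os.getenv("ICH_LOG_LEVEL", "INFO"),
        "hf_model_repo": os.getenv("ICH_HF_MODEL_REPO", "HarshCode/eff_b4_brain"),
        "hf_token": os.getenv("ICH_HF_TOKEN"),  # None unless the repo is private
    }
```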
## Run the Application
```bash
python app.py
```
Open in browser:
```text
http://127.0.0.1:7860
```
## Deploy on Render
This repository includes `render.yaml` for Render deployment.
1. Push the repository to GitHub.
2. In Render, create a new Blueprint/Web Service from the repository.
3. Ensure these environment variables are set in Render:
- `ICH_HF_MODEL_REPO=HarshCode/eff_b4_brain`
- `ICH_HF_TOKEN` (only if repo is private)
- `ICH_SECRET_KEY` (recommended custom value)
4. Deploy. The service will start with:
```bash
gunicorn app:app --bind 0.0.0.0:$PORT --workers 1 --timeout 180
```
Note: first startup can take longer because model artifacts may be downloaded
from Hugging Face.
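For reference, `gunicorn app:app` resolves to a module named `app.py` exposing a WSGI callable named `app`. The real `app.py` in this repository contains the full route set; this is only a minimal sketch of the shape gunicorn expects:

```python
import os

from flask import Flask

app = Flask(__name__)  # the "app" callable that gunicorn binds to


@app.route("/")
def index():
    return "ICH Detection Pipeline"


if __name__ == "__main__":
    # Local development entry point; gunicorn ignores this block.
    app.run(host="127.0.0.1", port=int(os.getenv("ICH_APP_PORT", "7860")))
```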
## Basic Usage
1. Go to the upload page.
2. Upload a single `.dcm` file, multiple `.dcm` files, or a batch of scans.
3. Wait for inference and report generation.
4. Review:
- screening outcome,
- calibrated probability,
- confidence band,
- triage action,
- Grad-CAM overlay.
5. Use Reports / Logs / Evaluation pages for history and analysis.
## Notes
- Keep heavy model binaries out of GitHub (managed via `.gitignore`).
- Report outputs are generated at runtime.
- If required artifacts are missing locally, fetch them from the Hugging Face repository linked above.
## Disclaimer
This system is an AI-assisted screening and decision-support tool.
It does **not** provide a medical diagnosis and must be used with qualified clinical review.