Dev Nagaich committed
Commit 0cdb35f · 1 Parent(s): 479fb67

Deploy VREyeSAM

Files changed (9)
  1. .dockerignore +60 -0
  2. .gitignore +62 -0
  3. Dockerfile +38 -9
  4. README.md +114 -12
  5. app.py +326 -0
  6. requirements.txt +39 -3
  7. src/streamlit_app.py +0 -40
  8. test_app_local.py +205 -0
  9. windows.bat +91 -0
.dockerignore ADDED
@@ -0,0 +1,60 @@
+ # Git
+ .git
+ .gitignore
+
+ # Python cache
+ __pycache__/
+ *.py[cod]
+ *$py.class
+ *.so
+ .Python
+
+ # Virtual environments
+ vreyesam_env/
+ venv/
+ env/
+ ENV/
+
+ # Build artifacts
+ build/
+ dist/
+ *.egg-info/
+
+ # Data and models (downloaded during build)
+ VRBiomSegM/
+ segment-anything-2/
+
+ # Outputs
+ *.jpg
+ *.png
+ *.jpeg
+ loss_plots/
+ predictions/
+ results/
+ output/
+ VREyeSAM_results/
+
+ # Jupyter
+ .ipynb_checkpoints/
+ *.ipynb
+
+ # IDE
+ .vscode/
+ .idea/
+ *.swp
+ *.swo
+ *~
+
+ # OS
+ .DS_Store
+ Thumbs.db
+
+ # Documentation
+ docs/
+ *.md
+ !README.md
+
+ # Training scripts (not needed for deployment)
+ Training.py
+ Test.py
+ Inference.py
.gitignore ADDED
@@ -0,0 +1,62 @@
+ # Virtual Environment
+ vreyesam_env/
+ venv/
+ env/
+ ENV/
+
+ # Python cache and compiled files
+ __pycache__/
+ *.py[cod]
+ *$py.class
+ *.so
+ .Python
+ build/
+ develop-eggs/
+ dist/
+ downloads/
+ eggs/
+ .eggs/
+ lib/
+ lib64/
+ parts/
+ sdist/
+ var/
+ wheels/
+ *.egg-info/
+ .installed.cfg
+ *.egg
+
+ # Large external directories
+ segment-anything-2/
+
+ # Model checkpoints and weights
+ *.torch
+ *.pth
+ *.pt
+
+ # Data directories
+ VRBiomSegM/
+
+ # Output files
+ *.jpg
+ *.png
+ *.jpeg
+ loss_plots/
+ predictions/
+ results/
+ output/
+
+ # Jupyter Notebook
+ .ipynb_checkpoints/
+ *.ipynb
+
+ # IDE
+ .vscode/
+ .idea/
+ *.swp
+ *.swo
+ *~
+
+ # OS
+ .DS_Store
+ Thumbs.db
Dockerfile CHANGED
@@ -1,20 +1,49 @@
- FROM python:3.13.5-slim

  WORKDIR /app

  RUN apt-get update && apt-get install -y \
-     build-essential \
-     curl \
      git \
      && rm -rf /var/lib/apt/lists/*

- COPY requirements.txt ./
- COPY src/ ./src/

- RUN pip3 install -r requirements.txt

- EXPOSE 8501

- HEALTHCHECK CMD curl --fail http://localhost:8501/_stcore/health

- ENTRYPOINT ["streamlit", "run", "src/streamlit_app.py", "--server.port=8501", "--server.address=0.0.0.0"]
+ FROM python:3.11-slim

  WORKDIR /app

+ # Install system dependencies
  RUN apt-get update && apt-get install -y \
      git \
+     wget \
+     libgl1-mesa-glx \
+     libglib2.0-0 \
+     libsm6 \
+     libxext6 \
+     libxrender-dev \
      && rm -rf /var/lib/apt/lists/*

+ # Copy requirements first for better caching
+ COPY requirements.txt .

+ # Install Python dependencies
+ RUN pip install --no-cache-dir -r requirements.txt

+ # Clone SAM2 repository
+ RUN git clone https://github.com/facebookresearch/segment-anything-2.git && \
+     cd segment-anything-2 && \
+     pip install --no-cache-dir -e .

+ # Download SAM2 checkpoint
+ RUN mkdir -p segment-anything-2/checkpoints && \
+     cd segment-anything-2/checkpoints && \
+     wget https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_small.pt

+ # Download VREyeSAM fine-tuned weights from Hugging Face
+ RUN pip install --no-cache-dir huggingface-hub && \
+     huggingface-cli download devnagaich/VREyeSAM VREyeSAM_uncertainity_best.torch \
+     --local-dir segment-anything-2/checkpoints/

+ # Copy application files
+ COPY app.py .

+ # Expose Streamlit port
+ EXPOSE 7860

+ # Set environment variables
+ ENV STREAMLIT_SERVER_PORT=7860
+ ENV STREAMLIT_SERVER_ADDRESS=0.0.0.0
+ ENV STREAMLIT_SERVER_HEADLESS=true

+ # Run the application
+ CMD ["streamlit", "run", "app.py", "--server.port=7860", "--server.address=0.0.0.0"]
README.md CHANGED
@@ -1,20 +1,122 @@
  ---
- title: VREyeSAM
- emoji: 🚀
- colorFrom: red
- colorTo: red
  sdk: docker
- app_port: 8501
- tags:
-   - streamlit
  pinned: false
- short_description: Streamlit template space
  license: mit
  ---

- # Welcome to Streamlit!

- Edit `/src/streamlit_app.py` to customize this app to your heart's desire. :heart:

- If you have any questions, checkout our [documentation](https://docs.streamlit.io) and [community
- forums](https://discuss.streamlit.io).
  ---
+ title: VREyeSAM - Iris Segmentation
+ emoji: 👁️
+ colorFrom: blue
+ colorTo: green
  sdk: docker
  pinned: false
  license: mit
  ---

+ # VREyeSAM: Virtual Reality Non-Frontal Iris Segmentation

+ ![VREyeSAM Demo](https://img.shields.io/badge/Status-Active-success)
+ ![Python](https://img.shields.io/badge/Python-3.11+-blue)
+ ![PyTorch](https://img.shields.io/badge/PyTorch-2.0+-red)

+ ## 🎯 Overview
+
+ VREyeSAM is a robust iris segmentation framework designed specifically for non-frontal iris images captured in virtual reality and head-mounted device environments. Built on Meta's Segment Anything Model 2 (SAM2) with a novel uncertainty-weighted loss function, VREyeSAM achieves state-of-the-art performance on challenging VR/AR iris segmentation tasks.
+
+ ## 🚀 Features
+
+ - **Upload & Segment**: Upload any non-frontal iris image for instant segmentation
+ - **Binary Mask Generation**: Get precise binary segmentation masks
+ - **Iris Extraction**: Automatically extract and display the iris region as a rectangular strip
+ - **Visualization Options**: View overlay masks and probabilistic confidence maps
+ - **Download Results**: Save all segmentation outputs with one click
+
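The iris-extraction feature above crops the segmented region to its padded bounding box and rescales it to a fixed-height strip. A minimal NumPy-only sketch of that logic (the app itself uses `cv2.findContours` and `cv2.resize`; taking the bounding box of all mask pixels and returning the target strip size instead of the resized image are simplifications):

```python
import numpy as np

def extract_strip(image, binary_mask, strip_height=150, padding=10):
    """Crop the mask's padded bounding box and compute the fixed-height strip size."""
    ys, xs = np.where(binary_mask > 0)
    if ys.size == 0:
        return None  # no iris region detected
    y0 = max(0, int(ys.min()) - padding)
    y1 = min(image.shape[0], int(ys.max()) + 1 + padding)
    x0 = max(0, int(xs.min()) - padding)
    x1 = min(image.shape[1], int(xs.max()) + 1 + padding)
    region = image[y0:y1, x0:x1]
    # Normalize the height while preserving aspect ratio, as the app does
    # before calling cv2.resize.
    strip_width = int(strip_height * region.shape[1] / region.shape[0])
    return region, (strip_width, strip_height)
```

In the app, the returned `(strip_width, strip_height)` would be passed to `cv2.resize` on the cropped region.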
+ ## 📊 Performance Metrics
+
+ - **Precision**: 0.751
+ - **Recall**: 0.870
+ - **F1-Score**: 0.806
+ - **Mean IoU**: 0.647
+
+ Evaluated on the VRBiom dataset, VREyeSAM significantly outperforms existing segmentation methods.
+
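The metrics above are standard pixel-wise scores. For reference, they can be computed from a predicted and a ground-truth binary mask as follows (a generic sketch, not the project's evaluation code):

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Pixel-wise precision, recall, F1 and IoU for two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # predicted iris, is iris
    fp = np.logical_and(pred, ~gt).sum()   # predicted iris, is background
    fn = np.logical_and(~pred, gt).sum()   # missed iris pixels
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return precision, recall, f1, iou
```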
+ ## 🔬 Technical Details
+
+ ### Architecture
+ VREyeSAM leverages:
+ - **Base Model**: SAM2 (Segment Anything Model 2) with Hiera-Small backbone
+ - **Fine-tuning**: Custom uncertainty-weighted hybrid loss function
+ - **Training Data**: VRBiomSegM dataset with non-frontal iris images
+ - **Inference**: Point-prompt based segmentation with ensemble predictions
+
+ ### Key Innovations
+ 1. **Quality-aware Pre-processing**: Automatically filters partially/fully closed eyes
+ 2. **Uncertainty-weighted Loss**: Adaptively balances multiple learning objectives
+ 3. **Multi-point Sampling**: Uses 30 random points for robust predictions
+ 4. **Probabilistic Masking**: Generates confidence-weighted segmentation
+
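Innovations 3 and 4 combine at inference time: each point prompt yields a candidate mask with a confidence score, and the candidates are fused into a single probabilistic mask by score weighting before thresholding (the app uses a 0.2 threshold). A minimal sketch of that fusion, mirroring the logic in `app.py`:

```python
import numpy as np

def fuse_masks(masks, scores, threshold=0.2):
    """Fuse per-prompt masks (N, H, W) into a confidence map by score weighting."""
    masks = masks.astype(np.float32)
    scores = np.asarray(scores, dtype=np.float32)
    total = scores.sum()
    if total > 0:
        weights = scores / total
    else:
        # Degenerate case: fall back to uniform weights
        weights = np.full_like(scores, 1.0 / len(scores))
    # Confidence-weighted average of the candidate masks, clipped to [0, 1]
    prob = np.clip((masks * weights[:, None, None]).sum(axis=0), 0.0, 1.0)
    return (prob > threshold).astype(np.uint8), prob
```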
+ ## 🎓 Citation
+
+ If you use VREyeSAM in your research, please cite:
+
+ ```bibtex
+ @inproceedings{sharma2025vreyesam,
+   title={VREyeSAM: Virtual Reality Non-Frontal Iris Segmentation using Foundational Model with Uncertainty Weighted Loss},
+   author={Sharma, Geetanjali and Nagaich, Dev and Jaswal, Gaurav and Nigam, Aditya and Ramachandra, Raghavendra},
+   booktitle={IEEE International Joint Conference on Biometrics (IJCB)},
+   year={2025}
+ }
+ ```
+
+ ## 👥 Authors
+
+ - **Geetanjali Sharma** - Indian Institute of Technology Mandi, India
+ - **Dev Nagaich** - Indian Institute of Technology Mandi, India
+ - **Gaurav Jaswal** - Division of Digital Forensics, Directorate of Forensic Services, Shimla, India
+ - **Aditya Nigam** - Indian Institute of Technology Mandi, India
+ - **Raghavendra Ramachandra** - Norwegian University of Science and Technology (NTNU), Norway
+
+ ## 📧 Contact
+
+ For dataset access or questions:
+ - **Email**: geetanjalisharma546@gmail.com
+ - **GitHub**: [VREyeSAM Repository](https://github.com/GeetanjaliGTZ/VREyeSAM)
+
+ ## 🔗 Resources
+
+ - [Paper on ResearchGate](https://www.researchgate.net/publication/400248367_VREyeSAM_Virtual_Reality_Non-Frontal_Iris_Segmentation_using_Foundational_Model_with_uncertainty_weighted_loss)
+ - [GitHub Repository](https://github.com/GeetanjaliGTZ/VREyeSAM)
+ - [Model Weights on Hugging Face](https://huggingface.co/devnagaich/VREyeSAM)
+
+ ## 📝 License
+
+ This project is licensed under the MIT License.
+
+ ## 🙏 Acknowledgments
+
+ - Meta AI for the Segment Anything Model 2 (SAM2)
+ - VRBiom dataset contributors
+ - Indian Institute of Technology Mandi
+ - Norwegian University of Science and Technology
+
+ ## 🛠️ Usage Instructions
+
+ 1. **Upload Image**: Click on the upload button and select a non-frontal iris image
+ 2. **Segment**: Click the "Segment Iris" button to process the image
+ 3. **View Results**: Explore the binary mask, overlay, and extracted iris strip
+ 4. **Download**: Save any of the results using the download buttons
+
+ ## ⚙️ Model Details
+
+ - **Model Type**: Image Segmentation
+ - **Base Architecture**: SAM2 (Hiera-Small)
+ - **Training Dataset**: VRBiomSegM (contact for access)
+ - **Input Size**: Up to 1024px (auto-resized)
+ - **Output**: Binary mask + Probabilistic confidence map
+ - **Device**: CUDA GPU (falls back to CPU if unavailable)
+
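The "up to 1024px" auto-resize keeps the aspect ratio and never upscales: the scale factor is the smaller of 1024/width and 1024/height, applied only when it is below 1. A tiny sketch of that size computation (the helper name is illustrative):

```python
def target_size(h, w, limit=1024):
    """Return the (height, width) after fitting the longer side within `limit`.

    Mirrors the app's `r < 1` check: images already within the limit
    are left untouched (no upscaling).
    """
    r = min(limit / w, limit / h)
    if r < 1:
        return int(h * r), int(w * r)
    return h, w
```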
+ ## 🔍 Use Cases
+
+ - **Biometric Authentication**: Secure iris recognition in VR/AR environments
+ - **Medical Applications**: Iris analysis in non-ideal capture conditions
+ - **Research**: Benchmark for non-frontal iris segmentation
+ - **VR/AR Development**: Integration into head-mounted devices
+
+ ---
+
+ **Note**: This is a research prototype. For production use, please contact the authors.
app.py ADDED
@@ -0,0 +1,326 @@
+ import streamlit as st
+ import cv2
+ import torch
+ import numpy as np
+ from PIL import Image
+ import io
+ import sys
+ import os
+
+ # Add segment-anything-2 to path
+ sys.path.insert(0, os.path.join(os.path.dirname(__file__), "segment-anything-2"))
+
+ from sam2.build_sam import build_sam2
+ from sam2.sam2_image_predictor import SAM2ImagePredictor
+
+ # Page config
+ st.set_page_config(
+     page_title="VREyeSAM - Non-frontal Iris Segmentation",
+     page_icon="👁️",
+     layout="wide"
+ )
+
+ # Custom CSS
+ st.markdown("""
+ <style>
+     .main {
+         padding: 2rem;
+     }
+     .stButton>button {
+         width: 100%;
+         background-color: #4CAF50;
+         color: white;
+         padding: 0.5rem;
+         font-size: 16px;
+     }
+     .result-box {
+         border: 2px solid #ddd;
+         border-radius: 10px;
+         padding: 1rem;
+         margin: 1rem 0;
+     }
+ </style>
+ """, unsafe_allow_html=True)
+
+ @st.cache_resource
+ def load_model():
+     """Load the VREyeSAM model"""
+     try:
+         model_cfg = "configs/sam2/sam2_hiera_s.yaml"
+         sam2_checkpoint = "segment-anything-2/checkpoints/sam2_hiera_small.pt"
+         fine_tuned_weights = "segment-anything-2/checkpoints/VREyeSAM_uncertainity_best.torch"
+
+         device = "cuda" if torch.cuda.is_available() else "cpu"
+         sam2_model = build_sam2(model_cfg, sam2_checkpoint, device=device)
+         predictor = SAM2ImagePredictor(sam2_model)
+         predictor.model.load_state_dict(torch.load(fine_tuned_weights, map_location=device))
+
+         return predictor
+     except Exception as e:
+         st.error(f"Error loading model: {str(e)}")
+         return None
+
+ def read_and_resize_image(image):
+     """Read and resize image for processing"""
+     img = np.array(image)
+     if len(img.shape) == 2:  # Grayscale
+         img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)
+     elif img.shape[2] == 4:  # RGBA
+         img = cv2.cvtColor(img, cv2.COLOR_RGBA2RGB)
+
+     # Downscale so the longer side fits within 1024px (never upscale)
+     r = np.min([1024 / img.shape[1], 1024 / img.shape[0]])
+     if r < 1:
+         img = cv2.resize(img, (int(img.shape[1] * r), int(img.shape[0] * r)))
+
+     return img
+
+ def segment_iris(predictor, image):
+     """Perform iris segmentation"""
+     # Generate random point prompts for inference
+     num_samples = 30
+     input_points = np.random.randint(0, min(image.shape[:2]), (num_samples, 1, 2))
+
+     # Inference
+     with torch.no_grad():
+         predictor.set_image(image)
+         masks, scores, _ = predictor.predict(
+             point_coords=input_points,
+             point_labels=np.ones([input_points.shape[0], 1])
+         )
+
+     # Convert to numpy
+     np_masks = np.array(masks[:, 0]).astype(np.float32)
+     np_scores = scores[:, 0]
+
+     # Normalize scores
+     score_sum = np.sum(np_scores)
+     if score_sum > 0:
+         normalized_scores = np_scores / score_sum
+     else:
+         normalized_scores = np.ones_like(np_scores) / len(np_scores)
+
+     # Generate probabilistic mask
+     prob_mask = np.sum(np_masks * normalized_scores[:, None, None], axis=0)
+     prob_mask = np.clip(prob_mask, 0, 1)
+
+     # Threshold to get binary mask
+     binary_mask = (prob_mask > 0.2).astype(np.uint8)
+
+     return binary_mask, prob_mask
+
+ def extract_iris_strip(image, binary_mask):
+     """Extract iris region and create a rectangular strip"""
+     # Find contours in binary mask
+     contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
+
+     if len(contours) == 0:
+         return None
+
+     # Get the largest contour (assumed to be the iris)
+     largest_contour = max(contours, key=cv2.contourArea)
+     x, y, w, h = cv2.boundingRect(largest_contour)
+
+     # Add some padding
+     padding = 10
+     x = max(0, x - padding)
+     y = max(0, y - padding)
+     w = min(image.shape[1] - x, w + 2 * padding)
+     h = min(image.shape[0] - y, h + 2 * padding)
+
+     # Extract the iris region
+     iris_region = image[y:y+h, x:x+w]
+
+     # Create a rectangular strip (normalize height, preserve aspect ratio)
+     strip_height = 150
+     aspect_ratio = w / h
+     strip_width = int(strip_height * aspect_ratio)
+
+     iris_strip = cv2.resize(iris_region, (strip_width, strip_height))
+
+     return iris_strip
+
+ def overlay_mask_on_image(image, binary_mask, color=(0, 255, 0), alpha=0.5):
+     """Overlay binary mask on original image"""
+     overlay = image.copy()
+     mask_colored = np.zeros_like(image)
+     mask_colored[binary_mask > 0] = color
+
+     # Blend
+     result = cv2.addWeighted(overlay, 1 - alpha, mask_colored, alpha, 0)
+
+     return result
+
+ # Main App
+ def main():
+     st.title("👁️ VREyeSAM: Non-Frontal Iris Segmentation")
+     st.markdown("""
+ Upload a non-frontal iris image captured in VR/AR environments, and VREyeSAM will segment the iris region
+ using a fine-tuned SAM2 model with uncertainty-weighted loss.
+ """)
+
+     # Sidebar
+     with st.sidebar:
+         st.header("About VREyeSAM")
+         st.markdown("""
+ **VREyeSAM** is a robust iris segmentation framework designed for images captured under:
+ - Varying gaze directions
+ - Partial occlusions
+ - Inconsistent lighting conditions
+
+ **Model Performance:**
+ - Precision: 0.751
+ - Recall: 0.870
+ - F1-Score: 0.806
+ - Mean IoU: 0.647
+ """)
+
+         st.header("Settings")
+         show_overlay = st.checkbox("Show mask overlay", value=True)
+         show_probabilistic = st.checkbox("Show probabilistic mask", value=False)
+
+     # Load model
+     with st.spinner("Loading VREyeSAM model..."):
+         predictor = load_model()
+
+     if predictor is None:
+         st.error("Failed to load model. Please check the setup.")
+         return
+
+     st.success("✅ Model loaded successfully!")
+
+     # File uploader
+     uploaded_file = st.file_uploader(
+         "Upload an iris image (JPG, PNG, JPEG)",
+         type=["jpg", "png", "jpeg"],
+         help="Upload a non-frontal iris image for segmentation"
+     )
+
+     if uploaded_file is not None:
+         # Display original image
+         image = Image.open(uploaded_file)
+
+         col1, col2 = st.columns(2)
+
+         with col1:
+             st.subheader("📷 Original Image")
+             st.image(image, use_container_width=True)
+
+         # Process button
+         if st.button("🔍 Segment Iris", type="primary"):
+             with st.spinner("Segmenting iris..."):
+                 # Prepare image
+                 img_array = read_and_resize_image(image)
+
+                 # Perform segmentation
+                 binary_mask, prob_mask = segment_iris(predictor, img_array)
+
+                 # Extract iris strip
+                 ## iris_strip = extract_iris_strip(img_array, binary_mask)
+
+             with col2:
+                 st.subheader("🎯 Binary Mask")
+                 binary_mask_img = (binary_mask * 255).astype(np.uint8)
+                 st.image(binary_mask_img, use_container_width=True)
+
+             # Additional results
+             st.markdown("---")
+             st.subheader("📊 Segmentation Results")
+
+             result_cols = st.columns(3)
+
+             with result_cols[0]:
+                 if show_overlay:
+                     st.markdown("**Overlay View**")
+                     overlay = overlay_mask_on_image(img_array, binary_mask)
+                     st.image(overlay, use_container_width=True)
+
+             with result_cols[1]:
+                 if show_probabilistic:
+                     st.markdown("**Probabilistic Mask**")
+                     prob_mask_img = (prob_mask * 255).astype(np.uint8)
+                     st.image(prob_mask_img, use_container_width=True)
+
+             # with result_cols[2]:
+             #     if iris_strip is not None:
+             #         st.markdown("**Extracted Iris Strip**")
+             #         st.image(iris_strip, use_container_width=True)
+             #     else:
+             #         st.warning("No iris region detected")
+
+             # Download options
+             st.markdown("---")
+             st.subheader("💾 Download Results")
+
+             download_cols = st.columns(3)
+
+             with download_cols[0]:
+                 # Binary mask download
+                 binary_pil = Image.fromarray(binary_mask_img)
+                 buf = io.BytesIO()
+                 binary_pil.save(buf, format="PNG")
+                 st.download_button(
+                     label="Download Binary Mask",
+                     data=buf.getvalue(),
+                     file_name="binary_mask.png",
+                     mime="image/png"
+                 )
+
+             with download_cols[1]:
+                 if show_overlay:
+                     # Overlay download (img_array is already RGB, so no channel swap is needed)
+                     overlay_pil = Image.fromarray(overlay)
+                     buf = io.BytesIO()
+                     overlay_pil.save(buf, format="PNG")
+                     st.download_button(
+                         label="Download Overlay",
+                         data=buf.getvalue(),
+                         file_name="overlay.png",
+                         mime="image/png"
+                     )
+
+             # with download_cols[2]:
+             #     if iris_strip is not None:
+             #         # Iris strip download
+             #         strip_pil = Image.fromarray(iris_strip)
+             #         buf = io.BytesIO()
+             #         strip_pil.save(buf, format="PNG")
+             #         st.download_button(
+             #             label="Download Iris Strip",
+             #             data=buf.getvalue(),
+             #             file_name="iris_strip.png",
+             #             mime="image/png"
+             #         )
+
+             # Statistics
+             st.markdown("---")
+             st.subheader("📈 Segmentation Statistics")
+             stats_cols = st.columns(4)
+
+             mask_area = np.sum(binary_mask > 0)
+             total_area = binary_mask.shape[0] * binary_mask.shape[1]
+             coverage = (mask_area / total_area) * 100
+
+             with stats_cols[0]:
+                 st.metric("Mask Coverage", f"{coverage:.2f}%")
+             with stats_cols[1]:
+                 st.metric("Image Size", f"{img_array.shape[1]}x{img_array.shape[0]}")
+             with stats_cols[2]:
+                 st.metric("Mask Area (pixels)", f"{mask_area:,}")
+             # with stats_cols[3]:
+             #     if iris_strip is not None:
+             #         st.metric("Strip Size", f"{iris_strip.shape[1]}x{iris_strip.shape[0]}")
+
+     # Footer
+     st.markdown("---")
+     st.markdown("""
+ <div style='text-align: center'>
+     <p><strong>VREyeSAM</strong> - Virtual Reality Non-Frontal Iris Segmentation</p>
+     <p>🔗 <a href='https://github.com/GeetanjaliGTZ/VREyeSAM'>GitHub</a> |
+     📧 <a href='mailto:geetanjalisharma546@gmail.com'>Contact</a></p>
+ </div>
+ """, unsafe_allow_html=True)
+
+ if __name__ == "__main__":
+     main()
requirements.txt CHANGED
@@ -1,3 +1,39 @@
- altair
- pandas
- streamlit
+ # VREyeSAM Requirements - Fixed Version Constraints
+ # Compatible with Python 3.11+
+ # This version resolves NumPy conflicts with gensim and numba
+
+ # Web Interface
+ streamlit>=1.28.0,<2.0.0
+
+ # Core ML and Deep Learning - COMPATIBLE VERSIONS
+ torch==2.3.0
+ torchvision==0.18.0
+ numpy>=1.22.0,<2.0.0
+
+ # Computer Vision
+ opencv-python-headless>=4.5.0,<5.0.0
+ Pillow>=8.0.0,<11.0.0
+
+ # Data Processing and ML
+ pandas>=1.3.0,<3.0.0
+ scikit-learn>=1.0.0,<2.0.0
+
+ # Visualization
+ matplotlib>=3.5.0,<4.0.0
+
+ # Utility
+ tqdm>=4.62.0,<5.0.0
+ hydra-core>=1.1.0,<2.0.0
+ omegaconf>=2.1.0,<3.0.0
+
+ # For downloading model weights
+ huggingface-hub>=0.19.0,<1.0.0
+
+ # Note: Install PyTorch with CUDA support separately if needed:
+ # For CUDA 11.8: pip install torch==2.3.0 torchvision==0.18.0 --index-url https://download.pytorch.org/whl/cu118
+ # For CUDA 12.1: pip install torch==2.3.0 torchvision==0.18.0 --index-url https://download.pytorch.org/whl/cu121
+ # For CPU only: pip install torch==2.3.0 torchvision==0.18.0 --index-url https://download.pytorch.org/whl/cpu
+
+ # SAM2 will be installed separately from git:
+ # git clone https://github.com/facebookresearch/segment-anything-2
+ # cd segment-anything-2 && pip install -e . && cd ..
src/streamlit_app.py DELETED
@@ -1,40 +0,0 @@
- import altair as alt
- import numpy as np
- import pandas as pd
- import streamlit as st
-
- """
- # Welcome to Streamlit!
-
- Edit `/streamlit_app.py` to customize this app to your heart's desire :heart:.
- If you have any questions, checkout our [documentation](https://docs.streamlit.io) and [community
- forums](https://discuss.streamlit.io).
-
- In the meantime, below is an example of what you can do with just a few lines of code:
- """
-
- num_points = st.slider("Number of points in spiral", 1, 10000, 1100)
- num_turns = st.slider("Number of turns in spiral", 1, 300, 31)
-
- indices = np.linspace(0, 1, num_points)
- theta = 2 * np.pi * num_turns * indices
- radius = indices
-
- x = radius * np.cos(theta)
- y = radius * np.sin(theta)
-
- df = pd.DataFrame({
-     "x": x,
-     "y": y,
-     "idx": indices,
-     "rand": np.random.randn(num_points),
- })
-
- st.altair_chart(alt.Chart(df, height=700, width=700)
-     .mark_point(filled=True)
-     .encode(
-         x=alt.X("x", axis=None),
-         y=alt.Y("y", axis=None),
-         color=alt.Color("idx", legend=None, scale=alt.Scale()),
-         size=alt.Size("rand", legend=None, scale=alt.Scale(range=[1, 150])),
-     ))
test_app_local.py ADDED
@@ -0,0 +1,205 @@
+ #!/usr/bin/env python3
+ """
+ Local Testing Script for VREyeSAM Streamlit App
+
+ Run this script to test the app locally before deploying to Hugging Face Spaces.
+ Usage: python test_app_local.py
+ """
+
+ import subprocess
+ import sys
+ import os
+
+ def check_dependencies():
+     """Check if all required dependencies are installed"""
+     print("🔍 Checking dependencies...")
+
+     # Map pip package names to their import names (they differ for opencv-python)
+     required_packages = {
+         'streamlit': 'streamlit',
+         'torch': 'torch',
+         'torchvision': 'torchvision',
+         'opencv-python': 'cv2',
+         'numpy': 'numpy',
+         'PIL': 'PIL',
+     }
+
+     missing = []
+     for package, module_name in required_packages.items():
+         try:
+             __import__(module_name)
+             print(f"   ✅ {package}")
+         except ImportError:
+             print(f"   ❌ {package}")
+             missing.append(package)
+
+     if missing:
+         print(f"\n⚠️  Missing packages: {', '.join(missing)}")
+         print("Install them with: pip install -r requirements.txt")
+         return False
+
+     print("✅ All dependencies installed\n")
+     return True
+
+ def check_model_files():
+     """Check if model files exist"""
+     print("🔍 Checking model files...")
+
+     files_to_check = [
+         "segment-anything-2/checkpoints/sam2_hiera_small.pt",
+         "segment-anything-2/checkpoints/VREyeSAM_uncertainity_best.torch"
+     ]
+
+     all_exist = True
+     for file_path in files_to_check:
+         if os.path.exists(file_path):
+             size_mb = os.path.getsize(file_path) / (1024 * 1024)
+             print(f"   ✅ {file_path} ({size_mb:.1f} MB)")
+         else:
+             print(f"   ❌ {file_path} - NOT FOUND")
+             all_exist = False
+
+     if not all_exist:
+         print("\n⚠️  Some model files are missing!")
+         print("Please run the setup instructions from README.md")
+         return False
+
+     print("✅ All model files present\n")
+     return True
+
+ def check_sam2_installation():
+     """Check if SAM2 is properly installed"""
+     print("🔍 Checking SAM2 installation...")
+
+     try:
+         sys.path.insert(0, "segment-anything-2")
+         from sam2.build_sam import build_sam2
+         from sam2.sam2_image_predictor import SAM2ImagePredictor
+         print("   ✅ SAM2 modules can be imported")
+         print("✅ SAM2 properly installed\n")
+         return True
+     except ImportError as e:
+         print(f"   ❌ SAM2 import failed: {e}")
+         print("\n⚠️  SAM2 not properly installed!")
+         print("Install with:")
+         print("   git clone https://github.com/facebookresearch/segment-anything-2")
+         print("   cd segment-anything-2")
+         print("   pip install -e .")
+         return False
+
+ def test_app_syntax():
+     """Check if app.py has syntax errors"""
+     print("🔍 Checking app.py syntax...")
+
+     try:
+         with open('app.py', 'r', encoding='utf-8') as f:
+             code = f.read()
+         compile(code, 'app.py', 'exec')
+         print("   ✅ No syntax errors")
+         print("✅ app.py syntax valid\n")
+         return True
+     except SyntaxError as e:
+         print(f"   ❌ Syntax error in app.py: {e}")
+         return False
+     except UnicodeDecodeError as e:
+         print(f"   ⚠️  Unicode encoding issue: {e}")
+         print("   Trying with different encoding...")
+         try:
+             with open('app.py', 'r', encoding='latin-1') as f:
+                 code = f.read()
+             compile(code, 'app.py', 'exec')
+             print("   ✅ No syntax errors (latin-1 encoding)")
+             print("✅ app.py syntax valid\n")
+             return True
+         except Exception as e2:
+             print(f"   ❌ Still failed: {e2}")
+             return False
+
+ def run_streamlit_app():
+     """Launch the Streamlit app"""
+     print("🚀 Launching Streamlit app...")
+     print("=" * 60)
+     print("The app will open in your browser at http://localhost:8501")
+     print("Press Ctrl+C to stop the app")
+     print("=" * 60)
+     print()
+
+     try:
+         subprocess.run(['streamlit', 'run', 'app.py'], check=True)
+     except KeyboardInterrupt:
+         print("\n\n✅ App stopped by user")
+     except subprocess.CalledProcessError as e:
+         print(f"\n❌ Error running app: {e}")
+         return False
+
+     return True
+
+ def create_test_image():
+     """Create a test_images directory and report any images found in it"""
+     print("🔍 Checking for test images...")
+
+     test_dir = "test_images"
+     if not os.path.exists(test_dir):
+         os.makedirs(test_dir)
+         print(f"   📁 Created {test_dir} directory")
+
+     # Check if there are any test images
+     image_files = [f for f in os.listdir(test_dir) if f.endswith(('.jpg', '.png', '.jpeg'))]
+
+     if image_files:
+         print(f"   ✅ Found {len(image_files)} test image(s)")
+         print(f"   📂 Test images in: {test_dir}/")
+         for img in image_files:
+             print(f"      - {img}")
+     else:
+         print(f"   ℹ️  No test images found in {test_dir}/")
+         print(f"   💡 Add some iris images to {test_dir}/ for testing")
+
+     print()
+
+ def main():
+     """Main testing function"""
+     print("\n" + "=" * 60)
+     print("VREyeSAM Local Testing Suite")
+     print("=" * 60 + "\n")
+
+     # Run all checks
+     checks = [
+         ("Dependencies", check_dependencies),
+         ("Model Files", check_model_files),
+         ("SAM2 Installation", check_sam2_installation),
+         ("App Syntax", test_app_syntax),
+     ]
+
+     all_passed = True
+     for name, check_func in checks:
+         if not check_func():
+             all_passed = False
+             print(f"❌ {name} check failed\n")
+
+     # Create test image directory
+     create_test_image()
+
+     if not all_passed:
+         print("=" * 60)
+         print("⚠️  Some checks failed. Please fix the issues above.")
+         print("=" * 60)
+         sys.exit(1)
+
+     print("=" * 60)
+     print("✅ All checks passed! Ready to run the app.")
+     print("=" * 60)
+     print()
+
+     # Ask user if they want to run the app
+     response = input("Do you want to launch the app now? (y/n): ").strip().lower()
+
+     if response == 'y':
+         run_streamlit_app()
+     else:
+         print("\n✅ Testing complete!")
+         print("To run the app manually, execute: streamlit run app.py")
+         print()
+
+ if __name__ == "__main__":
+     main()
windows.bat ADDED
@@ -0,0 +1,91 @@
+ @echo off
+ REM VREyeSAM Setup Script for Windows
+ REM This script sets up the environment and downloads required files
+
+ echo ============================================================
+ echo VREyeSAM Windows Setup Script
+ echo ============================================================
+ echo.
+
+ REM Check if Python is installed
+ python --version >nul 2>&1
+ if errorlevel 1 (
+     echo [ERROR] Python is not installed or not in PATH
+     echo Please install Python 3.11 from https://www.python.org/
+     pause
+     exit /b 1
+ )
+
+ echo [1/6] Creating virtual environment...
+ if exist vreyesam_env (
+     echo Virtual environment already exists, skipping...
+ ) else (
+     python -m venv vreyesam_env
+     echo Done!
+ )
+ echo.
+
+ echo [2/6] Activating virtual environment...
+ call vreyesam_env\Scripts\activate.bat
+ echo Done!
+ echo.
+
+ echo [3/6] Installing dependencies...
+ echo This may take a few minutes...
+ python -m pip install --upgrade pip
+ pip install streamlit
+ pip install torch==2.3.0 torchvision==0.18.0 --index-url https://download.pytorch.org/whl/cu118
+ pip install "numpy<2.0.0"
+ pip install opencv-python-headless pillow pandas scikit-learn matplotlib tqdm hydra-core
+ echo Done!
+ echo.
+
+ echo [4/6] Cloning SAM2 repository...
+ if exist segment-anything-2 (
+     echo SAM2 repository already exists, skipping...
+ ) else (
+     git clone https://github.com/facebookresearch/segment-anything-2
+     echo Done!
+ )
+ echo.
+
+ echo [5/6] Installing SAM2...
+ cd segment-anything-2
+ pip install -e .
+ cd ..
+ echo Done!
+ echo.
+
+ echo [6/6] Downloading model checkpoints...
+ if not exist segment-anything-2\checkpoints mkdir segment-anything-2\checkpoints
+
+ REM Download SAM2 base checkpoint
+ if exist segment-anything-2\checkpoints\sam2_hiera_small.pt (
+     echo SAM2 checkpoint already exists, skipping...
+ ) else (
+     REM Note: unescaped parentheses break echo inside a batch ( ) block, so use a dash
+     echo Downloading SAM2 checkpoint - this may take a few minutes...
+     powershell -Command "Invoke-WebRequest -Uri 'https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_small.pt' -OutFile 'segment-anything-2\checkpoints\sam2_hiera_small.pt'"
+     echo Done!
+ )
+
+ REM Download VREyeSAM weights
+ if exist segment-anything-2\checkpoints\VREyeSAM_uncertainity_best.torch (
+     echo VREyeSAM weights already exist, skipping...
+ ) else (
+     echo Downloading VREyeSAM weights...
+     pip install huggingface-hub
+     huggingface-cli download devnagaich/VREyeSAM VREyeSAM_uncertainity_best.torch --local-dir segment-anything-2\checkpoints\
+     echo Done!
+ )
+ echo.
+
+ echo ============================================================
+ echo Setup Complete!
+ echo ============================================================
+ echo.
+ echo To run the app:
+ echo   1. Activate the environment: vreyesam_env\Scripts\activate.bat
+ echo   2. Run: streamlit run app.py
+ echo.
+ echo Press any key to exit...
+ pause >nul