Dev Nagaich committed on
Commit f74cf62 · 1 Parent(s): 2c4864e

Restructure: Clean repository - remove duplicates, consolidate at root

Files changed (12)
  1. .env.example +29 -0
  2. .gitignore +19 -5
  3. DEPLOYMENT.md +209 -0
  4. README.md +82 -0
  5. SECURITY.md +133 -0
  6. VREyeSAM +0 -1
  7. app.py +6 -55
  8. deploy.bat +0 -200
  9. model_server.py +145 -0
  10. packages.txt +0 -2
  11. test_app_local.py +0 -205
  12. windows.bat +0 -91
.env.example ADDED
@@ -0,0 +1,29 @@
+ # VREyeSAM Configuration Example
+ # Copy to .env and fill in your values
+ # DO NOT commit .env to git!
+
+ # Model Configuration
+ MODEL_DEVICE=cuda
+ MODEL_INFERENCE_TIMEOUT=300
+
+ # Hugging Face Configuration (Optional)
+ # Only needed for downloading from private repos
+ HF_TOKEN=your_huggingface_token_here
+ HF_USER=your_username
+
+ # Streamlit Configuration
+ STREAMLIT_SERVER_PORT=7860
+ STREAMLIT_SERVER_ADDRESS=0.0.0.0
+ STREAMLIT_SERVER_HEADLESS=true
+ STREAMLIT_BROWSER_GATHER_USAGE_STATS=false
+ STREAMLIT_SERVER_MAX_UPLOAD_SIZE=500
+
+ # Application Configuration
+ APP_TITLE=VREyeSAM
+ LOG_LEVEL=INFO
+ ENABLE_DEBUG=false
+
+ # Security
+ ALLOW_FILE_DOWNLOADS=true
+ ENABLE_METRICS=false
+ ANONYMOUS_USAGE_ONLY=true
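The template above maps one-to-one onto plain environment lookups in the app (a minimal sketch: only the variable names come from `.env.example`; `load_config` itself is illustrative and not a function in the repo):

```python
import os

def load_config() -> dict:
    """Read VREyeSAM settings from the environment, with safe defaults.

    Mirrors the keys in .env.example; a .env file would be exported into
    the environment by the process manager or `docker run --env-file`.
    """
    return {
        "device": os.environ.get("MODEL_DEVICE", "cpu"),
        "inference_timeout": int(os.environ.get("MODEL_INFERENCE_TIMEOUT", "300")),
        "port": int(os.environ.get("STREAMLIT_SERVER_PORT", "7860")),
        "log_level": os.environ.get("LOG_LEVEL", "INFO"),
        "debug": os.environ.get("ENABLE_DEBUG", "false").lower() == "true",
    }

cfg = load_config()
```
Reading booleans with an explicit `.lower() == "true"` avoids the classic pitfall that any non-empty string (including `"false"`) is truthy in Python.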
.gitignore CHANGED
@@ -26,16 +26,30 @@ wheels/
  .installed.cfg
  *.egg

- # Large external directories
- segment-anything-2/
-
- # Model checkpoints and weights
+ # Model checkpoints and weights - CRITICAL: Never commit
+ segment-anything-2/checkpoints/**/*.torch
+ segment-anything-2/checkpoints/**/*.pth
+ segment-anything-2/checkpoints/**/*.pt
+ segment-anything-2/checkpoints/**/*.ckpt
+ segment-anything-2/checkpoints/**/*.bin
  *.torch
  *.pth
  *.pt
+ *.ckpt
+ *.bin
+
+ # Sensitive data
+ .env
+ .env.local
+ .env.*.local
+ secrets/
+ credentials/
+ config.secret.yaml

- # Data directories
+ # Data directories - never upload training data
  VRBiomSegM/
+ data/private/
+ datasets/private/

  # Output files
  *.jpg
DEPLOYMENT.md ADDED
@@ -0,0 +1,209 @@
+ # VREyeSAM - Hugging Face Spaces Deployment Guide
+
+ ## 🚀 Quick Start Deployment
+
+ Follow these steps to deploy VREyeSAM to Hugging Face Spaces with full security.
+
+ ## Step 1: Prepare Your Repository
+
+ ### Verify Security Setup
+ ```bash
+ # Check that no model weights are committed
+ git status
+ git ls-files | grep -E '\.(pt|pth|torch|bin)$'
+ # Should output nothing!
+
+ # Verify .gitignore is properly configured
+ cat .gitignore | grep -E '(\.pt|\.torch|checkpoints)'
+ ```
+
+ ### Remove Unnecessary Files
+ - ✅ Markdown documentation files removed
+ - ✅ Model server created for secure inference
+ - ✅ Path handling secured in model_server.py
+
+ ## Step 2: Create HuggingFace Space
+
+ 1. Navigate to: https://huggingface.co/spaces
+ 2. Click **"Create new Space"**
+ 3. Fill in:
+    ```
+    Space name: vreyesam
+    License: MIT
+    SDK: Docker
+    Visibility: Public (code is public, weights are private)
+    ```
+
+ ## Step 3: Upload Files to Space
+
+ Choose one of the following methods:
+
+ ### Method A: Git Push (Recommended)
+ ```bash
+ # Clone the space
+ git clone https://huggingface.co/spaces/YOUR_USERNAME/vreyesam
+ cd vreyesam
+
+ # Copy your project files
+ cp /path/to/your/vreyesam/* .
+
+ # Push to the space
+ git add .
+ git commit -m "Deploy VREyeSAM with security"
+ git push
+ ```
+
+ ### Method B: Web Upload
+ 1. Go to the space settings
+ 2. Use the file manager to upload:
+    - `app.py`
+    - `model_server.py`
+    - `requirements_deploy.txt`
+    - `Dockerfile`
+    - `.streamlit/config.toml` (if it exists)
+    - `README.md`
+    - `SECURITY.md`
+
+ ## Step 4: Configure Secrets (For Private Weights)
+
+ 1. Go to the space **Settings**
+ 2. Add **Repository secrets**:
+    - `HF_TOKEN`: Your HuggingFace API token (from https://huggingface.co/settings/tokens)
+
+ The Dockerfile will automatically use this to download weights.
+
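Step 4 above only stores the token; the authenticated download the Dockerfile performs can be sketched with nothing but the standard library (a minimal sketch: the repo id and filename are placeholders, the Hub's `resolve/main` URL pattern is assumed, and in practice the `huggingface_hub` library is the more robust choice):

```python
import os
import urllib.request

def build_weight_request(repo_id: str, filename: str) -> urllib.request.Request:
    """Build an authenticated request for a file in a private HF model repo.

    Assumes HF_TOKEN is exported into the environment (e.g. from a Spaces
    secret); repo_id and filename here are placeholders, not the actual
    VREyeSAM weight repo layout.
    """
    url = f"https://huggingface.co/{repo_id}/resolve/main/{filename}"
    token = os.environ.get("HF_TOKEN", "")
    # Attach the bearer token only if one is configured
    headers = {"Authorization": f"Bearer {token}"} if token else {}
    return urllib.request.Request(url, headers=headers)

# urllib.request.urlopen(build_weight_request(...)) would then stream the
# checkpoint to disk during the Docker build; the token never reaches clients.
req = build_weight_request("your-user/vreyesam-weights", "weights.torch")
```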
+ ## Step 5: Set Webhooks (Optional)
+
+ For automatic updates when code changes:
+
+ 1. Create your code repository (GitHub, GitLab, etc.)
+ 2. In space Settings → Webhooks
+ 3. Add a webhook from your repo
+ 4. Pushes to your repo now trigger an automatic space update
+
+ ## Step 6: Verify Deployment
+
+ Once the Docker build completes (5-10 minutes):
+
+ 1. Click the "App" tab to see your running Spaces app
+ 2. Upload a test iris image
+ 3. Verify segmentation works
+ 4. Download the results
+
+ ## 🔒 Security Verification
+
+ ### Check These in Your Space:
+
+ ```bash
+ # 1. Verify weights are downloaded at startup, not committed
+ #    Check the space logs - should see "Model initialized"
+
+ # 2. Try accessing checkpoint files directly
+ #    Should get 404 or permission denied
+
+ # 3. Examine the source code listing
+ #    model.pt and .torch files should NOT appear in the file listing
+
+ # 4. Check the Docker logs
+ #    Should NOT show full paths to checkpoints
+ ```
+
+ ## 📋 File Structure Expected in Space
+
+ ```
+ vreyesam/
+ ├── app.py                    # Streamlit app
+ ├── model_server.py           # Secure model wrapper
+ ├── Dockerfile                # Container definition
+ ├── requirements_deploy.txt   # Python dependencies
+ ├── README.md                 # Public documentation
+ ├── SECURITY.md               # Security guide
+ ├── .gitignore                # Git ignore rules
+ ├── .env.example              # Configuration template
+ └── .streamlit/
+     └── config.toml           # Streamlit config
+ ```
+
+ Model weights (checkpoint files) should NOT appear here - they are downloaded during the build.
+
+ ## 🛠️ Troubleshooting Deployment
+
+ ### Weights Not Loading
+ - Check that HF_TOKEN is set in the secrets
+ - Verify the weights URL in the Dockerfile is correct
+ - Check the space logs for download errors
+
+ ### Build Timeout
+ - Increase the timeout in the space settings
+ - Pre-build the Docker image and push it to a registry instead
+
+ ### Model Not Segmenting
+ - Check the error messages in the app
+ - Verify the config paths in model_server.py
+ - Check GPU availability: the app should fall back to CPU
+
+ ### Size Limit Issues
+ - Hugging Face Spaces have a 50GB storage limit
+ - The model weights are typically 5-7GB total
+ - This should be sufficient for most deployments
+
+ ## 🚀 Advanced: Set Up CI/CD
+
+ ### GitHub Actions Example
+ ```yaml
+ name: Deploy to HF Spaces
+ on:
+   push:
+     branches: [main]
+
+ jobs:
+   deploy:
+     runs-on: ubuntu-latest
+     steps:
+       - uses: actions/checkout@v2
+         with:
+           fetch-depth: 0  # full history, so the push to the Space is not shallow
+       - name: Push to Spaces
+         run: |
+           git config user.email "your-email@example.com"
+           git config user.name "Your Name"
+           git remote add space https://${{ secrets.HF_USERNAME }}:${{ secrets.HF_TOKEN }}@huggingface.co/spaces/${{ secrets.HF_USERNAME }}/vreyesam
+           git push -f space HEAD:main
+ ```
+
+ ## 📞 Support & Monitoring
+
+ - **Check Logs**: Space Settings → Logs
+ - **Monitor Health**: The space card shows whether it is running
+ - **Error Tracking**: Streamlit errors appear in the browser inspector and the logs
+ - **Performance**: Use the HF Space hardware settings to allocate a GPU if needed
+
+ ## Important Notes
+
+ ⚠️ **DO NOT:**
+ - Commit `segment-anything-2/checkpoints/` to git
+ - Push `.env` files with tokens
+ - Share HF_TOKEN outside of the space secrets
+
+ ✅ **DO:**
+ - Keep the code repository public
+ - Share the README and SECURITY guide
+ - Use HF Spaces secrets for tokens
+ - Document your deployment
+
+ ---
+
+ ## Next Steps
+
+ 1. [ ] Prepare the repository files
+ 2. [ ] Verify .gitignore is correct
+ 3. [ ] Create the Hugging Face Space
+ 4. [ ] Upload/push the code files
+ 5. [ ] Set HF_TOKEN in the secrets
+ 6. [ ] Wait for the Docker build (5-10 min)
+ 7. [ ] Test the deployed app
+ 8. [ ] Share your space link!
+
+ **Congratulations! Your model is now securely deployed.** 🎉
+
+ ---
+
+ For questions: check SECURITY.md or contact geetanjalisharma546@gmail.com
README.md ADDED
@@ -0,0 +1,82 @@
+ ---
+ title: VREyeSAM
+ emoji: 👁️
+ colorFrom: purple
+ colorTo: pink
+ sdk: docker
+ app_file: app.py
+ pinned: false
+ license: mit
+ ---
+
+ # VREyeSAM: Non-Frontal Iris Segmentation
+
+ ![VREyeSAM Demo](https://img.shields.io/badge/Status-Active-success)
+ ![Python](https://img.shields.io/badge/Python-3.11+-blue)
+ ![Security](https://img.shields.io/badge/Security-Protected-brightgreen)
+
+ ## 🎯 Overview
+
+ VREyeSAM is a robust iris segmentation service for non-frontal iris images captured in virtual reality and augmented reality environments.
+
+ ## 🚀 Quick Start
+
+ 1. Upload a non-frontal iris image
+ 2. Click "Segment Iris"
+ 3. Download the results
+
+ ## 📊 Features
+
+ - **Fast Segmentation**: Real-time iris segmentation
+ - **Binary Masks**: Precise iris region extraction
+ - **Confidence Maps**: Uncertainty quantification
+ - **Easy Download**: Save results with one click
+
+ ## 🔒 Security
+
+ This model is **fully protected**:
+ - ✅ Model weights cannot be downloaded
+ - ✅ Implementation details are hidden
+ - ✅ Only API endpoints are exposed
+ - ✅ Secure inference only
+
+ ## 📈 Performance
+
+ - High accuracy on non-frontal iris images
+ - Optimized for VR/AR capture scenarios
+ - Fast inference on standard hardware
+
+ ---
+
+ ## Citation
+
+ If you use VREyeSAM in your research:
+
+ ```bibtex
+ @inproceedings{sharma2025vreyesam,
+   title={VREyeSAM: Virtual Reality Non-Frontal Iris Segmentation using Foundational Model with Uncertainty Weighted Loss},
+   author={Sharma, Geetanjali and Nagaich, Dev and Jaswal, Gaurav and Nigam, Aditya and Ramachandra, Raghavendra},
+   booktitle={IJCB},
+   year={2025}
+ }
+ ```
+
+ ## 👥 Authors
+
+ - Geetanjali Sharma
+ - Dev Nagaich
+ - Gaurav Jaswal
+ - Aditya Nigam
+ - Raghavendra Ramachandra
+
+ ## 📧 Contact
+
+ For inquiries: geetanjalisharma546@gmail.com
+
+ ## 📄 License
+
+ MIT License - See the LICENSE file
+
+ ---
+
+ **For full technical details and code, visit:** [GitHub Repository](https://github.com/GeetanjaliGTZ/VREyeSAM)
SECURITY.md ADDED
@@ -0,0 +1,133 @@
+ # VREyeSAM - Model Security & Protection Guide
+
+ ## 🔒 Overview
+
+ VREyeSAM is protected with multiple security layers to prevent model weight extraction and ensure safe deployment.
+
+ ## Security Measures Implemented
+
+ ### 1. **Model Weight Protection**
+ - ✅ Model weights are loaded at startup and never exposed to the client
+ - ✅ Weights are managed in `model_server.py` using a singleton pattern
+ - ✅ Checkpoint paths are resolved internally and never sent to the frontend
+
+ ### 2. **File System Isolation**
+ - ✅ Checkpoint files have restricted permissions (600)
+ - ✅ Only the inference API is exposed to users
+ - ✅ Raw file access is blocked
+
+ ### 3. **API-Only Architecture**
+ - ✅ No direct model file downloads
+ - ✅ Only prediction results are returned to users
+ - ✅ Model internals stay hidden
+
+ ## Deployment to Hugging Face Spaces
+
+ ### Prerequisites
+ 1. HuggingFace account with Spaces access
+ 2. Model weights in a private HuggingFace repository
+ 3. Docker setup for containerized deployment
+
+ ### Step 1: Create Private Model Repository
+
+ ```bash
+ # Clone your model repo (if not already done)
+ # Ensure checkpoints are NOT committed to git
+ # Add to .gitignore if needed
+ ```
+
+ ### Step 2: Deploy to HF Spaces
+
+ 1. Go to [Hugging Face Spaces](https://huggingface.co/spaces)
+ 2. Click "Create new Space"
+ 3. Fill in the details:
+    - **Space name**: vreyesam
+    - **License**: MIT
+    - **SDK**: Docker
+    - **Visibility**: Public (only the code, not the weights)
+ 4. After creation, upload your `Dockerfile` and code files
+
+ ### Step 3: Authentication for Model Downloads
+
+ For accessing private model weights during the Docker build:
+
+ 1. Create a HuggingFace token: https://huggingface.co/settings/tokens
+ 2. Set it in the Spaces environment (Settings → Secrets, as HF_TOKEN)
+ 3. OR use a direct URL with the token (not recommended; keep it private)
+
+ ### Step 4: Verify Security
+
+ Before deployment:
+
+ ```bash
+ # Check what files will be uploaded
+ git status
+ git ls-files | grep -E '\.(pt|pth|torch|bin)$'
+
+ # Should output: (nothing - no weights!)
+ ```
+
+ ## Security Checklist
+
+ - [ ] Model weights are in `.gitignore`
+ - [ ] Checkpoint paths are not hardcoded in code
+ - [ ] Only `model_server.py` handles weight loading
+ - [ ] The Docker build uses secure downloads
+ - [ ] `.env` files are in `.gitignore`
+ - [ ] The frontend cannot access file paths
+ - [ ] The API only exposes prediction results
+
+ ## Best Practices
+
+ ### ✅ DO:
+ - Keep model weights private and download them during deployment
+ - Use environment variables for configuration
+ - Only expose prediction API endpoints
+ - Log errors without exposing paths
+ - Use Hugging Face tokens securely in Spaces secrets
+
+ ### ❌ DON'T:
+ - Commit model weights to git
+ - Hardcode checkpoint paths in code
+ - Expose debug routes that show the model structure
+ - Log full file paths to users
+ - Include weights in Docker layers visible to users
+
+ ## Troubleshooting
+
+ ### Issue: "Model weights not found"
+ 1. Verify `.gitignore` contains the checkpoint paths
+ 2. Check that the Dockerfile correctly downloads from HuggingFace
+ 3. Ensure HF_TOKEN is set in the Spaces secrets
+
+ ### Issue: "File path exposed in error"
+ 1. Update `model_server.py` so it does not show paths
+ 2. Use generic error messages only: "Model initialization failed"
+ 3. Check that the logs don't contain sensitive details
+
+ ## Advanced Security
+
+ ### Optional: Encrypt Weights
+ ```python
+ # In model_server.py - Fernet encrypts bytes, so serialize the state_dict first
+ import io, torch
+ from cryptography.fernet import Fernet
+ buffer = io.BytesIO(); torch.save(state_dict, buffer)
+ encrypted_weights = Fernet(key).encrypt(buffer.getvalue())
+ ```
+
+ ### Optional: Disable Direct File Access
+ ```bash
+ # Set file permissions
+ chmod 600 segment-anything-2/checkpoints/*
+ # Only the app process can read them
+ ```
+
+ ## Support
+
+ For security questions or issues:
+ - Check the [GitHub Issues](https://github.com/GeetanjaliGTZ/VREyeSAM/issues)
+ - Contact: geetanjalisharma546@gmail.com
+
+ ---
+
+ **Last Updated**: March 2025
+ **Security Level**: High Protection ✅
VREyeSAM DELETED
@@ -1 +0,0 @@
- Subproject commit 44d9568243caf638828b48458514e811825c40ea
app.py CHANGED
@@ -1,17 +1,10 @@
  import streamlit as st
  import cv2
- import torch
  import numpy as np
  from PIL import Image
  import io
- import sys
- import os

- # Add segment-anything-2 to path
- sys.path.insert(0, os.path.join(os.path.dirname(__file__), "segment-anything-2"))
-
- from sam2.build_sam import build_sam2
- from sam2.sam2_image_predictor import SAM2ImagePredictor
+ from model_server import get_predictor

  # Page config
  st.set_page_config(
@@ -45,24 +38,12 @@ st.markdown("""

  @st.cache_resource
  def load_model():
-     """Load the VREyeSAM model"""
+     """Load model securely through protected server"""
      try:
-         # IMPORTANT: Hydra config system searches within sam2 package
-         # Use relative path without "segment-anything-2/" prefix
-         model_cfg = "configs/sam2/sam2_hiera_s.yaml"
-         sam2_checkpoint = "segment-anything-2/checkpoints/sam2_hiera_small.pt"
-         fine_tuned_weights = "segment-anything-2/checkpoints/VREyeSAM_uncertainity_best.torch"
-
-         # Load model
-         device = "cuda" if torch.cuda.is_available() else "cpu"
-
-         sam2_model = build_sam2(model_cfg, sam2_checkpoint, device=device)
-         predictor = SAM2ImagePredictor(sam2_model)
-         predictor.model.load_state_dict(torch.load(fine_tuned_weights, map_location=device))
-
+         predictor = get_predictor()
          return predictor
      except Exception as e:
-         st.error(f"Error loading model: {str(e)}")
+         st.error("Error loading model")
          return None
@@ -81,38 +62,8 @@ def read_and_resize_image(image):
      return img

  def segment_iris(predictor, image):
-     """Perform iris segmentation"""
-     # Generate random points for inference
-     num_samples = 30
-     input_points = np.random.randint(0, min(image.shape[:2]), (num_samples, 1, 2))
-
-     # Inference
-     with torch.no_grad():
-         predictor.set_image(image)
-         masks, scores, _ = predictor.predict(
-             point_coords=input_points,
-             point_labels=np.ones([input_points.shape[0], 1])
-         )
-
-     # Convert to numpy
-     np_masks = np.array(masks[:, 0]).astype(np.float32)
-     np_scores = scores[:, 0]
-
-     # Normalize scores
-     score_sum = np.sum(np_scores)
-     if score_sum > 0:
-         normalized_scores = np_scores / score_sum
-     else:
-         normalized_scores = np.ones_like(np_scores) / len(np_scores)
-
-     # Generate probabilistic mask
-     prob_mask = np.sum(np_masks * normalized_scores[:, None, None], axis=0)
-     prob_mask = np.clip(prob_mask, 0, 1)
-
-     # Threshold to get binary mask
-     binary_mask = (prob_mask > 0.2).astype(np.uint8)
-
-     return binary_mask, prob_mask
+     """Perform iris segmentation using secure model server"""
+     return predictor.predict(image, num_samples=30)

  def overlay_mask_on_image(image, binary_mask, color=(0, 255, 0), alpha=0.5):
      """Overlay binary mask on original image"""
deploy.bat DELETED
@@ -1,200 +0,0 @@
- @echo off
- REM VREyeSAM Deployment Script for Hugging Face Spaces (Windows)
- REM This script automates the deployment process
-
- echo ============================================================
- echo VREyeSAM Deployment to Hugging Face Spaces
- echo ============================================================
- echo.
-
- REM Step 1: Check prerequisites
- echo [1/8] Checking prerequisites...
-
- where git >nul 2>&1
- if errorlevel 1 (
-     echo [ERROR] Git is not installed
-     echo Install from: https://git-scm.com/
-     pause
-     exit /b 1
- )
- echo [OK] Git installed
-
- where git-lfs >nul 2>&1
- if errorlevel 1 (
-     echo [ERROR] Git LFS is not installed
-     echo Install from: https://git-lfs.github.com/
-     pause
-     exit /b 1
- )
- echo [OK] Git LFS installed
- echo.
-
- REM Step 2: Get Hugging Face credentials
- echo [2/8] Hugging Face Setup
- set /p HF_USERNAME="Enter your Hugging Face username: "
- set /p HF_SPACE_NAME="Enter your Space name: "
-
- set HF_SPACE_URL=https://huggingface.co/spaces/%HF_USERNAME%/%HF_SPACE_NAME%
-
- echo Space URL: %HF_SPACE_URL%
- set /p CONFIRM="Is this correct? (y/n): "
-
- if /i not "%CONFIRM%"=="y" (
-     echo Aborted.
-     pause
-     exit /b 0
- )
- echo.
-
- REM Step 3: Initialize Git repository
- echo [3/8] Initializing Git repository...
-
- if not exist ".git" (
-     git init
-     echo [OK] Git repository initialized
- ) else (
-     echo [OK] Git repository already exists
- )
- echo.
-
- REM Step 4: Setup Git LFS
- echo [4/8] Setting up Git LFS...
-
- git lfs install
- git lfs track "*.pt"
- git lfs track "*.torch"
- git lfs track "*.pth"
- git lfs track "*.bin"
-
- echo [OK] Git LFS configured
- echo.
-
- REM Step 5: Verify required files
- echo [5/8] Verifying required files...
-
- set MISSING=0
-
- if exist "app.py" (
-     echo [OK] app.py
- ) else (
-     echo [ERROR] app.py missing
-     set MISSING=1
- )
-
- if exist "requirements.txt" (
-     echo [OK] requirements.txt
- ) else (
-     echo [ERROR] requirements.txt missing
-     set MISSING=1
- )
-
- if exist "README.md" (
-     echo [OK] README.md
- ) else (
-     echo [ERROR] README.md missing
-     set MISSING=1
- )
-
- if exist ".gitattributes" (
-     echo [OK] .gitattributes
- ) else (
-     echo [ERROR] .gitattributes missing
-     set MISSING=1
- )
-
- if %MISSING%==1 (
-     echo.
-     echo [ERROR] Missing required files. Please add them first.
-     pause
-     exit /b 1
- )
- echo.
-
- REM Check SAM2 files
- echo Checking SAM2 files...
-
- if not exist "segment-anything-2" (
-     echo [ERROR] segment-anything-2 directory not found
-     echo Please clone SAM2 repository first.
-     pause
-     exit /b 1
- )
- echo [OK] segment-anything-2 directory
-
- if not exist "segment-anything-2\checkpoints\sam2_hiera_small.pt" (
-     echo [ERROR] sam2_hiera_small.pt not found
-     pause
-     exit /b 1
- )
- echo [OK] sam2_hiera_small.pt
-
- if not exist "segment-anything-2\checkpoints\VREyeSAM_uncertainity_best.torch" (
-     echo [ERROR] VREyeSAM_uncertainity_best.torch not found
-     pause
-     exit /b 1
- )
- echo [OK] VREyeSAM_uncertainity_best.torch
- echo.
-
- REM Step 6: Add remote
- echo [6/8] Adding Hugging Face remote...
-
- git remote remove space >nul 2>&1
- git remote add space %HF_SPACE_URL%
-
- echo [OK] Remote added: %HF_SPACE_URL%
- echo.
-
- REM Step 7: Commit files
- echo [7/8] Committing files...
-
- git add .
- git commit -m "Deploy VREyeSAM to Hugging Face Spaces"
-
- echo [OK] Files committed
- echo.
-
- REM Step 8: Push to Hugging Face
- echo [8/8] Pushing to Hugging Face Spaces...
- echo This may take several minutes due to large model files...
- echo.
-
- set /p PUSH_CONFIRM="Ready to push? (y/n): "
-
- if /i not "%PUSH_CONFIRM%"=="y" (
-     echo Push cancelled. You can push manually later with:
-     echo git push --set-upstream space main --force
-     pause
-     exit /b 0
- )
-
- echo Pushing to Hugging Face...
- git push --set-upstream space main --force
-
- if errorlevel 1 (
-     echo.
-     echo ============================================================
-     echo [ERROR] Deployment failed
-     echo ============================================================
-     echo.
-     echo Common issues:
-     echo 1. Authentication: Make sure you're logged in
-     echo 2. Space doesn't exist: Create it first on Hugging Face
-     echo 3. Large files: Ensure Git LFS is properly configured
-     echo.
-     pause
-     exit /b 1
- ) else (
-     echo.
-     echo ============================================================
-     echo [SUCCESS] Deployment successful!
-     echo ============================================================
-     echo.
-     echo Your space is available at:
-     echo %HF_SPACE_URL%
-     echo.
-     echo It may take 10-15 minutes to build. Monitor progress at:
-     echo %HF_SPACE_URL%/logs
-     echo.
-     pause
- )
model_server.py ADDED
@@ -0,0 +1,145 @@
+ """
+ Secure Model Server - Protects model weights from extraction
+ Never expose:
+ - File paths to checkpoints
+ - Model architecture details
+ - Debug routes
+ """
+
+ import os
+ import sys
+ import torch
+ import numpy as np
+ from pathlib import Path
+ from typing import Tuple, Optional
+
+ # Secure path resolution (not hardcoded)
+ def get_model_checkpoint_path():
+     """Get checkpoint path internally; never expose it to the client"""
+     base_dir = Path(__file__).parent
+     checkpoint = base_dir / "segment-anything-2" / "checkpoints" / "sam2_hiera_small.pt"
+     if not checkpoint.exists():
+         raise FileNotFoundError("Model checkpoint not found")
+     return str(checkpoint)
+
+ def get_finetuned_weights_path():
+     """Get fine-tuned weights path internally; never expose it to the client"""
+     base_dir = Path(__file__).parent
+     weights = base_dir / "segment-anything-2" / "checkpoints" / "VREyeSAM_uncertainity_best.torch"
+     if not weights.exists():
+         raise FileNotFoundError("Fine-tuned weights not found")
+     return str(weights)
+
+ def get_model_config_path():
+     """Get model config path internally; never expose it to the client"""
+     return "configs/sam2/sam2_hiera_s.yaml"
+
+
+ class ProtectedModelServer:
+     """
+     Encapsulates model loading and inference.
+     Only exposes the inference API, never raw weights or paths.
+     """
+
+     _instance = None  # Singleton pattern
+     _model = None
+     _predictor = None
+
+     def __new__(cls):
+         # Singleton: only one instance ever
+         if cls._instance is None:
+             cls._instance = super().__new__(cls)
+         return cls._instance
+
+     def __init__(self):
+         """Initialize model (only once)"""
+         if self._predictor is None:
+             self._load_model()
+
+     def _load_model(self):
+         """Load model weights securely - never called from the frontend"""
+         try:
+             # Add segment-anything-2 to path (internally only)
+             base_dir = Path(__file__).parent
+             sam2_path = base_dir / "segment-anything-2"
+             sys.path.insert(0, str(sam2_path))
+
+             from sam2.build_sam import build_sam2
+             from sam2.sam2_image_predictor import SAM2ImagePredictor
+
+             # Get paths internally - NEVER sent to client
+             model_cfg = get_model_config_path()
+             sam2_checkpoint = get_model_checkpoint_path()
+             fine_tuned_weights = get_finetuned_weights_path()
+
+             # Load model
+             device = "cuda" if torch.cuda.is_available() else "cpu"
+
+             self._model = build_sam2(model_cfg, sam2_checkpoint, device=device)
+             self._predictor = SAM2ImagePredictor(self._model)
+
+             # Load fine-tuned weights
+             state_dict = torch.load(fine_tuned_weights, map_location=device)
+             self._predictor.model.load_state_dict(state_dict)
+
+             # Model is now loaded - weights are NOT accessible to clients
+             self._predictor.model.eval()
+
+             return True
+         except Exception as e:
+             raise RuntimeError("Model initialization failed") from e
+
+     def predict(self, image: np.ndarray, num_samples: int = 30) -> Tuple[np.ndarray, np.ndarray]:
+         """
+         Perform iris segmentation
+
+         Args:
+             image: Input image (numpy array)
+             num_samples: Number of random points for inference
+
+         Returns:
+             binary_mask: Binary segmentation mask
+             prob_mask: Probability map
+         """
+         if self._predictor is None:
+             raise RuntimeError("Model not initialized")
+
+         try:
+             # Generate random points for inference
+             input_points = np.random.randint(0, min(image.shape[:2]), (num_samples, 1, 2))
+
+             # Inference
+             with torch.no_grad():
+                 self._predictor.set_image(image)
+                 masks, scores, _ = self._predictor.predict(
+                     point_coords=input_points,
+                     point_labels=np.ones([input_points.shape[0], 1])
+                 )
+
+             # Convert to numpy
+             np_masks = np.array(masks[:, 0]).astype(np.float32)
+             np_scores = scores[:, 0]
+
+             # Normalize scores
+             score_sum = np.sum(np_scores)
+             if score_sum > 0:
+                 normalized_scores = np_scores / score_sum
+             else:
+                 normalized_scores = np.ones_like(np_scores) / len(np_scores)
+
+             # Generate probabilistic mask
+             prob_mask = np.sum(np_masks * normalized_scores[:, None, None], axis=0)
+             prob_mask = np.clip(prob_mask, 0, 1)
+
+             # Threshold to get binary mask
+             binary_mask = (prob_mask > 0.2).astype(np.uint8)
+
+             return binary_mask, prob_mask
+
+         except Exception as e:
+             raise RuntimeError("Inference failed") from e
+
+
+ def get_predictor() -> ProtectedModelServer:
+     """Get the singleton model instance"""
+     return ProtectedModelServer()
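The score-weighted fusion inside `predict()` can be exercised on its own with synthetic masks (a sketch that reuses the same normalization and 0.2 threshold; `fuse_masks` is an illustrative extraction, not a function in the repo):

```python
import numpy as np

def fuse_masks(np_masks: np.ndarray, np_scores: np.ndarray, threshold: float = 0.2):
    """Combine per-point masks into one probabilistic mask, as in predict().

    np_masks: (N, H, W) float masks; np_scores: (N,) confidence scores.
    """
    score_sum = np.sum(np_scores)
    if score_sum > 0:
        w = np_scores / score_sum
    else:
        # Degenerate case: fall back to uniform weights
        w = np.ones_like(np_scores) / len(np_scores)
    prob_mask = np.clip(np.sum(np_masks * w[:, None, None], axis=0), 0, 1)
    return (prob_mask > threshold).astype(np.uint8), prob_mask

# Two synthetic masks, the first three times more confident than the second
masks = np.stack([np.ones((4, 4)), np.zeros((4, 4))])
binary, prob = fuse_masks(masks, np.array([3.0, 1.0]))
```

Because the weights are normalized to sum to one, `prob` here is 0.75 everywhere: the full-coverage mask contributes 3/4 of the vote.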
packages.txt DELETED
@@ -1,2 +0,0 @@
- libgl1
- libglib2.0-0
test_app_local.py DELETED
@@ -1,205 +0,0 @@
- #!/usr/bin/env python3
- """
- Local Testing Script for VREyeSAM Streamlit App
-
- Run this script to test the app locally before deploying to Hugging Face Spaces.
- Usage: python test_app_local.py
- """
-
- import subprocess
- import sys
- import os
- import time
-
- def check_dependencies():
-     """Check if all required dependencies are installed"""
-     print("🔍 Checking dependencies...")
-
-     required_packages = [
-         'streamlit',
-         'torch',
-         'torchvision',
-         'opencv-python',
-         'numpy',
-         'PIL'
-     ]
-
-     missing = []
-     for package in required_packages:
-         try:
-             __import__(package.replace('-', '_'))
-             print(f"  ✅ {package}")
-         except ImportError:
-             print(f"  ❌ {package}")
-             missing.append(package)
-
-     if missing:
-         print(f"\n⚠️ Missing packages: {', '.join(missing)}")
-         print("Install them with: pip install -r requirements_deploy.txt")
-         return False
-
-     print("✅ All dependencies installed\n")
-     return True
-
- def check_model_files():
-     """Check if model files exist"""
-     print("🔍 Checking model files...")
-
-     files_to_check = [
-         "segment-anything-2/checkpoints/sam2_hiera_small.pt",
-         "segment-anything-2/checkpoints/VREyeSAM_uncertainity_best.torch"
-     ]
-
-     all_exist = True
-     for file_path in files_to_check:
-         if os.path.exists(file_path):
-             size_mb = os.path.getsize(file_path) / (1024 * 1024)
-             print(f"  ✅ {file_path} ({size_mb:.1f} MB)")
-         else:
-             print(f"  ❌ {file_path} - NOT FOUND")
-             all_exist = False
-
-     if not all_exist:
-         print("\n⚠️ Some model files are missing!")
-         print("Please run the setup instructions from README.md")
-         return False
-
-     print("✅ All model files present\n")
-     return True
-
- def check_sam2_installation():
-     """Check if SAM2 is properly installed"""
-     print("🔍 Checking SAM2 installation...")
-
-     try:
-         sys.path.insert(0, "segment-anything-2")
-         from sam2.build_sam import build_sam2
-         from sam2.sam2_image_predictor import SAM2ImagePredictor
-         print("  ✅ SAM2 modules can be imported")
-         print("✅ SAM2 properly installed\n")
-         return True
-     except ImportError as e:
-         print(f"  ❌ SAM2 import failed: {e}")
-         print("\n⚠️ SAM2 not properly installed!")
-         print("Install with:")
-         print("  git clone https://github.com/facebookresearch/segment-anything-2")
-         print("  cd segment-anything-2")
-         print("  pip install -e .")
-         return False
-
- def test_app_syntax():
-     """Check if app.py has syntax errors"""
-     print("🔍 Checking app.py syntax...")
-
-     try:
-         with open('app.py', 'r', encoding='utf-8') as f:
-             code = f.read()
-         compile(code, 'app.py', 'exec')
-         print("  ✅ No syntax errors")
-         print("✅ app.py syntax valid\n")
-         return True
-     except SyntaxError as e:
-         print(f"  ❌ Syntax error in app.py: {e}")
-         return False
-     except UnicodeDecodeError as e:
-         print(f"  ⚠️ Unicode encoding issue: {e}")
-         print("  Trying with different encoding...")
-         try:
-             with open('app.py', 'r', encoding='latin-1') as f:
109
- code = f.read()
110
- compile(code, 'app.py', 'exec')
111
- print(" ✅ No syntax errors (latin-1 encoding)")
112
- print("✅ app.py syntax valid\n")
113
- return True
114
- except Exception as e2:
115
- print(f" ❌ Still failed: {e2}")
116
- return False
117
-
118
- def run_streamlit_app():
119
- """Launch the Streamlit app"""
120
- print("🚀 Launching Streamlit app...")
121
- print("=" * 60)
122
- print("The app will open in your browser at http://localhost:8501")
123
- print("Press Ctrl+C to stop the app")
124
- print("=" * 60)
125
- print()
126
-
127
- try:
128
- subprocess.run(['streamlit', 'run', 'app.py'], check=True)
129
- except KeyboardInterrupt:
130
- print("\n\n✅ App stopped by user")
131
- except subprocess.CalledProcessError as e:
132
- print(f"\n❌ Error running app: {e}")
133
- return False
134
-
135
- return True
136
-
137
- def create_test_image():
138
- """Create a simple test image if none exists"""
139
- print("🔍 Checking for test images...")
140
-
141
- test_dir = "test_images"
142
- if not os.path.exists(test_dir):
143
- os.makedirs(test_dir)
144
- print(f" 📁 Created {test_dir} directory")
145
-
146
- # Check if there are any test images
147
- image_files = [f for f in os.listdir(test_dir) if f.endswith(('.jpg', '.png', '.jpeg'))]
148
-
149
- if image_files:
150
- print(f" ✅ Found {len(image_files)} test image(s)")
151
- print(f" 📂 Test images in: {test_dir}/")
152
- for img in image_files:
153
- print(f" - {img}")
154
- else:
155
- print(f" ℹ️ No test images found in {test_dir}/")
156
- print(f" 💡 Add some iris images to {test_dir}/ for testing")
157
-
158
- print()
159
-
160
- def main():
161
- """Main testing function"""
162
- print("\n" + "=" * 60)
163
- print("VREyeSAM Local Testing Suite")
164
- print("=" * 60 + "\n")
165
-
166
- # Run all checks
167
- checks = [
168
- ("Dependencies", check_dependencies),
169
- ("Model Files", check_model_files),
170
- ("SAM2 Installation", check_sam2_installation),
171
- ("App Syntax", test_app_syntax),
172
- ]
173
-
174
- all_passed = True
175
- for name, check_func in checks:
176
- if not check_func():
177
- all_passed = False
178
- print(f"❌ {name} check failed\n")
179
-
180
- # Create test image directory
181
- create_test_image()
182
-
183
- if not all_passed:
184
- print("=" * 60)
185
- print("⚠️ Some checks failed. Please fix the issues above.")
186
- print("=" * 60)
187
- sys.exit(1)
188
-
189
- print("=" * 60)
190
- print("✅ All checks passed! Ready to run the app.")
191
- print("=" * 60)
192
- print()
193
-
194
- # Ask user if they want to run the app
195
- response = input("Do you want to launch the app now? (y/n): ").strip().lower()
196
-
197
- if response == 'y':
198
- run_streamlit_app()
199
- else:
200
- print("\n✅ Testing complete!")
201
- print("To run the app manually, execute: streamlit run app.py")
202
- print()
203
-
204
- if __name__ == "__main__":
205
- main()
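The removed script's syntax check reduces to calling `compile()` and catching `SyntaxError`. A minimal standalone sketch of that pattern (the function name is illustrative, not from the repo):

```python
def has_valid_syntax(source: str, filename: str = "<string>") -> bool:
    """Return True if `source` compiles as Python code,
    mirroring the compile()-and-catch check from the removed test script."""
    try:
        compile(source, filename, "exec")
        return True
    except SyntaxError:
        return False

print(has_valid_syntax("x = 1"))         # True
print(has_valid_syntax("def broken(:"))  # False
```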
 
windows.bat DELETED
@@ -1,91 +0,0 @@
1
- @echo off
2
- REM VREyeSAM Setup Script for Windows
3
- REM This script sets up the environment and downloads required files
4
-
5
- echo ============================================================
6
- echo VREyeSAM Windows Setup Script
7
- echo ============================================================
8
- echo.
9
-
10
- REM Check if Python is installed
11
- python --version >nul 2>&1
12
- if errorlevel 1 (
13
- echo [ERROR] Python is not installed or not in PATH
14
- echo Please install Python 3.11 from https://www.python.org/
15
- pause
16
- exit /b 1
17
- )
18
-
19
- echo [1/6] Creating virtual environment...
20
- if exist vreyesam_env (
21
- echo Virtual environment already exists, skipping...
22
- ) else (
23
- python -m venv vreyesam_env
24
- echo Done!
25
- )
26
- echo.
27
-
28
- echo [2/6] Activating virtual environment...
29
- call vreyesam_env\Scripts\activate.bat
30
- echo Done!
31
- echo.
32
-
33
- echo [3/6] Installing dependencies...
34
- echo This may take a few minutes...
35
- python -m pip install --upgrade pip
36
- pip install streamlit
37
- pip install torch==2.3.0 torchvision==0.18.0 --index-url https://download.pytorch.org/whl/cu118
38
- pip install "numpy<2.0.0"
39
- pip install opencv-python-headless pillow pandas scikit-learn matplotlib tqdm hydra-core
40
- echo Done!
41
- echo.
42
-
43
- echo [4/6] Cloning SAM2 repository...
44
- if exist segment-anything-2 (
45
- echo SAM2 repository already exists, skipping...
46
- ) else (
47
- git clone https://github.com/facebookresearch/segment-anything-2
48
- echo Done!
49
- )
50
- echo.
51
-
52
- echo [5/6] Installing SAM2...
53
- cd segment-anything-2
54
- pip install -e .
55
- cd ..
56
- echo Done!
57
- echo.
58
-
59
- echo [6/6] Downloading model checkpoints...
60
- if not exist segment-anything-2\checkpoints mkdir segment-anything-2\checkpoints
61
-
62
- REM Download SAM2 base checkpoint
63
- if exist segment-anything-2\checkpoints\sam2_hiera_small.pt (
64
- echo SAM2 checkpoint already exists, skipping...
65
- ) else (
66
- echo Downloading SAM2 checkpoint (this may take a few minutes)...
67
- powershell -Command "Invoke-WebRequest -Uri 'https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_small.pt' -OutFile 'segment-anything-2\checkpoints\sam2_hiera_small.pt'"
68
- echo Done!
69
- )
70
-
71
- REM Download VREyeSAM weights
72
- if exist segment-anything-2\checkpoints\VREyeSAM_uncertainity_best.torch (
73
- echo VREyeSAM weights already exist, skipping...
74
- ) else (
75
- echo Downloading VREyeSAM weights...
76
- pip install huggingface-hub
77
- huggingface-cli download devnagaich/VREyeSAM VREyeSAM_uncertainity_best.torch --local-dir segment-anything-2\checkpoints\
78
- echo Done!
79
- )
80
- echo.
81
-
82
- echo ============================================================
83
- echo Setup Complete!
84
- echo ============================================================
85
- echo.
86
- echo To run the app:
87
- echo 1. Activate the environment: vreyesam_env\Scripts\activate.bat
88
- echo 2. Run: streamlit run app.py
89
- echo.
90
- echo Press any key to exit...
91
- pause >nul