# Worker Agent Setup Guide
## Architecture
```
         ┌─────────────────┐
         │    OVERSEER     │
         │ (Your HF Space) │
         └────────┬────────┘
                  │
         ┌────────┴────────┐
         ▼                 ▼
┌───────────────┐  ┌───────────────┐
│  Friend's HF  │  │     Your      │
│     Space     │  │    Laptop     │
│  (Worker 1)   │  │  (Worker 2)   │
└───────────────┘  └───────────────┘
```
You have $30 in credits. Here's how to use them efficiently:
---
## PART 1: Friend's HF Space (Worker 1)
Your friend deploys this on their HF account using your trained model.
### Steps for Your Friend:
1. **Create New HF Space**
- Go to: https://huggingface.co/spaces/new
- Name: `aegis-worker-{friendname}`
- Type: **Space**
- SDK: **Docker**
- Hardware: **A10G (small)** - $3/hour
- Visibility: **Private** (if they want)
2. **Upload Files**
- Upload these 3 files:
- `hf_worker.py`
- `requirements.txt`
- `Dockerfile`
3. **Set Environment Variables**
- Go to Space Settings β Variables
- Add:
- `HF_TOKEN` = your token (get from HF Settings β Access Tokens)
- `WORKER_MODEL` = your trained model repo
4. **IMPORTANT: Cost Management**
- Keep the Space's visibility set to **Private**
- After their turn, they must **Pause** the Space so it stops billing
- $30 at $3/hr = ~10 hours of runtime
- Run in bursts: 1 hour at a time, then pause
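For orientation, a minimal `Dockerfile` for an HF Docker Space might look like the sketch below. This is an assumption for illustration, not the contents of the actual `worker_agent/Dockerfile`; note that HF Docker Spaces expect the app to listen on port 7860 by default.

```dockerfile
# Hypothetical sketch -- the real worker_agent/Dockerfile may differ
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY hf_worker.py .
# HF Docker Spaces route traffic to port 7860 by default
EXPOSE 7860
CMD ["python", "hf_worker.py"]
```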
---
## PART 2: Your Laptop (Worker 2) - Optional
If you want your laptop as a worker too:
### Setup:
```bash
# Install dependencies (if not already)
pip install -r worker_agent/requirements.txt
# Start the local worker
python worker_agent/local_worker.py
```
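To make the moving parts concrete, here is a minimal, standard-library-only sketch of what a worker like `worker_agent/local_worker.py` could look like. The `/execute` path, port 7861, and the response fields are assumptions for illustration; the real worker runs your trained model instead of echoing the task back.

```python
# Hypothetical sketch of a minimal local worker. Assumes the overseer
# POSTs JSON tasks to /execute; the real local_worker.py may differ.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def handle_task(task: dict) -> dict:
    """Process one task; a real worker would invoke the model here."""
    return {
        "task_id": task.get("task_id"),
        "status": "completed",
        "result": f"Processed: {task.get('instructions', '')}",
    }


class WorkerHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/execute":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        task = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(handle_task(task)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


def serve(port: int = 7861) -> None:
    """Block forever, serving tasks on the given port."""
    HTTPServer(("0.0.0.0", port), WorkerHandler).serve_forever()
```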
### Expose to Internet with ngrok:
```bash
# In a new terminal
ngrok http 7861
```
**Copy the ngrok URL** (e.g., `https://abc123.ngrok.io`) and tell your overseer about it.
---
## PART 3: Update Your Overseer
In your overseer (where you run the main model), add:
```python
from overseer_worker_client import dispatch_to_worker, set_worker_url
# After friend deploys their space, update the URL:
# set_worker_url("friend_hf", "https://friend-space.hf.space/execute")
# OR if using ngrok:
# set_worker_url("local", "https://your-ngrok-url/execute")
# To send a task (call from inside an async function):
result = await dispatch_to_worker({
    "task_id": "task_001",
    "worker_role": "general-dev",
    "instructions": "Write a hello world program in Python",
    "context": "User wants a simple script",
})
```
---
## Quick Reference
| Component | Cost | When to Use |
|-----------|------|--------------|
| Friend's HF (A10G) | ~$3/hr | When friend is online |
| Your Laptop (free) | $0 | When laptop is on |
| ngrok (free) | $0 | Tunnel to laptop |
### Total: ~$30 covers ~10 hours of HF compute
---
## Files Created
- `worker_agent/hf_worker.py` - Code for friend's HF Space
- `worker_agent/local_worker.py` - Code for your laptop
- `worker_agent/overseer_worker_client.py` - Your overseer uses this
- `worker_agent/requirements.txt` - Python dependencies
- `worker_agent/Dockerfile` - Container config
---
## Flow Example
1. **You** ask overseer a question
2. **Overseer** picks a worker (friend or laptop)
3. **Worker** processes the task with your trained model
4. **Worker** returns result to overseer
5. **Overseer** evaluates and responds
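The worker choice in step 2 can follow the cost table above: prefer the free laptop when it is online, and fall back to the paid HF Space otherwise. A small illustrative sketch (the registry and its fields are hypothetical, not part of the real overseer):

```python
# Scheduling sketch: prefer a free online worker, else any online one.
def pick_worker(workers: dict) -> str:
    for name, info in workers.items():
        if info["online"] and info["cost_per_hr"] == 0:
            return name  # free worker, e.g. your laptop
    return next(name for name, info in workers.items() if info["online"])

workers = {
    "friend_hf": {"online": True, "cost_per_hr": 3.0},  # A10G Space
    "laptop": {"online": True, "cost_per_hr": 0.0},
}
```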
This lets you use both your friend's credits AND your laptop as workers while your HF Space acts as the overseer/manager. |