---
title: OpenManus2
emoji: 🔥
colorFrom: yellow
colorTo: red
sdk: docker
pinned: false
---

# 🧠 OpenManus2 – Agentic AI FastAPI Interface

This private Hugging Face Space deploys the OpenManus AI agent behind a FastAPI backend with a template-based UI. It is designed for long-running, streaming LLM tasks served through external endpoints.

---

## 🚀 Features

- ✅ FastAPI backend with SSE streaming
- 🧩 Jinja2 templates + custom UI themes
- 🔁 Asynchronous task queue with status updates
- 🌐 Integrates with external LLM tools via `OPENMANUS_ENDPOINT_URL`
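
The streaming endpoints speak the standard `text/event-stream` (SSE) wire format. A minimal sketch of that framing is below; the `status` field and `update` event name are illustrative, not the actual OpenManus payload schema:

```python
import json

def format_sse(data, event=None):
    """Frame one Server-Sent Event: optional event name, a JSON data line,
    and the blank-line terminator that separates events on the wire."""
    lines = []
    if event:
        lines.append(f"event: {event}")
    lines.append(f"data: {json.dumps(data)}")
    return "\n".join(lines) + "\n\n"

# Hypothetical status frame as it would appear on the stream:
frame = format_sse({"status": "running"}, event="update")
```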

---

## 🛠️ Run Locally

```bash
# Clone the repo
git clone https://huggingface.co/spaces/kstang88/OpenManus2
cd OpenManus2

# Install dependencies
pip install -r requirements.txt

# Set the backend endpoint
export OPENMANUS_ENDPOINT_URL=https://your-llm-backend.com/api

# Run the app
uvicorn app:app --host 0.0.0.0 --port 7860
```

---

## 🧱 Project Structure

```
OpenManus2/
├── app.py                # FastAPI app backend
├── config/
│   └── config.toml       # Optional: LLM endpoint config
├── requirements.txt      # Python dependencies
├── Dockerfile            # Hugging Face Docker Space
├── static/               # CSS/JS + theme assets
└── templates/            # HTML pages (Jinja2)
```
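
If you use the optional `config/config.toml`, it might look like the fragment below. The key names are illustrative guesses, since the actual schema depends on how `app.py` reads the file:

```toml
# Hypothetical layout (adjust keys to whatever app.py expects)
[llm]
endpoint_url = "https://your-llm-backend.com/api"
timeout_seconds = 120
```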

---

## ⚙️ Environment Variables

| Name                    | Description                           |
|-------------------------|---------------------------------------|
| `OPENMANUS_ENDPOINT_URL`| External LLM API endpoint (required)  |

Set this via a `.env` file locally, or via Space Secrets (UI → Settings → Secrets).
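
Inside the app, a fail-fast lookup of this variable might look like the sketch below (the helper name is ours, not taken from `app.py`):

```python
import os

def get_endpoint_url():
    """Return OPENMANUS_ENDPOINT_URL, failing fast with a clear error if unset."""
    url = os.environ.get("OPENMANUS_ENDPOINT_URL")
    if not url:
        raise RuntimeError(
            "OPENMANUS_ENDPOINT_URL is not set; export it or add it to Space Secrets."
        )
    return url
```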

---

## 📑 API Endpoints

| Method | Path                      | Description                |
|--------|---------------------------|----------------------------|
| `GET`  | `/`                       | Theme selector page        |
| `GET`  | `/chat?theme=openmanus`   | Chat UI with custom theme  |
| `POST` | `/tasks`                  | Create a new task          |
| `GET`  | `/tasks`                  | List all tasks             |
| `GET`  | `/tasks/{task_id}`        | View task status           |
| `GET`  | `/tasks/{task_id}/events` | SSE task stream updates    |
| `GET`  | `/download?file_path=`    | Download generated files   |

---

## 🧪 Test Prompt

Send a prompt to your running agent:

```bash
curl -X POST http://localhost:7860/tasks \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Summarize GPT-4 architecture."}'
```

Then stream results via:

```bash
curl http://localhost:7860/tasks/<task_id>/events
```
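
To consume the same stream from Python instead of curl, a stdlib-only sketch (the parsing helper is ours; the endpoint path comes from the table above):

```python
import json
import urllib.request

def iter_sse_data(lines):
    """Yield the payload of each `data:` line in an SSE stream,
    JSON-decoded when possible, otherwise as plain text."""
    for raw in lines:
        line = (raw.decode() if isinstance(raw, bytes) else raw).strip()
        if line.startswith("data:"):
            payload = line[len("data:"):].strip()
            try:
                yield json.loads(payload)
            except json.JSONDecodeError:
                yield payload

def stream_task(task_id, base="http://localhost:7860"):
    """Print each event from a running task's SSE endpoint."""
    with urllib.request.urlopen(f"{base}/tasks/{task_id}/events") as resp:
        for event in iter_sse_data(resp):
            print(event)
```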

---

## 🛑 License

MIT License © 2025 kstang88


Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference