cesjavi committed on
Commit 81ff144 · 1 Parent(s): dc23705

Deploy Aubm Docker Space

This view is limited to 50 files because the commit contains too many changes. See the raw diff for the full change set.
Files changed (50)
  1. .dockerignore +15 -0
  2. .gitignore +9 -0
  3. Dockerfile +39 -0
  4. README.md +134 -5
  5. backend/.env.example +18 -0
  6. backend/Dockerfile +32 -0
  7. backend/agents/agent_factory.py +44 -0
  8. backend/agents/amd_agent.py +41 -0
  9. backend/agents/base.py +99 -0
  10. backend/agents/gemini_agent.py +37 -0
  11. backend/agents/groq_agent.py +57 -0
  12. backend/agents/local_agent.py +48 -0
  13. backend/agents/openai_agent.py +54 -0
  14. backend/agents_debug.json +1 -0
  15. backend/api/index.py +1 -0
  16. backend/main.py +89 -0
  17. backend/project_debug.json +1 -0
  18. backend/requirements.txt +19 -0
  19. backend/routers/__init__.py +1 -0
  20. backend/routers/agent_runner.py +77 -0
  21. backend/routers/monitoring.py +70 -0
  22. backend/routers/orchestrator.py +153 -0
  23. backend/services/agent_runner_service.py +116 -0
  24. backend/services/audit_service.py +31 -0
  25. backend/services/config.py +98 -0
  26. backend/services/orchestrator_service.py +531 -0
  27. backend/services/project_service.py +35 -0
  28. backend/services/semantic_backprop.py +104 -0
  29. backend/services/supabase_service.py +13 -0
  30. backend/services/task_queue.py +47 -0
  31. backend/tools/browser.py +35 -0
  32. backend/tools/decomposer.py +20 -0
  33. backend/tools/file_generator.py +61 -0
  34. backend/tools/registry.py +250 -0
  35. backend/tools/sandbox.py +39 -0
  36. backend/tools/sre.py +72 -0
  37. backend/tools/visuals.py +48 -0
  38. backend/worker.py +53 -0
  39. database/agent_ownership.sql +42 -0
  40. database/default_agents.sql +36 -0
  41. database/enterprise_security.sql +53 -0
  42. database/marketplace.sql +31 -0
  43. database/phase3_updates.sql +30 -0
  44. database/schema.sql +141 -0
  45. database/seed.sql +15 -0
  46. database/task_owner_policies.sql +63 -0
  47. frontend/.env.example +6 -0
  48. frontend/.gitignore +44 -0
  49. frontend/Dockerfile +20 -0
  50. frontend/README.md +73 -0
.dockerignore ADDED
@@ -0,0 +1,15 @@
+ .git
+ .gemini
+ .vercel
+
+ backend/.env
+ backend/venv
+ backend/__pycache__
+ backend/**/__pycache__
+ backend/**/*.pyc
+
+ frontend/.env
+ frontend/node_modules
+ frontend/dist
+
+ *.log
.gitignore ADDED
@@ -0,0 +1,8 @@
+ backend/.env
+ /backend/venv
+ /backend/__pycache__
+ frontend/.env
+ __pycache__/
+ *.pyc
+
+ .vercel
Dockerfile ADDED
@@ -0,0 +1,39 @@
+ FROM node:22-slim AS frontend-build
+
+ WORKDIR /app/frontend
+
+ COPY frontend/package*.json ./
+ RUN npm ci
+
+ COPY frontend ./
+
+ ARG VITE_API_URL=""
+ ARG VITE_SUPABASE_URL=""
+ ARG VITE_SUPABASE_ANON_KEY=""
+ ARG VITE_SENTRY_DSN=""
+
+ ENV VITE_API_URL=$VITE_API_URL
+ ENV VITE_SUPABASE_URL=$VITE_SUPABASE_URL
+ ENV VITE_SUPABASE_ANON_KEY=$VITE_SUPABASE_ANON_KEY
+ ENV VITE_SENTRY_DSN=$VITE_SENTRY_DSN
+
+ RUN npm run build
+
+ FROM python:3.11-slim
+
+ ENV PORT=7860
+ ENV PYTHONUNBUFFERED=1
+
+ WORKDIR /app
+
+ COPY backend/requirements.txt backend/requirements.txt
+ RUN pip install --no-cache-dir -r backend/requirements.txt
+
+ COPY backend backend
+ COPY --from=frontend-build /app/frontend/dist frontend/dist
+
+ WORKDIR /app/backend
+
+ EXPOSE 7860
+
+ CMD ["sh", "-c", "uvicorn main:app --host 0.0.0.0 --port ${PORT:-7860}"]
README.md CHANGED
@@ -1,12 +1,141 @@
  ---
  title: Aubm
- emoji: 🐠
- colorFrom: yellow
- colorTo: purple
  sdk: docker
- pinned: false
+ app_port: 7860
  license: mit
  short_description: Automated Business Machines
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # 🤖 Aubm
+
+ ### Enterprise-Grade AI Agent Orchestration & Collaboration Platform
+
+ Aubm (Automated Unified Business Machines) is a platform that orchestrates multiple autonomous AI agents to complete complex projects. It features **Human-in-the-Loop** supervision, **Dynamic DAG** task execution, and **Semantic RAG** context injection.
+
+ ---
+
+ ## 🚀 Key Features
+
+ - **Multi-Provider Support**: Seamless integration with OpenAI, AMD (inference.do-ai.run), Groq, Gemini, Qwen, Ollama, and OpenRouter.
+ - **Autonomous Orchestration**: Intelligent task prioritization and execution based on dependencies (DAG).
+ - **Human-in-the-Loop**: Approval-based workflow ensuring quality and safety.
+ - **Semantic Backpropagation**: Context from completed tasks is automatically injected into subsequent tasks.
+ - **Real-time Monitoring**: SSE-powered live logs and progress tracking.
+ - **Project Wizard**: AI-driven project creation and task decomposition.
+ - **Operational Safety**: Automatic recovery of stale runs and comprehensive health monitoring.
+
+ ---
+
+ ## 🛠️ Tech Stack
+
+ - **Frontend**: React + Vite + TypeScript (styled with vanilla CSS for maximum performance)
+ - **Backend**: FastAPI (Python 3.10+)
+ - **Database**: Supabase (Postgres + Auth + Real-time)
+ - **Deployment**: Optimized for Vercel (serverless backend + static frontend)
+
+ ---
+
+ ## 🏗️ Project Structure
+
+ ```bash
+ aubm/
+ ├── backend/            # FastAPI Application & AI Core
+ │   ├── agents/         # LLM Provider Implementations
+ │   ├── routers/        # API Endpoints (Runner, Orchestrator)
+ │   ├── services/       # Business Logic (Queue, RAG, Guards)
+ │   └── main.py         # App Entrypoint
+ ├── frontend/           # React Application
+ │   ├── src/            # Components, Hooks, Context, Services
+ │   └── vite.config.ts  # Vite Configuration
+ └── database/           # Supabase Schema & Migrations
+ ```
+
+ ---
+
+ ## ⚙️ Getting Started
+
+ ### 1. Database Setup (Supabase)
+ 1. Create a new project in [Supabase](https://supabase.com).
+ 2. Go to the **SQL Editor** and execute the content of `database/schema.sql`.
+ 3. Enable **Auth** with your preferred providers (Email/Password by default).
+
+ ### 2. Backend Installation
+ ```bash
+ cd backend
+ python -m venv venv
+ source venv/bin/activate  # or venv\Scripts\activate on Windows
+ pip install -r requirements.txt
+ ```
+
+ Create a `.env` file in `/backend`:
+ ```env
+ SUPABASE_URL=your_project_url
+ SUPABASE_SERVICE_ROLE_KEY=your_service_role_key
+ OPENAI_API_KEY=optional_key
+ GROQ_API_KEY=optional_key
+ # See SPEC.md for all available providers
+ ```
+
+ Run the server:
+ ```bash
+ uvicorn main:app --reload --port 8000
+ ```
+
+ ### 3. Frontend Installation
+ ```bash
+ cd frontend
+ npm install
+ ```
+
+ Create a `.env` file in `/frontend`:
+ ```env
+ VITE_API_URL=http://localhost:8000
+ VITE_SUPABASE_URL=your_project_url
+ VITE_SUPABASE_ANON_KEY=your_anon_key
+ ```
+
+ Run the development server:
+ ```bash
+ npm run dev
+ ```
+
+ ---
+
+ ## 📈 Operational Modes
+
+ - **Embedded Worker**: Runs the task queue within the FastAPI process (set `TASK_QUEUE_EMBEDDED_WORKER=true`).
+ - **Standalone Worker**: For high-load environments, run the worker in a separate process:
+ ```bash
+ cd backend
+ python worker.py
+ ```
+
+ ---
+
+ ## Hugging Face Spaces
+
+ This repository is ready to deploy as a Docker Space. Create a Hugging Face Space with SDK `Docker`, then push this repo to the Space remote.
+
+ Configure these Space secrets or variables:
+
+ ```env
+ SUPABASE_URL=your_project_url
+ SUPABASE_SERVICE_ROLE_KEY=your_service_role_key
+ SUPABASE_ANON_KEY=your_anon_key
+ GROQ_API_KEY=optional_key
+ OPENAI_API_KEY=optional_key
+ GEMINI_API_KEY=optional_key
+ AMD_API_KEY=optional_key
+ TASK_QUEUE_EMBEDDED_WORKER=true
+ ```
+
+ `VITE_API_URL` can stay empty on Spaces because the frontend calls the FastAPI backend on the same origin.
+
+ ---
+
+ ## 📄 Documentation
+
+ For detailed technical architecture, refer to:
+ - [SPEC.md](./SPEC.md) - Deep technical specifications.
+ - [ROADMAP.md](./ROADMAP.md) - Future development goals.
+ - [docs/](./docs/) - Extended guides and manuals.
backend/.env.example ADDED
@@ -0,0 +1,18 @@
+ # Supabase Configuration
+ SUPABASE_URL=https://your-project-id.supabase.co
+ SUPABASE_SERVICE_ROLE_KEY=your-service-role-key-here
+
+ # AI Provider Keys
+ OPENAI_API_KEY=your-openai-key
+ GROQ_API_KEY=your-groq-key
+ GEMINI_API_KEY=your-gemini-key
+ ANTHROPIC_API_KEY=your-anthropic-key
+
+ # App Settings
+ PORT=8000
+ ALLOWED_ORIGINS=http://localhost:5173,https://your-app.vercel.app
+ TASK_QUEUE_EMBEDDED_WORKER=true
+ OUTPUT_LANGUAGE=en
+
+ # Error Tracking
+ SENTRY_DSN=your-sentry-dsn
backend/Dockerfile ADDED
@@ -0,0 +1,32 @@
+ # Build stage for backend
+ FROM python:3.11-slim
+
+ # Set environment variables
+ ENV PYTHONDONTWRITEBYTECODE=1
+ ENV PYTHONUNBUFFERED=1
+
+ # Set work directory
+ WORKDIR /app
+
+ # Install system dependencies
+ RUN apt-get update && apt-get install -y --no-install-recommends \
+     build-essential \
+     libpq-dev \
+     curl \
+     && rm -rf /var/lib/apt/lists/*
+
+ # Install python dependencies
+ COPY requirements.txt .
+ RUN pip install --no-cache-dir -r requirements.txt
+
+ # Install Playwright browsers and their dependencies
+ RUN playwright install --with-deps chromium
+
+ # Copy project
+ COPY . .
+
+ # Expose port
+ EXPOSE 8000
+
+ # Run the application
+ CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
backend/agents/agent_factory.py ADDED
@@ -0,0 +1,39 @@
+ from typing import Dict, Optional, Type
+ from .base import BaseAgent
+ from .openai_agent import OpenAIAgent
+ from .amd_agent import AMDAgent
+ from .groq_agent import GroqAgent
+ from .gemini_agent import GeminiAgent
+ from .local_agent import LocalAgent
+ from services.config import settings
+
+ # Map of providers to their respective classes
+ PROVIDER_MAP: Dict[str, Type[BaseAgent]] = {
+     "openai": OpenAIAgent,
+     "amd": AMDAgent,
+     "groq": GroqAgent,
+     "gemini": GeminiAgent,
+     "local": LocalAgent,
+     "ollama": LocalAgent,
+ }
+
+ class AgentFactory:
+     @staticmethod
+     def get_agent(provider: str, name: str, role: str, model: str, system_prompt: Optional[str] = None) -> BaseAgent:
+         """
+         Instantiates the appropriate agent based on the provider string.
+         Falls back to Groq if OpenAI is requested but no OpenAI key is configured.
+         """
+         provider = provider.lower()
+
+         # Groq redirection: only redirect when a Groq key is available;
+         # otherwise let the original provider fail with its own error.
+         if provider == "openai" and not settings.OPENAI_API_KEY and settings.GROQ_API_KEY:
+             provider = "groq"
+             model = "llama-3.3-70b-versatile"  # Robust fallback model
+
+         agent_class = PROVIDER_MAP.get(provider)
+         if not agent_class:
+             raise ValueError(f"Unsupported agent provider: {provider}")
+
+         return agent_class(name=name, role=role, model=model, system_prompt=system_prompt)
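The factory's key-based fallback is easy to exercise in isolation. Below is a minimal sketch of the same dispatch-with-fallback pattern; `StubAgent`, `OpenAIStub`, and `GroqStub` are hypothetical stand-ins for the repo's agent classes, and the keys are passed explicitly instead of read from `settings`:

```python
from typing import Dict, Optional, Type

# Hypothetical stand-ins for the repo's agent classes.
class StubAgent:
    def __init__(self, name: str, model: str):
        self.name, self.model = name, model

class OpenAIStub(StubAgent): pass
class GroqStub(StubAgent): pass

PROVIDERS: Dict[str, Type[StubAgent]] = {"openai": OpenAIStub, "groq": GroqStub}

def get_agent(provider: str, name: str, model: str,
              openai_key: Optional[str], groq_key: Optional[str]) -> StubAgent:
    provider = provider.lower()
    # Redirect to Groq only when OpenAI has no key but Groq does;
    # otherwise keep the requested provider and let it fail loudly.
    if provider == "openai" and not openai_key and groq_key:
        provider, model = "groq", "llama-3.3-70b-versatile"
    cls = PROVIDERS.get(provider)
    if cls is None:
        raise ValueError(f"Unsupported agent provider: {provider}")
    return cls(name=name, model=model)

agent = get_agent("openai", "Planner", "gpt-4o", openai_key=None, groq_key="gsk_test")
print(type(agent).__name__, agent.model)  # → GroqStub llama-3.3-70b-versatile
```

Keeping the map at module level means adding a provider is a one-line change plus a new agent class.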
backend/agents/amd_agent.py ADDED
@@ -0,0 +1,41 @@
+ from .base import BaseAgent
+ from typing import Dict, Any, List
+ import openai
+ from services.config import settings, config_service
+
+ class AMDAgent(BaseAgent):
+     """
+     Agent implementation for AMD Inference (inference.do-ai.run).
+     Compatible with OpenAI's API format.
+     """
+     def __init__(self, name: str, role: str, model: str = "gpt-4o", system_prompt: str = None):
+         super().__init__(name, role, model, system_prompt)
+
+         self.provider_config = config_service.get_provider_config("amd")
+         api_key = self.provider_config.get("api_key") or settings.AMD_API_KEY
+
+         self.client = openai.AsyncOpenAI(
+             api_key=api_key,
+             base_url=self.provider_config.get("base_url", "https://inference.do-ai.run/v1")
+         )
+         self.temperature = self.provider_config.get("temperature", 0.7)
+         self.max_tokens = self.provider_config.get("max_tokens", 4096)
+
+     async def run(self, task_description: str, context: List[Dict[str, Any]], use_tools: bool = False, extra_context: str = "") -> Dict[str, Any]:
+         try:
+             response = await self.client.chat.completions.create(
+                 model=self.model,
+                 messages=self._build_chat_messages(task_description, context, extra_context),
+                 response_format={"type": "json_object"},
+                 temperature=self.temperature,
+                 max_tokens=self.max_tokens
+             )
+
+             return self._result("amd", response.choices[0].message.content or "")
+         except Exception as e:
+             return {
+                 "agent_name": self.name,
+                 "provider": "amd",
+                 "status": "error",
+                 "error": str(e)
+             }
backend/agents/base.py ADDED
@@ -0,0 +1,99 @@
+ from abc import ABC, abstractmethod
+ from typing import Dict, Any, List, Optional
+ import json
+
+ class BaseAgent(ABC):
+     def __init__(self, name: str, role: str, model: str, system_prompt: Optional[str] = None):
+         self.name = name
+         self.role = role
+         self.model = model
+         self.system_prompt = system_prompt or f"You are {name}, acting as a {role}."
+
+     @abstractmethod
+     async def run(self, task_description: str, context: List[Dict[str, Any]], use_tools: bool = False, extra_context: str = "") -> Dict[str, Any]:
+         """
+         Executes a task given its description and previous context.
+         Returns a dictionary containing the output data.
+         """
+         pass
+
+     def _format_context(self, context: List[Dict[str, Any]]) -> str:
+         """Helper to format previous task outputs for the current agent."""
+         if not context:
+             return "No previous context available."
+
+         formatted = "Previous tasks context:\n"
+         for item in context:
+             formatted += f"- Task: {item.get('title')}\n  Output: {json.dumps(item.get('output_data', {}))}\n"
+         return formatted
+
+     def _build_json_prompt(self, task_description: str, context: List[Dict[str, Any]], extra_context: str = "") -> str:
+         return f"""
+ Task: {task_description}
+
+ {self._format_context(context)}
+
+ {extra_context}
+
+ Please provide your output as a JSON object.
+ """
+
+     def _build_chat_messages(self, task_description: str, context: List[Dict[str, Any]], extra_context: str = "") -> List[Dict[str, Any]]:
+         return [
+             {"role": "system", "content": self.system_prompt},
+             {"role": "user", "content": self._build_json_prompt(task_description, context, extra_context)}
+         ]
+
+     def _parse_json_output(self, content: str) -> Any:
+         """Parse strict JSON first, then tolerate fenced or prefixed JSON."""
+         if not content:
+             return {}
+
+         try:
+             return json.loads(content)
+         except json.JSONDecodeError:
+             pass
+
+         try:
+             if "```json" in content:
+                 clean = content.split("```json", 1)[1].split("```", 1)[0].strip()
+             elif "```" in content:
+                 clean = content.split("```", 1)[1].split("```", 1)[0].strip()
+             else:
+                 object_start, array_start = content.find("{"), content.find("[")
+                 starts = [index for index in (object_start, array_start) if index != -1]
+                 start = min(starts) if starts else -1
+                 if start == array_start:
+                     end = content.rfind("]")
+                 else:
+                     end = content.rfind("}")
+                 clean = content[start:end + 1] if start != -1 and end != -1 else content
+             return json.loads(clean)
+         except Exception:
+             return {"raw_text": content}
+
+     def _parse_tool_arguments(self, arguments: str | None) -> Dict[str, Any]:
+         parsed = self._parse_json_output(arguments or "{}")
+         return parsed if isinstance(parsed, dict) else {}
+
+     async def _append_tool_results(self, messages: List[Dict[str, Any]], tool_calls: Any, tool_registry: Any) -> None:
+         for tool_call in tool_calls or []:
+             tool_name = tool_call.function.name
+             tool_args = self._parse_tool_arguments(tool_call.function.arguments)
+             tool_result = await tool_registry.call_tool(tool_name, tool_args)
+
+             messages.append({
+                 "tool_call_id": tool_call.id,
+                 "role": "tool",
+                 "name": tool_name,
+                 "content": str(tool_result),
+             })
+
+     def _result(self, provider: str, content: str) -> Dict[str, Any]:
+         return {
+             "agent_name": self.name,
+             "provider": provider,
+             "model": self.model,
+             "raw_output": content,
+             "data": self._parse_json_output(content)
+         }
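The tolerant parsing in `_parse_json_output` is the piece every agent relies on, so it is worth seeing end to end. Below is a standalone sketch of the same fallback chain (strict JSON → fenced block → outermost brace/bracket span → raw text); the fence marker is constructed at runtime only so the example does not embed literal triple backticks, and `parse_json_output` is a hypothetical free-function mirror of the method above:

```python
import json
from typing import Any

FENCE = "`" * 3  # the Markdown code-fence marker, built up to stay self-contained

def parse_json_output(content: str) -> Any:
    """Strict JSON first, then fenced JSON, then the outermost
    {...} or [...] span, else wrap the raw text."""
    if not content:
        return {}
    try:
        return json.loads(content)
    except json.JSONDecodeError:
        pass
    try:
        if FENCE + "json" in content:
            clean = content.split(FENCE + "json", 1)[1].split(FENCE, 1)[0].strip()
        elif FENCE in content:
            clean = content.split(FENCE, 1)[1].split(FENCE, 1)[0].strip()
        else:
            # Fall back to the earliest opening brace or bracket.
            start = min((i for i in (content.find("{"), content.find("[")) if i != -1), default=-1)
            end = content.rfind("]") if content[start:start + 1] == "[" else content.rfind("}")
            clean = content[start:end + 1] if start != -1 and end != -1 else content
        return json.loads(clean)
    except Exception:
        return {"raw_text": content}

print(parse_json_output('Sure! Here is the plan: {"steps": ["a", "b"]}'))  # → {'steps': ['a', 'b']}
```

Returning `{"raw_text": ...}` instead of raising keeps a malformed model reply from crashing the orchestrator; downstream code can still inspect the raw output.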
backend/agents/gemini_agent.py ADDED
@@ -0,0 +1,37 @@
+ from .base import BaseAgent
+ from typing import Dict, Any, List
+ from google import genai
+ from services.config import settings, config_service
+
+ class GeminiAgent(BaseAgent):
+     """
+     Agent implementation for Google Gemini using the new google-genai SDK.
+     """
+     def __init__(self, name: str, role: str, model: str = "gemini-2.0-flash", system_prompt: str = None):
+         super().__init__(name, role, model, system_prompt)
+
+         # Load dynamic config
+         self.provider_config = config_service.get_provider_config("gemini")
+         api_key = self.provider_config.get("api_key") or settings.GEMINI_API_KEY
+
+         self.client = genai.Client(api_key=api_key)
+         self.temperature = self.provider_config.get("temperature", 0.7)
+
+     async def run(self, task_description: str, context: List[Dict[str, Any]], use_tools: bool = False, extra_context: str = "") -> Dict[str, Any]:
+         full_prompt = f"""
+ System Instruction: {self.system_prompt}
+
+ {self._build_json_prompt(task_description, context, extra_context)}
+ """
+
+         # google-genai exposes the async surface as client.aio.models.generate_content
+         response = await self.client.aio.models.generate_content(
+             model=self.model,
+             contents=full_prompt,
+             config={
+                 "temperature": self.temperature,
+                 "response_mime_type": "application/json",
+             }
+         )
+
+         return self._result("gemini", response.text or "")
backend/agents/groq_agent.py ADDED
@@ -0,0 +1,57 @@
+ import logging
+ from .base import BaseAgent
+ from typing import Dict, Any, List
+ import groq
+ from services.config import settings, config_service
+ from tools.registry import tool_registry
+
+ logger = logging.getLogger("uvicorn")
+
+ class GroqAgent(BaseAgent):
+     """
+     Agent implementation for Groq.
+     """
+     def __init__(self, name: str, role: str, model: str = "llama-3.3-70b-versatile", system_prompt: str = None):
+         super().__init__(name, role, model, system_prompt)
+
+         # Load dynamic config
+         self.provider_config = config_service.get_provider_config("groq")
+         api_key = self.provider_config.get("api_key") or settings.GROQ_API_KEY
+
+         self.client = groq.AsyncGroq(api_key=api_key)
+         self.temperature = self.provider_config.get("temperature", 0.7)
+         self.max_tokens = self.provider_config.get("max_tokens", 4096)
+
+     async def run(self, task_description: str, context: List[Dict[str, Any]], use_tools: bool = False, extra_context: str = "") -> Dict[str, Any]:
+         messages = self._build_chat_messages(task_description, context, extra_context)
+
+         kwargs = {
+             "model": self.model,
+             "messages": messages,
+             "temperature": self.temperature,
+             "max_tokens": self.max_tokens
+         }
+
+         if use_tools:
+             kwargs["tools"] = tool_registry.get_tool_definitions()
+             kwargs["tool_choice"] = "auto"
+
+         response = await self.client.chat.completions.create(**kwargs)
+         message = response.choices[0].message
+
+         # Handle tool calls
+         if message.tool_calls:
+             messages.append(message)
+             await self._append_tool_results(messages, message.tool_calls, tool_registry)
+
+             final_response = await self.client.chat.completions.create(
+                 model=self.model,
+                 messages=messages,
+                 temperature=self.temperature,
+                 max_tokens=self.max_tokens
+             )
+             content = final_response.choices[0].message.content
+         else:
+             content = message.content or ""
+
+         return self._result("groq", content)
backend/agents/local_agent.py ADDED
@@ -0,0 +1,48 @@
+ from .base import BaseAgent
+ from typing import Dict, Any, List
+ import httpx
+ from services.config import config_service
+
+ class LocalAgent(BaseAgent):
+     """
+     Agent implementation for Local LLMs (Ollama).
+     """
+     def __init__(self, name: str, role: str, model: str = "llama3.1:8b", system_prompt: str = None):
+         super().__init__(name, role, model, system_prompt)
+
+         # Load dynamic config
+         self.provider_config = config_service.get_provider_config("ollama")
+         self.base_url = self.provider_config.get("base_url", "http://localhost:11434")
+         self.temperature = self.provider_config.get("temperature", 0.7)
+
+     async def run(self, task_description: str, context: List[Dict[str, Any]], use_tools: bool = False, extra_context: str = "") -> Dict[str, Any]:
+         full_prompt = f"""
+ System Instructions: {self.system_prompt}
+
+ {self._build_json_prompt(task_description, context, extra_context)}
+ """
+
+         async with httpx.AsyncClient(timeout=60.0) as client:
+             try:
+                 response = await client.post(
+                     f"{self.base_url}/api/generate",
+                     json={
+                         "model": self.model,
+                         "prompt": full_prompt,
+                         "stream": False,
+                         "format": "json",
+                         "options": {
+                             "temperature": self.temperature
+                         }
+                     }
+                 )
+                 response.raise_for_status()
+                 result = response.json()
+                 return self._result("local", result.get("response", "{}"))
+             except Exception as e:
+                 return {
+                     "agent_name": self.name,
+                     "provider": "local",
+                     "status": "error",
+                     "error": f"Ollama connection failed: {str(e)}"
+                 }
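The request body `LocalAgent` posts to Ollama's `/api/generate` endpoint can be captured as a pure function, which makes the non-streaming, JSON-constrained setup easy to unit test without a running Ollama server. A sketch (`build_ollama_payload` is a hypothetical helper, not part of the repo):

```python
from typing import Any, Dict

def build_ollama_payload(model: str, prompt: str, temperature: float = 0.7) -> Dict[str, Any]:
    """Request body for Ollama's /api/generate, matching LocalAgent:
    one complete (non-streamed) response, constrained to valid JSON."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,   # return the full response in one object
        "format": "json",  # ask Ollama to emit valid JSON only
        "options": {"temperature": temperature},
    }

payload = build_ollama_payload("llama3.1:8b", "Summarize the task as JSON.")
print(payload["format"])  # → json
```

Separating payload construction from transport also keeps the `httpx` call trivially mockable in tests.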
backend/agents/openai_agent.py ADDED
@@ -0,0 +1,54 @@
+ from .base import BaseAgent
+ from typing import Dict, Any, List
+ import openai
+ from services.config import settings, config_service
+ from tools.registry import tool_registry
+
+ class OpenAIAgent(BaseAgent):
+     def __init__(self, name: str, role: str, model: str = "gpt-4o", system_prompt: str = None):
+         super().__init__(name, role, model, system_prompt)
+
+         # Load dynamic config
+         self.provider_config = config_service.get_provider_config("openai")
+         api_key = self.provider_config.get("api_key") or settings.OPENAI_API_KEY
+
+         self.client = openai.AsyncOpenAI(api_key=api_key)
+         self.temperature = self.provider_config.get("temperature", 0.7)
+         self.max_tokens = self.provider_config.get("max_tokens", 4096)
+
+     async def run(self, task_description: str, context: List[Dict[str, Any]], use_tools: bool = False, extra_context: str = "") -> Dict[str, Any]:
+         messages = self._build_chat_messages(task_description, context, extra_context)
+
+         request_kwargs = {
+             "model": self.model,
+             "messages": messages,
+             "response_format": {"type": "json_object"},
+             "temperature": self.temperature,
+             "max_tokens": self.max_tokens
+         }
+         if use_tools:
+             request_kwargs["tools"] = tool_registry.get_tool_definitions()
+             request_kwargs["tool_choice"] = "auto"
+
+         response = await self.client.chat.completions.create(**request_kwargs)
+
+         message = response.choices[0].message
+
+         # Handle tool calls
+         if message.tool_calls:
+             messages.append(message)
+             await self._append_tool_results(messages, message.tool_calls, tool_registry)
+
+             # Second call after tool execution
+             final_response = await self.client.chat.completions.create(
+                 model=self.model,
+                 messages=messages,
+                 response_format={"type": "json_object"},
+                 temperature=self.temperature,
+                 max_tokens=self.max_tokens
+             )
+             content = final_response.choices[0].message.content
+         else:
+             content = message.content
+
+         return self._result("openai", content or "")
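The two-phase tool-call flow used by the OpenAI and Groq agents hinges on the shape of the message list: the assistant message carrying `tool_calls` is appended, then one `role="tool"` message per call, and only then is the second completion request sent. A simplified synchronous sketch with plain dicts and a stub registry (the `add` tool and this `append_tool_results` helper are illustrative, not the repo's async implementation):

```python
import json
from typing import Any, Callable, Dict, List

def append_tool_results(messages: List[Dict[str, Any]],
                        tool_calls: List[Dict[str, Any]],
                        registry: Dict[str, Callable[..., Any]]) -> None:
    """Append one role="tool" message per call, keyed by its call id,
    mirroring BaseAgent._append_tool_results with plain dicts."""
    for call in tool_calls:
        name = call["name"]
        args = json.loads(call["arguments"] or "{}")
        result = registry[name](**args)  # synchronous stub dispatch
        messages.append({
            "tool_call_id": call["id"],
            "role": "tool",
            "name": name,
            "content": str(result),
        })

messages = [{"role": "system", "content": "You are a helper."},
            {"role": "user", "content": "What is 2 + 3?"}]
registry = {"add": lambda a, b: a + b}  # hypothetical tool
tool_calls = [{"id": "call_1", "name": "add", "arguments": '{"a": 2, "b": 3}'}]
append_tool_results(messages, tool_calls, registry)
print(messages[-1]["content"])  # → 5
```

In the real agents this list, now containing the tool results, is sent back in a second `chat.completions.create` call so the model can compose its final JSON answer.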
backend/agents_debug.json ADDED
@@ -0,0 +1 @@
+ [{"id": "297ef087-89af-4e3b-8d80-c5c5c7499e3b", "name": "GPT-4o", "role": "General Intelligence", "api_provider": "openai", "model": "gpt-4o", "system_prompt": "You are a highly capable AI assistant.", "created_at": "2026-05-04T14:32:03.072888+00:00", "updated_at": "2026-05-04T14:32:03.072888+00:00", "user_id": null}, {"id": "64aebf0c-625a-4c6f-895d-a36274c4f9fd", "name": "AMD-4o", "role": "Performance Specialist", "api_provider": "amd", "model": "gpt-4o", "system_prompt": "You are a high-performance agent running on AMD infrastructure.", "created_at": "2026-05-04T14:32:03.072888+00:00", "updated_at": "2026-05-04T14:32:03.072888+00:00", "user_id": null}, {"id": "f7cc5a82-1e7d-4f21-9855-922fb82cd6f9", "name": "Llama-3-70B", "role": "Fast Logic", "api_provider": "groq", "model": "llama3-70b-8192", "system_prompt": "You are a fast and efficient reasoning agent.", "created_at": "2026-05-04T14:32:03.072888+00:00", "updated_at": "2026-05-04T14:32:03.072888+00:00", "user_id": null}, {"id": "1d988b3c-38c1-4132-85e1-82e7a4bc4f8a", "name": "Growth Hacker", "role": "Marketing Expert", "api_provider": "openai", "model": "gpt-4o", "system_prompt": "You are a Growth Hacker focused on low-cost, high-impact strategies.", "created_at": "2026-05-04T15:50:52.628388+00:00", "updated_at": "2026-05-04T15:50:52.628388+00:00", "user_id": "483025be-ca4b-4de3-aa80-9eb39dbd3578"}, {"id": "7b056c90-f4a6-4629-81b1-f4316b76d091", "name": "Growth Hacker", "role": "Marketing Expert", "api_provider": "openai", "model": "gpt-4o", "system_prompt": "You are a Growth Hacker focused on low-cost, high-impact strategies.", "created_at": "2026-05-04T16:22:10.993123+00:00", "updated_at": "2026-05-04T16:22:10.993123+00:00", "user_id": "483025be-ca4b-4de3-aa80-9eb39dbd3578"}, {"id": "138d7d29-b2c0-4ffb-b89f-1a0965dca6b6", "name": "Planner", "role": "Project Planner", "api_provider": "openai", "model": "gpt-4o", "system_prompt": "You decompose goals into clear, ordered implementation tasks.", "created_at": "2026-05-04T18:06:05.280562+00:00", "updated_at": "2026-05-04T18:06:05.280562+00:00", "user_id": "483025be-ca4b-4de3-aa80-9eb39dbd3578"}, {"id": "edfc99cf-a70c-42e0-a2f6-4508bb6aac33", "name": "Builder", "role": "Implementation Agent", "api_provider": "openai", "model": "gpt-4o", "system_prompt": "You implement practical, production-oriented solutions with concise output.", "created_at": "2026-05-04T18:06:05.280562+00:00", "updated_at": "2026-05-04T18:06:05.280562+00:00", "user_id": "483025be-ca4b-4de3-aa80-9eb39dbd3578"}, {"id": "5ef6c6f3-fc4a-40ba-abe8-7630fefcece2", "name": "Reviewer", "role": "Quality Reviewer", "api_provider": "openai", "model": "gpt-4o", "system_prompt": "You review outputs for correctness, security, completeness, and missing tests.", "created_at": "2026-05-04T18:06:05.280562+00:00", "updated_at": "2026-05-04T18:06:05.280562+00:00", "user_id": "483025be-ca4b-4de3-aa80-9eb39dbd3578"}]
backend/api/index.py ADDED
@@ -0,0 +1 @@
+ from main import app
backend/main.py ADDED
@@ -0,0 +1,89 @@
+ from fastapi import FastAPI
+ from fastapi.middleware.cors import CORSMiddleware
+ from fastapi.responses import FileResponse, Response
+ import os
+ import json
+ from pathlib import Path
+ from dotenv import load_dotenv
+ import sentry_sdk
+
+ # Load environment variables
+ load_dotenv()
+ FRONTEND_DIST = Path(__file__).resolve().parent.parent / "frontend" / "dist"
+
+ # Sentry Initialization
+ SENTRY_DSN = os.getenv("SENTRY_DSN")
+ if SENTRY_DSN:
+     sentry_sdk.init(
+         dsn=SENTRY_DSN,
+         traces_sample_rate=1.0,
+         profiles_sample_rate=1.0,
+     )
+
+ app = FastAPI(
+     title="Aubm API",
+     description="Enterprise-Grade AI Agent Orchestration & Collaboration Platform",
+     version="0.1.0"
+ )
+
+ # CORS Configuration
+ allowed_origins = os.getenv("ALLOWED_ORIGINS", "*").split(",")
+ app.add_middleware(
+     CORSMiddleware,
+     allow_origins=allowed_origins,
+     allow_credentials=True,
+     allow_methods=["*"],
+     allow_headers=["*"],
+ )
+
+ @app.get("/")
+ async def root():
+     index_path = FRONTEND_DIST / "index.html"
+     if index_path.exists():
+         return FileResponse(index_path)
+
+     return {
+         "status": "online",
+         "message": "Aubm API is operational",
+         "version": "0.1.0"
+     }
+
+ # Routers
+ from routers import agent_runner, orchestrator, monitoring
+
+ app.include_router(agent_runner.router, prefix="/tasks", tags=["Tasks"])
+ app.include_router(orchestrator.router, prefix="/orchestrator", tags=["Orchestration"])
+ app.include_router(monitoring.router, prefix="/monitoring", tags=["Monitoring"])
+
+ @app.get("/runtime-config.js", include_in_schema=False)
+ async def runtime_config():
+     config = {
+         "apiUrl": os.getenv("VITE_API_URL", ""),
+         "supabaseUrl": os.getenv("VITE_SUPABASE_URL", os.getenv("SUPABASE_URL", "")),
+         "supabaseAnonKey": os.getenv("VITE_SUPABASE_ANON_KEY", os.getenv("SUPABASE_ANON_KEY", "")),
+         "sentryDsn": os.getenv("VITE_SENTRY_DSN", os.getenv("SENTRY_DSN", "")),
+     }
+     return Response(
+         content=f"window.__AUBM_CONFIG__ = {json.dumps(config)};",
+         media_type="application/javascript",
+     )
+
+ @app.get("/{path:path}", include_in_schema=False)
+ async def serve_frontend(path: str):
+     if not FRONTEND_DIST.exists():
+         return await root()
+
+     requested_path = FRONTEND_DIST / path
+     if requested_path.is_file():
+         return FileResponse(requested_path)
+
+     index_path = FRONTEND_DIST / "index.html"
+     if index_path.exists():
+         return FileResponse(index_path)
+
+     return await root()
+
+ if __name__ == "__main__":
+     import uvicorn
+     from services.config import settings
+     uvicorn.run("main:app", host="0.0.0.0", port=settings.PORT, reload=True)
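The `/runtime-config.js` endpoint works by serializing the environment-derived settings into a single JavaScript global that the frontend reads at startup, so Space secrets take effect without rebuilding the Vite bundle. A sketch of just the payload construction (`runtime_config_js` is a hypothetical helper mirroring the endpoint's body):

```python
import json

def runtime_config_js(api_url: str, supabase_url: str,
                      anon_key: str, sentry_dsn: str = "") -> str:
    """Render the JavaScript served at /runtime-config.js: one global
    object instead of build-time VITE_* values baked into the bundle."""
    config = {
        "apiUrl": api_url,
        "supabaseUrl": supabase_url,
        "supabaseAnonKey": anon_key,
        "sentryDsn": sentry_dsn,
    }
    # json.dumps guarantees the embedded values are safely escaped JS literals.
    return f"window.__AUBM_CONFIG__ = {json.dumps(config)};"

print(runtime_config_js("", "https://x.supabase.co", "anon123"))
```

Because the values are plain JSON, the frontend can read `window.__AUBM_CONFIG__.apiUrl` directly; an empty `apiUrl` means "same origin", which is exactly the Spaces setup described in the README.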
backend/project_debug.json ADDED
@@ -0,0 +1 @@
+ {"id": "53f39562-09aa-447b-b251-3844e1415d4c", "name": "Commercial analysis", "description": "I want to evaluate differents applications similar to this app", "context": "search in internet", "owner_id": "483025be-ca4b-4de3-aa80-9eb39dbd3578", "status": "active", "is_public": true, "created_at": "2026-05-04T16:38:03.872534+00:00", "updated_at": "2026-05-04T16:38:03.872534+00:00", "team_id": null}
backend/requirements.txt ADDED
@@ -0,0 +1,19 @@
+ fastapi
+ uvicorn[standard]
+ supabase
+ openai
+ groq
+ google-genai
+ playwright
+ folium
+ python-dotenv
+ pydantic
+ pydantic-settings
+ httpx
+ jinja2
+ python-multipart
+ reportlab
+ pandas
+ openpyxl
+ psutil
+ sentry-sdk[fastapi]
backend/routers/__init__.py ADDED
@@ -0,0 +1 @@
+ # Routers package
backend/routers/agent_runner.py ADDED
@@ -0,0 +1,77 @@
+ from fastapi import APIRouter, HTTPException, BackgroundTasks
+ from services.supabase_service import supabase
+ from services.agent_runner_service import AgentRunnerService
+ import logging
+
+ router = APIRouter()
+ logger = logging.getLogger("uvicorn")
+
+ def update_task_status(task_id: str, status: str):
+     result = (
+         supabase.table("tasks")
+         .update({"status": status})
+         .eq("id", task_id)
+         .select("id,project_id,status")
+         .single()
+         .execute()
+     )
+     if not result.data:
+         raise HTTPException(status_code=404, detail="Task not found or status was not updated")
+
+     project_id = result.data.get("project_id")
+     if project_id:
+         task_result = (
+             supabase.table("tasks")
+             .select("id,status")
+             .eq("project_id", project_id)
+             .execute()
+         )
+         tasks = task_result.data or []
+         if status == "done" and tasks and all(task.get("status") == "done" for task in tasks):
+             supabase.table("projects").update({"status": "completed"}).eq("id", project_id).execute()
+         elif status != "done":
+             supabase.table("projects").update({"status": "active"}).eq("id", project_id).execute()
+
+     return result.data
+
+ @router.post("/{task_id}/run")
+ async def run_task(task_id: str, background_tasks: BackgroundTasks):
+     """
+     Triggers the execution of a specific task.
+     """
+     # 1. Fetch task data
+     task_res = supabase.table("tasks").select("*, project:projects(*)").eq("id", task_id).single().execute()
+     if not task_res.data:
+         raise HTTPException(status_code=404, detail="Task not found")
+
+     task = task_res.data
+
+     # 2. Check if agent is assigned
+     agent_id = task.get("assigned_agent_id")
+     if not agent_id:
+         raise HTTPException(status_code=400, detail="No agent assigned to this task")
+
+     # 3. Fetch agent data
+     agent_res = supabase.table("agents").select("*").eq("id", agent_id).single().execute()
+     if not agent_res.data:
+         raise HTTPException(status_code=404, detail="Assigned agent not found")
+
+     agent_data = agent_res.data
+
+     # 4. Update task status to in_progress
+     supabase.table("tasks").update({"status": "in_progress"}).eq("id", task_id).execute()
+
+     # 5. Run in background
+     background_tasks.add_task(AgentRunnerService.execute_agent_logic, task, agent_data)
+
+     return {"message": "Task execution started", "task_id": task_id}
+
+ @router.post("/{task_id}/approve")
+ async def approve_task(task_id: str):
+     task = update_task_status(task_id, "done")
+     return {"message": "Task approved", "task": task}
+
+ @router.post("/{task_id}/reject")
+ async def reject_task(task_id: str):
+     task = update_task_status(task_id, "todo")
+     return {"message": "Task rejected", "task": task}
backend/routers/monitoring.py ADDED
@@ -0,0 +1,70 @@
+ from datetime import datetime, timezone
+ from fastapi import APIRouter
+ from services.supabase_service import supabase
+
+ router = APIRouter()
+
+
+ def _count_table(table_name: str) -> int:
+     response = supabase.table(table_name).select("id", count="exact").limit(1).execute()
+     return response.count or 0
+
+
+ @router.get("/summary")
+ async def monitoring_summary():
+     """
+     Lightweight operational summary for dashboards and uptime checks.
+     """
+     checks = {
+         "api": "ok",
+         "database": "ok",
+     }
+
+     counts = {
+         "projects": 0,
+         "tasks": 0,
+         "agents": 0,
+         "task_runs": 0,
+         "failed_tasks": 0,
+         "pending_reviews": 0,
+     }
+
+     try:
+         counts["projects"] = _count_table("projects")
+         counts["tasks"] = _count_table("tasks")
+         counts["agents"] = _count_table("agents")
+         counts["task_runs"] = _count_table("task_runs")
+         counts["failed_tasks"] = (
+             supabase.table("tasks")
+             .select("id", count="exact")
+             .eq("status", "failed")
+             .limit(1)
+             .execute()
+             .count
+             or 0
+         )
+         counts["pending_reviews"] = (
+             supabase.table("tasks")
+             .select("id", count="exact")
+             .eq("status", "awaiting_approval")
+             .limit(1)
+             .execute()
+             .count
+             or 0
+         )
+     except Exception as exc:
+         checks["database"] = "error"
+         return {
+             "status": "degraded",
+             "checks": checks,
+             "counts": counts,
+             "error": str(exc),
+             "timestamp": datetime.now(timezone.utc).isoformat(),
+         }
+
+     return {
+         "status": "ok",
+         "checks": checks,
+         "counts": counts,
+         "timestamp": datetime.now(timezone.utc).isoformat(),
+     }
backend/routers/orchestrator.py ADDED
@@ -0,0 +1,153 @@
+ from fastapi import APIRouter, BackgroundTasks, HTTPException
+ from fastapi.responses import Response
+ from services.orchestrator_service import orchestrator_service
+ from pydantic import BaseModel
+ from io import BytesIO
+ from reportlab.lib.pagesizes import letter
+ from reportlab.lib.styles import getSampleStyleSheet
+ from reportlab.platypus import SimpleDocTemplate, Paragraph, Spacer
+ from reportlab.graphics.shapes import Drawing
+ from reportlab.graphics.charts.barcharts import VerticalBarChart
+ from reportlab.graphics.charts.piecharts import Pie
+ from reportlab.lib import colors
+ from reportlab.lib.units import inch
+ import re
+
+ router = APIRouter()
+
+ def _safe_filename(value: str) -> str:
+     return re.sub(r"[^a-zA-Z0-9_-]+", "_", value).strip("_").lower() or "report"
+
+ def _bar_chart(title: str, rows: list[dict]) -> Drawing:
+     drawing = Drawing(460, 180)
+     chart = VerticalBarChart()
+     chart.x = 40
+     chart.y = 35
+     chart.height = 110
+     chart.width = 380
+     chart.data = [[row["value"] for row in rows]]
+     chart.categoryAxis.categoryNames = [row["label"] for row in rows]
+     chart.valueAxis.valueMin = 0
+     chart.valueAxis.valueMax = max([row["value"] for row in rows] + [1])
+     chart.valueAxis.valueStep = max(1, round(chart.valueAxis.valueMax / 4))
+     chart.bars[0].fillColor = colors.HexColor("#14b8a6")
+     drawing.add(chart)
+     return drawing
+
+ def _pie_chart(rows: list[dict]) -> Drawing:
+     drawing = Drawing(460, 180)
+     pie = Pie()
+     pie.x = 150
+     pie.y = 20
+     pie.width = 140
+     pie.height = 140
+     pie.data = [row["value"] for row in rows]
+     pie.labels = [row["label"] for row in rows]
+     pie.slices.strokeWidth = 0.5
+     palette = [colors.HexColor("#22c55e"), colors.HexColor("#facc15"), colors.HexColor("#ef4444")]
+     for index, color in enumerate(palette[:len(rows)]):
+         pie.slices[index].fillColor = color
+     drawing.add(pie)
+     return drawing
+
+ def _report_pdf_bytes(title: str, content: str, charts: dict | None = None) -> bytes:
+     buffer = BytesIO()
+     doc = SimpleDocTemplate(
+         buffer,
+         pagesize=letter,
+         rightMargin=0.7 * inch,
+         leftMargin=0.7 * inch,
+         topMargin=0.7 * inch,
+         bottomMargin=0.7 * inch,
+     )
+     styles = getSampleStyleSheet()
+     story = [Paragraph(title, styles["Title"]), Spacer(1, 0.2 * inch)]
+     if charts:
+         story.extend([
+             Paragraph("Project Charts", styles["Heading2"]),
+             Paragraph("Completion Status", styles["Heading3"]),
+             _pie_chart(charts.get("status", [])),
+             Spacer(1, 0.1 * inch),
+             Paragraph("Task Categories", styles["Heading3"]),
+             _bar_chart("Task Categories", charts.get("categories", [])),
+             Spacer(1, 0.1 * inch),
+             Paragraph("Project Scores", styles["Heading3"]),
+             _bar_chart("Project Scores", charts.get("scores", [])),
+             Spacer(1, 0.2 * inch),
+         ])
+
+     for raw_line in content.splitlines():
+         line = raw_line.strip()
+         if not line:
+             story.append(Spacer(1, 0.1 * inch))
+             continue
+         if line.startswith("# "):
+             story.append(Paragraph(line[2:], styles["Title"]))
+         elif line.startswith("## "):
+             story.append(Paragraph(line[3:], styles["Heading2"]))
+         elif line.startswith("### "):
+             story.append(Paragraph(line[4:], styles["Heading3"]))
+         elif line.startswith("- "):
+             story.append(Paragraph(f"• {line[2:]}", styles["BodyText"]))
+         else:
+             story.append(Paragraph(line, styles["BodyText"]))
+
+     doc.build(story)
+     return buffer.getvalue()
+
+ class DebateRequest(BaseModel):
+     task_id: str
+     agent_a_id: str
+     agent_b_id: str
+
+ @router.post("/debate")
+ async def start_debate(request: DebateRequest, background_tasks: BackgroundTasks):
+     """
+     Starts a debate between two agents for a specific task.
+     """
+     background_tasks.add_task(
+         orchestrator_service.run_debate,
+         request.task_id,
+         request.agent_a_id,
+         request.agent_b_id
+     )
+     return {"message": "Debate started in background"}
+
+
+ @router.post("/projects/{project_id}/run")
+ async def run_project_orchestrator(project_id: str, background_tasks: BackgroundTasks):
+     """
+     Runs all queued tasks for a project in priority order.
+     """
+     background_tasks.add_task(orchestrator_service.run_project, project_id)
+     return {"message": "Project orchestrator started", "project_id": project_id}
+
+ @router.get("/projects/{project_id}/final-report")
+ async def get_project_final_report(project_id: str, variant: str = "full"):
+     """
+     Builds a consolidated report from all approved task outputs.
+     """
+     try:
+         return await orchestrator_service.build_final_report(project_id, variant)
+     except ValueError as exc:
+         raise HTTPException(status_code=400, detail=str(exc)) from exc
+
+ @router.get("/projects/{project_id}/final-report.pdf")
+ async def download_project_final_report_pdf(project_id: str, variant: str = "full"):
+     """
+     Downloads the selected report variant as a PDF.
+     """
+     try:
+         result = await orchestrator_service.build_final_report(project_id, variant)
+     except ValueError as exc:
+         raise HTTPException(status_code=400, detail=str(exc)) from exc
+
+     title = f"{result['project_name']} - {result['variant']} report"
+     pdf = _report_pdf_bytes(title, result["report"], result.get("charts"))
+     filename = f"{_safe_filename(result['project_name'])}_{_safe_filename(result['variant'])}.pdf"
+     return Response(
+         content=pdf,
+         media_type="application/pdf",
+         headers={"Content-Disposition": f'attachment; filename="{filename}"'}
+     )
backend/services/agent_runner_service.py ADDED
@@ -0,0 +1,116 @@
+ import logging
+ from datetime import datetime, timezone
+ from services.supabase_service import supabase
+ from services.audit_service import audit_service
+ from agents.agent_factory import AgentFactory
+ from services.semantic_backprop import semantic_backprop
+
+ logger = logging.getLogger("agent_runner_service")
+
+ class AgentRunnerService:
+     @staticmethod
+     async def run_agent_task(
+         task: dict,
+         agent_data: dict,
+         *,
+         include_semantic_context: bool = False,
+         start_action: str = "execution_start",
+         start_content: str | None = None,
+         complete_action: str = "execution_complete",
+         complete_content: str = "Agent successfully completed the task and produced output."
+     ) -> tuple[dict, str]:
+         task_id = task["id"]
+         project_id = task["project_id"]
+         run_id = None
+
+         supabase.table("tasks").update({"status": "in_progress"}).eq("id", task_id).execute()
+
+         try:
+             run_res = supabase.table("task_runs").insert({
+                 "task_id": task_id,
+                 "agent_id": agent_data["id"],
+                 "status": "running"
+             }).execute()
+             run_id = run_res.data[0]["id"]
+
+             agent = AgentFactory.get_agent(
+                 provider=agent_data["api_provider"],
+                 name=agent_data["name"],
+                 role=agent_data["role"],
+                 model=agent_data["model"],
+                 system_prompt=agent_data.get("system_prompt")
+             )
+
+             context_res = supabase.table("tasks").select("title, output_data") \
+                 .eq("project_id", project_id) \
+                 .eq("status", "done") \
+                 .execute()
+             context = context_res.data if context_res.data else []
+
+             extra_context = ""
+             if include_semantic_context:
+                 extra_context = await semantic_backprop.get_project_context(project_id, task_id)
+
+             supabase.table("agent_logs").insert({
+                 "task_id": task_id,
+                 "run_id": run_id,
+                 "action": start_action,
+                 "content": start_content or f"Agent {agent_data['name']} starting task: {task['title']}"
+             }).execute()
+
+             result = await agent.run(task.get("description") or task["title"], context, extra_context=extra_context)
+             if result.get("status") == "error":
+                 raise RuntimeError(result.get("error") or "Agent returned an error result.")
+
+             supabase.table("tasks").update({
+                 "status": "awaiting_approval",
+                 "output_data": result
+             }).eq("id", task_id).execute()
+
+             supabase.table("task_runs").update({
+                 "status": "completed",
+                 "finished_at": datetime.now(timezone.utc).isoformat()
+             }).eq("id", run_id).execute()
+
+             supabase.table("agent_logs").insert({
+                 "task_id": task_id,
+                 "run_id": run_id,
+                 "action": complete_action,
+                 "content": complete_content
+             }).execute()
+
+             return result, run_id
+
+         except Exception as e:
+             logger.error(f"Error executing task {task_id}: {str(e)}")
+             if run_id:
+                 supabase.table("task_runs").update({
+                     "status": "failed",
+                     "finished_at": datetime.now(timezone.utc).isoformat()
+                 }).eq("id", run_id).execute()
+             supabase.table("tasks").update({
+                 "status": "failed",
+                 "output_data": {"error": str(e)}
+             }).eq("id", task_id).execute()
+             raise e
+
+     @staticmethod
+     async def execute_agent_logic(task: dict, agent_data: dict):
+         task_id = task["id"]
+         try:
+             await AgentRunnerService.run_agent_task(
+                 task,
+                 agent_data,
+                 include_semantic_context=True
+             )
+
+             await audit_service.log_action(
+                 user_id=None,
+                 action="agent_task_completed",
+                 agent_id=agent_data["id"],
+                 task_id=task_id,
+                 metadata={"model": agent_data["model"]}
+             )
+
+         except Exception:
+             raise
backend/services/audit_service.py ADDED
@@ -0,0 +1,31 @@
+ from services.supabase_service import supabase
+ from typing import Dict, Any, Optional
+ import logging
+
+ logger = logging.getLogger("uvicorn")
+
+ class AuditService:
+     @staticmethod
+     async def log_action(
+         user_id: Optional[str],
+         action: str,
+         agent_id: Optional[str] = None,
+         task_id: Optional[str] = None,
+         metadata: Optional[Dict[str, Any]] = None
+     ):
+         """
+         Records an action in the audit_logs table.
+         """
+         try:
+             data = {
+                 "user_id": user_id,
+                 "action": action,
+                 "agent_id": agent_id,
+                 "task_id": task_id,
+                 "metadata": metadata or {}
+             }
+             supabase.table("audit_logs").insert(data).execute()
+         except Exception as e:
+             logger.error(f"AuditService error: {str(e)}")
+
+ audit_service = AuditService()
backend/services/config.py ADDED
@@ -0,0 +1,98 @@
+ import os
+ from pydantic_settings import BaseSettings
+ from typing import Optional, Dict, Any
+ from supabase import create_client, Client
+
+ class Settings(BaseSettings):
+     # Supabase
+     SUPABASE_URL: str = os.getenv("SUPABASE_URL", "")
+     SUPABASE_SERVICE_ROLE_KEY: str = os.getenv("SUPABASE_SERVICE_ROLE_KEY", "")
+
+     # AI Providers
+     OPENAI_API_KEY: Optional[str] = os.getenv("OPENAI_API_KEY")
+     GROQ_API_KEY: Optional[str] = os.getenv("GROQ_API_KEY")
+     GEMINI_API_KEY: Optional[str] = os.getenv("GEMINI_API_KEY")
+     ANTHROPIC_API_KEY: Optional[str] = os.getenv("ANTHROPIC_API_KEY")
+     AMD_API_KEY: Optional[str] = os.getenv("AMD_API_KEY")
+
+     # App Config
+     TASK_QUEUE_EMBEDDED_WORKER: bool = os.getenv("TASK_QUEUE_EMBEDDED_WORKER", "true").lower() == "true"
+     OUTPUT_LANGUAGE: str = os.getenv("OUTPUT_LANGUAGE", "en")
+     PORT: int = int(os.getenv("PORT", 8000))
+     SENTRY_DSN: Optional[str] = os.getenv("SENTRY_DSN")
+
+     class Config:
+         env_file = ".env"
+
+ settings = Settings()
+
+ class ConfigService:
+     """
+     Manages application-wide settings stored in Supabase with local fallback defaults.
+     Borrowed from AgentCollab for enhanced flexibility.
+     """
+     _cache: Dict[str, Any] = {}
+     _supabase: Optional[Client] = None
+
+     @classmethod
+     def _get_supabase(cls):
+         if not cls._supabase:
+             if not settings.SUPABASE_URL or not settings.SUPABASE_SERVICE_ROLE_KEY:
+                 return None
+             cls._supabase = create_client(settings.SUPABASE_URL, settings.SUPABASE_SERVICE_ROLE_KEY)
+         return cls._supabase
+
+     # Defaults used when DB has no config entry for a provider
+     _DEFAULTS: Dict[str, Any] = {
+         "groq": {"enabled": True, "default_model": "llama-3.3-70b-versatile", "temperature": 0.7, "max_tokens": 4096},
+         "openai": {"enabled": True, "default_model": "gpt-4o", "temperature": 0.7, "max_tokens": 4096},
+         "openrouter": {"enabled": True, "default_model": "google/gemini-2.0-flash", "temperature": 0.7, "max_tokens": 8192},
+         "gemini": {"enabled": True, "default_model": "gemini-2.0-flash", "temperature": 0.7, "max_tokens": 8192},
+         "amd": {"enabled": True, "default_model": "gpt-4o", "temperature": 0.7, "max_tokens": 4096, "base_url": "https://inference.do-ai.run/v1"},
+         "ollama": {"enabled": True, "default_model": "llama3.1:8b", "temperature": 0.7, "base_url": "http://localhost:11434"},
+     }
+
+     @classmethod
+     def get_provider_config(cls, provider: str) -> Dict[str, Any]:
+         """Returns config for a provider from cache, then DB, then defaults."""
+         cache_key = f"provider:{provider}"
+         if cache_key in cls._cache:
+             return cls._cache[cache_key]
+
+         db = cls._get_supabase()
+         if db:
+             try:
+                 resp = db.table("app_config").select("*").eq("key", provider).execute()
+                 if resp.data and len(resp.data) > 0:
+                     cls._cache[cache_key] = resp.data[0]["value"]
+                     return cls._cache[cache_key]
+             except Exception:
+                 pass  # Fall through to defaults
+
+         result = cls._DEFAULTS.get(provider, {})
+         cls._cache[cache_key] = result
+         return result
+
+     @classmethod
+     def get_global_setting(cls, key: str, default: Any = None) -> Any:
+         cache_key = f"global:{key}"
+         if cache_key in cls._cache:
+             return cls._cache[cache_key]
+
+         db = cls._get_supabase()
+         if db:
+             try:
+                 resp = db.table("app_config").select("*").eq("key", key).execute()
+                 if resp.data and len(resp.data) > 0:
+                     cls._cache[cache_key] = resp.data[0]["value"]
+                     return cls._cache[cache_key]
+             except Exception:
+                 pass
+
+         return default
+
+     @classmethod
+     def invalidate_cache(cls) -> None:
+         cls._cache.clear()
+
+ config_service = ConfigService()
backend/services/orchestrator_service.py ADDED
@@ -0,0 +1,531 @@
1
+ from services.supabase_service import supabase
2
+ from agents.agent_factory import AgentFactory
3
+ import json
4
+ import logging
5
+ from services.config import settings
6
+ from services.agent_runner_service import AgentRunnerService
7
+
8
+ logger = logging.getLogger("uvicorn")
9
+
10
+ def _humanize_key(key: str) -> str:
11
+ return key.replace("_", " ").replace("-", " ").strip().title()
12
+
13
+ def _format_value_for_report(value, level: int = 0) -> list[str]:
14
+ if value is None:
15
+ return ["Not specified."]
16
+
17
+ if isinstance(value, (str, int, float, bool)):
18
+ return [str(value)]
19
+
20
+ if isinstance(value, list):
21
+ lines: list[str] = []
22
+ for item in value:
23
+ if isinstance(item, dict):
24
+ item_lines = _format_value_for_report(item, level + 1)
25
+ if item_lines:
26
+ lines.append(f"- {item_lines[0]}")
27
+ lines.extend(f" {line}" for line in item_lines[1:])
28
+ elif isinstance(item, list):
29
+ nested = _format_value_for_report(item, level + 1)
30
+ lines.extend(f"- {line}" for line in nested)
31
+ else:
32
+ lines.append(f"- {item}")
33
+ return lines or ["No items."]
34
+
35
+ if isinstance(value, dict):
36
+ lines: list[str] = []
37
+ for key, item in value.items():
38
+ title = _humanize_key(str(key))
39
+ if isinstance(item, dict):
40
+ lines.append(f"{title}:")
41
+ lines.extend(f" {line}" for line in _format_value_for_report(item, level + 1))
42
+ elif isinstance(item, list):
43
+ lines.append(f"{title}:")
44
+ lines.extend(f" {line}" for line in _format_value_for_report(item, level + 1))
45
+ else:
46
+ lines.append(f"{title}: {item}")
47
+ return lines or ["No details."]
48
+
49
+ return [str(value)]
50
+
51
+ def _format_output_for_report(output_data) -> str:
52
+ if not output_data:
53
+ return "No approved output was saved for this task."
54
+
55
+ if isinstance(output_data, dict):
56
+ primary = (
57
+ output_data.get("data")
58
+ or output_data.get("final")
59
+ or output_data.get("raw_output")
60
+ or output_data
61
+ )
62
+ else:
63
+ primary = output_data
64
+
65
+ if isinstance(primary, str):
66
+ return primary
67
+
68
+ return "\n".join(_format_value_for_report(primary))
69
+
70
+ def _output_text(output_data) -> str:
71
+ return _format_output_for_report(output_data).lower()
72
+
73
+ def _build_report_charts(tasks: list[dict]) -> dict:
74
+ total = len(tasks)
75
+ done = sum(1 for task in tasks if task.get("status") == "done")
76
+ failed = sum(1 for task in tasks if task.get("status") == "failed")
77
+ pending = max(total - done - failed, 0)
78
+
79
+ priority_counts: dict[str, int] = {}
80
+ for task in tasks:
81
+ priority = str(task.get("priority") if task.get("priority") is not None else 0)
82
+ priority_counts[priority] = priority_counts.get(priority, 0) + 1
83
+
84
+ categories = {
85
+ "Market": ("market", "competitor", "customer", "segment", "demand"),
86
+ "Product": ("product", "mvp", "feature", "design", "scope"),
87
+ "Revenue": ("revenue", "price", "pricing", "margin", "commission"),
88
+ "Operations": ("operation", "process", "logistic", "support", "fulfillment"),
89
+ "Risk": ("risk", "threat", "failure", "weak", "mitigation")
90
+ }
91
+ category_counts = {name: 0 for name in categories}
92
+ risk_mentions = 0
93
+
94
+ for task in tasks:
95
+ text = f"{task.get('title', '')} {task.get('description', '')} {_output_text(task.get('output_data'))}"
96
+ risk_mentions += sum(text.count(term) for term in categories["Risk"])
97
+ for category, terms in categories.items():
98
+ if any(term in text for term in terms):
99
+ category_counts[category] += 1
100
+
101
+ opportunity_score = 85 if total and done == total else round((done / total) * 85) if total else 0
102
+ risk_score = min(95, 35 + risk_mentions * 3)
103
+ readiness_score = round((done / total) * 100) if total else 0
104
+
105
+ return {
106
+ "status": [
107
+ {"label": "Approved", "value": done},
108
+ {"label": "Pending", "value": pending},
109
+ {"label": "Failed", "value": failed}
110
+ ],
111
+ "priorities": [
112
+ {"label": f"Priority {key}", "value": value}
113
+ for key, value in sorted(priority_counts.items(), key=lambda item: int(item[0]) if item[0].isdigit() else 0, reverse=True)
114
+ ],
115
+ "categories": [
116
+ {"label": label, "value": value}
117
+ for label, value in category_counts.items()
118
+ ],
119
+ "scores": [
120
+ {"label": "Readiness", "value": readiness_score},
121
+ {"label": "Opportunity", "value": opportunity_score},
122
+ {"label": "Risk", "value": risk_score}
123
+ ]
124
+ }
125
+
126
+ REPORT_VARIANTS = {
127
+ "full": {
128
+ "title": "Final Report",
129
+ "agent_terms": [],
130
+ "fallback_heading": "Approved Work Summary",
131
+ "prompt": ""
132
+ },
133
+ "brief": {
134
+ "title": "Short Brief",
135
+ "agent_terms": ["brief", "summary", "writer"],
136
+ "fallback_heading": "Short Brief",
137
+ "prompt": (
138
+ "Create a concise executive brief from the approved project work. "
139
+ "Use plain English, no JSON, no code blocks. Include: objective, main findings, recommended next steps, and key risks. "
140
+ "Keep it short and decision-oriented."
141
+ )
142
+ },
143
+ "pessimistic": {
144
+ "title": "Pessimistic Analysis",
145
+ "agent_terms": ["pessimistic", "risk", "critic", "reviewer"],
146
+ "fallback_heading": "Pessimistic Analysis",
147
+ "prompt": (
148
+ "Create a skeptical, downside-focused analysis from the approved project work. "
149
+ "Use plain English, no JSON, no code blocks. Focus on what can fail, weak assumptions, operational risks, market risks, "
150
+ "financial risks, execution gaps, and mitigation priorities."
151
+ )
152
+ }
153
+ }
154
+
155
+ class OrchestratorService:
156
+ """
157
+ Handles complex multi-agent workflows like Debates and Peer Reviews.
158
+ """
159
+
160
+ async def run_debate(self, task_id: str, agent_a_id: str, agent_b_id: str):
161
+ """
162
+ Executes a debate between two agents for a specific task.
163
+ """
164
+ try:
165
+ # 1. Fetch task and agents
166
+ task = supabase.table("tasks").select("*").eq("id", task_id).single().execute().data
167
+ agent_a = supabase.table("agents").select("*").eq("id", agent_a_id).single().execute().data
168
+ agent_b = supabase.table("agents").select("*").eq("id", agent_b_id).single().execute().data
169
+
170
+ # 2. Agent A generates initial response
171
+ inst_a = AgentFactory.get_agent(agent_a["api_provider"], agent_a["name"], agent_a["role"], agent_a["model"])
172
+ initial_res = await inst_a.run(task["description"], [])
173
+
174
+ # 3. Agent B reviews and critiques
175
+ inst_b = AgentFactory.get_agent(agent_b["api_provider"], agent_b["name"], agent_b["role"], agent_b["model"])
176
+ critique_prompt = f"Review the following output for the task: '{task['description']}'. Provide constructive critique and identify errors.\n\nOutput: {json.dumps(initial_res['data'])}"
177
+ critique_res = await inst_b.run(critique_prompt, [])
178
+
179
+ # 4. Agent A refines based on critique
180
+ refinement_prompt = f"Refine your initial output for the task: '{task['description']}' based on this critique: {json.dumps(critique_res['data'])}"
181
+ final_res = await inst_a.run(refinement_prompt, [])
182
+
183
+ # 5. Save final result
184
+ supabase.table("tasks").update({
185
+ "status": "done",
186
+ "output_data": {
187
+ "initial": initial_res["data"],
188
+ "critique": critique_res["data"],
189
+ "final": final_res["data"]
190
+ }
191
+ }).eq("id", task_id).execute()
192
+
193
+ logger.info(f"Debate completed for task {task_id}")
194
+
195
+ except Exception as e:
196
+ logger.error(f"Debate failed: {str(e)}")
197
+ supabase.table("tasks").update({"status": "failed"}).eq("id", task_id).execute()
198
+
199
+ async def run_project(self, project_id: str):
200
+ """
201
+ Runs queued tasks in a project sequentially. Unassigned tasks are assigned
202
+ to the first available project-owner or global agent.
203
+ """
204
+ project = supabase.table("projects").select("*").eq("id", project_id).single().execute().data
205
+ if not project:
206
+ raise ValueError(f"Project not found: {project_id}")
207
+
208
+ owner_id = project.get("owner_id")
209
+ tasks = (
210
+ supabase.table("tasks")
211
+ .select("*")
212
+ .eq("project_id", project_id)
213
+ .eq("status", "todo")
214
+ .order("priority", desc=True)
215
+ .order("created_at", desc=False)
216
+ .execute()
217
+ .data
218
+ or []
219
+ )
220
+
221
+ # Automatic Decomposition: If no tasks exist, try to decompose the project first
222
+ if not tasks:
223
+ logger.info(f"No tasks found for project {project_id}. Triggering auto-decomposition.")
224
+ await self.decompose_project(project_id)
225
+ # Re-fetch tasks after decomposition
226
+ tasks = (
227
+ supabase.table("tasks")
228
+ .select("*")
229
+ .eq("project_id", project_id)
230
+ .in_("status", ["todo", "failed"])
231
+ .order("priority", desc=True)
232
+ .order("created_at", desc=False)
233
+ .execute()
234
+ .data
235
+ or []
236
+ )
237
+
238
+ agents = supabase.table("agents").select("*").execute().data or []
239
+ available_agents = [
240
+ agent for agent in agents
241
+ if agent.get("user_id") in (None, owner_id)
242
+ ]
243
+
244
+ completed = 0
245
+ failed = 0
246
+
247
+ for task in tasks:
248
+ try:
249
+ agent_data = self._resolve_agent(task, available_agents)
250
+ if not agent_data:
251
+ raise ValueError("No available agent for task")
252
+
253
+ if not task.get("assigned_agent_id"):
254
+ supabase.table("tasks").update({
255
+ "assigned_agent_id": agent_data["id"]
256
+ }).eq("id", task["id"]).execute()
257
+ task["assigned_agent_id"] = agent_data["id"]
258
+
259
+ await self._run_task(task, agent_data)
260
+ completed += 1
261
+ except Exception as exc:
262
+ failed += 1
263
+ logger.error(f"Project orchestration task failed: {str(exc)}")
264
+ supabase.table("tasks").update({
265
+ "status": "failed",
266
+ "output_data": {"error": str(exc)}
267
+ }).eq("id", task["id"]).execute()
268
+
269
+ return {
270
+ "project_id": project_id,
271
+ "queued_tasks": len(tasks),
272
+ "completed": completed,
273
+ "failed": failed,
274
+ }
275
+
+     def _select_report_agent(self, project: dict, variant: str):
+         config = REPORT_VARIANTS.get(variant, REPORT_VARIANTS["full"])
+         terms = config["agent_terms"]
+         if not terms:
+             return None
+
+         owner_id = project.get("owner_id")
+         agents = supabase.table("agents").select("*").execute().data or []
+         available_agents = [
+             agent for agent in agents
+             if agent.get("user_id") in (None, owner_id)
+         ]
+
+         return next(
+             (
+                 agent for agent in available_agents
+                 if any(term in f"{agent.get('name', '')} {agent.get('role', '')}".lower() for term in terms)
+             ),
+             available_agents[0] if available_agents else None
+         )
+
+     async def _generate_report_variant_with_agent(self, project: dict, report: str, variant: str):
+         agent_data = self._select_report_agent(project, variant)
+         if not agent_data:
+             return None
+
+         config = REPORT_VARIANTS[variant]
+         agent = AgentFactory.get_agent(
+             provider=agent_data["api_provider"],
+             name=agent_data["name"],
+             role=agent_data["role"],
+             model=agent_data["model"],
+             system_prompt=agent_data.get("system_prompt")
+         )
+         result = await agent.run(f"{config['prompt']}\n\nApproved project material:\n{report}", [])
+         if result.get("status") == "error":
+             raise RuntimeError(result.get("error") or "Report agent returned an error.")
+
+         data = result.get("data")
+         if isinstance(data, dict):
+             for key in ("brief", "analysis", "report", "summary", "content"):
+                 if isinstance(data.get(key), str):
+                     return data[key]
+             return "\n".join(_format_value_for_report(data))
+         if isinstance(data, str):
+             return data
+         return result.get("raw_output")
+
+     def _build_fallback_variant(self, project: dict, tasks: list[dict], variant: str):
+         config = REPORT_VARIANTS[variant]
+         lines = [
+             f"# {config['title']}: {project['name']}",
+             "",
+             "## Project Brief",
+             project.get("description") or "No project description provided.",
+             "",
+             f"## {config['fallback_heading']}"
+         ]
+
+         if variant == "brief":
+             lines.extend([
+                 f"All {len(tasks)} approved tasks have been consolidated.",
+                 "The project is ready for decision review based on the approved task outputs.",
+                 "",
+                 "Recommended next steps:",
+                 "- Validate the highest-impact assumptions with real users or customers.",
+                 "- Prioritize the smallest launch scope that proves demand.",
+                 "- Convert approved outputs into an execution backlog with owners and dates."
+             ])
+             return "\n".join(lines)
+
+         if variant == "pessimistic":
+             lines.extend([
+                 "This project can still fail even with all tasks approved.",
+                 "",
+                 "Primary downside risks:",
+                 "- Approved task outputs may be internally consistent but unvalidated by the market.",
+                 "- Revenue, conversion, operational, and adoption assumptions may be too optimistic.",
+                 "- Execution scope can expand faster than the team can deliver.",
+                 "- Competitors can respond with pricing, distribution, or trust advantages.",
+                 "",
+                 "Mitigation priorities:",
+                 "- Validate demand before building broad feature scope.",
+                 "- Stress-test unit economics and support costs.",
+                 "- Define kill criteria before committing more resources."
+             ])
+             return "\n".join(lines)
+
+         return None
+
+     async def build_final_report(self, project_id: str, variant: str = "full"):
+         variant = variant if variant in REPORT_VARIANTS else "full"
+         project = supabase.table("projects").select("*").eq("id", project_id).single().execute().data
+         if not project:
+             raise ValueError(f"Project not found: {project_id}")
+
+         tasks = (
+             supabase.table("tasks")
+             .select("title,description,status,priority,output_data,created_at")
+             .eq("project_id", project_id)
+             .order("priority", desc=True)
+             .order("created_at", desc=False)
+             .execute()
+             .data
+             or []
+         )
+
+         if not tasks:
+             raise ValueError("Project has no tasks to summarize.")
+
+         incomplete = [task for task in tasks if task.get("status") != "done"]
+         if incomplete:
+             raise ValueError(f"Final report is available after all tasks are approved. Pending tasks: {len(incomplete)}")
+
+         report_title = REPORT_VARIANTS[variant]["title"]
+         lines = [
+             f"# {report_title}: {project['name']}",
+             "",
+             "## Project Brief",
+             project.get("description") or "No project description provided.",
+         ]
+
+         if project.get("context"):
+             lines.extend(["", "## Context", project["context"]])
+
+         lines.extend(["", "## Approved Work Summary"])
+
+         for index, task in enumerate(tasks, start=1):
+             lines.extend([
+                 "",
+                 f"### {index}. {task['title']}",
+                 task.get("description") or "No task description provided.",
+                 "",
+                 _format_output_for_report(task.get("output_data"))
+             ])
+
+         lines.extend([
+             "",
+             "## Completion Status",
+             f"All {len(tasks)} tasks are approved. Project status: completed."
+         ])
+
+         supabase.table("projects").update({"status": "completed"}).eq("id", project_id).execute()
+         report = "\n".join(lines)
+
+         if variant != "full":
+             try:
+                 generated = await self._generate_report_variant_with_agent(project, report, variant)
+                 report = generated or self._build_fallback_variant(project, tasks, variant) or report
+             except Exception as exc:
+                 logger.warning(f"Report variant generation failed: {exc}")
+                 report = self._build_fallback_variant(project, tasks, variant) or report
+
+         return {
+             "project_id": project_id,
+             "project_name": project["name"],
+             "task_count": len(tasks),
+             "variant": variant,
+             "report": report,
+             "charts": _build_report_charts(tasks)
+         }
+
+     async def decompose_project(self, project_id: str):
+         """
+         Uses a Planner agent to decompose a project into discrete tasks.
+         """
+         project = supabase.table("projects").select("*").eq("id", project_id).single().execute().data
+         owner_id = project.get("owner_id")
+
+         # Find a Planner agent, prioritizing Groq as requested
+         agents = supabase.table("agents").select("*").execute().data or []
+
+         # 1. Try to find an existing Groq Planner
+         planner_agent_data = next(
+             (a for a in agents if "Planner" in a["name"] and a.get("api_provider") == "groq"),
+             None
+         )
+
+         # 2. If not found, fall back to any Planner, then to any available agent
+         if not planner_agent_data:
+             planner_agent_data = next(
+                 (a for a in agents if "Planner" in a["name"] and a.get("user_id") in (None, owner_id)),
+                 next((a for a in agents if a.get("user_id") in (None, owner_id)), None)
+             )
+
+         # 3. If still no agent, or it is an OpenAI agent without an API key configured, use a default Groq planner
+         if not planner_agent_data or (planner_agent_data.get("api_provider") == "openai" and not settings.OPENAI_API_KEY):
+             logger.info("Using default Groq Planner for decomposition.")
+             planner = AgentFactory.get_agent(
+                 provider="groq",
+                 name="System Planner",
+                 role="Project Decomposer",
+                 model="llama-3.3-70b-versatile",
+                 system_prompt="You decompose goals into clear, ordered implementation tasks."
+             )
+         else:
+             planner = AgentFactory.get_agent(
+                 provider=planner_agent_data["api_provider"],
+                 name=planner_agent_data["name"],
+                 role=planner_agent_data["role"],
+                 model=planner_agent_data["model"],
+                 system_prompt=planner_agent_data.get("system_prompt")
+             )
+
+         prompt = f"""Decompose the following project into 3-5 clear, actionable tasks.
+ Project Name: {project['name']}
+ Description: {project['description']}
+ Context: {project.get('context', 'None')}
+
+ Return ONLY a valid JSON array of objects.
+ Each object MUST have exactly these keys: 'title', 'description', and 'priority' (integer 1-5).
+ IMPORTANT: Do not wrap the list in an object. Return a flat [{{...}}, {{...}}] array.
+ """
+
+         try:
+             result = await planner.run(prompt, [])
+             # Some cleaning may be needed if the agent returns markdown
+             content = result["data"]
+             tasks_data = planner._parse_json_output(content) if isinstance(content, str) else content
+
+             # Ensure tasks_data is a list
+             if isinstance(tasks_data, dict):
+                 # If the agent wrapped it in {"tasks": [...]}, extract it
+                 if "tasks" in tasks_data and isinstance(tasks_data["tasks"], list):
+                     tasks_data = tasks_data["tasks"]
+                 else:
+                     # Single task object: wrap it in a list
+                     tasks_data = [tasks_data]
+
+             if not isinstance(tasks_data, list):
+                 raise ValueError(f"Agent returned invalid format: {type(tasks_data)}. Expected list or dict.")
+
+             # Insert tasks
+             from .project_service import project_service
+             await project_service.add_tasks_to_project(project_id, tasks_data)
+             logger.info(f"Auto-decomposed project {project_id} into {len(tasks_data)} tasks.")
+         except Exception as e:
+             logger.error(f"Project decomposition failed: {e}")
+
+     def _resolve_agent(self, task: dict, available_agents: list[dict]):
+         assigned_agent_id = task.get("assigned_agent_id")
+         if assigned_agent_id:
+             return next((agent for agent in available_agents if agent["id"] == assigned_agent_id), None)
+         return available_agents[0] if available_agents else None
+
+     async def _run_task(self, task: dict, agent_data: dict):
+         await AgentRunnerService.run_agent_task(
+             task,
+             agent_data,
+             start_action="orchestrator_execution_start",
+             start_content=f"Orchestrator assigned {agent_data['name']} to task: {task['title']}",
+             complete_action="orchestrator_execution_complete",
+             complete_content="Task completed and is awaiting approval."
+         )
+
+ orchestrator_service = OrchestratorService()
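The normalization step in `decompose_project` (string → JSON, `{"tasks": [...]}` wrapper → flat list, single object → one-element list) can be sketched as a standalone function. The function name here is hypothetical; the repo does this inline inside the `try` block.

```python
import json

def normalize_planner_output(content):
    """Coerce a planner response (JSON string, dict, or list) into a flat task list.

    Mirrors the inline normalization in decompose_project: 'tasks' is the only
    wrapper key handled, matching the orchestrator's behavior.
    """
    data = json.loads(content) if isinstance(content, str) else content
    if isinstance(data, dict):
        if isinstance(data.get("tasks"), list):
            return data["tasks"]
        return [data]  # a single task object becomes a one-element list
    if isinstance(data, list):
        return data
    raise ValueError(f"Unsupported planner output: {type(data)}")
```

Keeping this tolerant matters because LLM planners often ignore the "return a flat array" instruction and wrap the list in an object anyway.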
backend/services/project_service.py ADDED
@@ -0,0 +1,35 @@
+ from services.supabase_service import supabase
+ from typing import List, Dict, Any
+ import logging
+
+ logger = logging.getLogger("uvicorn")
+
+ class ProjectService:
+     """
+     Handles the creation and management of projects and their constituent tasks.
+     """
+
+     @staticmethod
+     async def create_project(title: str, description: str, user_id: str) -> Dict[str, Any]:
+         res = supabase.table("projects").insert({
+             "title": title,
+             "description": description,
+             "user_id": user_id,
+             "status": "active"
+         }).execute()
+         return res.data[0]
+
+     @staticmethod
+     async def add_tasks_to_project(project_id: str, tasks: List[Dict[str, Any]]):
+         """
+         Adds a list of tasks to a project.
+         tasks: [{"title": "...", "description": "...", "assigned_agent_id": "..."}]
+         """
+         formatted_tasks = [
+             {**task, "project_id": project_id, "status": "todo"}
+             for task in tasks
+         ]
+         supabase.table("tasks").insert(formatted_tasks).execute()
+         logger.info(f"Added {len(tasks)} tasks to project {project_id}")
+
+ project_service = ProjectService()
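The stamping that `add_tasks_to_project` applies before the bulk insert is a pure transformation and can be isolated for clarity (the helper name is an illustration, not part of the repo):

```python
def format_tasks_for_insert(project_id, tasks):
    """Stamp each incoming task with its project and an initial 'todo' status,
    the same shape add_tasks_to_project builds before supabase.table("tasks").insert().
    Keys already present on a task (e.g. assigned_agent_id) are preserved."""
    return [{**task, "project_id": project_id, "status": "todo"} for task in tasks]
```

Note the `{**task, ...}` spread puts `project_id` and `status` last, so they overwrite any conflicting keys the caller passed in.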
backend/services/semantic_backprop.py ADDED
@@ -0,0 +1,104 @@
+ import re
+ import logging
+ from typing import List, Dict, Any
+ from services.supabase_service import supabase
+
+ logger = logging.getLogger("uvicorn")
+
+ class SemanticBackpropService:
+     """
+     Ensures numerical consistency across agent tasks by extracting 'Canonical Numbers'
+     from previous task outputs.
+     """
+
+     @staticmethod
+     async def get_project_context(project_id: str, current_task_id: str) -> str:
+         """
+         Fetches and extracts canonical figures from all completed sibling tasks.
+         """
+         try:
+             resp = supabase.table("tasks") \
+                 .select("title, output_data") \
+                 .eq("project_id", project_id) \
+                 .eq("status", "done") \
+                 .neq("id", current_task_id) \
+                 .execute()
+
+             if not resp.data:
+                 return ""
+
+             canonical_blocks = []
+             topic_blocks = []
+
+             for task in resp.data:
+                 output = task.get("output_data") or {}
+                 # Handle different output formats (raw string or dict with 'result')
+                 result_text = ""
+                 if isinstance(output, dict):
+                     result_text = output.get("result", "") or output.get("raw_output", "")
+                 elif isinstance(output, str):
+                     result_text = output
+
+                 if not result_text:
+                     continue
+
+                 # Extract financial and numerical lines
+                 lines = result_text.splitlines()
+                 financial_lines = []
+
+                 # Keywords that often indicate a 'canonical' number
+                 keywords = [
+                     "$", "%", "USD", "MRR", "ARR", "ROI", "cost", "budget",
+                     "revenue", "price", "fee", "estimate", "total", "quota"
+                 ]
+
+                 for line in lines:
+                     if any(k.lower() in line.lower() for k in keywords):
+                         if len(line.strip()) > 5:  # Ignore very short lines
+                             financial_lines.append(line.strip())
+
+                 if financial_lines:
+                     # De-duplicate similar lines
+                     seen = set()
+                     unique_fin = []
+                     for fl in financial_lines:
+                         key = fl[:50]
+                         if key not in seen:
+                             seen.add(key)
+                             unique_fin.append(fl)
+
+                     canonical_blocks.append(
+                         f"Source Task: **{task['title']}**\n" +
+                         "\n".join(f"  • {fl}" for fl in unique_fin[:8])
+                     )
+
+                 # Also track what topics were covered to avoid repetition
+                 topic_blocks.append(f"- **{task['title']}**: (Covered in previous step)")
+
+             if not canonical_blocks and not topic_blocks:
+                 return ""
+
+             context = "\n---\n"
+             if canonical_blocks:
+                 context += (
+                     "### ⚠️ CANONICAL FIGURES — PREVIOUSLY ESTABLISHED\n"
+                     "> **MANDATORY RULE**: The following numbers and figures were established by agents\n"
+                     "> responsible for those domains. You MUST use these exact values if you reference them.\n"
+                     "> DO NOT re-calculate or propose alternative values for these specific items.\n\n"
+                 )
+                 context += "\n\n".join(canonical_blocks) + "\n\n"
+
+             if topic_blocks:
+                 context += (
+                     "### 📋 PREVIOUSLY COVERED TOPICS\n"
+                     "> Do not repeat the analysis of these topics. Focus only on your specific task.\n"
+                 )
+                 context += "\n".join(topic_blocks) + "\n"
+
+             return context
+
+         except Exception as e:
+             logger.error(f"Semantic Backprop failed: {e}")
+             return ""
+
+ semantic_backprop = SemanticBackpropService()
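The core of the canonical-figure extraction (keyword filter, short-line cutoff, first-50-characters de-duplication, cap at 8 lines) is a pure text transformation. A minimal sketch, with a hypothetical function name and the same keyword list:

```python
KEYWORDS = ["$", "%", "USD", "MRR", "ARR", "ROI", "cost", "budget",
            "revenue", "price", "fee", "estimate", "total", "quota"]

def extract_financial_lines(text, keywords=KEYWORDS, max_lines=8):
    """Keep lines that mention a financial keyword, drop very short lines,
    and de-duplicate on the first 50 characters — the same filtering
    get_project_context applies to each sibling task's output."""
    seen, out = set(), []
    for line in text.splitlines():
        line = line.strip()
        if len(line) <= 5:
            continue
        if not any(k.lower() in line.lower() for k in keywords):
            continue
        key = line[:50]
        if key in seen:
            continue
        seen.add(key)
        out.append(line)
    return out[:max_lines]
```

The prefix-based de-duplication is deliberately fuzzy: two lines that restate the same figure with different trailing punctuation collapse into one canonical entry.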
backend/services/supabase_service.py ADDED
@@ -0,0 +1,13 @@
+ from supabase import create_client, Client
+ from services.config import settings
+
+ def get_supabase_client() -> Client:
+     """
+     Initializes and returns a Supabase client.
+     """
+     if not settings.SUPABASE_URL or not settings.SUPABASE_SERVICE_ROLE_KEY:
+         raise ValueError("SUPABASE_URL and SUPABASE_SERVICE_ROLE_KEY must be set in environment.")
+
+     return create_client(settings.SUPABASE_URL, settings.SUPABASE_SERVICE_ROLE_KEY)
+
+ supabase: Client = get_supabase_client()
backend/services/task_queue.py ADDED
@@ -0,0 +1,47 @@
+ import logging
+ from typing import Optional, List
+ from .supabase_service import supabase
+
+ logger = logging.getLogger(__name__)
+
+ class TaskQueueService:
+     @staticmethod
+     async def queue_task(task_id: str):
+         """
+         Marks a task as 'queued' in the database.
+         """
+         try:
+             result = supabase.table("tasks").update({"status": "queued"}).eq("id", task_id).execute()
+             return result
+         except Exception as e:
+             logger.error(f"Error queueing task {task_id}: {e}")
+             return None
+
+     @staticmethod
+     async def get_next_queued_task():
+         """
+         Fetches the next available task from the queue.
+         """
+         try:
+             # Fetch one task that is in 'queued' status, ordered by priority and created_at
+             result = supabase.table("tasks") \
+                 .select("*") \
+                 .eq("status", "queued") \
+                 .order("priority", desc=True) \
+                 .order("created_at") \
+                 .limit(1) \
+                 .execute()
+
+             if result.data:
+                 return result.data[0]
+             return None
+         except Exception as e:
+             logger.error(f"Error fetching next queued task: {e}")
+             return None
+
+     @staticmethod
+     async def mark_in_progress(task_id: str):
+         """
+         Marks a task as 'in_progress'.
+         """
+         return supabase.table("tasks").update({"status": "in_progress"}).eq("id", task_id).execute()
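The ordering that `get_next_queued_task` delegates to the database — highest `priority` first, oldest `created_at` breaking ties — can be expressed in plain Python, which is useful for unit-testing queue behavior without a Supabase connection. The function name is illustrative:

```python
def next_queued_task(tasks):
    """Pick the next task the way get_next_queued_task's query does:
    status == 'queued', highest priority first, oldest created_at on ties."""
    queued = [t for t in tasks if t.get("status") == "queued"]
    if not queued:
        return None
    # Negate priority so a single min() gives priority-desc, created_at-asc
    return min(queued, key=lambda t: (-t["priority"], t["created_at"]))
```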
backend/tools/browser.py ADDED
@@ -0,0 +1,35 @@
+ from playwright.async_api import async_playwright
+ import urllib.parse
+ import logging
+
+ logger = logging.getLogger("uvicorn")
+
+ class BrowserTool:
+     """
+     A tool that allows agents to browse the web and extract content.
+     """
+     async def search_and_extract(self, url: str) -> str:
+         """
+         Navigates to a URL and returns the text content.
+         """
+         logger.info(f"BrowserTool: Navigating to {url}")
+         async with async_playwright() as p:
+             browser = await p.chromium.launch(headless=True)
+             page = await browser.new_page()
+             try:
+                 await page.goto(url, wait_until="networkidle", timeout=30000)
+                 # Simple extraction: get all text from body
+                 content = await page.inner_text("body")
+                 # Truncate if too long for LLM context
+                 return content[:10000]
+             except Exception as e:
+                 logger.error(f"BrowserTool error: {str(e)}")
+                 return f"Error accessing {url}: {str(e)}"
+             finally:
+                 await browser.close()
+
+     async def google_search(self, query: str) -> str:
+         """
+         Performs a Google search and returns results.
+         """
+         # URL-encode the query so spaces and special characters survive
+         search_url = f"https://www.google.com/search?q={urllib.parse.quote_plus(query)}"
+         return await self.search_and_extract(search_url)
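Interpolating a raw query into a search URL breaks on spaces and reserved characters, so the query should be percent-encoded first. A minimal sketch of the encoding step (helper name is illustrative):

```python
import urllib.parse

def build_search_url(query):
    """Percent-encode the query before it is embedded in the search URL.
    quote_plus maps spaces to '+' and escapes reserved characters like '+'."""
    return "https://www.google.com/search?q=" + urllib.parse.quote_plus(query)
```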
backend/tools/decomposer.py ADDED
@@ -0,0 +1,20 @@
+ from services.project_service import project_service
+ from typing import List, Dict, Any
+ import logging
+
+ logger = logging.getLogger("uvicorn")
+
+ class DecompositionTool:
+     """
+     A tool that allows agents to break down complex goals into actionable tasks.
+     """
+     async def create_subtasks(self, project_id: str, tasks: List[Dict[str, Any]]) -> str:
+         """
+         Takes a list of task definitions and adds them to the database for the given project.
+         """
+         logger.info(f"DecompositionTool: Creating {len(tasks)} subtasks for project {project_id}")
+         try:
+             await project_service.add_tasks_to_project(project_id, tasks)
+             return f"Successfully created {len(tasks)} subtasks."
+         except Exception as e:
+             return f"Failed to create subtasks: {str(e)}"
backend/tools/file_generator.py ADDED
@@ -0,0 +1,61 @@
+ import os
+ import pandas as pd
+ from reportlab.lib.pagesizes import letter
+ from reportlab.pdfgen import canvas
+ from datetime import datetime
+ import logging
+
+ logger = logging.getLogger("uvicorn")
+
+ class FileGeneratorTool:
+     """
+     A tool that allows agents to generate PDF and Excel files.
+     """
+     def __init__(self):
+         self.output_dir = "outputs"
+         os.makedirs(self.output_dir, exist_ok=True)
+
+     def _generate_filename(self, extension: str) -> str:
+         timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
+         return os.path.join(self.output_dir, f"report_{timestamp}.{extension}")
+
+     async def generate_pdf(self, title: str, content: str) -> str:
+         """
+         Generates a PDF document with the provided title and content.
+         """
+         filename = self._generate_filename("pdf")
+         logger.info(f"FileGenerator: Generating PDF {filename}")
+
+         try:
+             c = canvas.Canvas(filename, pagesize=letter)
+             width, height = letter
+
+             # Title
+             c.setFont("Helvetica-Bold", 16)
+             c.drawString(72, height - 72, title)
+
+             # Content (very basic wrapping/split)
+             c.setFont("Helvetica", 12)
+             text_object = c.beginText(72, height - 100)
+             for line in content.split('\n'):
+                 text_object.textLine(line)
+             c.drawText(text_object)
+
+             c.save()
+             return f"PDF generated successfully: {filename}"
+         except Exception as e:
+             return f"Failed to generate PDF: {str(e)}"
+
+     async def generate_excel(self, data: list) -> str:
+         """
+         Generates an Excel file from a list of dictionaries.
+         """
+         filename = self._generate_filename("xlsx")
+         logger.info(f"FileGenerator: Generating Excel {filename}")
+
+         try:
+             df = pd.DataFrame(data)
+             df.to_excel(filename, index=False)
+             return f"Excel generated successfully: {filename}"
+         except Exception as e:
+             return f"Failed to generate Excel: {str(e)}"
backend/tools/registry.py ADDED
@@ -0,0 +1,250 @@
+ from .file_generator import FileGeneratorTool
+ from .decomposer import DecompositionTool
+ from .sre import SRETool
+ from .browser import BrowserTool
+ from .sandbox import CodeSandboxTool
+ from .visuals import VisualsTool
+ from typing import Any, Dict, List
+
+ class ToolRegistry:
+     def __init__(self):
+         self.browser = BrowserTool()
+         self.sandbox = CodeSandboxTool()
+         self.file_gen = FileGeneratorTool()
+         self.decomposer = DecompositionTool()
+         self.sre = SRETool()
+         self.visuals = VisualsTool()
+         self.tools = {
+             "web_search": {
+                 "func": self.browser.google_search,
+                 "description": "Searches the web for a given query and returns the results."
+             },
+             "extract_url": {
+                 "func": self.browser.search_and_extract,
+                 "description": "Extracts text content from a specific URL."
+             },
+             "execute_python": {
+                 "func": self.sandbox.execute_python,
+                 "description": "Executes Python code and returns the output."
+             },
+             "generate_pdf": {
+                 "func": self.file_gen.generate_pdf,
+                 "description": "Generates a PDF document."
+             },
+             "generate_excel": {
+                 "func": self.file_gen.generate_excel,
+                 "description": "Generates an Excel spreadsheet from structured data."
+             },
+             "create_subtasks": {
+                 "func": self.decomposer.create_subtasks,
+                 "description": "Breaks down a goal into a list of actionable tasks."
+             },
+             "generate_chart": {
+                 "func": self.visuals.generate_chart,
+                 "description": "Generates a chart image (bar, line, pie) from a JSON config."
+             },
+             "generate_illustration": {
+                 "func": self.visuals.generate_illustration,
+                 "description": "Generates an AI illustration or drawing based on a text prompt."
+             },
+             "get_system_health": {
+                 "func": self.sre.get_system_health,
+                 "description": "Returns system health metrics (CPU, Memory, Disk)."
+             },
+             "check_service_status": {
+                 "func": self.sre.check_service_status,
+                 "description": "Checks if a specific service or process is running."
+             },
+             "run_patch_command": {
+                 "func": self.sre.run_patch_command,
+                 "description": "Executes a safe system patch command (e.g., git pull, npm install)."
+             }
+         }
+
+     def get_tool_definitions(self) -> List[Dict[str, Any]]:
+         """
+         Returns OpenAI-style tool definitions.
+         """
+         return [
+             {
+                 "type": "function",
+                 "function": {
+                     "name": "web_search",
+                     "description": "Search the web for information",
+                     "parameters": {
+                         "type": "object",
+                         "properties": {
+                             "query": {"type": "string", "description": "The search query"}
+                         },
+                         "required": ["query"]
+                     }
+                 }
+             },
+             {
+                 "type": "function",
+                 "function": {
+                     "name": "extract_url",
+                     "description": "Extract text content from a URL",
+                     "parameters": {
+                         "type": "object",
+                         "properties": {
+                             "url": {"type": "string", "description": "The URL to extract from"}
+                         },
+                         "required": ["url"]
+                     }
+                 }
+             },
+             {
+                 "type": "function",
+                 "function": {
+                     "name": "execute_python",
+                     "description": "Execute Python code to perform calculations, data analysis, or logic verification.",
+                     "parameters": {
+                         "type": "object",
+                         "properties": {
+                             "code": {"type": "string", "description": "The Python code to execute"}
+                         },
+                         "required": ["code"]
+                     }
+                 }
+             },
+             {
+                 "type": "function",
+                 "function": {
+                     "name": "generate_pdf",
+                     "description": "Create a professional PDF report",
+                     "parameters": {
+                         "type": "object",
+                         "properties": {
+                             "title": {"type": "string", "description": "The title of the report"},
+                             "content": {"type": "string", "description": "The text content of the report"}
+                         },
+                         "required": ["title", "content"]
+                     }
+                 }
+             },
+             {
+                 "type": "function",
+                 "function": {
+                     "name": "generate_excel",
+                     "description": "Create an Excel spreadsheet from data",
+                     "parameters": {
+                         "type": "object",
+                         "properties": {
+                             "data": {
+                                 "type": "array",
+                                 "items": {"type": "object"},
+                                 "description": "List of rows as objects"
+                             }
+                         },
+                         "required": ["data"]
+                     }
+                 }
+             },
+             {
+                 "type": "function",
+                 "function": {
+                     "name": "create_subtasks",
+                     "description": "Break down a complex goal into smaller, actionable tasks for other agents.",
+                     "parameters": {
+                         "type": "object",
+                         "properties": {
+                             "project_id": {"type": "string", "description": "The current project UUID"},
+                             "tasks": {
+                                 "type": "array",
+                                 "items": {
+                                     "type": "object",
+                                     "properties": {
+                                         "title": {"type": "string", "description": "Clear title of the subtask"},
+                                         "description": {"type": "string", "description": "Detailed instructions for the next agent"},
+                                         "assigned_agent_id": {"type": "string", "description": "The UUID of the agent to handle this task"}
+                                     },
+                                     "required": ["title", "description", "assigned_agent_id"]
+                                 }
+                             }
+                         },
+                         "required": ["project_id", "tasks"]
+                     }
+                 }
+             },
+             {
+                 "type": "function",
+                 "function": {
+                     "name": "generate_chart",
+                     "description": "Generate a visual chart image (bar, line, pie, etc.) using QuickChart.io.",
+                     "parameters": {
+                         "type": "object",
+                         "properties": {
+                             "chart_type": {"type": "string", "enum": ["bar", "line", "pie", "doughnut"], "description": "Type of chart"},
+                             "chart_config": {"type": "string", "description": "The JSON configuration for QuickChart (e.g., {type: 'bar', data: {...}})"}
+                         },
+                         "required": ["chart_type", "chart_config"]
+                     }
+                 }
+             },
+             {
+                 "type": "function",
+                 "function": {
+                     "name": "generate_illustration",
+                     "description": "Generate an AI illustration or drawing based on a prompt using Pollinations.ai.",
+                     "parameters": {
+                         "type": "object",
+                         "properties": {
+                             "prompt": {"type": "string", "description": "Detailed description of the illustration to generate"}
+                         },
+                         "required": ["prompt"]
+                     }
+                 }
+             },
+             {
+                 "type": "function",
+                 "function": {
+                     "name": "get_system_health",
+                     "description": "Monitor server vital signs like CPU usage, memory availability, and disk space.",
+                     "parameters": {
+                         "type": "object",
+                         "properties": {}
+                     }
+                 }
+             },
+             {
+                 "type": "function",
+                 "function": {
+                     "name": "check_service_status",
+                     "description": "Verify if a critical service or process is currently active on the host.",
+                     "parameters": {
+                         "type": "object",
+                         "properties": {
+                             "service_name": {"type": "string", "description": "The name of the process or service to check"}
+                         },
+                         "required": ["service_name"]
+                     }
+                 }
+             },
+             {
+                 "type": "function",
+                 "function": {
+                     "name": "run_patch_command",
+                     "description": "Apply a system patch or update. Limited to safe commands like 'git pull' or 'npm install'.",
+                     "parameters": {
+                         "type": "object",
+                         "properties": {
+                             "command": {"type": "string", "description": "The restricted command to execute"}
+                         },
+                         "required": ["command"]
+                     }
+                 }
+             }
+         ]
+
+     async def call_tool(self, name: str, arguments: Dict[str, Any]) -> Any:
+         """
+         Executes a tool by name with provided arguments.
+         """
+         if name not in self.tools:
+             raise ValueError(f"Tool {name} not found")
+
+         func = self.tools[name]["func"]
+         return await func(**arguments)
+
+ tool_registry = ToolRegistry()
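The registry's dispatch pattern — a name-to-coroutine map, keyword arguments unpacked into the handler — is small enough to demonstrate end to end. This is a self-contained miniature with a hypothetical `echo` tool, not the repo's registry:

```python
import asyncio

class MiniRegistry:
    """Name-to-coroutine dispatch in the same shape as ToolRegistry.call_tool."""
    def __init__(self):
        self.tools = {"echo": {"func": self._echo, "description": "Echoes text back."}}

    async def _echo(self, text: str) -> str:
        return f"echo: {text}"

    async def call_tool(self, name, arguments):
        if name not in self.tools:
            raise ValueError(f"Tool {name} not found")
        # Unpack the LLM-provided arguments dict as keyword arguments
        return await self.tools[name]["func"](**arguments)

result = asyncio.run(MiniRegistry().call_tool("echo", {"text": "hi"}))
```

Because arguments come straight from a model's tool call, `**arguments` raises a `TypeError` if the model supplies a key the handler does not accept, which is a useful early failure.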
backend/tools/sandbox.py ADDED
@@ -0,0 +1,39 @@
+ import sys
+ import io
+ import contextlib
+ import logging
+
+ logger = logging.getLogger("uvicorn")
+
+ class CodeSandboxTool:
+     """
+     A tool that allows agents to execute Python code and see the output.
+     """
+     async def execute_python(self, code: str) -> str:
+         """
+         Executes the provided Python code and returns the stdout/stderr.
+         """
+         logger.info("CodeSandboxTool: Executing Python code")
+
+         # Capture stdout and stderr
+         stdout = io.StringIO()
+         stderr = io.StringIO()
+
+         try:
+             with contextlib.redirect_stdout(stdout), contextlib.redirect_stderr(stderr):
+                 # Using a fresh globals dictionary for each execution
+                 exec_globals = {}
+                 exec(code, exec_globals)
+
+             output = stdout.getvalue()
+             errors = stderr.getvalue()
+
+             if errors:
+                 return f"Output:\n{output}\nErrors:\n{errors}"
+             return output if output else "Execution successful (no output)."
+
+         except Exception as e:
+             return f"Execution failed: {str(e)}"
+         finally:
+             stdout.close()
+             stderr.close()
backend/tools/sre.py ADDED
@@ -0,0 +1,75 @@
+ import psutil
+ import os
+ import platform
+ from typing import Dict, Any
+ import logging
+
+ logger = logging.getLogger("uvicorn")
+
+ class SRETool:
+     """
+     A toolset for Site Reliability Engineering (SRE) agents to monitor and manage system health.
+     """
+
+     async def get_system_health(self) -> Dict[str, Any]:
+         """
+         Returns real-time system health metrics (CPU, RAM, Disk).
+         """
+         logger.info("SRETool: Gathering system health metrics")
+         return {
+             "cpu_percent": psutil.cpu_percent(interval=1),
+             "memory": {
+                 "total": psutil.virtual_memory().total,
+                 "available": psutil.virtual_memory().available,
+                 "percent": psutil.virtual_memory().percent
+             },
+             "disk": {
+                 "total": psutil.disk_usage('/').total,
+                 "used": psutil.disk_usage('/').used,
+                 "percent": psutil.disk_usage('/').percent
+             },
+             "os": platform.system(),
+             "uptime": self._get_uptime()
+         }
+
+     async def check_service_status(self, service_name: str) -> str:
+         """
+         Checks if a specific service/process is running.
+         """
+         logger.info(f"SRETool: Checking status of {service_name}")
+         for proc in psutil.process_iter(['name']):
+             name = proc.info.get('name')  # may be None for some system processes
+             if name and service_name.lower() in name.lower():
+                 return f"Service '{service_name}' is RUNNING."
+         return f"Service '{service_name}' is NOT running."
+
+     def _get_uptime(self) -> str:
+         # Simple uptime calculation
+         import time
+         boot_time = psutil.boot_time()
+         uptime_seconds = time.time() - boot_time
+         return f"{int(uptime_seconds // 3600)}h {int((uptime_seconds % 3600) // 60)}m"
+
+     async def run_patch_command(self, command: str) -> str:
+         """
+         Executes a restricted set of patching commands.
+         """
+         logger.warning(f"SRETool: Attempting to run patch command: {command}")
+
+         # Restricted whitelist for security
+         whitelist = ["npm install", "pip install", "git pull", "npm audit fix"]
+
+         is_safe = any(command.startswith(safe) for safe in whitelist)
+         if not is_safe:
+             return f"Command '{command}' is not in the safety whitelist. Patch rejected."
+
+         try:
+             import shlex
+             import subprocess
+             # Run without a shell: a prefix check alone would let "git pull && rm -rf /" through shell=True
+             result = subprocess.run(shlex.split(command), capture_output=True, text=True, timeout=60)
+             if result.returncode == 0:
+                 return f"Patch successful: {result.stdout}"
+             return f"Patch failed: {result.stderr}"
+         except Exception as e:
+             return f"Error executing patch: {str(e)}"
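Review note: the whitelist in `run_patch_command` is a pure prefix check, so it can be exercised in isolation. The sketch below (a hypothetical standalone helper, not part of the committed file) mirrors that check and shows why a prefix match by itself is not enough when the command reaches a shell — `git pull && rm -rf /` still starts with a whitelisted prefix:

```python
def is_whitelisted(command: str) -> bool:
    """Mirror of SRETool's prefix check: a command is considered safe
    only if it starts with one of the approved patch commands."""
    whitelist = ["npm install", "pip install", "git pull", "npm audit fix"]
    return any(command.startswith(safe) for safe in whitelist)

# A legitimate patch command passes:
assert is_whitelisted("git pull origin main")
# An arbitrary command is rejected:
assert not is_whitelisted("rm -rf /")
# But a chained command also passes the prefix check, which is why the
# command must be executed without shell=True (shlex.split + exec-style run):
assert is_whitelisted("git pull && rm -rf /")
```

This is why the executor should split the command into an argv list rather than hand the raw string to a shell.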
backend/tools/visuals.py ADDED
@@ -0,0 +1,48 @@
+ import urllib.parse
+ import json
+ import logging
+
+ logger = logging.getLogger("uvicorn")
+
+ class VisualsTool:
+     """
+     Provides visual generation capabilities like charts and illustrations.
+     """
+
+     async def generate_chart(self, chart_type: str, chart_config: str) -> str:
+         """
+         Generates a chart using QuickChart.io.
+         chart_type: 'bar', 'line', 'pie', 'doughnut'
+         chart_config: A JSON string containing the QuickChart configuration.
+         """
+         try:
+             # If chart_config is already a dict, convert to JSON
+             if isinstance(chart_config, dict):
+                 config_str = json.dumps(chart_config)
+             else:
+                 # Try to parse it to validate
+                 config_str = chart_config
+                 json.loads(config_str)
+
+             encoded_config = urllib.parse.quote(config_str)
+             url = f"https://quickchart.io/chart?c={encoded_config}"
+
+             logger.info(f"Generated chart URL: {url}")
+             return f"Chart generated successfully: {url}. You should include this URL in your markdown output as an image: ![Chart]({url})"
+         except Exception as e:
+             logger.error(f"Failed to generate chart: {e}")
+             return f"Error generating chart: {str(e)}. Please ensure your chart_config is a valid JSON string."
+
+     async def generate_illustration(self, prompt: str) -> str:
+         """
+         Generates an illustration using Pollinations.ai.
+         """
+         try:
+             encoded_prompt = urllib.parse.quote(prompt)
+             url = f"https://pollinations.ai/p/{encoded_prompt}?width=1024&height=1024&seed=42&model=flux"
+
+             logger.info(f"Generated illustration URL: {url}")
+             return f"Illustration generated successfully: {url}. You should include this URL in your markdown output as an image: ![Illustration]({url})"
+         except Exception as e:
+             logger.error(f"Failed to generate illustration: {e}")
+             return f"Error generating illustration: {str(e)}"
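Review note: the QuickChart URL construction in `generate_chart` is just JSON serialization followed by percent-encoding, which round-trips cleanly. A minimal sketch (the `build_chart_url` helper name is illustrative, not part of the file):

```python
import json
import urllib.parse

def build_chart_url(config: dict) -> str:
    """Percent-encode a Chart.js-style config for QuickChart's GET endpoint,
    the same way VisualsTool.generate_chart does."""
    config_str = json.dumps(config)
    return "https://quickchart.io/chart?c=" + urllib.parse.quote(config_str)

url = build_chart_url({"type": "bar", "data": {"labels": ["a", "b"], "datasets": [{"data": [1, 2]}]}})
# Decoding the query parameter recovers the original config:
decoded = json.loads(urllib.parse.unquote(url.split("c=", 1)[1]))
```

Because `urllib.parse.quote` encodes braces, quotes, and spaces, the resulting URL is safe to embed directly in markdown image syntax.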
backend/worker.py ADDED
@@ -0,0 +1,53 @@
+ import asyncio
+ import logging
+ import signal
+ from services.task_queue import TaskQueueService
+ from services.supabase_service import supabase
+ from services.agent_runner_service import AgentRunnerService
+
+ logging.basicConfig(level=logging.INFO)
+ logger = logging.getLogger("worker")
+
+ class AubmWorker:
+     def __init__(self):
+         self.running = True
+
+     async def start(self):
+         logger.info("Aubm Background Worker started.")
+         while self.running:
+             task = await TaskQueueService.get_next_queued_task()
+
+             if task:
+                 task_id = task['id']
+                 logger.info(f"Processing task: {task_id}")
+
+                 try:
+                     # Fetch agent data for this task
+                     agent_res = supabase.table("agents").select("*").eq("id", task["assigned_agent_id"]).single().execute()
+                     if agent_res.data:
+                         await AgentRunnerService.execute_agent_logic(task, agent_res.data)
+                         logger.info(f"Task {task_id} completed successfully.")
+                     else:
+                         logger.error(f"No agent found for task {task_id}")
+                 except Exception as e:
+                     logger.error(f"Failed to process task {task_id}: {e}")
+             else:
+                 # No tasks, sleep for a bit
+                 await asyncio.sleep(5)
+
+     def stop(self):
+         logger.info("Stopping worker...")
+         self.running = False
+
+ async def main():
+     worker = AubmWorker()
+
+     # Handle shutdown signals
+     loop = asyncio.get_running_loop()
+     for sig in (signal.SIGINT, signal.SIGTERM):
+         loop.add_signal_handler(sig, worker.stop)
+
+     await worker.start()
+
+ if __name__ == "__main__":
+     asyncio.run(main())
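Review note: `AubmWorker.start` is a poll-and-sleep loop — drain the queue when work exists, sleep briefly when it is empty, exit when told to stop. The shape of that loop can be sketched without Supabase, using an in-memory deque and a `None` sentinel in place of the `running` flag (all names here are hypothetical, for illustration only):

```python
import asyncio
from collections import deque

async def run_worker(queue: deque, results: list, idle_sleep: float = 0.01) -> None:
    """Poll-and-sleep loop with the same structure as AubmWorker.start:
    process queued tasks, sleep briefly when the queue is empty,
    and shut down on a None sentinel."""
    while True:
        if queue:
            task = queue.popleft()
            if task is None:  # sentinel: stop the worker
                return
            results.append(f"processed:{task}")
        else:
            await asyncio.sleep(idle_sleep)

queue = deque(["t1", "t2", None])
results = []
asyncio.run(run_worker(queue, results))
```

In the real worker the sentinel is replaced by the signal-handler-driven `running` flag, and the idle sleep is 5 seconds to avoid hammering the database.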
database/agent_ownership.sql ADDED
@@ -0,0 +1,42 @@
+ -- Agent ownership and marketplace deploy policies
+ -- Apply this migration to existing Supabase projects after schema.sql.
+
+ ALTER TABLE public.agents
+   ADD COLUMN IF NOT EXISTS user_id UUID REFERENCES auth.users ON DELETE CASCADE;
+
+ ALTER TABLE public.agents ENABLE ROW LEVEL SECURITY;
+
+ DO $$
+ BEGIN
+   IF NOT EXISTS (
+     SELECT 1 FROM pg_policies
+     WHERE schemaname = 'public'
+       AND tablename = 'agents'
+       AND policyname = 'Users can create own agents'
+   ) THEN
+     CREATE POLICY "Users can create own agents" ON public.agents
+       FOR INSERT TO authenticated WITH CHECK (auth.uid() = user_id);
+   END IF;
+
+   IF NOT EXISTS (
+     SELECT 1 FROM pg_policies
+     WHERE schemaname = 'public'
+       AND tablename = 'agents'
+       AND policyname = 'Users can update own agents'
+   ) THEN
+     CREATE POLICY "Users can update own agents" ON public.agents
+       FOR UPDATE TO authenticated USING (auth.uid() = user_id) WITH CHECK (auth.uid() = user_id);
+   END IF;
+
+   IF NOT EXISTS (
+     SELECT 1 FROM pg_policies
+     WHERE schemaname = 'public'
+       AND tablename = 'agents'
+       AND policyname = 'Users can delete own agents'
+   ) THEN
+     CREATE POLICY "Users can delete own agents" ON public.agents
+       FOR DELETE TO authenticated USING (auth.uid() = user_id);
+   END IF;
+ END $$;
+
+ NOTIFY pgrst, 'reload schema';
database/default_agents.sql ADDED
@@ -0,0 +1,36 @@
+ -- Global default agents
+ -- These agents are readable by authenticated users and usable by the backend
+ -- orchestrator as fallback agents. They are not owned by a specific user.
+
+ INSERT INTO public.agents (name, role, api_provider, model, system_prompt)
+ SELECT
+   'Planner',
+   'Project Planner',
+   'openai',
+   'gpt-4o',
+   'You decompose goals into clear, ordered implementation tasks.'
+ WHERE NOT EXISTS (
+   SELECT 1 FROM public.agents WHERE user_id IS NULL AND name = 'Planner'
+ );
+
+ INSERT INTO public.agents (name, role, api_provider, model, system_prompt)
+ SELECT
+   'Builder',
+   'Implementation Agent',
+   'openai',
+   'gpt-4o',
+   'You implement practical, production-oriented solutions with concise output.'
+ WHERE NOT EXISTS (
+   SELECT 1 FROM public.agents WHERE user_id IS NULL AND name = 'Builder'
+ );
+
+ INSERT INTO public.agents (name, role, api_provider, model, system_prompt)
+ SELECT
+   'Reviewer',
+   'Quality Reviewer',
+   'openai',
+   'gpt-4o',
+   'You review outputs for correctness, security, completeness, and missing tests.'
+ WHERE NOT EXISTS (
+   SELECT 1 FROM public.agents WHERE user_id IS NULL AND name = 'Reviewer'
+ );
database/enterprise_security.sql ADDED
@@ -0,0 +1,54 @@
+ -- Enterprise Teams Table
+ CREATE TABLE IF NOT EXISTS teams (
+   id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
+   name TEXT NOT NULL,
+   created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP
+ );
+
+ -- Team Members Table
+ CREATE TABLE IF NOT EXISTS team_members (
+   id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
+   team_id UUID REFERENCES teams(id) ON DELETE CASCADE,
+   user_id UUID REFERENCES auth.users(id) ON DELETE CASCADE,
+   role TEXT CHECK (role IN ('admin', 'editor', 'viewer')) DEFAULT 'viewer',
+   UNIQUE(team_id, user_id)
+ );
+
+ -- Add team_id to Projects
+ ALTER TABLE projects ADD COLUMN IF NOT EXISTS team_id UUID REFERENCES teams(id);
+
+ -- Advanced RLS for Projects
+ ALTER TABLE projects ENABLE ROW LEVEL SECURITY;
+
+ -- Policy: Users can view projects of their teams
+ CREATE POLICY "Team members can view team projects" ON projects
+   FOR SELECT USING (
+     EXISTS (
+       SELECT 1 FROM team_members
+       WHERE team_members.team_id = projects.team_id
+       AND team_members.user_id = auth.uid()
+     )
+   );
+
+ -- Policy: Only admins and editors can modify projects
+ CREATE POLICY "Team admins and editors can modify projects" ON projects
+   FOR ALL USING (
+     EXISTS (
+       SELECT 1 FROM team_members
+       WHERE team_members.team_id = projects.team_id
+       AND team_members.user_id = auth.uid()
+       AND team_members.role IN ('admin', 'editor')
+     )
+   );
+
+ -- Audit Logs for RLS (audit_logs.task_id links to tasks, so join through tasks to reach the project's team)
+ CREATE POLICY "Team members can view audit logs" ON audit_logs
+   FOR SELECT USING (
+     EXISTS (
+       SELECT 1 FROM tasks
+       JOIN projects ON projects.id = tasks.project_id
+       JOIN team_members ON team_members.team_id = projects.team_id
+       WHERE tasks.id = audit_logs.task_id
+       AND team_members.user_id = auth.uid()
+     )
+   );
database/marketplace.sql ADDED
@@ -0,0 +1,31 @@
+ -- Agent Marketplace Table
+ CREATE TABLE IF NOT EXISTS agent_templates (
+   id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
+   name TEXT NOT NULL,
+   role TEXT NOT NULL,
+   description TEXT,
+   model TEXT NOT NULL,
+   api_provider TEXT NOT NULL,
+   system_prompt TEXT,
+   category TEXT, -- e.g., 'Marketing', 'Development', 'Legal'
+   author_id UUID REFERENCES auth.users(id),
+   is_featured BOOLEAN DEFAULT false,
+   usage_count INT DEFAULT 0,
+   created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP
+ );
+
+ -- RLS for Marketplace (Public View)
+ ALTER TABLE agent_templates ENABLE ROW LEVEL SECURITY;
+
+ CREATE POLICY "Anyone can view templates" ON agent_templates
+   FOR SELECT USING (true);
+
+ CREATE POLICY "Users can create their own templates" ON agent_templates
+   FOR INSERT WITH CHECK (auth.uid() = author_id);
+
+ -- Seed some marketplace templates
+ INSERT INTO agent_templates (name, role, description, model, api_provider, category, system_prompt)
+ VALUES
+   ('Growth Hacker', 'Marketing Expert', 'Optimizes funnels and generates viral content ideas.', 'gpt-4o', 'openai', 'Marketing', 'You are a Growth Hacker focused on low-cost, high-impact strategies.'),
+   ('Code Architect', 'Senior Developer', 'Designs robust software architectures and reviews code.', 'gpt-4o', 'openai', 'Development', 'You are a Code Architect. Focus on scalability, security, and clean code.'),
+   ('Legal Analyst', 'Legal Advisor', 'Analyzes contracts and identifies legal risks.', 'gpt-4o', 'openai', 'Legal', 'You are a Legal Analyst. Review documents with high precision and caution.');
database/phase3_updates.sql ADDED
@@ -0,0 +1,30 @@
+ -- Audit Logs Table
+ CREATE TABLE IF NOT EXISTS audit_logs (
+   id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
+   user_id UUID REFERENCES auth.users(id),
+   agent_id UUID REFERENCES agents(id),
+   task_id UUID REFERENCES tasks(id),
+   action TEXT NOT NULL,
+   metadata JSONB,
+   created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP
+ );
+
+ -- Feedback Table for Fine-tuning
+ CREATE TABLE IF NOT EXISTS task_feedback (
+   id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
+   task_id UUID REFERENCES tasks(id) UNIQUE,
+   user_id UUID REFERENCES auth.users(id),
+   rating INT CHECK (rating IN (-1, 1)), -- -1 for dislike, 1 for like
+   comment TEXT,
+   created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP
+ );
+
+ -- Add RLS to new tables
+ ALTER TABLE audit_logs ENABLE ROW LEVEL SECURITY;
+ ALTER TABLE task_feedback ENABLE ROW LEVEL SECURITY;
+
+ CREATE POLICY "Users can view their own audit logs" ON audit_logs
+   FOR SELECT USING (auth.uid() = user_id);
+
+ CREATE POLICY "Users can manage their own feedback" ON task_feedback
+   FOR ALL USING (auth.uid() = user_id);
database/schema.sql ADDED
@@ -0,0 +1,141 @@
+ -- Aubm Database Schema
+ -- Designed for Supabase (PostgreSQL)
+
+ -- 1. Profiles (User Extensions)
+ CREATE TABLE IF NOT EXISTS public.profiles (
+   id UUID PRIMARY KEY REFERENCES auth.users ON DELETE CASCADE,
+   role TEXT CHECK (role IN ('user', 'manager', 'admin')) DEFAULT 'user',
+   full_name TEXT,
+   avatar_url TEXT,
+   created_at TIMESTAMPTZ DEFAULT NOW(),
+   updated_at TIMESTAMPTZ DEFAULT NOW()
+ );
+
+ -- 2. Projects
+ CREATE TABLE IF NOT EXISTS public.projects (
+   id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
+   name TEXT NOT NULL,
+   description TEXT,
+   context TEXT,
+   owner_id UUID REFERENCES auth.users ON DELETE CASCADE,
+   status TEXT CHECK (status IN ('active', 'archived', 'completed')) DEFAULT 'active',
+   is_public BOOLEAN DEFAULT FALSE,
+   created_at TIMESTAMPTZ DEFAULT NOW(),
+   updated_at TIMESTAMPTZ DEFAULT NOW()
+ );
+
+ -- 3. Agents (AI Identities)
+ CREATE TABLE IF NOT EXISTS public.agents (
+   id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
+   user_id UUID REFERENCES auth.users ON DELETE CASCADE,
+   name TEXT NOT NULL,
+   role TEXT,
+   api_provider TEXT NOT NULL,
+   model TEXT NOT NULL,
+   system_prompt TEXT,
+   created_at TIMESTAMPTZ DEFAULT NOW(),
+   updated_at TIMESTAMPTZ DEFAULT NOW()
+ );
+
+ -- 4. Tasks (Units of work)
+ CREATE TABLE IF NOT EXISTS public.tasks (
+   id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
+   project_id UUID REFERENCES public.projects ON DELETE CASCADE,
+   assigned_agent_id UUID REFERENCES public.agents ON DELETE SET NULL,
+   title TEXT NOT NULL,
+   description TEXT,
+   status TEXT CHECK (status IN ('todo', 'in_progress', 'awaiting_approval', 'done', 'failed', 'cancelled')) DEFAULT 'todo',
+   priority INTEGER DEFAULT 0,
+   is_critical BOOLEAN DEFAULT FALSE,
+   output_data JSONB,
+   created_at TIMESTAMPTZ DEFAULT NOW(),
+   updated_at TIMESTAMPTZ DEFAULT NOW()
+ );
+
+ -- 5. Task Runs (Execution History)
+ CREATE TABLE IF NOT EXISTS public.task_runs (
+   id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
+   task_id UUID REFERENCES public.tasks ON DELETE CASCADE,
+   agent_id UUID REFERENCES public.agents ON DELETE SET NULL,
+   status TEXT CHECK (status IN ('queued', 'running', 'completed', 'failed', 'cancelled')) DEFAULT 'queued',
+   error_message TEXT,
+   created_at TIMESTAMPTZ DEFAULT NOW(),
+   finished_at TIMESTAMPTZ
+ );
+
+ -- 6. Agent Logs (Execution Traces)
+ CREATE TABLE IF NOT EXISTS public.agent_logs (
+   id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
+   task_id UUID REFERENCES public.tasks ON DELETE CASCADE,
+   run_id UUID REFERENCES public.task_runs ON DELETE CASCADE,
+   action TEXT,
+   content TEXT,
+   metadata JSONB,
+   created_at TIMESTAMPTZ DEFAULT NOW()
+ );
+
+ -- 7. App Config (Global Settings)
+ CREATE TABLE IF NOT EXISTS public.app_config (
+   key TEXT PRIMARY KEY,
+   value JSONB,
+   updated_at TIMESTAMPTZ DEFAULT NOW()
+ );
+
+ -- RLS (Row Level Security) - Initial setup
+ ALTER TABLE public.profiles ENABLE ROW LEVEL SECURITY;
+ ALTER TABLE public.projects ENABLE ROW LEVEL SECURITY;
+ ALTER TABLE public.agents ENABLE ROW LEVEL SECURITY;
+ ALTER TABLE public.tasks ENABLE ROW LEVEL SECURITY;
+ ALTER TABLE public.task_runs ENABLE ROW LEVEL SECURITY;
+ ALTER TABLE public.agent_logs ENABLE ROW LEVEL SECURITY;
+ ALTER TABLE public.app_config ENABLE ROW LEVEL SECURITY;
+
+ -- Basic Policies (To be refined)
+ -- Projects: Owners can do anything, others can read if public
+ CREATE POLICY "Projects visibility" ON public.projects
+   FOR SELECT USING (auth.uid() = owner_id OR is_public = true);
+
+ CREATE POLICY "Projects ownership" ON public.projects
+   FOR ALL USING (auth.uid() = owner_id);
+
+ -- Tasks: Protected by project ownership
+ CREATE POLICY "Tasks visibility" ON public.tasks
+   FOR SELECT USING (EXISTS (
+     SELECT 1 FROM public.projects
+     WHERE projects.id = tasks.project_id AND (projects.owner_id = auth.uid() OR projects.is_public = true)
+   ));
+
+ CREATE POLICY "Project owners can create tasks" ON public.tasks
+   FOR INSERT TO authenticated WITH CHECK (EXISTS (
+     SELECT 1 FROM public.projects
+     WHERE projects.id = tasks.project_id AND projects.owner_id = auth.uid()
+   ));
+
+ CREATE POLICY "Project owners can update tasks" ON public.tasks
+   FOR UPDATE TO authenticated USING (EXISTS (
+     SELECT 1 FROM public.projects
+     WHERE projects.id = tasks.project_id AND projects.owner_id = auth.uid()
+   )) WITH CHECK (EXISTS (
+     SELECT 1 FROM public.projects
+     WHERE projects.id = tasks.project_id AND projects.owner_id = auth.uid()
+   ));
+
+ CREATE POLICY "Project owners can delete tasks" ON public.tasks
+   FOR DELETE TO authenticated USING (EXISTS (
+     SELECT 1 FROM public.projects
+     WHERE projects.id = tasks.project_id AND projects.owner_id = auth.uid()
+   ));
+
+ -- Agents: Marketplace templates are readable by all authenticated users.
+ -- Deployed agents are owned by the user who deployed them.
+ CREATE POLICY "Agents readable" ON public.agents
+   FOR SELECT TO authenticated USING (true);
+
+ CREATE POLICY "Users can create own agents" ON public.agents
+   FOR INSERT TO authenticated WITH CHECK (auth.uid() = user_id);
+
+ CREATE POLICY "Users can update own agents" ON public.agents
+   FOR UPDATE TO authenticated USING (auth.uid() = user_id) WITH CHECK (auth.uid() = user_id);
+
+ CREATE POLICY "Users can delete own agents" ON public.agents
+   FOR DELETE TO authenticated USING (auth.uid() = user_id);
database/seed.sql ADDED
@@ -0,0 +1,15 @@
+ -- Seed Data for Aubm
+
+ -- 1. Default Agents
+ INSERT INTO public.agents (name, role, api_provider, model, system_prompt)
+ VALUES
+   ('GPT-4o', 'General Intelligence', 'openai', 'gpt-4o', 'You are a highly capable AI assistant.'),
+   ('AMD-4o', 'Performance Specialist', 'amd', 'gpt-4o', 'You are a high-performance agent running on AMD infrastructure.'),
+   ('Llama-3-70B', 'Fast Logic', 'groq', 'llama3-70b-8192', 'You are a fast and efficient reasoning agent.');
+
+ -- 2. Default App Config
+ INSERT INTO public.app_config (key, value)
+ VALUES
+   ('output_language', '"en"'),
+   ('max_parallel_tasks', '5'),
+   ('enable_human_loop', 'true');
database/task_owner_policies.sql ADDED
@@ -0,0 +1,63 @@
+ -- Task ownership policies for project owners
+ -- Apply this migration to existing Supabase projects after schema.sql.
+
+ ALTER TABLE public.tasks ENABLE ROW LEVEL SECURITY;
+
+ DO $$
+ BEGIN
+   IF NOT EXISTS (
+     SELECT 1 FROM pg_policies
+     WHERE schemaname = 'public'
+       AND tablename = 'tasks'
+       AND policyname = 'Project owners can create tasks'
+   ) THEN
+     CREATE POLICY "Project owners can create tasks" ON public.tasks
+       FOR INSERT TO authenticated WITH CHECK (
+         EXISTS (
+           SELECT 1 FROM public.projects
+           WHERE projects.id = tasks.project_id
+             AND projects.owner_id = auth.uid()
+         )
+       );
+   END IF;
+
+   IF NOT EXISTS (
+     SELECT 1 FROM pg_policies
+     WHERE schemaname = 'public'
+       AND tablename = 'tasks'
+       AND policyname = 'Project owners can update tasks'
+   ) THEN
+     CREATE POLICY "Project owners can update tasks" ON public.tasks
+       FOR UPDATE TO authenticated USING (
+         EXISTS (
+           SELECT 1 FROM public.projects
+           WHERE projects.id = tasks.project_id
+             AND projects.owner_id = auth.uid()
+         )
+       ) WITH CHECK (
+         EXISTS (
+           SELECT 1 FROM public.projects
+           WHERE projects.id = tasks.project_id
+             AND projects.owner_id = auth.uid()
+         )
+       );
+   END IF;
+
+   IF NOT EXISTS (
+     SELECT 1 FROM pg_policies
+     WHERE schemaname = 'public'
+       AND tablename = 'tasks'
+       AND policyname = 'Project owners can delete tasks'
+   ) THEN
+     CREATE POLICY "Project owners can delete tasks" ON public.tasks
+       FOR DELETE TO authenticated USING (
+         EXISTS (
+           SELECT 1 FROM public.projects
+           WHERE projects.id = tasks.project_id
+             AND projects.owner_id = auth.uid()
+         )
+       );
+   END IF;
+ END $$;
+
+ NOTIFY pgrst, 'reload schema';
frontend/.env.example ADDED
@@ -0,0 +1,6 @@
+ VITE_API_URL=http://localhost:8000
+ VITE_SUPABASE_URL=https://your-project-id.supabase.co
+ VITE_SUPABASE_ANON_KEY=your-anon-key-here
+
+ # Optional: Sentry
+ VITE_SENTRY_DSN=your-sentry-dsn
frontend/.gitignore ADDED
@@ -0,0 +1,44 @@
+ # Logs
+ logs
+ *.log
+ npm-debug.log*
+ yarn-debug.log*
+ yarn-error.log*
+ pnpm-debug.log*
+ lerna-debug.log*
+
+ node_modules
+ dist
+ dist-ssr
+ *.local
+
+ # Editor directories and files
+ .vscode/*
+ !.vscode/extensions.json
+ .idea
+ .DS_Store
+ *.suo
+ *.ntvs*
+ *.njsproj
+ *.sln
+ *.sw?
+ # Python virtual env
+ venv/
+ backend/venv/
+ **/venv/
+
+ # Python cache
+ __pycache__/
+ *.pyc
+ *.pyd
+
+ # Env files
+ .env
+ .env.*
+ **/.env
+ **/.env.*
+
+ # Allow example files without secrets
+ !.env.example
+ !**/.env.example
+
frontend/Dockerfile ADDED
@@ -0,0 +1,20 @@
+ # Build stage
+ FROM node:20-slim AS build-stage
+
+ WORKDIR /app
+
+ COPY package*.json ./
+ RUN npm install
+
+ COPY . .
+ RUN npm run build
+
+ # Production stage
+ FROM nginx:stable-alpine AS production-stage
+
+ COPY --from=build-stage /app/dist /usr/share/nginx/html
+ COPY --from=build-stage /app/nginx.conf /etc/nginx/conf.d/default.conf
+
+ EXPOSE 80
+
+ CMD ["nginx", "-g", "daemon off;"]
frontend/README.md ADDED
@@ -0,0 +1,73 @@
+ # React + TypeScript + Vite
+
+ This template provides a minimal setup to get React working in Vite with HMR and some ESLint rules.
+
+ Currently, two official plugins are available:
+
+ - [@vitejs/plugin-react](https://github.com/vitejs/vite-plugin-react/blob/main/packages/plugin-react) uses [Oxc](https://oxc.rs)
+ - [@vitejs/plugin-react-swc](https://github.com/vitejs/vite-plugin-react/blob/main/packages/plugin-react-swc) uses [SWC](https://swc.rs/)
+
+ ## React Compiler
+
+ The React Compiler is not enabled on this template because of its impact on dev & build performance. To add it, see [this documentation](https://react.dev/learn/react-compiler/installation).
+
+ ## Expanding the ESLint configuration
+
+ If you are developing a production application, we recommend updating the configuration to enable type-aware lint rules:
+
+ ```js
+ export default defineConfig([
+   globalIgnores(['dist']),
+   {
+     files: ['**/*.{ts,tsx}'],
+     extends: [
+       // Other configs...
+
+       // Remove tseslint.configs.recommended and replace with this
+       tseslint.configs.recommendedTypeChecked,
+       // Alternatively, use this for stricter rules
+       tseslint.configs.strictTypeChecked,
+       // Optionally, add this for stylistic rules
+       tseslint.configs.stylisticTypeChecked,
+
+       // Other configs...
+     ],
+     languageOptions: {
+       parserOptions: {
+         project: ['./tsconfig.node.json', './tsconfig.app.json'],
+         tsconfigRootDir: import.meta.dirname,
+       },
+       // other options...
+     },
+   },
+ ])
+ ```
+
+ You can also install [eslint-plugin-react-x](https://github.com/Rel1cx/eslint-react/tree/main/packages/plugins/eslint-plugin-react-x) and [eslint-plugin-react-dom](https://github.com/Rel1cx/eslint-react/tree/main/packages/plugins/eslint-plugin-react-dom) for React-specific lint rules:
+
+ ```js
+ // eslint.config.js
+ import reactX from 'eslint-plugin-react-x'
+ import reactDom from 'eslint-plugin-react-dom'
+
+ export default defineConfig([
+   globalIgnores(['dist']),
+   {
+     files: ['**/*.{ts,tsx}'],
+     extends: [
+       // Other configs...
+       // Enable lint rules for React
+       reactX.configs['recommended-typescript'],
+       // Enable lint rules for React DOM
+       reactDom.configs.recommended,
+     ],
+     languageOptions: {
+       parserOptions: {
+         project: ['./tsconfig.node.json', './tsconfig.app.json'],
+         tsconfigRootDir: import.meta.dirname,
+       },
+       // other options...
+     },
+   },
+ ])
+ ```