feat: MiroOrg v2 full rebuild — conditional graph, new model layer, finance/mirofish nodes, research tools, sentinel layer, frontend overhaul
## Backend
- _model.py: OpenRouter free-model ladder (Nemotron → Llama 3.3 → DeepSeek R1 → openrouter/free) with Ollama fallback; Gemini removed entirely
- graph.py: conditional LangGraph topology — switchboard forks to mirofish/finance/research, verifier→planner feedback loop (max 2 replans)
- agents/finance_node.py: Alpha Vantage integration (GLOBAL_QUOTE, OVERVIEW, NEWS_SENTIMENT, TOP_GAINERS_LOSERS, REAL_GDP, CPI, INFLATION)
- agents/mirofish_node.py: MiroFish simulation engine client
- agents/api_discovery.py: dynamic API registry for runtime tool expansion
- research.py: Tavily + NewsAPI + knowledge store + API discovery tool stack
- All agents: call_model(messages) + safe_parse(), structured error dicts, never None
- schemas.py: AgentState TypedDict, RunResponse with simulation/finance fields, no chart_data
- memory.py: KnowledgeStore with keyword search over data/knowledge/
- main.py: /debug/state/{case_id} endpoint, structured agent error logging
- routers/finance.py: /finance/ticker, /finance/news/analyze, /finance/headlines
- routers/sentinel.py: full sentinel layer API
- routers/learning.py: learning layer API
- services/sentinel/: watcher, diagnostician, patcher, capability_tracker, sentinel_engine, scheduler
- services/learning/: knowledge_ingestor, knowledge_store, learning_engine, prompt_optimizer, skill_distiller, trust_manager, freshness_manager, scheduler
- prompts/: switchboard, research, planner, verifier, synthesizer, finance, simulation all rewritten
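The conditional topology described above can be sketched as plain routing predicates. This is a hypothetical illustration (function and key names are assumptions, not the actual graph.py code); with LangGraph, functions like these would be passed to `add_conditional_edges`.

```python
# Hypothetical sketch of the routing logic: switchboard fork + verifier feedback loop.
# State keys ("route", "verified", "replan_count") are illustrative assumptions.

MAX_REPLANS = 2  # the verifier may send a case back to the planner at most twice


def route_from_switchboard(state: dict) -> str:
    """Pick the branch the switchboard forks to, based on the routed domain."""
    domain = state.get("route", {}).get("domain", "general")
    if domain == "simulation":
        return "mirofish"
    if domain == "finance":
        return "finance"
    return "research"


def route_from_verifier(state: dict) -> str:
    """Failed verification re-enters the planner, capped at MAX_REPLANS."""
    if state.get("verified", False):
        return "synthesizer"
    if state.get("replan_count", 0) >= MAX_REPLANS:
        return "synthesizer"  # replan budget exhausted: synthesize best-effort answer
    return "planner"
```

The replan cap guarantees the verifier→planner loop terminates even when verification keeps failing.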
## Frontend
- JANUS interface: art piece intro, Command/Intel Stream/Markets tabs
- Command tab: full 5-agent pipeline with confidence rings, typewriter synthesis
- Intel Stream: live headlines, search, Deep Research button fires full pipeline on any article
- Markets tab: ticker search (Indian NSE/BSE + global), TradingView chart, AI signal, Deep Research
- Sidebar: top nav bar replacing restrictive side rail
- Removed: keyword-based scam/rumor/trust scores (too unreliable); fixed the TradingView logo rendering issues
## Security
- .env never committed (gitignored)
- .env.example uses placeholder values only
- backend/backend/ stray directory excluded
- Runtime data (memory/*.json, sentinel/*.json, knowledge/*.json) gitignored
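The ignore rules above amount to a small `.gitignore` fragment. This is a sketch assembled from the bullets (exact paths are assumptions, not the repo's actual file):

```gitignore
# secrets: never committed, .env.example carries placeholders instead
backend/.env

# stray duplicate tree
backend/backend/

# runtime data, regenerated at startup
backend/data/memory/*.json
backend/data/sentinel/*.json
backend/data/knowledge/*.json
```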
- .kiro/specs/ai-financial-intelligence-system/tasks.md +3 -3
- backend/.env.example +28 -49
- backend/app/agents/_model.py +84 -162
- backend/app/agents/api_discovery.py +49 -0
- backend/app/agents/finance_node.py +118 -0
- backend/app/agents/mirofish_node.py +55 -0
- backend/app/agents/planner.py +57 -57
- backend/app/agents/research.py +142 -83
- backend/app/agents/switchboard.py +47 -74
- backend/app/agents/synthesizer.py +58 -88
- backend/app/agents/verifier.py +49 -89
- backend/app/config.py +9 -0
- backend/app/graph.py +136 -129
- backend/app/main.py +62 -32
- backend/app/memory.py +37 -1
- backend/app/prompts/finance.txt +10 -0
- backend/app/prompts/planner.txt +29 -90
- backend/app/prompts/research.txt +37 -87
- backend/app/prompts/simulation.txt +12 -0
- backend/app/prompts/switchboard.txt +24 -0
- backend/app/prompts/synthesizer.txt +30 -103
- backend/app/prompts/verifier.txt +28 -111
- backend/app/schemas.py +23 -6
- backend/requirements.txt +9 -8
- frontend/src/app/page.tsx +2 -23
.kiro/specs/ai-financial-intelligence-system/tasks.md

```diff
@@ -353,7 +353,7 @@ The implementation follows 9 phases, each building on the previous while maintai
 ### Phase 7: Testing and Documentation

-- [
+- [ ] 25. Write unit tests for core functionality
 - [ ]* 25.1 Write provider abstraction tests
   - Test OpenRouter, Ollama, OpenAI provider calls
   - Test provider fallback behavior
@@ -388,7 +388,7 @@ The implementation follows 9 phases, each building on the previous while maintai
   - Test error handling for disabled MiroFish
   - _Requirements: 8.1, 8.11, 8.12_

-- [
+- [ ] 26. Write property-based tests
 - [ ]* 26.1 Write Property 1: Configuration Environment Isolation
   - **Property 1: Configuration Environment Isolation**
   - **Validates: Requirements 1.8, 6.7**
@@ -479,7 +479,7 @@ The implementation follows 9 phases, each building on the previous while maintai
   - Test that new domain packs don't require agent changes
   - Create mock domain pack and verify integration

-- [
+- [ ] 27. Write integration tests
 - [ ]* 27.1 Write end-to-end case execution test
   - Test complete workflow from user input to final answer
   - Verify all agents execute correctly
```
backend/.env.example

```diff
@@ -1,76 +1,55 @@
 # ========================================
-# MiroOrg
+# MiroOrg v2 — Multi-Agent Intelligence Platform
 # Environment Configuration
 # ========================================

 # ---------- Application Version ----------
-APP_VERSION=0.
+APP_VERSION=2.0.0

-# ---------- Primary
-# PRIMARY_PROVIDER: The main LLM provider to use (openrouter, ollama, or openai)
-# FALLBACK_PROVIDER: The backup provider if primary fails (openrouter, ollama, or openai)
-PRIMARY_PROVIDER=openrouter
-FALLBACK_PROVIDER=ollama
-
-# ---------- OpenRouter ----------
+# ---------- OpenRouter (Primary — Free Models) ----------
 # Get your API key from: https://openrouter.ai/keys
-OPENROUTER_CHAT_MODEL=openrouter/free
-OPENROUTER_REASONER_MODEL=openrouter/free
-OPENROUTER_SITE_URL=http://localhost:3000
-OPENROUTER_APP_NAME=MiroOrg Basic
+# Uses free model ladder: Nemotron → Llama 3.3 → DeepSeek R1 → openrouter/free
+OPENROUTER_API_KEY=your_openrouter_key_here

-# ---------- Ollama ----------
-# Ollama provides local LLM inference
+# ---------- Ollama (Fallback — Local) ----------
+# Ollama provides local LLM inference via OpenAI-compatible endpoint
 # Install from: https://ollama.ai
-OLLAMA_CHAT_MODEL=qwen2.5:3b-instruct
-OLLAMA_REASONER_MODEL=qwen2.5:3b-instruct
+OLLAMA_BASE_URL=http://localhost:11434
+OLLAMA_MODEL=llama3.2

-# ---------- OpenAI ----------
-# OpenAI provides GPT models
-# Get your API key from: https://platform.openai.com/api-keys
-OPENAI_API_KEY=
-OPENAI_BASE_URL=https://api.openai.com/v1
-OPENAI_CHAT_MODEL=gpt-4o-mini
-OPENAI_REASONER_MODEL=gpt-4o

-# ---------- External
+# ---------- External Research APIs ----------
 # Tavily: AI-powered web search API - https://tavily.com
+TAVILY_API_KEY=your_tavily_key_here
+
 # NewsAPI: News aggregation API - https://newsapi.org
+NEWS_API_KEY=your_newsapi_key_here
+
 # Alpha Vantage: Financial data API - https://www.alphavantage.co
+ALPHA_VANTAGE_API_KEY=your_alpha_vantage_key_here

-# ----------
-#
-MIROFISH_ENABLED=true
-MIROFISH_API_BASE=http://127.0.0.1:5001
-MIROFISH_TIMEOUT_SECONDS=120
-MIROFISH_HEALTH_PATH=/health
-MIROFISH_RUN_PATH=/simulation/run
-MIROFISH_STATUS_PATH=/simulation/{id}
-MIROFISH_REPORT_PATH=/simulation/{id}/report
-MIROFISH_CHAT_PATH=/simulation/{id}/chat
+# ---------- MiroFish Simulation Engine ----------
+# MiroFish handles scenario modelling, agent-based simulation, and outcome projection
+MIROFISH_BASE_URL=http://localhost:8001

+# ---------- API Discovery Layer ----------
+# Dynamic API registry for runtime tool expansion
+API_DISCOVERY_ENDPOINT=http://localhost:8002

 # ---------- Routing ----------
 # Comma-separated list of keywords that trigger simulation mode
-# Examples: simulate, predict, what if, reaction, scenario, public opinion, policy impact, market impact, digital twin
 SIMULATION_TRIGGER_KEYWORDS=simulate,predict,what if,reaction,scenario,public opinion,policy impact,market impact,digital twin

 # ---------- Domain Packs ----------
-# Enable/disable domain packs (future feature)
 FINANCE_DOMAIN_PACK_ENABLED=true

+# ---------- Learning Layer ----------
+LEARNING_ENABLED=true
+KNOWLEDGE_MAX_SIZE_MB=200
+LEARNING_SCHEDULE_INTERVAL=6
+LEARNING_BATCH_SIZE=10
+LEARNING_TOPICS=finance,markets,technology,policy

 # ---------- Sentinel Layer ----------
-# Sentinel provides adaptive maintenance and self-healing
 SENTINEL_ENABLED=true
 SENTINEL_CYCLE_INTERVAL_MINUTES=60
 SENTINEL_MAX_DIAGNOSES_PER_CYCLE=5
```
backend/app/agents/_model.py

```diff
@@ -1,176 +1,98 @@
-import httpx
-from app.config import (
-    PRIMARY_PROVIDER,
-    FALLBACK_PROVIDER,
-    OPENROUTER_API_KEY,
-    OPENROUTER_BASE_URL,
-    OPENROUTER_CHAT_MODEL,
-    OPENROUTER_REASONER_MODEL,
-    OPENROUTER_SITE_URL,
-    OPENROUTER_APP_NAME,
-    OLLAMA_ENABLED,
-    OLLAMA_BASE_URL,
-    OLLAMA_CHAT_MODEL,
-    OLLAMA_REASONER_MODEL,
-    OPENAI_API_KEY,
-    OPENAI_BASE_URL,
-    OPENAI_CHAT_MODEL,
-    OPENAI_REASONER_MODEL,
-)
…
-def _pick_ollama_model(mode: str) -> str:
-    return OLLAMA_REASONER_MODEL if mode == "reasoner" else OLLAMA_CHAT_MODEL
-
-def _build_messages(prompt, system_prompt=None):
-    messages = []
-    if system_prompt:
-        messages.append({"role": "system", "content": system_prompt})
-    messages.append({"role": "user", "content": prompt})
-    return messages
-
-def _call_openrouter(prompt: str, mode: str = "chat", system_prompt: Optional[str] = None) -> str:
-    if not OPENROUTER_API_KEY:
-        raise LLMProviderError("OPENROUTER_API_KEY is missing.")
-    headers = {
-        "Authorization": f"Bearer {OPENROUTER_API_KEY}",
-        "Content-Type": "application/json",
-    }
…
-def _call_ollama(prompt: str, mode: str = "chat", system_prompt: Optional[str] = None) -> str:
-    payload = {
-        "model": _pick_ollama_model(mode),
-        "messages": _build_messages(prompt, system_prompt=system_prompt),
-        "stream": False,
-    }
-    with httpx.Client(timeout=120) as client:
-        response = client.post(f"{OLLAMA_BASE_URL}/chat", json=payload)
-        if response.status_code >= 400:
-            raise LLMProviderError(f"Ollama error {response.status_code}: {response.text}")
-        data = response.json()
-        message = data.get("message", {})
-        return str(message.get("content", "")).strip()
-
-def _call_openai(prompt: str, mode: str = "chat", system_prompt: Optional[str] = None) -> str:
-    if not OPENAI_API_KEY:
-        raise LLMProviderError("OPENAI_API_KEY is missing.")
-    headers = {
-        "Authorization": f"Bearer {OPENAI_API_KEY}",
-        "Content-Type": "application/json",
-    }
-    payload = {
-        "model": _pick_openai_model(mode),
-        "messages": _build_messages(prompt, system_prompt=system_prompt),
-    }
-    with httpx.Client(timeout=90) as client:
-        response = client.post(f"{OPENAI_BASE_URL}/chat/completions", headers=headers, json=payload)
-        if response.status_code >= 400:
-            raise LLMProviderError(f"OpenAI error {response.status_code}: {response.text}")
-        data = response.json()
-        return data["choices"][0]["message"]["content"].strip()
-
-def call_model(
-    prompt: str,
-    mode: str = "chat",
-    system_prompt: Optional[str] = None,
-    provider_override: Optional[str] = None,
-) -> str:
-    provider = (provider_override or PRIMARY_PROVIDER).lower()
-    logger.info(f"Calling model with provider={provider}, mode={mode}")
-    try:
-        if provider == "openrouter":
-            result = _call_openrouter(prompt, mode=mode, system_prompt=system_prompt)
-            logger.info(f"Provider {provider} succeeded")
-            return result
-        if provider == "ollama":
-            result = _call_ollama(prompt, mode=mode, system_prompt=system_prompt)
-            logger.info(f"Provider {provider} succeeded")
-            return result
-        if provider == "openai":
-            result = _call_openai(prompt, mode=mode, system_prompt=system_prompt)
-            logger.info(f"Provider {provider} succeeded")
-            return result
-        raise LLMProviderError(f"Unsupported provider: {provider}")
-    except Exception as primary_error:
-        logger.warning(f"Primary provider {provider} failed: {primary_error}")
-        fallback = FALLBACK_PROVIDER.lower()
-        if fallback == provider:
-            logger.error(f"No fallback available, primary provider {provider} failed")
-            raise LLMProviderError(str(primary_error))
-        logger.info(f"Attempting fallback to provider={fallback}")
-        try:
-            if fallback == "openrouter":
-                result = _call_openrouter(prompt, mode=mode, system_prompt=system_prompt)
-                logger.info(f"Fallback provider {fallback} succeeded")
-                return result
-            if fallback == "ollama":
-                result = _call_ollama(prompt, mode=mode, system_prompt=system_prompt)
-                logger.info(f"Fallback provider {fallback} succeeded")
-                return result
-            if fallback == "openai":
-                result = _call_openai(prompt, mode=mode, system_prompt=system_prompt)
-                logger.info(f"Fallback provider {fallback} succeeded")
-                return result
-        except Exception as fallback_error:
-            logger.error(f"Fallback provider {fallback} also failed: {fallback_error}")
-            raise LLMProviderError(
-                f"Primary provider failed: {primary_error} | Fallback failed: {fallback_error}"
-            )
+"""
+Unified model client for MiroOrg v2.
+Priority: OpenRouter free → Ollama fallback → raise with diagnostics.
+All tiers use the OpenAI-compatible messages format.
+"""
+
+import os, json, re, logging
 import httpx
+from typing import Any

 logger = logging.getLogger(__name__)

+OPENROUTER_BASE = "https://openrouter.ai/api/v1"
+OPENROUTER_KEY = os.getenv("OPENROUTER_API_KEY", "")
+
+# Pinned free models in preference order (all have :free suffix = zero cost)
+FREE_MODEL_LADDER = [
+    "nvidia/llama-3.1-nemotron-ultra-253b:free",  # best reasoning, large context
+    "meta-llama/llama-3.3-70b-instruct:free",     # reliable, GPT-4 class
+    "deepseek/deepseek-r1:free",                  # strong chain-of-thought
+    "openrouter/free",                            # random free as last resort
+]
+
+OLLAMA_BASE = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434")
+OLLAMA_MODEL = os.getenv("OLLAMA_MODEL", "llama3.2")  # user configures
+TIMEOUT = 120
+
+
+def _openrouter_call(messages: list[dict], model: str, **kwargs) -> str:
+    """Single call to OpenRouter. Raises on non-200."""
+    headers = {
+        "Authorization": f"Bearer {OPENROUTER_KEY}",
+        "HTTP-Referer": "https://miroorg.local",
+        "X-Title": "MiroOrg v2",
+        "Content-Type": "application/json",
+    }
+    body = {"model": model, "messages": messages, "max_tokens": 2048, **kwargs}
+    r = httpx.post(f"{OPENROUTER_BASE}/chat/completions",
+                   headers=headers, json=body, timeout=TIMEOUT)
+    r.raise_for_status()
+    return r.json()["choices"][0]["message"]["content"]
+
+
+def _ollama_call(messages: list[dict], **kwargs) -> str:
+    """Fallback: Ollama local via OpenAI-compatible endpoint."""
+    body = {"model": OLLAMA_MODEL, "messages": messages, "stream": False}
+    r = httpx.post(f"{OLLAMA_BASE}/v1/chat/completions",
+                   json=body, timeout=TIMEOUT)
+    r.raise_for_status()
+    return r.json()["choices"][0]["message"]["content"]
+
+
+def call_model(messages: list[dict], **kwargs) -> str:
+    """
+    Try OpenRouter free models in ladder order, then Ollama.
+    Returns raw text. Never returns None — raises RuntimeError with full diagnostics
+    so the caller can write a structured error dict instead of silently propagating None.
+    """
+    errors = []
+    for model in FREE_MODEL_LADDER:
+        try:
+            result = _openrouter_call(messages, model, **kwargs)
+            logger.info(f"Model call succeeded: {model}")
+            return result
+        except Exception as e:
+            errors.append(f"OpenRouter [{model}]: {e}")
+            logger.warning(f"OpenRouter [{model}] failed: {e}")
+
+    # Ollama fallback
+    try:
+        result = _ollama_call(messages, **kwargs)
+        logger.info(f"Ollama fallback succeeded: {OLLAMA_MODEL}")
+        return result
+    except Exception as e:
+        errors.append(f"Ollama [{OLLAMA_MODEL}]: {e}")
+        logger.error(f"Ollama fallback failed: {e}")
+
+    raise RuntimeError("All model tiers failed:\n" + "\n".join(errors))
+
+
+def safe_parse(text: str) -> dict:
+    """
+    Strip markdown fences, attempt JSON parse.
+    On failure returns a structured error dict — NEVER returns None.
+    Callers must check for 'error' key in the result.
+    """
+    cleaned = re.sub(r"```(?:json)?|```", "", text).strip()
+    try:
+        return json.loads(cleaned)
+    except json.JSONDecodeError:
+        # Try extracting the first JSON-like block
+        match = re.search(r"\{.*\}", cleaned, re.DOTALL)
+        if match:
+            try:
+                return json.loads(match.group())
+            except json.JSONDecodeError:
+                pass
+        return {"error": "parse_failed", "raw": text[:800]}
```
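The `safe_parse` contract shown above can be exercised standalone. The helper below is copied from the diff (stdlib only), and the demo shows the three paths: fenced JSON, JSON embedded in chatter, and unparseable text yielding a structured error dict rather than None.

```python
import json
import re


def safe_parse(text: str) -> dict:
    """Copied from _model.py above: fence-strip, parse, structured error on failure."""
    cleaned = re.sub(r"```(?:json)?|```", "", text).strip()
    try:
        return json.loads(cleaned)
    except json.JSONDecodeError:
        # Try extracting the first JSON-like block from surrounding chatter
        match = re.search(r"\{.*\}", cleaned, re.DOTALL)
        if match:
            try:
                return json.loads(match.group())
            except json.JSONDecodeError:
                pass
        return {"error": "parse_failed", "raw": text[:800]}


fenced = safe_parse('```json\n{"a": 1}\n```')          # → {"a": 1}
chatty = safe_parse('Sure! {"sentiment": "bullish"}')  # → {"sentiment": "bullish"}
broken = safe_parse('no json here')                    # → {"error": "parse_failed", ...}
```

Because every path returns a dict, agent nodes can branch on the presence of the `"error"` key instead of defending against None.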
backend/app/agents/api_discovery.py (new file)

```diff
@@ -0,0 +1,49 @@
+"""
+API Discovery Layer client.
+Allows agents to query a registry of available APIs and invoke them dynamically.
+This enables MiroOrg to expand its tool set without code changes.
+"""
+import httpx, os
+import logging
+
+logger = logging.getLogger(__name__)
+
+DISCOVERY_BASE = os.getenv("API_DISCOVERY_ENDPOINT", "http://localhost:8002")
+
+
+def discover_apis(query: str, domain: str = "general") -> list[dict]:
+    """
+    Returns a list of available API descriptors relevant to the query.
+    Each descriptor: {name, endpoint, description, params_schema, auth_type}
+    """
+    try:
+        r = httpx.get(f"{DISCOVERY_BASE}/search", params={
+            "q": query, "domain": domain,
+        }, timeout=10)
+        r.raise_for_status()
+        return r.json().get("apis", [])
+    except Exception as e:
+        logger.debug(f"API Discovery unavailable: {e}")
+        return []
+
+
+def call_discovered_api(descriptor: dict, params: dict) -> dict:
+    """
+    Calls an API found via discovery. Handles auth injection from env.
+    Returns raw response dict or {"error": ...} on failure.
+    """
+    auth_type = descriptor.get("auth_type", "none")
+    headers = {}
+    if auth_type == "bearer":
+        env_key = descriptor.get("env_key", "")
+        token = os.getenv(env_key, "")
+        if token:
+            headers["Authorization"] = f"Bearer {token}"
+
+    try:
+        r = httpx.get(descriptor["endpoint"], params=params,
+                      headers=headers, timeout=30)
+        r.raise_for_status()
+        return r.json()
+    except Exception as e:
+        return {"error": str(e)}
```
|
@@ -0,0 +1,118 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
"""
|
| 2 |
+
Finance data node — Alpha Vantage integration.
|
| 3 |
+
Fetches market data, fundamentals, sentiment, and economic indicators.
|
| 4 |
+
No chart rendering — raw structured data only.
|
| 5 |
+
"""
|
| 6 |
+
import httpx, os, re, logging
|
| 7 |
+
from app.agents._model import call_model, safe_parse
|
| 8 |
+
from app.config import load_prompt
|
| 9 |
+
|
| 10 |
+
logger = logging.getLogger(__name__)
|
| 11 |
+
|
| 12 |
+
AV_BASE = "https://www.alphavantage.co/query"
|
| 13 |
+
AV_KEY = os.getenv("ALPHA_VANTAGE_API_KEY", os.getenv("ALPHAVANTAGE_API_KEY", "demo"))
|
| 14 |
+
|
| 15 |
+
|
| 16 |
+
def av_get(function: str, **params) -> dict:
|
| 17 |
+
"""Single Alpha Vantage GET call. Returns parsed JSON or {"error": ...}."""
|
| 18 |
+
try:
|
| 19 |
+
r = httpx.get(AV_BASE, params={"function": function, "apikey": AV_KEY, **params},
|
| 20 |
+
timeout=20)
|
| 21 |
+
r.raise_for_status()
|
| 22 |
+
data = r.json()
|
| 23 |
+
# AV returns {"Information": "..."} when rate-limited or key is invalid
|
| 24 |
+
if "Information" in data or "Note" in data:
|
| 25 |
+
return {"error": data.get("Information") or data.get("Note")}
|
| 26 |
+
return data
|
| 27 |
+
except Exception as e:
|
| 28 |
+
return {"error": str(e)}
|
| 29 |
+
|
| 30 |
+
|
| 31 |
+
def extract_ticker(intent: str) -> str | None:
|
| 32 |
+
"""
|
| 33 |
+
Try to pull a ticker symbol from the intent string.
|
| 34 |
+
Looks for uppercase sequences of 1–5 letters (e.g. AAPL, MSFT, TSLA).
|
| 35 |
+
Falls back to SYMBOL_SEARCH if a company name is detected.
|
| 36 |
+
"""
|
| 37 |
+
match = re.search(r'\b([A-Z]{1,5})\b', intent)
|
| 38 |
+
if match:
|
| 39 |
+
return match.group(1)
|
| 40 |
+
return None
|
| 41 |
+
|
| 42 |
+
|
| 43 |
+
def resolve_ticker(intent: str) -> str | None:
|
| 44 |
+
"""Use SYMBOL_SEARCH to find a ticker from a company name in the intent."""
|
| 45 |
+
result = av_get("SYMBOL_SEARCH", keywords=intent)
|
| 46 |
+
matches = result.get("bestMatches", [])
|
| 47 |
+
if matches:
|
| 48 |
+
return matches[0].get("1. symbol")
|
| 49 |
+
return None
|
| 50 |
+
|
| 51 |
+
|
| 52 |
+
def run(state: dict) -> dict:
|
| 53 |
+
route = state.get("route", {})
|
| 54 |
+
intent = route.get("intent", "")
|
| 55 |
+
domain = route.get("domain", "finance")
|
| 56 |
+
|
| 57 |
+
gathered = {}
|
| 58 |
+
|
| 59 |
+
# Step 1: resolve ticker if query is about a specific stock
|
| 60 |
+
ticker = extract_ticker(intent) or resolve_ticker(intent)
|
| 61 |
+
|
| 62 |
+
if ticker:
|
| 63 |
+
# Quote (current price, change, volume) — no OHLCV chart data
|
| 64 |
+
quote = av_get("GLOBAL_QUOTE", symbol=ticker)
|
| 65 |
+
gathered["quote"] = quote.get("Global Quote", quote)
|
| 66 |
+
|
| 67 |
+
# Fundamentals (P/E, market cap, sector, EPS, etc.)
|
| 68 |
+
overview = av_get("OVERVIEW", symbol=ticker)
|
| 69 |
+
# Strip raw price series fields to keep payload clean
|
| 70 |
+
for drop_key in ["52WeekHigh", "52WeekLow", "50DayMovingAverage",
|
| 71 |
+
"200DayMovingAverage", "AnalystTargetPrice"]:
|
| 72 |
+
overview.pop(drop_key, None)
|
| 73 |
+
gathered["fundamentals"] = overview
|
| 74 |
+
|
| 75 |
+
# News & sentiment for this ticker
|
| 76 |
+
news = av_get("NEWS_SENTIMENT", tickers=ticker, limit=5)
|
| 77 |
+
gathered["news_sentiment"] = news.get("feed", [])[:5]
|
| 78 |
+
|
| 79 |
+
else:
|
| 80 |
+
# No specific ticker — fetch macro / market-wide data
|
| 81 |
+
gathered["top_movers"] = av_get("TOP_GAINERS_LOSERS")
|
| 82 |
+
gathered["news_general"] = av_get("NEWS_SENTIMENT", limit=5).get("feed", [])[:5]
|
| 83 |
+
|
| 84 |
+
# Step 3: if macro / economic query, add indicators
|
| 85 |
+
macro_keywords = ["gdp", "inflation", "cpi", "interest rate", "federal", "economy",
|
| 86 |
+
"recession", "growth", "unemployment"]
|
| 87 |
+
if any(kw in intent.lower() for kw in macro_keywords):
|
| 88 |
+
gathered["gdp"] = av_get("REAL_GDP", interval="annual")
|
| 89 |
+
gathered["cpi"] = av_get("CPI", interval="monthly")
|
| 90 |
+
gathered["inflation"] = av_get("INFLATION")
|
| 91 |
+
|
| 92 |
+
# Step 4: LLM interprets the gathered data
|
| 93 |
+
prompt = load_prompt("finance")
|
| 94 |
+
messages = [
|
| 95 |
+
{"role": "system", "content": prompt},
|
| 96 |
+
{"role": "user", "content": (
|
| 97 |
+
f"User intent: {intent}\n\n"
|
| 98 |
+
f"Alpha Vantage data:\n{gathered}\n\n"
|
| 99 |
+
"Analyse this financial data and return ONLY valid JSON:\n"
|
| 100 |
+
"{\n"
|
| 101 |
+
" \"ticker\": \"<symbol or null>\",\n"
|
| 102 |
+
" \"signals\": [\"<signal 1>\", \"<signal 2>\"],\n"
|
| 103 |
+
" \"risks\": [\"<risk 1>\"],\n"
|
| 104 |
+
" \"sentiment\": \"bullish | bearish | neutral\",\n"
|
| 105 |
+
" \"key_metrics\": {\"<metric>\": \"<value>\"},\n"
|
| 106 |
+
" \"data_quality\": \"good | partial | limited\",\n"
|
| 107 |
+
" \"summary\": \"<2-3 sentence plain English summary>\"\n"
|
| 108 |
+
"}\n"
|
| 109 |
+
"Do NOT include chart data, OHLCV arrays, image URLs, or price history."
|
| 110 |
+
)},
|
| 111 |
+
]
|
| 112 |
+
try:
|
| 113 |
+
result = safe_parse(call_model(messages))
|
| 114 |
+
except RuntimeError as e:
|
| 115 |
+
logger.error(f"[AGENT ERROR] finance_node: {e}")
|
| 116 |
+
result = {"status": "error", "reason": str(e)}
|
| 117 |
+
|
| 118 |
+
return {**state, "finance": result}
|
|
@@ -0,0 +1,55 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
`app/agents/mirofish_node.py` — added (new file):

```python
"""
Mirofish simulation node.
Calls the Mirofish local simulation service and injects results into agent state.
Mirofish handles scenario modelling, agent-based simulation, and outcome projection.
"""
import httpx, os, logging
from app.agents._model import call_model, safe_parse
from app.config import load_prompt

logger = logging.getLogger(__name__)

MIROFISH_BASE = os.getenv("MIROFISH_BASE_URL", "http://localhost:8001")


def run_simulation(scenario: dict) -> dict:
    r = httpx.post(f"{MIROFISH_BASE}/simulate", json=scenario, timeout=60)
    r.raise_for_status()
    return r.json()


def run(state: dict) -> dict:
    route = state.get("route", {})
    intent = route.get("intent", "")
    sub_tasks = route.get("sub_tasks", [])

    scenario = {
        "intent": intent,
        "tasks": sub_tasks,
        "complexity": route.get("complexity", "medium"),
        "domain": route.get("domain", "general"),
    }

    try:
        sim_result = run_simulation(scenario)
    except Exception as e:
        logger.warning(f"Mirofish unavailable: {e}")
        sim_result = {"error": str(e), "note": "Mirofish unavailable, continuing without simulation"}

    prompt = load_prompt("simulation")
    messages = [
        {"role": "system", "content": prompt},
        {"role": "user", "content": (
            f"Simulation results from Mirofish:\n{sim_result}\n\n"
            f"Original intent: {intent}\n\n"
            "Interpret these simulation results. Return ONLY valid JSON with: "
            "key_findings, confidence, scenarios_run, recommended_path, caveats."
        )},
    ]
    try:
        result = safe_parse(call_model(messages))
    except RuntimeError as e:
        logger.error(f"[AGENT ERROR] mirofish_node: {e}")
        result = {"status": "error", "reason": str(e)}

    return {**state, "simulation": result}
```
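The node degrades rather than dies: when the Mirofish service is unreachable, the HTTP error is folded into a plain dict and the LLM still gets a chance to reason about the failure. That branch in isolation, with a failing stand-in for `run_simulation`:

```python
def run_simulation(scenario: dict) -> dict:
    # Stand-in simulating the Mirofish service being down.
    raise ConnectionError("connection refused")

def build_sim_result(scenario: dict) -> dict:
    # Mirrors the try/except around run_simulation in the node above.
    try:
        return run_simulation(scenario)
    except Exception as e:
        return {"error": str(e), "note": "Mirofish unavailable, continuing without simulation"}

sim = build_sim_result({"intent": "rate hike impact"})
```

The pipeline continues with `sim` in place of real simulation output; the synthesizer can surface the `note` as a caveat.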
@@ -1,68 +1,68 @@
`app/agents/planner.py` — removed (v1 implementation; `# …` marks lines the diff viewer dropped):

```python
import logging

logger = logging.getLogger(__name__)

_CONFIDENCE_PATTERN = re.compile(r'Confidence:\s*([\d.]+)', re.IGNORECASE)

# …
    """Extract confidence score from structured LLM output."""
    match = _CONFIDENCE_PATTERN.search(text)
    if match:
        try:
            score = float(match.group(1))
            return max(0.0, min(1.0, score))
        except ValueError:
            pass
    return default

# …
    if (has_scenario_context or has_user_scenario) and not simulation_suggested:
        simulation_suggested = True
        logger.info("Planner detected scenario analysis opportunity - suggesting simulation mode")

    prompt = (
        f"{prompt_template}\n\n"
        f"User Request:\n{user_input}\n\n"
        f"Research Packet:\n{research_output}"
    )

    try:
        # …
        return {
            "agent": "planner",
            "summary": f"Error: {str(e)}",
            "details": {"error_type": "provider_error"},
            "confidence": 0.0,
        }
```
`app/agents/planner.py` — added (v2):

```python
"""
Planner agent — MiroOrg v2.
Accepts Switchboard route + Research output + (optionally) Simulation and Finance outputs.
Produces a structured plan with steps, dependencies, and risk assessment.
"""
import logging
from app.agents._model import call_model, safe_parse
from app.config import load_prompt

logger = logging.getLogger(__name__)


def run(state: dict) -> dict:
    route = state.get("route", {})
    research = state.get("research", {})
    simulation = state.get("simulation", {})
    finance = state.get("finance", {})
    replan_count = state.get("replan_count", 0)
    verifier = state.get("verifier", {})

    prompt = load_prompt("planner")

    # Build context with all available upstream data
    context_parts = [
        f"Route: {route}",
        f"Research findings: {research}",
    ]
    if simulation:
        context_parts.append(f"Simulation results: {simulation}")
    if finance:
        context_parts.append(f"Finance data: {finance}")
    if replan_count > 0 and verifier:
        context_parts.append(f"REPLAN #{replan_count} — Verifier feedback: {verifier}")

    messages = [
        {"role": "system", "content": prompt},
        {"role": "user", "content": (
            f"User request: {state.get('user_input', route.get('intent', ''))}\n\n"
            + "\n\n".join(context_parts)
            + "\n\nProduce structured JSON output:\n"
            "{\n"
            "  \"plan_steps\": [\"<step 1>\", \"<step 2>\"],\n"
            "  \"resources_needed\": [\"<resource 1>\"],\n"
            "  \"dependencies\": [\"<dependency 1>\"],\n"
            "  \"risk_level\": \"low | medium | high\",\n"
            "  \"estimated_output\": \"<brief description of expected output>\""
            + (",\n  \"replan_reason\": \"<why replanning>\"" if replan_count > 0 else "")
            + "\n}\n"
        )},
    ]

    try:
        result = safe_parse(call_model(messages))
    except RuntimeError as e:
        logger.error(f"[AGENT ERROR] planner: {e}")
        result = {"status": "error", "reason": str(e)}

    if "error" in result:
        logger.warning(f"[AGENT ERROR] planner: {result.get('error')}")
        result = {
            "plan_steps": ["Unable to generate plan due to error"],
            "resources_needed": [],
            "dependencies": [],
            "risk_level": "high",
            "estimated_output": "Error in planning phase",
        }

    return {**state, "planner": result}
```
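On a replan pass the planner sees its own earlier failure: the verifier's feedback is appended to the context and a `replan_reason` field is demanded in the output schema. The context-building step, extracted and runnable on its own:

```python
def build_context(state: dict) -> list[str]:
    # Same shape as the planner's context_parts assembly above, no LLM call.
    parts = [
        f"Route: {state.get('route', {})}",
        f"Research findings: {state.get('research', {})}",
    ]
    replan_count = state.get("replan_count", 0)
    verifier = state.get("verifier", {})
    if replan_count > 0 and verifier:
        parts.append(f"REPLAN #{replan_count} — Verifier feedback: {verifier}")
    return parts

# First pass: no verifier feedback in context.
first = build_context({"route": {"intent": "x"}, "research": {}})

# Replan pass: feedback is injected so the planner can correct course.
second = build_context({"route": {"intent": "x"}, "research": {},
                        "replan_count": 1,
                        "verifier": {"passed": False, "issues": ["missing sources"]}})
```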
@@ -1,90 +1,149 @@
`app/agents/research.py` — removed (v1 implementation; `# …` marks lines the diff viewer dropped):

```python
def _extract_confidence(text: str, default: float = 0.5) -> float:
    """Extract confidence score from structured LLM output."""
    match = _CONFIDENCE_PATTERN.search(text)
    if match:
        try:
            score = float(match.group(1))
            return max(0.0, min(1.0, score))
        except ValueError:
            pass
    return default


def run_research(user_input: str, prompt_template: str) -> dict:
    external_context = build_external_context(user_input)

    # Detect domain and enhance research with domain pack capabilities
    registry = get_registry()
    detected_domain = registry.detect_domain(user_input)

    domain_enhanced_context = {}
    if detected_domain:
        logger.info(f"Enhancing research with domain pack: {detected_domain}")
        pack = registry.get_pack(detected_domain)
        if pack:
            try:
                base_context = {
                    "user_input": user_input,
                    "external_context": external_context
                }
                domain_enhanced_context = pack.enhance_research(user_input, base_context)
                logger.info(f"Domain enhancement successful: {detected_domain}")
            except Exception as e:
                logger.warning(f"Domain enhancement failed for {detected_domain}: {e}")
                domain_enhanced_context = {}

    # Build enhanced prompt with domain context
    domain_context_str = ""
    if domain_enhanced_context:
        domain_context_str = "\n\nDomain-Specific Context:\n"
        for key, value in domain_enhanced_context.items():
            if value:
                domain_context_str += f"{key}: {value}\n"

    prompt = (
        f"{prompt_template}\n\n"
        f"User Request:\n{user_input}\n\n"
        f"External Context:\n{external_context}"
        f"{domain_context_str}"
    )

    try:
        # …
        "confidence": 0.0,
    }
```
`app/agents/research.py` — added (v2):

```python
"""
Research agent — MiroOrg v2.
Uses Tavily web search, News API, Knowledge Store, and API Discovery
to gather context before calling the LLM for structured analysis.
"""
import os, logging
import httpx
from app.agents._model import call_model, safe_parse
from app.agents.api_discovery import discover_apis, call_discovered_api
from app.config import load_prompt
from app.memory import knowledge_store

logger = logging.getLogger(__name__)

TAVILY_API_KEY = os.getenv("TAVILY_API_KEY", "")
NEWS_API_KEY = os.getenv("NEWS_API_KEY", os.getenv("NEWSAPI_KEY", ""))


# ─── Tool: Tavily Web Search ─────────────────────────────────────────────────

def tavily_search(query: str, max_results: int = 5) -> list[dict]:
    """Returns list of {title, url, content} dicts."""
    if not TAVILY_API_KEY:
        return []
    try:
        r = httpx.post("https://api.tavily.com/search", json={
            "api_key": TAVILY_API_KEY,
            "query": query,
            "search_depth": "advanced",
            "max_results": max_results,
            "include_raw_content": False,
        }, timeout=30)
        r.raise_for_status()
        return r.json().get("results", [])
    except Exception as e:
        logger.warning(f"Tavily search failed: {e}")
        return []


# ─── Tool: News API ──────────────────────────────────────────────────────────

def news_search(query: str, max_articles: int = 5) -> list[dict]:
    """Returns list of {title, source, publishedAt, description} dicts."""
    if not NEWS_API_KEY:
        return []
    try:
        r = httpx.get("https://newsapi.org/v2/everything", params={
            "apiKey": NEWS_API_KEY, "q": query, "sortBy": "publishedAt",
            "language": "en", "pageSize": max_articles,
        }, timeout=30)
        r.raise_for_status()
        return [
            {"title": a["title"], "source": a["source"]["name"],
             "publishedAt": a["publishedAt"], "description": a["description"]}
            for a in r.json().get("articles", [])
        ]
    except Exception as e:
        logger.warning(f"News search failed: {e}")
        return []


# ─── Research Node ────────────────────────────────────────────────────────────

def run(state: dict) -> dict:
    route = state.get("route", {})
    intent = route.get("intent", state.get("user_input", ""))
    domain = route.get("domain", "general")

    context_blocks = []

    # Step 1: Tavily web search
    web_results = tavily_search(intent)
    if web_results:
        formatted = "\n".join(
            f"- {r.get('title', 'Untitled')}\n  URL: {r.get('url', '')}\n  {r.get('content', '')[:300]}"
            for r in web_results
        )
        context_blocks.append(f"[Web Search Results]\n{formatted}")

    # Step 2: News API (if requires_news or finance domain)
    if route.get("requires_news") or domain == "finance":
        news = news_search(intent)
        if news:
            formatted = "\n".join(
                f"- {a['title']} ({a['source']}, {a['publishedAt']})\n  {a.get('description', '')[:200]}"
                for a in news
            )
            context_blocks.append(f"[News Articles]\n{formatted}")

    # Step 3: Knowledge store
    knowledge = knowledge_store.search(intent, domain=domain)
    if knowledge:
        formatted = "\n".join(
            f"- {k.get('text', k.get('content', ''))[:300]}"
            for k in knowledge
        )
        context_blocks.append(f"[Knowledge Base]\n{formatted}")

    # Step 4: API Discovery
    discovered = discover_apis(query=intent, domain=domain)
    for api in discovered[:3]:
        extra_data = call_discovered_api(api, {"q": intent})
        context_blocks.append(f"[{api.get('name', 'Discovered API')}]: {extra_data}")

    # Step 5: Include simulation and finance data if available in state
    if state.get("simulation"):
        context_blocks.append(f"[Simulation Results]\n{state['simulation']}")
    if state.get("finance"):
        context_blocks.append(f"[Finance Data]\n{state['finance']}")

    # Build context block
    context_str = "\n\n".join(context_blocks) if context_blocks else "No external context retrieved."

    # Step 6: Call LLM
    prompt = load_prompt("research")
    messages = [
        {"role": "system", "content": prompt},
        {"role": "user", "content": (
            f"User request: {state.get('user_input', intent)}\n\n"
            f"[CONTEXT]\n{context_str}\n\n"
            "Produce structured JSON output:\n"
            "{\n"
            "  \"summary\": \"<comprehensive analysis>\",\n"
            "  \"key_facts\": [\"<fact 1>\", \"<fact 2>\"],\n"
            "  \"sources\": [\"<source 1>\", \"<source 2>\"],\n"
            "  \"gaps\": [\"<what's missing>\"],\n"
            "  \"confidence\": 0.0-1.0\n"
            "}\n"
            "If context is empty, return gaps: ['no data retrieved']. Do not hallucinate."
        )},
    ]

    try:
        result = safe_parse(call_model(messages))
    except RuntimeError as e:
        logger.error(f"[AGENT ERROR] research: {e}")
        result = {"status": "error", "reason": str(e)}

    if "error" in result:
        logger.warning(f"[AGENT ERROR] research: {result.get('error')}")
        result = {
            "summary": "Research encountered an error during analysis.",
            "key_facts": [],
            "sources": [],
            "gaps": ["analysis failed"],
            "confidence": 0.0,
        }

    return {**state, "research": result}
```
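Each tool's results are truncated and joined into a labelled block before the LLM call, which keeps the prompt bounded no matter how verbose a source is. The web-results formatting from Step 1, runnable on sample data:

```python
# Sample results in the {title, url, content} shape tavily_search returns.
web_results = [
    {"title": "Fed holds rates", "url": "https://example.com/a", "content": "x" * 500},
    {"title": "Markets rally", "url": "https://example.com/b", "content": "short"},
]

# Same formatting expression as Step 1: content is capped at 300 chars per result.
formatted = "\n".join(
    f"- {r.get('title', 'Untitled')}\n  URL: {r.get('url', '')}\n  {r.get('content', '')[:300]}"
    for r in web_results
)
block = f"[Web Search Results]\n{formatted}"
```

The `[:300]` slice is the only thing standing between a scraped article and a blown token budget, so every block that enters `context_blocks` applies one.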
@@ -1,82 +1,55 @@
`app/agents/switchboard.py` — removed (v1 heuristic router; `# …` marks lines the diff viewer dropped):

```python
    """
    # …
    Classification dimensions:
    1. task_family: "normal" or "simulation"
    2. domain_pack: "finance", "general", "policy", "custom"
    3. complexity: "simple" (≤5 words), "medium" (≤25 words), "complex" (>25 words)
    4. execution_mode: "solo", "standard", "deep"

    Args:
        user_input: The user's query

    Returns:
        Dictionary with routing decision including all four dimensions
    """

    words = len(text.split())

    # Dimension 1: Task family (simulation detection)
    # Check configured keywords
    task_family = "simulation" if any(k in lower for k in SIMULATION_TRIGGER_KEYWORDS) else "normal"

    # …
        "what could", "imagine if", "suppose", "hypothetical",
        "could affect", "might impact", "would react",
    ]
    is_speculative = any(p in lower for p in scenario_patterns)

    # Dimension 2: Domain pack detection
    registry = get_registry()
    detected_domain = registry.detect_domain(user_input)
    domain_pack = detected_domain if detected_domain else "general"

    # Dimension 3: Complexity based on word count and nature
    if task_family == "simulation":
        complexity = "complex"
    elif is_speculative:
        # Speculative questions always get at least medium complexity
        complexity = "complex" if words > 15 else "medium"
    elif words <= 5:
        complexity = "simple"
    elif words <= 25:
        complexity = "medium"
    else:
        complexity = "complex"

    # Dimension 4: Execution mode based on complexity and nature
    if task_family == "simulation":
        execution_mode = "deep"
    elif is_speculative:
        # Speculative questions always get deep mode (verifier should check uncertainty)
        execution_mode = "deep"
    elif complexity == "simple":
        execution_mode = "solo"
    elif complexity == "medium":
        execution_mode = "standard"
    else:
        execution_mode = "deep"
    # …
```
`app/agents/switchboard.py` — added (v2, LLM-based routing):

```python
"""
Switchboard — intelligence router for MiroOrg v2.
Classifies user input and produces structured routing decisions using LLM.
"""
import logging
from app.agents._model import call_model, safe_parse
from app.config import load_prompt

logger = logging.getLogger(__name__)


def run(state: dict) -> dict:
    """
    Analyse the user's input and produce a routing structure.
    Uses LLM for intent classification with structured JSON output.
    """
    user_input = state.get("user_input", "")
    prompt = load_prompt("switchboard")

    messages = [
        {"role": "system", "content": prompt},
        {"role": "user", "content": user_input},
    ]

    try:
        result = safe_parse(call_model(messages))
    except RuntimeError as e:
        logger.error(f"[AGENT ERROR] switchboard: {e}")
        result = {"status": "error", "reason": str(e)}

    # Ensure all required fields exist with defaults
    if "error" in result:
        logger.warning(f"[AGENT ERROR] switchboard: {result.get('error')}")
        result = {
            "domain": "general",
            "complexity": "medium",
            "intent": user_input[:200],
            "sub_tasks": [user_input[:200]],
            "requires_simulation": False,
            "requires_finance_data": False,
            "requires_news": False,
            "confidence": 0.3,
        }
    else:
        # Fill in any missing fields with safe defaults
        result.setdefault("domain", "general")
        result.setdefault("complexity", "medium")
        result.setdefault("intent", user_input[:200])
        result.setdefault("sub_tasks", [user_input[:200]])
        result.setdefault("requires_simulation", False)
        result.setdefault("requires_finance_data", False)
        result.setdefault("requires_news", False)
        result.setdefault("confidence", 0.5)

    return {**state, "route": result}
```
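The `setdefault` cascade is what makes the LLM route safe to consume downstream: fields the model returned are kept verbatim, only missing ones are filled. Demonstrated on a partial route (abbreviated to a few fields, same pattern as above):

```python
def normalize_route(result: dict, user_input: str) -> dict:
    # setdefault only writes a key when it is absent.
    result.setdefault("domain", "general")
    result.setdefault("complexity", "medium")
    result.setdefault("intent", user_input[:200])
    result.setdefault("requires_finance_data", False)
    result.setdefault("confidence", 0.5)
    return result

# The model returned only two fields; the rest get safe defaults.
route = normalize_route({"domain": "finance", "requires_finance_data": True},
                        "TSLA outlook")
```

Every node after the switchboard can therefore index `route["domain"]` etc. without guarding for `KeyError`.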
@@ -1,100 +1,70 @@
`app/agents/synthesizer.py` — removed (v1 implementation; `# …` marks lines the diff viewer dropped):

```python
import logging

logger = logging.getLogger(__name__)

_CONFIDENCE_PATTERN = re.compile(r'Confidence:\s*([\d.]+)', re.IGNORECASE)
_UNCERTAINTY_PATTERN = re.compile(r'Uncertainty\s*Level:\s*(HIGH|MEDIUM|LOW)', re.IGNORECASE)

# …
    """Extract confidence score from structured LLM output."""
    match = _CONFIDENCE_PATTERN.search(text)
    if match:
        try:
            score = float(match.group(1))
            return max(0.0, min(1.0, score))
        except ValueError:
            pass
    return default

# …
    elif count >= 2:
        return "MEDIUM"
    return "LOW"


def run_synthesizer(
    user_input: str,
    research_output: str,
    planner_output: str,
    verifier_output: str,
    prompt_template: str
) -> dict:
    # Extract uncertainty level from verifier output (or synthesizer will self-assess)
    uncertainty_level = _extract_uncertainty(verifier_output)

    # Check if simulation was recommended by planner or verifier
    planner_lower = planner_output.lower()
    simulation_recommended = (
        ("simulation recommended: yes" in planner_lower) or
        ("simulation" in planner_lower and "recommend" in planner_lower)
    )

    logger.info(f"Synthesizer: uncertainty_level={uncertainty_level}, simulation_recommended={simulation_recommended}")

    prompt = (
        f"{prompt_template}\n\n"
        f"User Request:\n{user_input}\n\n"
        f"Research Packet:\n{research_output}\n\n"
        f"Planner Output:\n{planner_output}\n\n"
        f"Verifier Output:\n{verifier_output}"
    )

    try:
        # …
            final_uncertainty = "MEDIUM"
        else:
            final_uncertainty = "LOW"

        return {
            "agent": "synthesizer",
            "summary": text,
            "details": {
                "model_mode": "chat",
                "uncertainty_level": final_uncertainty,
                "simulation_recommended": simulation_recommended
            },
            "confidence": confidence,
        }
    except LLMProviderError as e:
        return {
            "agent": "synthesizer",
            "summary": f"Error: {str(e)}",
            "details": {"error_type": "provider_error"},
            "confidence": 0.0,
        }
```
`app/agents/synthesizer.py` — added (v2):

```python
"""
Synthesizer agent — MiroOrg v2.
Final voice in the pipeline. Accepts all upstream outputs and produces
the definitive response the user sees.
"""
import logging
from app.agents._model import call_model, safe_parse
from app.config import load_prompt

logger = logging.getLogger(__name__)


def run(state: dict) -> dict:
    route = state.get("route", {})
    research = state.get("research", {})
    planner = state.get("planner", {})
    verifier = state.get("verifier", {})
    simulation = state.get("simulation", {})
    finance = state.get("finance", {})
    replan_count = state.get("replan_count", 0)

    prompt = load_prompt("synthesizer")

    # Build comprehensive context
    context_parts = [
        f"Route: {route}",
        f"Research: {research}",
        f"Planner: {planner}",
        f"Verifier: {verifier}",
    ]
    if simulation:
        context_parts.append(f"Simulation: {simulation}")
    if finance:
        context_parts.append(f"Finance: {finance}")
    if not verifier.get("passed", True) and replan_count >= 2:
        context_parts.append("NOTE: Verifier did not fully pass and replan limit was reached. Acknowledge limitations.")

    messages = [
        {"role": "system", "content": prompt},
        {"role": "user", "content": (
            f"User request: {state.get('user_input', route.get('intent', ''))}\n\n"
            + "\n\n".join(context_parts)
            + "\n\nProduce the final structured JSON output:\n"
            "{\n"
            "  \"response\": \"<comprehensive, direct final answer>\",\n"
            "  \"confidence\": 0.0-1.0,\n"
            "  \"data_sources\": [\"<source 1>\", \"<source 2>\"],\n"
            "  \"caveats\": [\"<caveat 1>\"],\n"
            "  \"next_steps\": [\"<action 1>\", \"<action 2>\"]\n"
            "}\n"
        )},
    ]

    try:
        result = safe_parse(call_model(messages))
    except RuntimeError as e:
        logger.error(f"[AGENT ERROR] synthesizer: {e}")
        result = {"status": "error", "reason": str(e)}

    if "error" in result:
        logger.warning(f"[AGENT ERROR] synthesizer: {result.get('error')}")
        result = {
            "response": "I encountered an error while synthesizing the analysis. Please try again.",
            "confidence": 0.0,
            "data_sources": [],
            "caveats": ["synthesis failed"],
            "next_steps": ["retry the query"],
        }

    return {**state, "final": result}
```
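The synthesizer's honesty valve is the limitation note: it fires only when the verifier failed AND the replan budget (2) is spent, so a single failed verification never leaks a disclaimer into an answer that is about to be replanned anyway. That condition, extracted:

```python
def limitation_note(verifier: dict, replan_count: int):
    # Mirrors the guard in the synthesizer above: verifier.get("passed", True)
    # means a missing/empty verifier counts as passed.
    if not verifier.get("passed", True) and replan_count >= 2:
        return "NOTE: Verifier did not fully pass and replan limit was reached. Acknowledge limitations."
    return None

hit = limitation_note({"passed": False}, 2)      # budget spent: note added
miss = limitation_note({"passed": False}, 1)     # still replanning: no note
empty = limitation_note({}, 5)                   # no verifier output: no note
```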
@@ -1,99 +1,59 @@
`app/agents/verifier.py` — removed (v1 implementation; `# …` marks lines the diff viewer dropped):

```python
import logging

logger = logging.getLogger(__name__)

_CONFIDENCE_PATTERN = re.compile(r'Confidence:\s*([\d.]+)', re.IGNORECASE)

# …
    """Extract confidence score from structured LLM output."""
    match = _CONFIDENCE_PATTERN.search(text)
    if match:
        try:
            score = float(match.group(1))
            return max(0.0, min(1.0, score))
        except ValueError:
            pass
    return default

# …
        if stripped and len(stripped) > 20 and not stripped.startswith(("Facts:", "Assumptions:", "Open Questions:", "Key Facts:", "Plan:", "Objective:")):
            claims.append(stripped)

    context = {
        "user_input": user_input,
        "research_output": research_output,
        "planner_output": planner_output,
        "claims": claims[:30]  # Limit claims to avoid token overflow
    }
    domain_verification = pack.enhance_verification(claims[:30], context)
    logger.info(f"Domain verification successful: {detected_domain}")
# …
    logger.warning(f"Domain verification failed for {detected_domain}: {e}")
    domain_verification = {}

    # Build enhanced prompt with domain verification
    domain_verification_str = ""
    if domain_verification:
        domain_verification_str = "\n\nDomain-Specific Verification:\n"
        for key, value in domain_verification.items():
            if value:
                domain_verification_str += f"{key}: {value}\n"

    prompt = (
        f"{prompt_template}\n\n"
        f"User Request:\n{user_input}\n\n"
        f"Research Packet:\n{research_output}\n\n"
        f"Planner Output:\n{planner_output}"
        f"{domain_verification_str}"
    )

    try:
        # …
            "details": {
                "model_mode": "reasoner",
                "domain_pack": detected_domain or "general",
                "credibility_score": credibility_score,
                "rumors_detected": rumors_detected,
                "scams_detected": scams_detected,
                "domain_verified": bool(domain_verification)
            },
            "confidence": confidence,
        }
    except LLMProviderError as e:
        return {
            "agent": "verifier",
            "summary": f"Error: {str(e)}",
            "details": {"error_type": "provider_error"},
            "confidence": 0.0,
        }
```
| 1 |
+
"""
|
| 2 |
+
Verifier agent — MiroOrg v2.
|
| 3 |
+
Accepts the Planner output and original route.
|
| 4 |
+
Stress-tests the plan and returns pass/fail with actionable feedback.
|
| 5 |
+
"""
|
| 6 |
import logging
|
| 7 |
+
from app.agents._model import call_model, safe_parse
|
| 8 |
+
from app.config import load_prompt
|
| 9 |
|
| 10 |
logger = logging.getLogger(__name__)
|
| 11 |
|
|
|
|
| 12 |
+def run(state: dict) -> dict:
+    route = state.get("route", {})
+    planner = state.get("planner", {})
+    research = state.get("research", {})
+
+    prompt = load_prompt("verifier")
+
+    messages = [
+        {"role": "system", "content": prompt},
+        {"role": "user", "content": (
+            f"Original route: {route}\n\n"
+            f"Research findings: {research}\n\n"
+            f"Planner output: {planner}\n\n"
+            "Verify the plan against the research and route. Return ONLY valid JSON:\n"
+            "{\n"
+            "  \"passed\": true | false,\n"
+            "  \"issues\": [\"<issue 1>\", \"<issue 2>\"],\n"
+            "  \"fixes_required\": [\"<fix 1>\", \"<fix 2>\"],\n"
+            "  \"confidence\": 0.0-1.0\n"
+            "}\n"
+            "passed=false MUST include specific, actionable fixes_required items."
+        )},
+    ]
+
+    try:
+        result = safe_parse(call_model(messages))
+    except RuntimeError as e:
+        logger.error(f"[AGENT ERROR] verifier: {e}")
+        result = {"status": "error", "reason": str(e)}
+
+    if "error" in result:
+        logger.warning(f"[AGENT ERROR] verifier: {result.get('error')}")
+        # Default to passed=true on error so pipeline doesn't get stuck
+        result = {
+            "passed": True,
+            "issues": ["verifier error — defaulting to pass"],
+            "fixes_required": [],
+            "confidence": 0.3,
+        }
+
+    # Ensure passed field exists
+    result.setdefault("passed", True)
+    result.setdefault("issues", [])
+    result.setdefault("fixes_required", [])
+    result.setdefault("confidence", 0.5)
+
+    return {**state, "verifier": result}
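The error-handling tail of `run()` above can be exercised on its own. The sketch below pulls just that normalization logic into a standalone function (the `normalize_verifier_result` name is illustrative, not from the diff): an `error` key triggers the pass-through default with confidence 0.3, and `setdefault` backfills any field the model omitted.

```python
# Standalone sketch of the verifier result normalization shown in the diff above.
# In the diff this logic lives inline in run(); the function name here is mine.
def normalize_verifier_result(result: dict) -> dict:
    if "error" in result:
        # Default to passed=True so the pipeline doesn't get stuck on a verifier failure
        result = {
            "passed": True,
            "issues": ["verifier error - defaulting to pass"],
            "fixes_required": [],
            "confidence": 0.3,
        }
    # Backfill any fields the model omitted
    result.setdefault("passed", True)
    result.setdefault("issues", [])
    result.setdefault("fixes_required", [])
    result.setdefault("confidence", 0.5)
    return result
```

Note the asymmetry: an explicit model error yields confidence 0.3, while a merely incomplete result is backfilled at 0.5.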
@@ -15,6 +15,15 @@ DATA_DIR = BASE_DIR / "data"
 MEMORY_DIR = DATA_DIR / "memory"
 SIMULATION_DIR = DATA_DIR / "simulations"
 
+# Prompt loader
+def load_prompt(name: str) -> str:
+    """Load a prompt file by name (without .txt extension)."""
+    path = PROMPTS_DIR / f"{name}.txt"
+    if not path.exists():
+        return f"You are the {name} agent in MiroOrg v2. Be helpful and precise."
+    return path.read_text(encoding="utf-8").strip()
+
+
 APP_VERSION = os.getenv("APP_VERSION", "0.3.0")
 
 PRIMARY_PROVIDER = os.getenv("PRIMARY_PROVIDER", "openrouter").lower()
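The fallback behavior of `load_prompt` is easy to check in isolation: a missing prompt file yields a generic agent prompt instead of raising. A self-contained sketch, with `PROMPTS_DIR` passed as a parameter purely so the example is testable (the module in the diff uses a global):

```python
from pathlib import Path
import tempfile

def load_prompt(name: str, prompts_dir: Path) -> str:
    """Load a prompt file by name; fall back to a generic prompt if it is missing."""
    path = prompts_dir / f"{name}.txt"
    if not path.exists():
        return f"You are the {name} agent in MiroOrg v2. Be helpful and precise."
    return path.read_text(encoding="utf-8").strip()

# Usage: a present file is read and stripped; a missing one falls back
with tempfile.TemporaryDirectory() as d:
    prompts = Path(d)
    (prompts / "verifier.txt").write_text("You verify plans.\n", encoding="utf-8")
    loaded = load_prompt("verifier", prompts)    # file contents, stripped
    fallback = load_prompt("planner", prompts)   # generic fallback prompt
```

The fallback means a missing prompt file degrades an agent's quality silently rather than crashing the pipeline, which matches the "never None" error philosophy in the commit message.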
@@ -1,185 +1,192 @@
 import uuid
 import time
 import logging
-from typing import TypedDict, Dict, Any
 
 from langgraph.graph import StateGraph, START, END
 
-from app.
-from app.agents
-from app.agents.research import run_research
-from app.agents.planner import run_planner
-from app.agents.verifier import run_verifier
-from app.agents.synthesizer import run_synthesizer
 
 logger = logging.getLogger(__name__)
 
-# ──
-
-_prompt_cache: Dict[str, str] = {}
-
-def load_prompt(filename: str) -> str:
-    """Load prompt from file, with caching."""
-    if filename not in _prompt_cache:
-        path = PROMPTS_DIR / filename
-        _prompt_cache[filename] = path.read_text(encoding="utf-8")
-    return _prompt_cache[filename]
-
-        production = learning_engine.get_active_prompt(prompt_name)
-        if production:
-            logger.debug(f"Using production prompt version for {prompt_name}")
-            return production
-    except Exception:
-        pass
-
-    return load_prompt(filename)
-
-SYNTHESIZER_PROMPT = load_prompt("synthesizer.txt")
 
-    case_id: str
-    user_input: str
-    route: Dict[str, Any]
-    research: Dict[str, Any]
-    planner: Dict[str, Any]
-    verifier: Dict[str, Any]
-    final: Dict[str, Any]
 
-def empty_output(agent_name: str) -> Dict[str, Any]:
-    return {
-        "agent": agent_name,
-        "summary": "",
-        "details": {},
-        "confidence": 0.0,
-    }
 
-# ── Node Functions with Timing ───────────────────────────────────────────────
 
-def
     t0 = time.perf_counter()
-    result =
     elapsed = time.perf_counter() - t0
-    logger.info(f"[{state
     return result
 
-def research_node(state:
-    if state["route"].get("execution_mode") == "solo":
-        return {"research": empty_output("research")}
-
     t0 = time.perf_counter()
-    result = {"research": run_research(state["user_input"], prompt)}
     elapsed = time.perf_counter() - t0
-    logger.info(f"[{state
     return result
 
-def planner_node(state:
-    if state["route"].get("execution_mode") == "solo":
-        return {"planner": empty_output("planner")}
-
     t0 = time.perf_counter()
-    result = {
-        "planner": run_planner(
-            state["user_input"],
-            state["research"]["summary"],
-            prompt,
-        )
-    }
     elapsed = time.perf_counter() - t0
-    logger.info(f"[{state
     return result
 
-def verifier_node(state:
-    if state["route"].get("execution_mode") != "deep":
-        return {"verifier": empty_output("verifier")}
-
     t0 = time.perf_counter()
-    result = {
-        "verifier": run_verifier(
-            state["user_input"],
-            state["research"]["summary"],
-            state["planner"]["summary"],
-            prompt,
-        )
-    }
     elapsed = time.perf_counter() - t0
-    logger.info(f"[{state
     return result
 
-def synthesizer_node(state:
     t0 = time.perf_counter()
-    result = {
-        "final": run_synthesizer(
-            state["user_input"],
-            state["research"]["summary"],
-            state["planner"]["summary"],
-            state["verifier"]["summary"],
-            prompt,
-        )
-    }
     elapsed = time.perf_counter() - t0
-    logger.info(f"[{state
     return result
 
-graph.add_edge(START, "switchboard")
-graph.add_edge("switchboard", "research")
-graph.add_edge("research", "planner")
-graph.add_edge("planner", "verifier")
-graph.add_edge("verifier", "synthesizer")
-graph.add_edge("synthesizer", END)
 
-compiled_graph =
 
-def run_case(user_input: str):
     case_id = str(uuid.uuid4())
     t0 = time.perf_counter()
     logger.info("Starting case %s", case_id)
 
-    result = compiled_graph.invoke(
-    )
 
     elapsed = time.perf_counter() - t0
     logger.info("Case %s completed in %.2fs", case_id, elapsed)
+"""
+MiroOrg v2 — LangGraph pipeline with conditional routing and verifier feedback loop.
+
+Graph topology:
+    [switchboard]
+        │
+        ├─ requires_simulation=true  → [mirofish] → [research]
+        ├─ requires_finance_data=true → [finance] → [research]
+        └─ (default) → [research]
+                          │
+                      [planner] ←──────┐
+                          │            │
+                      [verifier]       │
+                          │            │
+        passed=true ──────┤            │
+        passed=false AND               │
+        replan_count < 2 ──────────────┘
+                          │
+                    [synthesizer]
+                          │
+                        [END]
+"""
+
 import uuid
 import time
 import logging
+from typing import TypedDict, Dict, Any, Optional
 
 from langgraph.graph import StateGraph, START, END
 
+from app.agents import switchboard, research, planner, verifier, synthesizer
+from app.agents import mirofish_node, finance_node
 
 logger = logging.getLogger(__name__)
 
+# ── State Type ────────────────────────────────────────────────────────────────
 
+class AgentState(TypedDict, total=False):
+    # Input
+    user_input: str
+    case_id: str
 
+    # Pipeline state
+    route: dict        # switchboard output
+    simulation: dict   # mirofish output (optional)
+    finance: dict      # finance_node output (optional)
+    research: dict     # research output
+    planner: dict      # planner output
+    verifier: dict     # verifier output
+    final: dict        # synthesizer output
 
+    # Control
+    replan_count: int
+    errors: list
 
+# ── Node wrappers with timing ────────────────────────────────────────────────
 
+def switchboard_node(state: AgentState) -> dict:
+    t0 = time.perf_counter()
+    result = switchboard.run(state)
+    elapsed = time.perf_counter() - t0
+    logger.info(f"[{state.get('case_id', '?')[:8]}] switchboard: {elapsed:.2f}s — domain={result.get('route', {}).get('domain')}")
+    return result
 
+def mirofish_node_fn(state: AgentState) -> dict:
+    t0 = time.perf_counter()
+    result = mirofish_node.run(state)
+    elapsed = time.perf_counter() - t0
+    logger.info(f"[{state.get('case_id', '?')[:8]}] mirofish: {elapsed:.2f}s")
+    return result
 
+def finance_node_fn(state: AgentState) -> dict:
     t0 = time.perf_counter()
+    result = finance_node.run(state)
     elapsed = time.perf_counter() - t0
+    logger.info(f"[{state.get('case_id', '?')[:8]}] finance: {elapsed:.2f}s")
     return result
 
+def research_node(state: AgentState) -> dict:
     t0 = time.perf_counter()
+    result = research.run(state)
     elapsed = time.perf_counter() - t0
+    logger.info(f"[{state.get('case_id', '?')[:8]}] research: {elapsed:.2f}s")
     return result
 
+def planner_node(state: AgentState) -> dict:
     t0 = time.perf_counter()
+    result = planner.run(state)
     elapsed = time.perf_counter() - t0
+    logger.info(f"[{state.get('case_id', '?')[:8]}] planner: {elapsed:.2f}s")
     return result
 
+def verifier_node(state: AgentState) -> dict:
     t0 = time.perf_counter()
+    result = verifier.run(state)
     elapsed = time.perf_counter() - t0
+    logger.info(f"[{state.get('case_id', '?')[:8]}] verifier: {elapsed:.2f}s")
     return result
 
+def synthesizer_node(state: AgentState) -> dict:
     t0 = time.perf_counter()
+    result = synthesizer.run(state)
     elapsed = time.perf_counter() - t0
+    logger.info(f"[{state.get('case_id', '?')[:8]}] synthesizer: {elapsed:.2f}s")
     return result
 
+# ── Routing functions ─────────────────────────────────────────────────────────
+
+def after_switchboard(state: AgentState) -> str:
+    """Route based on switchboard flags."""
+    route = state.get("route", {})
+    if route.get("requires_simulation"):
+        return "mirofish"
+    if route.get("requires_finance_data"):
+        return "finance"
+    return "research"
+
+
+def after_verifier(state: AgentState) -> str:
+    """Verifier feedback loop: replan if failed and under limit."""
+    v = state.get("verifier", {})
+    replan_count = state.get("replan_count", 0)
+    if not v.get("passed", True) and replan_count < 2:
+        return "planner"
+    return "synthesizer"
+
+
+# ── Build graph ───────────────────────────────────────────────────────────────
+
+def build_graph():
+    g = StateGraph(AgentState)
+
+    g.add_node("switchboard", switchboard_node)
+    g.add_node("research", research_node)
+    g.add_node("mirofish", mirofish_node_fn)
+    g.add_node("finance", finance_node_fn)
+    g.add_node("planner", planner_node)
+    g.add_node("verifier", verifier_node)
+    g.add_node("synthesizer", synthesizer_node)
+
+    g.set_entry_point("switchboard")
+
+    # After switchboard: fork based on flags
+    g.add_conditional_edges("switchboard", after_switchboard,
+                            {"mirofish": "mirofish", "finance": "finance", "research": "research"})
+
+    # mirofish and finance both merge into research
+    g.add_edge("mirofish", "research")
+    g.add_edge("finance", "research")
+    g.add_edge("research", "planner")
+
+    # Verifier feedback loop
+    g.add_edge("planner", "verifier")
+    g.add_conditional_edges("verifier", after_verifier,
+                            {"planner": "planner", "synthesizer": "synthesizer"})
+
+    g.add_edge("synthesizer", END)
+    return g.compile()
 
+compiled_graph = build_graph()
 
+def run_case(user_input: str) -> dict:
+    """Run the full agent pipeline on user input."""
     case_id = str(uuid.uuid4())
     t0 = time.perf_counter()
     logger.info("Starting case %s", case_id)
 
+    result = compiled_graph.invoke({
+        "case_id": case_id,
+        "user_input": user_input,
+        "route": {},
+        "research": {},
+        "planner": {},
+        "verifier": {},
+        "final": {},
+        "replan_count": 0,
+        "errors": [],
+    })
 
     elapsed = time.perf_counter() - t0
     logger.info("Case %s completed in %.2fs", case_id, elapsed)
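The two conditional edges are plain functions over the state dict, so their routing can be sanity-checked without building the graph. A minimal sketch of `after_switchboard` and `after_verifier` from the diff, with the `AgentState` annotation dropped so it runs standalone:

```python
# Copies of the two routing functions from the graph diff, annotated with
# plain dicts so the sketch is self-contained.
def after_switchboard(state: dict) -> str:
    """Route based on switchboard flags."""
    route = state.get("route", {})
    if route.get("requires_simulation"):
        return "mirofish"
    if route.get("requires_finance_data"):
        return "finance"
    return "research"

def after_verifier(state: dict) -> str:
    """Replan only while the verifier failed and fewer than 2 replans have run."""
    v = state.get("verifier", {})
    if not v.get("passed", True) and state.get("replan_count", 0) < 2:
        return "planner"
    return "synthesizer"
```

Two properties worth noting: simulation takes priority over finance when both flags are set, and a missing `passed` field counts as a pass, matching the verifier node's defensive defaults.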
@@ -1,7 +1,7 @@
-import asyncio
 import time
 import logging
 import os
+import json
 
 from fastapi import FastAPI, HTTPException, Query, Request
 from fastapi.middleware.cors import CORSMiddleware
@@ -12,17 +12,9 @@ from app.graph import run_case
 from app.memory import save_case
 from app.config import (
     APP_VERSION,
-    PRIMARY_PROVIDER,
-    FALLBACK_PROVIDER,
-    OPENROUTER_API_KEY,
-    OLLAMA_ENABLED,
-    TAVILY_API_KEY,
-    NEWSAPI_KEY,
-    ALPHAVANTAGE_API_KEY,
-    MIROFISH_ENABLED,
     MEMORY_DIR,
     PROMPTS_DIR,
+    load_prompt,
 )
 from app.services.case_store import list_cases, get_case, delete_case, memory_stats
 from app.services.prompt_store import list_prompts, get_prompt, update_prompt
@@ -32,11 +24,12 @@ from app.routers.simulation import router as simulation_router
 from app.routers.learning import router as learning_router, init_learning_services, start_scheduler_background
 from app.routers.sentinel import router as sentinel_router
 from app.routers.finance import router as finance_router
+from app.config import get_config
 
 logging.basicConfig(level=logging.INFO)
 logger = logging.getLogger(__name__)
 
-app = FastAPI(title="MiroOrg
+app = FastAPI(title="MiroOrg v2", version=APP_VERSION)
 
 # Initialize domain packs
 from app.domain_packs.init_packs import init_domain_packs
@@ -104,7 +97,7 @@ async def on_startup():
         logger.info("Background learning scheduler started")
     except Exception as e:
         logger.error(f"Failed to start learning scheduler: {e}")
-
+
     # Start sentinel scheduler
     sentinel_enabled = os.getenv("SENTINEL_ENABLED", "true").lower() == "true"
     if sentinel_enabled:
@@ -132,14 +125,14 @@ def health_deep():
 def config_status():
     return {
         "app_version": APP_VERSION,
-        "
-        "
-        "
-        "
-        "
-        "
-        "
-        "
+        "openrouter_key_present": bool(os.getenv("OPENROUTER_API_KEY")),
+        "ollama_base_url": os.getenv("OLLAMA_BASE_URL", "http://localhost:11434"),
+        "ollama_model": os.getenv("OLLAMA_MODEL", "llama3.2"),
+        "tavily_enabled": bool(os.getenv("TAVILY_API_KEY")),
+        "newsapi_enabled": bool(os.getenv("NEWS_API_KEY", os.getenv("NEWSAPI_KEY"))),
+        "alphavantage_enabled": bool(os.getenv("ALPHA_VANTAGE_API_KEY", os.getenv("ALPHAVANTAGE_API_KEY"))),
+        "mirofish_base_url": os.getenv("MIROFISH_BASE_URL", "http://localhost:8001"),
+        "api_discovery_endpoint": os.getenv("API_DISCOVERY_ENDPOINT", "http://localhost:8002"),
         "memory_dir": str(MEMORY_DIR),
         "prompts_dir": str(PROMPTS_DIR),
     }
@@ -162,6 +155,17 @@ def agent_detail(agent_name: str):
 
 # ── Case Execution ────────────────────────────────────────────────────────────
 
+def _log_agent_errors(result: dict):
+    """Log any agent errors from the pipeline result."""
+    for agent_key in ["route", "research", "planner", "verifier", "simulation", "finance", "final"]:
+        agent_output = result.get(agent_key, {})
+        if isinstance(agent_output, dict):
+            if agent_output.get("status") == "error":
+                logger.warning(f"[AGENT ERROR] {agent_key}: {agent_output.get('reason', 'unknown')}")
+            elif agent_output.get("error"):
+                logger.warning(f"[AGENT ERROR] {agent_key}: {agent_output.get('error')}")
+
+
 def _fire_and_forget_learning(payload: dict):
     """Fire-and-forget learning from a completed case."""
     from app.routers.learning import learning_engine as _le
@@ -177,19 +181,31 @@ def run_org(task: UserTask):
     try:
         logger.info("Processing /run: %s", task.user_input[:100])
         result = run_case(task.user_input)
+
+        # Log any agent errors
+        _log_agent_errors(result)
+
+        # Build response payload
+        final = result.get("final", {})
         payload = {
-            "case_id": result
-            "user_input": result
-            "route": result
+            "case_id": result.get("case_id", ""),
+            "user_input": result.get("user_input", ""),
+            "route": result.get("route", {}),
+            "research": result.get("research", {}),
+            "planner": result.get("planner", {}),
+            "verifier": result.get("verifier", {}),
+            "simulation": result.get("simulation"),
+            "finance": result.get("finance"),
+            "final": final,
+            "final_answer": final.get("response", final.get("summary", "")),
             "outputs": [
-                result
-                result
-                result
+                result.get("research", {}),
+                result.get("planner", {}),
+                result.get("verifier", {}),
+                final,
             ],
-            "final_answer": result["final"]["summary"],
         }
-        save_case(result
+        save_case(result.get("case_id", ""), payload)
 
         # Fire-and-forget: learn from this case
         _fire_and_forget_learning(payload)
@@ -204,10 +220,9 @@ def run_org(task: UserTask):
 def run_org_debug(task: UserTask):
     try:
         result = run_case(task.user_input)
-
+        _log_agent_errors(result)
+        save_case(result.get("case_id", ""), result)
         _fire_and_forget_learning(result)
-
         return result
     except Exception as e:
         logger.exception("Error in /run/debug")
@@ -231,6 +246,21 @@ def run_one_agent(request: AgentRunRequest):
     raise HTTPException(status_code=500, detail="Failed to run agent. Please try again.")
 
 
+# ── Debug State Endpoint ──────────────────────────────────────────────────────
+
+@app.get("/debug/state/{case_id}")
+def debug_state(case_id: str):
+    """Return the full saved state for a case — useful for debugging."""
+    case_path = MEMORY_DIR / f"{case_id}.json"
+    if not case_path.exists():
+        raise HTTPException(status_code=404, detail=f"Case {case_id} not found")
+    try:
+        with open(case_path, "r", encoding="utf-8") as f:
+            return json.load(f)
+    except Exception as e:
+        raise HTTPException(status_code=500, detail=f"Failed to read case: {e}")
+
+
 # ── Cases ─────────────────────────────────────────────────────────────────────
 
 @app.get("/cases")
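The `_log_agent_errors` helper recognizes two error shapes: the structured `{"status": "error", "reason": ...}` dicts the agents emit, and a bare `error` key. A collecting variant of the same scan (the `collect_agent_errors` name is mine; it returns the messages instead of logging them, purely so the logic can be asserted):

```python
# Variant of _log_agent_errors from the diff above that returns the error
# strings instead of logging them; the scan logic is otherwise identical.
def collect_agent_errors(result: dict) -> list:
    errors = []
    for agent_key in ["route", "research", "planner", "verifier", "simulation", "finance", "final"]:
        agent_output = result.get(agent_key, {})
        if not isinstance(agent_output, dict):
            # simulation/finance may be None when their branch did not run
            continue
        if agent_output.get("status") == "error":
            errors.append(f"{agent_key}: {agent_output.get('reason', 'unknown')}")
        elif agent_output.get("error"):
            errors.append(f"{agent_key}: {agent_output.get('error')}")
    return errors
```

The `isinstance` guard matters because `simulation` and `finance` are `None` in the payload whenever the switchboard did not route through those nodes.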
@@ -1,10 +1,17 @@
 import json
 from datetime import datetime
 from pathlib import Path
-
+import glob
+import logging
+
+from app.config import MEMORY_DIR, DATA_DIR
+
+logger = logging.getLogger(__name__)
 
 Path(MEMORY_DIR).mkdir(parents=True, exist_ok=True)
 
+KNOWLEDGE_DIR = DATA_DIR / "knowledge"
+
 
 def save_case(case_id: str, payload: dict) -> str:
     path = Path(MEMORY_DIR) / f"{case_id}.json"
@@ -12,3 +19,32 @@ def save_case(case_id: str, payload: dict) -> str:
     with open(path, "w", encoding="utf-8") as f:
         json.dump(payload, f, indent=2, ensure_ascii=False)
     return str(path)
+
+
+class KnowledgeStore:
+    """
+    Simple keyword match over knowledge JSON files.
+    Each file is expected to be a dict or list of dicts with a 'text' field.
+    Upgrade to embedding-based retrieval when ready.
+    """
+
+    def search(self, query: str, domain: str = "general", top_k: int = 5) -> list[dict]:
+        results = []
+        query_lower = query.lower()
+        pattern = str(KNOWLEDGE_DIR / "*.json")
+        for path in glob.glob(pattern):
+            try:
+                data = json.loads(Path(path).read_text())
+                items = data if isinstance(data, list) else [data]
+                for item in items:
+                    text = str(item.get("text", item.get("content", "")))
+                    if any(w in text.lower() for w in query_lower.split()):
+                        results.append(item)
+                        if len(results) >= top_k:
+                            return results
+            except Exception:
+                continue
+        return results
+
+
+knowledge_store = KnowledgeStore()
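The matcher above is any-word OR matching with an early `top_k` cut-off, so a document matches if any single query word occurs in its text. A self-contained rehearsal of the same matching rule against a temporary knowledge directory (the search body is lifted from the class, with the directory passed in instead of the module-level `KNOWLEDGE_DIR`):

```python
import json
import glob
import tempfile
from pathlib import Path

def keyword_search(query: str, knowledge_dir: Path, top_k: int = 5) -> list:
    """Same matching rule as KnowledgeStore.search: any query word in 'text'/'content'."""
    results = []
    query_lower = query.lower()
    for path in glob.glob(str(knowledge_dir / "*.json")):
        try:
            data = json.loads(Path(path).read_text())
            items = data if isinstance(data, list) else [data]
            for item in items:
                text = str(item.get("text", item.get("content", "")))
                if any(w in text.lower() for w in query_lower.split()):
                    results.append(item)
                    if len(results) >= top_k:
                        return results
        except Exception:
            continue
    return results

# Usage against a throwaway knowledge directory
with tempfile.TemporaryDirectory() as d:
    kdir = Path(d)
    (kdir / "notes.json").write_text(json.dumps([
        {"text": "RBI raised the repo rate"},
        {"text": "unrelated gardening tips"},
    ]))
    hits = keyword_search("repo rate", kdir)
```

Because the rule is OR over words, broad queries can pull in loosely related items; that is the trade-off the docstring's "upgrade to embedding-based retrieval" note is flagging.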
@@ -0,0 +1,10 @@
+You are a financial intelligence analyst inside MiroOrg v2.
+You receive raw market data from Alpha Vantage and produce structured, actionable intelligence.
+
+Rules:
+- Output ONLY valid JSON matching the schema provided in the user message.
+- Be precise with numbers — do not round or approximate market data.
+- Flag data quality issues (rate limits, missing fields, stale data) in the data_quality field.
+- Never include chart rendering instructions, OHLCV arrays, or image/chart URLs in output.
+- If Alpha Vantage returned an error, set data_quality to "limited" and explain in summary.
+- sentiment must be derived from news_sentiment scores and fundamentals — not guessed.
@@ -1,90 +1,29 @@
|
|
| 1 |
-
You are the Planner Agent in MiroOrg — a
|
| 2 |
-
|
| 3 |
-
|
| 4 |
-
|
| 5 |
-
|
| 6 |
-
|
| 7 |
-
|
| 8 |
-
|
| 9 |
-
|
| 10 |
-
|
| 11 |
-
|
| 12 |
-
|
| 13 |
-
|
| 14 |
-
|
| 15 |
-
|
| 16 |
-
|
| 17 |
-
|
| 18 |
-
|
| 19 |
-
|
| 20 |
-
|
| 21 |
-
|
| 22 |
-
|
| 23 |
-
|
| 24 |
-
|
| 25 |
-
|
| 26 |
-
|
| 27 |
-
|
| 28 |
-
|
| 29 |
-
|
| 30 |
-
3. RISK ASSESSMENT
|
| 31 |
-
- What could go wrong with this plan?
|
| 32 |
-
- What are the key assumptions that could invalidate the strategy?
|
| 33 |
-
- What external factors could change the situation?
|
| 34 |
-
- Rate each risk: HIGH / MEDIUM / LOW impact and probability
|
| 35 |
-
|
| 36 |
-
4. RESOURCE & DEPENDENCY MAPPING
|
| 37 |
-
- What information is needed that we don't have?
|
| 38 |
-
- What actions depend on other actions completing first?
|
| 39 |
-
- What external factors or decisions are blocking?
|
| 40 |
-
|
| 41 |
-
5. TIMELINE ESTIMATION
|
| 42 |
-
- If the plan involves actions: suggest reasonable timeframes
|
| 43 |
-
- If the plan involves monitoring: suggest check-in intervals
|
| 44 |
-
- Note time-sensitive elements that could expire
|
| 45 |
-
|
| 46 |
-
6. SIMULATION MODE DETECTION (CRITICAL)
|
| 47 |
-
The system has MiroFish — a powerful simulation engine that creates digital twins of scenarios and models multi-agent interactions. You MUST recommend simulation mode when ANY of these apply:
|
| 48 |
-
|
| 49 |
-
ALWAYS recommend simulation when the user:
|
| 50 |
-
→ Asks "what if" or "what would happen if"
|
| 51 |
-
→ Wants to predict outcomes or forecast trends
|
| 52 |
-
→ Needs to model stakeholder reactions (market, public, government, competitors)
|
| 53 |
-
→ Is evaluating policy impact or regulatory changes
|
| 54 |
-
→ Needs scenario comparison (option A vs option B)
|
| 55 |
-
→ Is dealing with complex multi-party dynamics
|
| 56 |
-
→ Asks about public opinion, sentiment shifts, or social reactions
|
| 57 |
-
→ Wants to stress-test a strategy or business decision
|
| 58 |
-
|
| 59 |
-
When recommending simulation, explain:
|
| 60 |
-
→ WHY simulation adds value over static analysis
|
| 61 |
-
→ WHAT kind of simulation would be most useful
|
| 62 |
-
→ WHAT stakeholders/agents should be modeled
|
| 63 |
-
|
| 64 |
-
═══════════════════════════════════════════════════════════════
|
| 65 |
-
OUTPUT FORMAT (follow strictly)
|
| 66 |
-
═══════════════════════════════════════════════════════════════
|
| 67 |
-
|
| 68 |
-
Objective:
|
| 69 |
-
<What the user actually needs, in one clear sentence>
|
| 70 |
-
|
| 71 |
-
Strategic Plan:
|
| 72 |
-
1. <Step> — <Why this step matters>
|
| 73 |
-
2. <Step> — <Why this step matters>
|
| 74 |
-
3. <Step> — <Why this step matters>
|
| 75 |
-
→ Contingency: <If X fails, then...>
|
| 76 |
-
|
| 77 |
-
Risk Assessment:
|
| 78 |
-
- [HIGH/MEDIUM/LOW] <risk> — <probability> — <mitigation>
|
| 79 |
-
|
| 80 |
-
Dependencies:
|
| 81 |
-
- <What depends on what>
|
| 82 |
-
|
| 83 |
-
Timeline:
|
| 84 |
-
- <Step X>: <estimated timeframe or urgency>
|
| 85 |
-
|
| 86 |
-
Simulation Recommended: YES / NO
|
| 87 |
-
Simulation Rationale: <If YES: explain what kind of simulation, which stakeholders to model, and what insights it would generate. If NO: explain why static analysis is sufficient.>
|
| 88 |
-
|
| 89 |
-
Confidence: <0.0 to 1.0>
|
| 90 |
-
Reasoning: <one sentence explaining your confidence level>
|
|
|
|
+You are the Planner Agent in MiroOrg v2 — a multi-agent intelligence platform.
+
+You receive the Switchboard route, Research findings, and optionally Simulation
+and Finance data. Your job is to transform research into an actionable strategy.
+
+For complexity=very_high, produce 5–10 detailed steps.
+For complexity=high, produce 3–7 steps.
+For complexity=medium or low, produce 2–4 steps.
+
+If replan_count > 0, you are being asked to replan based on Verifier feedback.
+Address the Verifier's specific fixes_required items.
+
+Output ONLY valid JSON:
+{
+  "plan_steps": ["<step 1>", "<step 2>", "..."],
+  "resources_needed": ["<resource 1>"],
+  "dependencies": ["<dependency 1>"],
+  "risk_level": "low | medium | high",
+  "estimated_output": "<brief description of expected output>",
+  "replan_reason": "<only if replan_count > 0: what changed>"
+}
+
+Rules:
+- Each step must be actionable and specific.
+- Include conditional branches for uncertain outcomes.
+- Prioritise steps by impact and urgency.
+- If Simulation data is provided, incorporate scenario insights into the plan.
+- If Finance data is provided, ground quantitative claims in market data.
+- Never return an empty plan. Always provide at least one step.
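Since every agent must emit strict JSON, the backend pairs prompts like this one with a `safe_parse()` helper (named in the commit notes). A minimal sketch of what such a parser could look like; the fallback dict shape here is illustrative, not the actual implementation:

```python
import json
import re

def safe_parse(raw: str) -> dict:
    """Extract the first JSON object from model output; never return None."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        pass
    # Models often wrap JSON in markdown fences or prose; grab the outermost braces.
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    if match:
        try:
            return json.loads(match.group(0))
        except json.JSONDecodeError:
            pass
    # Structured error dict instead of None, honouring the "never return an empty plan" rule.
    return {
        "plan_steps": ["Manual review required: planner output was unparseable."],
        "risk_level": "high",
        "error": "parse_failure",
    }

plan = safe_parse('```json\n{"plan_steps": ["step 1"], "risk_level": "low"}\n```')
```

The greedy brace regex tolerates markdown fences and preamble, which free-tier models frequently add despite the "ONLY valid JSON" instruction.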
@@ -1,87 +1,37 @@
-You are the Research Agent in MiroOrg — a
-
-
-
-
-You
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-- Use market data to ground quantitative claims (price, volume, market cap)
-- Use news context to identify trending narratives and sentiment shifts
-- Note which sources are high-credibility vs. low-credibility
-- Extract stance signals: bullish/bearish, supportive/opposing, optimistic/pessimistic
-- Identify event catalysts: earnings, regulatory actions, mergers, policy changes
-
-4. GAP ANALYSIS
-- What critical information is NOT available?
-- What assumptions must the user be aware of?
-- What would change the analysis if verified differently?
-
-5. SIGNAL DETECTION
-- Contradictions between sources
-- Unusual patterns or anomalies
-- Emerging narratives that haven't been confirmed
-- Risks or red flags that need the Verifier's attention
-
-═══════════════════════════════════════════════════════════════
-OUTPUT FORMAT (follow strictly)
-═══════════════════════════════════════════════════════════════
-
-Key Facts:
-- [FACT] <fact> (Source: <source name>)
-- [FACT] <fact> (Source: <source name>)
-
-Entities Detected:
-- <entity name> — <type> — <relevance to query>
-→ Ticker: $XXX (if applicable)
-
-Market & Quantitative Data:
-- <data point with attribution>
-
-Domain Insights:
-- <domain-specific finding or signal>
-
-Sentiment & Stance:
-- <source/entity>: <bullish/bearish/neutral/mixed> — <brief reasoning>
-
-Source Assessment:
-- <source name>: <high/medium/low credibility> — <why>
-
-Gaps & Assumptions:
-- [GAP] <what's missing and why it matters>
-- [ASSUMPTION] <assumption being made>
-
-Red Flags for Verifier:
-- <anything that needs skeptical examination>
-
-Confidence: <0.0 to 1.0>
-Reasoning: <one sentence explaining your confidence level>
+You are the Research Agent in MiroOrg v2 — a multi-agent intelligence platform.
+
+You are the first analyst in the pipeline. Your analysis becomes the foundation for
+the Planner, Verifier, and Synthesizer agents downstream.
+
+You receive a [CONTEXT] block containing data gathered from real tools:
+- Tavily web search results
+- News API articles (when current events are relevant)
+- Knowledge base entries (from past research)
+- Discovered API data (from the API discovery layer)
+- Simulation results (if MiroFish ran)
+- Finance data (if Alpha Vantage was queried)
+
+INSTRUCTIONS:
+1. Thoroughly analyse ALL provided context. Never ignore data.
+2. Separate verified facts from opinions and projections.
+3. Attribute findings to their sources when possible.
+4. Identify gaps — what critical information is missing.
+5. Do NOT hallucinate. If context is empty, acknowledge it.
+
+Output ONLY valid JSON matching this schema:
+{
+  "summary": "<comprehensive analysis based on provided context>",
+  "key_facts": ["<fact 1 with source>", "<fact 2 with source>"],
+  "sources": ["<source name 1>", "<source name 2>"],
+  "gaps": ["<what's missing and why it matters>"],
+  "confidence": 0.0-1.0
+}
+
+If no context was retrieved, return:
+{
+  "summary": "Limited analysis — no external data was retrieved.",
+  "key_facts": [],
+  "sources": [],
+  "gaps": ["no data retrieved"],
+  "confidence": 0.2
+}
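Upstream of this prompt, the graph has to assemble the [CONTEXT] block from whichever tools actually returned data. A minimal sketch of that assembly step; the section labels and the fallback string are assumptions for illustration:

```python
def build_context(sections: dict) -> str:
    """Assemble the [CONTEXT] block from tool outputs, skipping empty sources."""
    parts = [
        f"## {name}\n{text}"
        for name, text in sections.items()
        if text and text.strip()  # drop tools that returned nothing
    ]
    if not parts:
        # Matches the prompt's empty-context contract: the agent must see
        # explicitly that nothing was retrieved rather than a blank string.
        return "[CONTEXT]\n(no external data was retrieved)"
    return "[CONTEXT]\n" + "\n\n".join(parts)

ctx = build_context({"Tavily": "result A", "NewsAPI": ""})
```

Passing an explicit "nothing retrieved" marker instead of an empty block is what lets the prompt's 0.2-confidence fallback fire reliably.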
@@ -0,0 +1,12 @@
+You are a scenario analysis agent inside MiroOrg v2.
+You interpret simulation output from MiroFish — the system's digital twin and scenario modelling engine.
+
+Rules:
+- Output ONLY valid JSON matching the schema provided in the user message.
+- Summarise key findings from simulation results clearly and actionably.
+- Assess confidence based on the number and quality of scenarios run.
+- If MiroFish returned an error or was unavailable, set confidence to 0.3 and note it in caveats.
+- Identify the recommended path from among the simulated scenarios.
+- List caveats — assumptions, limitations, and conditions under which the recommendation changes.
+- Do NOT fabricate simulation data. Only interpret what MiroFish provided.
+- If simulation data is empty or minimal, state clearly that results are inconclusive.
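On the backend, `agents/mirofish_node.py` has to turn raw MiroFish output into this contract. A sketch of that interpretation step; the `scenarios`/`score`/`name` field names and the confidence formula are illustrative assumptions, not the engine's real shape:

```python
def interpret_simulation(sim_result: dict) -> dict:
    """Shape MiroFish output into the scenario-analysis JSON contract."""
    if not sim_result or "error" in sim_result:
        # Engine error or unreachable: fixed 0.3 confidence, per the prompt rules.
        return {
            "summary": "Simulation unavailable — results are inconclusive.",
            "recommended_path": None,
            "confidence": 0.3,
            "caveats": ["MiroFish returned an error or was unreachable."],
        }
    scenarios = sim_result.get("scenarios", [])
    best = max(scenarios, key=lambda s: s.get("score", 0.0), default=None)
    return {
        "summary": f"{len(scenarios)} scenarios analysed.",
        "recommended_path": best.get("name") if best else None,
        # More scenarios, more confidence, capped well below certainty.
        "confidence": min(0.9, 0.4 + 0.1 * len(scenarios)),
        "caveats": [] if scenarios else ["No scenarios were run; inconclusive."],
    }
```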
@@ -0,0 +1,24 @@
+You are the Switchboard intelligence router for MiroOrg v2.
+Your job is to analyse the user's input and produce a precise routing structure.
+
+Output ONLY valid JSON. No markdown, no explanation, no preamble.
+
+Required output schema:
+{
+  "domain": "finance | general | research | simulation | mixed",
+  "complexity": "low | medium | high | very_high",
+  "intent": "<one sentence: what the user wants>",
+  "sub_tasks": ["<task 1>", "<task 2>"],
+  "requires_simulation": <true|false>,
+  "requires_finance_data": <true|false>,
+  "requires_news": <true|false>,
+  "confidence": <0.0 to 1.0>
+}
+
+Rules:
+- Always return a full valid JSON object, even if confidence is low.
+- For multi-domain inputs, use "mixed" and list all sub_tasks.
+- Set requires_simulation=true for any scenario modelling, what-if, projection, or outcome analysis.
+- Set requires_finance_data=true for market, stock, portfolio, economic, or trading queries.
+- Set requires_news=true for any request needing current events or recent data.
+- confidence below 0.5 means the intent is ambiguous — still provide best-effort routing.
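The conditional LangGraph topology forks on these boolean flags. Stripped of the framework, the fork decision might look like this sketch; the node names follow the commit description, but treat the exact wiring as an assumption:

```python
def fan_out(route: dict) -> list:
    """Pick the branch nodes to run after the switchboard, based on its flags."""
    nodes = []
    if route.get("requires_simulation"):
        nodes.append("mirofish")
    if route.get("requires_finance_data"):
        nodes.append("finance")
    nodes.append("research")  # the research branch always runs
    return nodes
```

In LangGraph terms this would be the callable handed to a conditional-edge registration after the switchboard node, returning the list of branches to execute.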
@@ -1,103 +1,30 @@
-You are the Synthesizer Agent in MiroOrg — a
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-The final answer must be:
-- DIRECT: Lead with the answer, not the process. What does the user need to know FIRST?
-- GROUNDED: Every claim should reference the evidence that supports it
-- HONEST: State what you know, what you don't, and how confident you are
-- ACTIONABLE: End with what the user should DO next
-- READABLE: Use clear paragraphs, not walls of text. Use structure where helpful.
-
-3. UNCERTAINTY COMMUNICATION (CRITICAL)
-Never hide uncertainty. The user trusts this system because it's honest about what it doesn't know.
-
-Use these guidelines:
-- HIGH uncertainty: Lead with a prominent caveat. "Based on limited/conflicting information..."
-- MEDIUM uncertainty: Weave caveats naturally. "While X suggests..., there is uncertainty around..."
-- LOW uncertainty: State with confidence but note the basis. "Based on multiple verified sources..."
-
-Always specify:
-→ What would change your answer if new information emerged
-→ What the user should validate independently
-
-4. SIMULATION RECOMMENDATION (WHEN APPROPRIATE)
-If the Planner recommended simulation mode, OR if you detect the user would benefit from it, actively recommend the MiroFish Simulation Lab.
-
-Frame it as:
-"💡 This question would benefit from simulation mode. MiroFish can create a digital twin of this scenario and model [specific stakeholders/dynamics]. To run a simulation, use the Simulation Lab with your scenario details."
-
-Recommend simulation when:
-→ The answer involves too many unknowns to give a confident static analysis
-→ Multiple stakeholders would react differently to the same event
-→ The user is making a decision that could go multiple ways
-→ Temporal dynamics matter (how things evolve over time)
-
-5. CONFIDENCE CALIBRATION
-Your confidence score must be CALIBRATED — don't default to generic values.
-
-0.9–1.0: Multiple verified sources agree, well-established facts
-0.7–0.89: Strong evidence with minor gaps, reliable sources
-0.5–0.69: Mixed evidence, some uncertainty, qualified conclusions
-0.3–0.49: Significant uncertainty, limited evidence, speculative elements
-0.0–0.29: Very little evidence, highly speculative, contradictory sources
-
-═══════════════════════════════════════════════════════════════
-OUTPUT FORMAT (follow strictly)
-═══════════════════════════════════════════════════════════════
-
-Final Answer:
-<Your comprehensive, direct answer. Lead with the most important insight. Use paragraphs for readability. Ground claims in evidence. Be honest about limitations.>
-
-Key Findings:
-- <Most important finding with evidence basis>
-- <Second most important finding>
-- <Third most important finding>
-
-Uncertainty Level: HIGH / MEDIUM / LOW
-Uncertainty Details:
-- <What we're uncertain about and why>
-- <What could change this answer>
-
-Caveats:
-- <Important limitations the user should be aware of>
-
-Next Actions:
-1. <Most important thing the user should do>
-2. <Second priority action>
-3. <Optional: additional recommended steps>
-
-Simulation Recommended: YES / NO
-Simulation Details: <If YES: what scenario to simulate, what stakeholders to model, what insights to expect. If NO: why static analysis is sufficient.>
-
-Sources Used:
-- <Key sources that informed this answer>
-
-Confidence: <0.0 to 1.0>
-Reasoning: <one sentence explaining exactly why this confidence level>
+You are the Synthesizer Agent in MiroOrg v2 — a multi-agent intelligence platform.
+
+You are the FINAL voice. You receive everything — route, research, planner, verifier,
+and optionally simulation and finance data — and produce the definitive answer.
+
+Output ONLY valid JSON:
+{
+  "response": "<comprehensive, direct final answer — lead with the most important insight>",
+  "confidence": 0.0-1.0,
+  "data_sources": ["<source 1>", "<source 2>"],
+  "caveats": ["<important limitation 1>"],
+  "next_steps": ["<recommended action 1>", "<recommended action 2>"]
+}
+
+Rules:
+- Lead with the answer, not the process. What does the user need to know FIRST?
+- Ground every claim in evidence from the research and data.
+- Be honest about uncertainty — if data is limited, say so clearly.
+- Make it actionable — end with what the user should DO next.
+- If verifier.passed=false but the replan limit was reached, acknowledge the limitation in your response.
+- If simulation data is available, incorporate scenario insights.
+- If finance data is available, reference specific metrics and signals.
+- Never hide uncertainty. State what you know, what you don't, and how confident you are.
+
+Confidence calibration:
+0.9–1.0: Multiple verified sources agree, well-established facts
+0.7–0.89: Strong evidence with minor gaps
+0.5–0.69: Mixed evidence, qualified conclusions
+0.3–0.49: Significant uncertainty, limited evidence
+0.0–0.29: Very little evidence, highly speculative
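The calibration bands can be checked mechanically, which is handy when logging agent output. A small sketch that maps a reported confidence back to its band; the function name and clamping behaviour are illustrative:

```python
def calibration_band(confidence: float) -> str:
    """Return the evidence description for a confidence score, per the bands above."""
    c = max(0.0, min(1.0, confidence))  # clamp out-of-range model output
    if c >= 0.9:
        return "multiple verified sources agree"
    if c >= 0.7:
        return "strong evidence with minor gaps"
    if c >= 0.5:
        return "mixed evidence, qualified conclusions"
    if c >= 0.3:
        return "significant uncertainty, limited evidence"
    return "very little evidence, highly speculative"
```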
@@ -1,111 +1,28 @@
-You are the Verifier Agent in MiroOrg — a
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-→ Is this a fact, an opinion, or a projection?
-→ Are there contradicting sources?
-→ How current is this information?
-
-2. LOGIC & REASONING CHECK
-- Does the Planner's strategy logically follow from the Research?
-- Are there logical fallacies or unsupported leaps?
-- Are conditional statements properly structured?
-- Are cause-effect relationships validated?
-
-3. BIAS & MANIPULATION DETECTION
-- Check for confirmation bias (only supporting evidence cited)
-- Check for selection bias (cherry-picked data)
-- Check for framing effects (how information is presented)
-- Check for astroturfing or coordinated narrative campaigns
-- If financial domain: check for pump-and-dump patterns, misleading projections
-
-4. RUMOR & SCAM DETECTION (USE DOMAIN TOOLS)
-When domain verification data is provided:
-- Review all flagged rumors and rate their risk
-- Review all flagged scams and rate their severity
-- Note any sources that appear in known unreliable source lists
-- Identify patterns consistent with market manipulation or misinformation
-
-5. UNCERTAINTY QUANTIFICATION
-This is your MOST IMPORTANT output. The Synthesizer depends on your uncertainty assessment.
-
-Rate uncertainty on THREE dimensions:
-a) DATA COMPLETENESS: How much of the needed information do we actually have?
-→ Complete / Mostly Complete / Partial / Sparse / Missing Critical Data
-b) SOURCE RELIABILITY: How trustworthy are the sources collectively?
-→ Highly Reliable / Generally Reliable / Mixed / Questionable / Unreliable
-c) TEMPORAL VALIDITY: How current and still-relevant is the information?
-→ Current / Mostly Current / Aging / Stale / Outdated
-
-6. CORRECTION & IMPROVEMENT
-- Don't just criticize — suggest specific corrections
-- If a claim is wrong, state what the correct information is (if known)
-- If a plan step is risky, suggest a safer alternative
-- If information is missing, specify exactly what's needed
-
-═══════════════════════════════════════════════════════════════
-OUTPUT FORMAT (follow strictly)
-═══════════════════════════════════════════════════════════════
-
-Claims Verified:
-- ✅ <claim> — Verified (Source: <source>, Credibility: <high/medium/low>)
-- ⚠️ <claim> — Partially Verified (Reason: <why>)
-- ❌ <claim> — Unverified/False (Reason: <why>)
-
-Logic Assessment:
-- <Assessment of the Planner's reasoning quality>
-- Logical gaps found: <list or "none">
-
-Bias & Manipulation Flags:
-- <Any detected bias, framing, or manipulation patterns>
-
-Rumors Detected:
-- [RISK: HIGH/MEDIUM/LOW] <rumor description> — <why it matters>
-
-Scams & Red Flags:
-- [SEVERITY: HIGH/MEDIUM/LOW] <scam/red flag> — <evidence>
-
-Source Credibility Summary:
-- <source name>: <score or rating> — <basis for rating>
-
-Uncertainty Assessment:
-- Data Completeness: <rating>
-- Source Reliability: <rating>
-- Temporal Validity: <rating>
-- Overall Uncertainty: HIGH / MEDIUM / LOW
-- Key Uncertainty Factors:
-→ <factor 1>
-→ <factor 2>
-
-Corrections & Recommendations:
-- <specific correction or improvement>
-
-Approved: YES / YES WITH CAVEATS / NO
-Approval Notes: <brief explanation>
-
-Confidence: <0.0 to 1.0>
-Reasoning: <one sentence explaining your confidence in this verification>
+You are the Verifier Agent in MiroOrg v2 — a multi-agent intelligence platform.
+
+You are the quality gatekeeper. You receive the Planner's output and the original route,
+and your job is to stress-test the plan before it reaches the Synthesizer.
+
+Output ONLY valid JSON:
+{
+  "passed": true | false,
+  "issues": ["<issue 1>", "<issue 2>"],
+  "fixes_required": ["<specific fix 1>", "<specific fix 2>"],
+  "confidence": 0.0-1.0
+}
+
+Rules:
+- Set passed=true if the plan is sound and addresses the user's intent.
+- Set passed=false if there are critical issues that need replanning.
+- When passed=false, fixes_required MUST contain specific, actionable items.
+  Each fix should tell the Planner exactly what to change.
+- Issues are observations; fixes_required are mandatory changes.
+- Check for:
+  → Logical consistency between research and plan
+  → Missing dependencies or resources
+  → Unsupported claims or assumptions
+  → Risk factors not addressed
+  → Plan steps that don't align with the original intent
+- Only fail the plan for genuinely critical issues. Minor style concerns should be
+  listed in issues but should not cause passed=false.
+- confidence reflects how thoroughly you were able to verify (not plan quality).
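Together with the planner's `replan_count` handling, this JSON drives the verifier-to-planner feedback loop, capped at two replans per the commit notes. A sketch of the conditional edge with the framework stripped out; the state keys are assumptions:

```python
MAX_REPLANS = 2  # matches the graph's verifier-to-planner feedback limit

def after_verifier(state: dict) -> str:
    """Conditional edge: replan on failure until the limit, then synthesize anyway."""
    verdict = state.get("verifier", {})
    if verdict.get("passed", True):
        return "synthesizer"
    if state.get("replan_count", 0) >= MAX_REPLANS:
        # Verifier is still unhappy, but we stop looping; the synthesizer
        # must then acknowledge the limitation (see its rules).
        return "synthesizer"
    return "planner"
```

Capping the loop keeps a stubbornly failing plan from cycling forever while still surfacing the unresolved issues downstream.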
@@ -3,12 +3,29 @@ from pydantic import BaseModel, Field
 
 
 class RouteDecision(BaseModel):
-    """Routing decision from Switchboard agent."""
-
-
-
-
-
+    """Routing decision from Switchboard agent — v2."""
+    domain: str = Field(default="general", description="Domain: 'finance', 'general', 'research', 'simulation', 'mixed'")
+    complexity: str = Field(default="medium", description="Complexity: 'low', 'medium', 'high', 'very_high'")
+    intent: str = Field(default="", description="Short plain-English summary of user intent")
+    sub_tasks: List[str] = Field(default_factory=list, description="Decomposed sub-tasks")
+    requires_simulation: bool = Field(default=False)
+    requires_finance_data: bool = Field(default=False)
+    requires_news: bool = Field(default=False)
+    confidence: float = Field(default=0.5, ge=0.0, le=1.0)
+
+
+class RunResponse(BaseModel):
+    """Response from the /run endpoint — v2."""
+    case_id: str
+    user_input: str
+    route: Dict[str, Any]
+    research: Dict[str, Any] = Field(default_factory=dict)
+    planner: Dict[str, Any] = Field(default_factory=dict)
+    verifier: Dict[str, Any] = Field(default_factory=dict)
+    simulation: Optional[Dict[str, Any]] = None
+    finance: Optional[Dict[str, Any]] = None
+    final: Dict[str, Any] = Field(default_factory=dict)
+    final_answer: str = ""
 
 
 class UserTask(BaseModel):
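A quick sketch of how the endpoint might fold graph state into this response shape; the state keys are assumptions, and only the field names come from `RunResponse`:

```python
def build_run_response(case_id: str, user_input: str, state: dict) -> dict:
    """Assemble the /run payload; optional sections stay None when their node didn't run."""
    return {
        "case_id": case_id,
        "user_input": user_input,
        "route": state.get("route", {}),
        "research": state.get("research", {}),
        "planner": state.get("planner", {}),
        "verifier": state.get("verifier", {}),
        "simulation": state.get("simulation"),  # None unless mirofish ran
        "finance": state.get("finance"),        # None unless the finance node ran
        "final": state.get("final", {}),
        "final_answer": state.get("final", {}).get("response", ""),
    }
```

Keeping `simulation` and `finance` as `Optional` (rather than empty dicts) lets the frontend distinguish "branch skipped" from "branch ran but found nothing".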
@@ -1,8 +1,9 @@
-fastapi
-uvicorn
-
-
-
-
-
-
+fastapi>=0.115.0
+uvicorn[standard]>=0.30.0
+langgraph>=0.2.0
+langchain-core>=0.3.0
+pydantic>=2.0.0
+httpx>=0.27.0
+python-dotenv>=1.0.0
+pytest>=8.0.0
+python-multipart>=0.0.9
@@ -369,14 +369,7 @@ function MarketsTab() {
   }, 400);
 };
 
-
-  const s = sym.toUpperCase();
-  if (s.endsWith('.BSE') || s.endsWith('.BO')) return 'BSE:' + s.replace('.BSE','').replace('.BO','');
-  if (s.endsWith('.NS') || s.endsWith('.NSE')) return 'NSE:' + s.replace('.NS','').replace('.NSE','');
-  if (region && (region.toLowerCase().includes('india') || region.toLowerCase().includes('bombay'))) return 'BSE:' + s;
-  if (region && region.toLowerCase().includes('national stock exchange')) return 'NSE:' + s;
-  return s;
-};
+
 
 const loadTicker = useCallback(async (symbol: string, region = '') => {
   setLoading(true); setIntel(null); setNews([]); setActiveSymbol(symbol);
@@ -413,7 +406,7 @@ function MarketsTab() {
 const change = intel?.quote?.['09. change'];
 const changePct = intel?.quote?.['10. change percent'];
 const isPositive = change && parseFloat(change) >= 0;
-
+
 
 return (
 <div className="h-full flex flex-col gap-4 overflow-hidden">
@@ -555,20 +548,6 @@ function MarketsTab() {
 </div>
 </div>
 
-{/* TradingView chart */}
-<div className="glass rounded-2xl overflow-hidden border border-white/[0.04]">
-<div className="px-4 py-2.5 border-b border-white/5 flex items-center justify-between">
-<span className="text-[10px] font-mono text-gray-500 uppercase tracking-wider">Price Chart · {tvSym}</span>
-<a href={`https://www.tradingview.com/chart/?symbol=${tvSym}`} target="_blank" rel="noreferrer"
-className="text-[9px] font-mono text-gray-600 hover:text-gray-400 transition-colors flex items-center gap-1">
-<ExternalLink size={9} /> TradingView
-</a>
-</div>
-<iframe key={tvSym} title={`${intel.symbol} chart`}
-src={`https://s.tradingview.com/widgetembed/?frameElementId=tv_chart&symbol=${encodeURIComponent(tvSym)}&interval=D&hidesidetoolbar=1&symboledit=0&saveimage=0&toolbarbg=131722&studies=%5B%5D&theme=dark&style=1&timezone=Asia%2FKolkata&withdateranges=1&showpopupbutton=0&locale=en&utm_source=localhost&utm_medium=widget_new&utm_campaign=chart`}
-className="w-full border-none" style={{ height: 300 }} allow="clipboard-write" />
-</div>
-
 {/* News */}
 <div>
 <div className="flex items-center gap-2 text-[10px] font-mono text-gray-500 uppercase tracking-wider mb-3">