MiroOrg v1.1 Architecture
System Overview
MiroOrg v1.1 is a five-layer AI intelligence system designed for single-user local deployment with autonomous learning capabilities.
Five-Layer Architecture
┌─────────────────────────────────────────────────────────────┐
│ Layer 5: Autonomous Knowledge Evolution                     │
│   - Knowledge ingestion from web/news                       │
│   - Experience learning from cases                          │
│   - Prompt evolution via A/B testing                        │
│   - Skill distillation from patterns                        │
│   - Trust & freshness management                            │
└─────────────────────────────────────────────────────────────┘
                              ↓
┌─────────────────────────────────────────────────────────────┐
│ Layer 4: Simulation Lab Integration                         │
│   - MiroFish adapter for scenario modeling                  │
│   - Case-simulation linking                                 │
│   - Simulation result synthesis                             │
└─────────────────────────────────────────────────────────────┘
                              ↓
┌─────────────────────────────────────────────────────────────┐
│ Layer 3: Agent Orchestration                                │
│   - Switchboard (4D routing)                                │
│   - Research (domain-enhanced)                              │
│   - Verifier (credibility scoring)                          │
│   - Planner (simulation-aware)                              │
│   - Synthesizer (uncertainty quantification)                │
└─────────────────────────────────────────────────────────────┘
                              ↓
┌─────────────────────────────────────────────────────────────┐
│ Layer 2: Domain Intelligence                                │
│   - Pluggable domain packs                                  │
│   - Finance pack (market data, news, analysis)              │
│   - Domain registry & detection                             │
└─────────────────────────────────────────────────────────────┘
                              ↓
┌─────────────────────────────────────────────────────────────┐
│ Layer 1: Core Platform                                      │
│   - Multi-provider LLM (OpenRouter/Ollama/OpenAI)           │
│   - Automatic fallback                                      │
│   - Configuration management                                │
│   - Data persistence                                        │
└─────────────────────────────────────────────────────────────┘
Layer 1: Core Platform
Multi-Provider LLM Abstraction
The system supports three LLM providers with automatic fallback:
OpenRouter - Recommended for production
- Access to multiple models
- Pay-per-use pricing
- High availability
Ollama - Local deployment
- Privacy-focused
- No API costs
- Requires local GPU/CPU
OpenAI - High quality
- GPT-4 and GPT-3.5
- Reliable performance
- Higher cost
Fallback Chain:
Primary Provider → Fallback Provider → Error
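The fallback chain above can be sketched in a few lines; the provider call signature and exception name here are illustrative, not the actual client API:

```python
class AllProvidersFailedError(Exception):
    """Raised when every provider in the chain has failed (illustrative name)."""


def complete_with_fallback(prompt, providers):
    """Try each provider callable in order; raise only if all of them fail."""
    errors = []
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as exc:  # a real implementation would narrow this
            errors.append(exc)
    raise AllProvidersFailedError(errors)
```

A failed primary provider is transparent to the caller; only the exhaustion of the whole chain surfaces as an error.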
Configuration Management
All configuration via environment variables:
- Provider selection
- API keys
- Feature flags
- Domain pack enablement
- Learning layer settings
Data Persistence
JSON-based storage for single-user deployment:
- Cases: backend/app/data/memory/
- Simulations: backend/app/data/simulations/
- Knowledge: backend/app/data/knowledge/
- Skills: backend/app/data/skills/
- Prompt versions: backend/app/data/prompt_versions/
Layer 2: Domain Intelligence
Domain Pack System
Domain packs are pluggable modules that enhance agent capabilities:
from abc import ABC, abstractmethod
from typing import Dict, List


class DomainPack(ABC):
    @property
    @abstractmethod
    def name(self) -> str:
        """Domain pack name (e.g., 'finance')"""

    @property
    @abstractmethod
    def keywords(self) -> List[str]:
        """Keywords for domain detection"""

    @abstractmethod
    async def enhance_research(self, query: str, context: Dict) -> Dict:
        """Enhance research with domain-specific capabilities"""

    @abstractmethod
    async def enhance_verification(self, claims: List[str], context: Dict) -> Dict:
        """Enhance verification with domain-specific checks"""

    @abstractmethod
    def get_capabilities(self) -> List[str]:
        """List domain-specific capabilities"""
Finance Domain Pack
Included capabilities:
- Market Data: Real-time quotes via Alpha Vantage
- News: Financial news via NewsAPI
- Entity Resolution: Extract and normalize company names
- Ticker Resolution: Map companies to stock tickers
- Source Checking: Credibility scoring for financial sources
- Rumor Detection: Identify unverified claims
- Scam Detection: Flag potential scams
- Stance Detection: Analyze sentiment and stance
- Event Analysis: Analyze market-moving events
- Prediction: Generate market predictions
Domain Detection
Automatic domain detection based on keywords:
# Finance keywords
["stock", "market", "trading", "investment", "portfolio",
"earnings", "dividend", "IPO", "merger", "acquisition"]
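Keyword-based detection can be sketched as a simple word-set scan; the registry shape and function name below are assumptions for illustration (keywords are matched case-insensitively):

```python
# Illustrative registry; in the real system each DomainPack contributes
# its own keywords property.
DOMAIN_KEYWORDS = {
    "finance": ["stock", "market", "trading", "investment", "portfolio",
                "earnings", "dividend", "ipo", "merger", "acquisition"],
}


def detect_domain(query: str, default: str = "general") -> str:
    """Return the first registered domain whose keywords appear in the query."""
    words = set(query.lower().split())
    for domain, keywords in DOMAIN_KEYWORDS.items():
        if words & set(keywords):
            return domain
    return default
```

Matching whole words (rather than substrings) avoids false positives such as "ipo" inside unrelated words.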
Layer 3: Agent Orchestration
Agent Pipeline
User Input
    ↓
Switchboard (4D Routing)
    ↓
Research (Domain-Enhanced)
    ↓
Planner (Simulation-Aware)
    ↓
Verifier (Credibility Scoring)
    ↓
Synthesizer (Final Answer)
    ↓
Case Storage
Switchboard Agent
Four-Dimensional Classification:
Task Family
- normal: Standard analysis
- simulation: Scenario modeling
Domain Pack
- finance: Financial domain
- general: General knowledge
- policy: Policy analysis
- custom: Custom domains
Complexity
- simple: ≤5 words
- medium: 6-25 words
- complex: >25 words
Execution Mode
- solo: Synthesizer only (simple queries)
- standard: Research → Synthesizer (medium queries)
- deep: Full pipeline (complex queries)
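The complexity buckets and their execution modes translate directly into code; the function and mapping names below are illustrative:

```python
def classify_complexity(query: str) -> str:
    """Word-count buckets as documented: <=5 simple, 6-25 medium, >25 complex."""
    n = len(query.split())
    if n <= 5:
        return "simple"
    if n <= 25:
        return "medium"
    return "complex"


# Complexity drives the execution mode chosen by the Switchboard.
EXECUTION_MODES = {"simple": "solo", "medium": "standard", "complex": "deep"}
```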
Research Agent
Capabilities:
- Web search via Tavily
- News search via NewsAPI
- Domain-specific data sources
- Entity extraction
- Source credibility assessment
Domain Enhancement:
if domain == "finance":
    # Extract tickers and entities
    # Fetch market data
    # Get financial news
    # Score source credibility
Verifier Agent
Verification Process:
- Extract claims from research
- Check source reliability
- Detect rumors and scams
- Quantify uncertainty
- Flag high-risk claims
Domain Enhancement:
if domain == "finance":
    # Check financial source credibility
    # Detect market manipulation
    # Verify regulatory compliance
Planner Agent
Planning Process:
- Analyze research findings
- Identify action items
- Detect simulation opportunities
- Create structured plan
Simulation Detection:
- Keywords: "predict", "what if", "scenario", "impact"
- Uncertainty level: High
- Recommendation: Suggest simulation mode
Synthesizer Agent
Synthesis Process:
- Combine all agent outputs
- Quantify uncertainty
- Generate final answer
- Add metadata (confidence, sources, etc.)
Output Structure:
{
  "summary": "Final answer",
  "confidence": 0.85,
  "uncertainty_factors": [...],
  "sources": [...],
  "simulation_recommended": false
}
Layer 4: Simulation Lab Integration
MiroFish Adapter
Adapter pattern for simulation integration:
class MiroFishClient:
    async def submit_simulation(self, title, seed_text, prediction_goal):
        """Submit simulation to MiroFish"""

    async def get_status(self, simulation_id):
        """Get simulation status"""

    async def get_report(self, simulation_id):
        """Get simulation report"""

    async def chat(self, simulation_id, message):
        """Chat with simulation"""
Case-Simulation Linking
Cases and simulations are linked:
{
  "case_id": "uuid",
  "simulation_id": "uuid",
  "user_input": "...",
  "route": {...},
  "outputs": {...},
  "final_answer": "..."
}
Simulation Workflow
User Input with Simulation Keywords
    ↓
Switchboard (task_family="simulation")
    ↓
Research (gather context)
    ↓
Submit to MiroFish
    ↓
Poll for completion
    ↓
Synthesize results
    ↓
Return to user
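The "poll for completion" step might look like the helper below. It assumes only that the client exposes an async get_status(simulation_id) and that a terminal state is reported in a "status" field as "completed" or "failed"; those field names and values are assumptions:

```python
import asyncio


async def wait_for_simulation(client, simulation_id,
                              interval_s=5.0, timeout_s=120.0):
    """Poll until the simulation reaches a terminal state or the timeout elapses."""
    elapsed = 0.0
    while elapsed < timeout_s:
        status = await client.get_status(simulation_id)
        if status.get("status") in ("completed", "failed"):
            return status
        await asyncio.sleep(interval_s)
        elapsed += interval_s
    raise TimeoutError(f"simulation {simulation_id} did not finish in {timeout_s}s")
```

The 120-second default matches the simulation response-time budget stated under Performance.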
Layer 5: Autonomous Knowledge Evolution
Knowledge Ingestion
Sources:
- Web search (Tavily)
- News (NewsAPI)
- URLs (Jina Reader)
Process:
- Fetch content from source
- Compress to 2-4KB summary using LLM
- Store with metadata
- Enforce 200MB storage limit
- LRU eviction when limit reached
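The storage cap could be enforced with least-recently-used eviction along these lines; the flat-file layout and the use of file access time as the recency signal are assumptions:

```python
import os

MAX_BYTES = 200 * 1024 * 1024  # 200 MB knowledge cache limit


def enforce_storage_limit(directory, max_bytes=MAX_BYTES):
    """Delete least-recently-accessed files until the directory fits the cap."""
    paths = [os.path.join(directory, name) for name in os.listdir(directory)]
    files = [(p, os.stat(p)) for p in paths if os.path.isfile(p)]
    total = sum(st.st_size for _, st in files)
    evicted = []
    # Oldest access time first = least recently used.
    for path, st in sorted(files, key=lambda item: item[1].st_atime):
        if total <= max_bytes:
            break
        os.remove(path)
        total -= st.st_size
        evicted.append(path)
    return evicted
```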
Scheduling:
- Every 6 hours (configurable)
- Only when system idle (CPU <50%)
- Only when battery OK (>30%)
Experience Learning
Learning from Cases:
- Extract metadata from case execution
- Detect patterns (domain, sources, agents)
- Track route effectiveness
- Track prompt performance
- Track provider reliability
Pattern Detection:
- Domain expertise patterns
- Preferred source patterns
- Agent workflow patterns
- Minimum frequency: 3 occurrences
Prompt Evolution
A/B Testing Process:
- Create prompt variant using LLM
- Test variant with sample inputs
- Measure quality metrics
- Compare win rates
- Promote if criteria met (>70% win rate, >10 tests)
Versioning:
{
  "id": "uuid",
  "prompt_name": "research",
  "version": 2,
  "status": "testing",
  "test_count": 15,
  "win_count": 12,
  "win_rate": 0.80
}
Skill Distillation
Skill Creation:
- Detect patterns in successful cases
- Distill into reusable skills
- Test skills with validation cases
- Track usage and success rate
Skill Structure:
{
  "id": "domain_expertise_finance",
  "type": "domain_expertise",
  "trigger_patterns": ["stock", "market"],
  "recommended_agents": ["research", "verifier"],
  "preferred_sources": ["bloomberg", "reuters"],
  "success_rate": 0.85
}
Trust Management
Source Trust Scoring:
- Initial score: 0.5 (neutral)
- Updated based on verification outcomes
- Exponential moving average
- Minimum verifications: 3
Trust Categories:
- Trusted: score ≥ 0.7, verifications ≥ 3
- Untrusted: score ≤ 0.3, verifications ≥ 3
- Unknown: verifications < 3
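The scoring and categorization rules can be sketched as below; the smoothing factor and the "neutral" label for mid-range scores with enough verifications are assumptions not stated in this document:

```python
def update_trust(score: float, verified: bool, alpha: float = 0.2) -> float:
    """Exponential moving average toward 1.0 on success, 0.0 on failure.
    alpha is an assumed smoothing factor."""
    outcome = 1.0 if verified else 0.0
    return (1 - alpha) * score + alpha * outcome


def trust_category(score: float, verifications: int) -> str:
    """Thresholds as documented: >=0.7 trusted, <=0.3 untrusted, <3 verifications unknown."""
    if verifications < 3:
        return "unknown"
    if score >= 0.7:
        return "trusted"
    if score <= 0.3:
        return "untrusted"
    return "neutral"  # mid-range band; label is an assumption
```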
Freshness Management
Domain-Specific Expiration:
- Finance: 7-day half-life
- General: 30-day half-life
- Exponential decay:
freshness = 2^(-age_days / half_life_days)
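The decay formula in Python form, with the documented half-lives as examples:

```python
def freshness(age_days: float, half_life_days: float) -> float:
    """freshness = 2^(-age_days / half_life_days); halves every half-life."""
    return 2.0 ** (-age_days / half_life_days)
```

A week-old finance item (7-day half-life) scores 0.5, while a week-old general item (30-day half-life) is still near 0.85.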
Refresh Recommendations:
- Freshness <0.3: Stale
- Prioritize by staleness
- Automatic refresh during scheduled ingestion
Learning Scheduler
Safeguards:
- CPU usage check: <50%
- Battery level check: >30%
- System idle detection
- Error handling with backoff
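The CPU and battery safeguards might be combined as below; a library such as psutil would supply the live readings, but the function takes plain numbers so the thresholds are easy to see. The AC-power exemption and the None-battery handling (desktops report no battery) are assumptions:

```python
def can_run_learning_task(cpu_percent: float, battery_percent,
                          on_ac: bool = False) -> bool:
    """Allow learning tasks only when CPU is below 50% and battery above 30%."""
    if cpu_percent >= 50.0:
        return False  # system is busy
    if battery_percent is None or on_ac:
        return True   # no battery (desktop) or plugged in: assumed safe
    return battery_percent > 30.0
```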
Scheduled Tasks:
- Knowledge ingestion: Every 6 hours
- Expired cleanup: Daily
- Pattern detection: Daily
- Skill distillation: Weekly
- Prompt optimization: Weekly
Data Flow
Normal Request Flow
1. User submits request
2. Switchboard classifies (4D)
3. Domain pack detected
4. Research gathers info (domain-enhanced)
5. Planner creates plan
6. Verifier validates (domain-enhanced)
7. Synthesizer produces answer
8. Case saved
9. Learning extracts metadata
Simulation Request Flow
1. User submits request with simulation keywords
2. Switchboard detects simulation
3. Research gathers context
4. Submit to MiroFish
5. Poll for completion
6. Synthesize results
7. Case and simulation saved
8. Learning extracts metadata
Learning Flow
1. Scheduler checks system conditions
2. If idle and battery OK:
a. Ingest knowledge from sources
b. Compress to 2-4KB summaries
c. Store with metadata
d. Enforce storage limits
3. Detect patterns in recent cases
4. Distill skills from patterns
5. Optimize prompts via A/B testing
6. Update trust scores
7. Calculate freshness scores
Deployment Considerations
Single-User Local (Recommended)
Optimizations:
- JSON storage (no database)
- Learning scheduler with safeguards
- 200MB knowledge cache limit
- CPU/battery awareness
- Automatic LRU eviction
Hardware Requirements:
- 8GB RAM minimum
- 256GB storage minimum
- M2 Air or equivalent
Multi-User Production
Required Changes:
- PostgreSQL for storage
- Redis for caching
- Authentication/authorization
- Rate limiting
- Request queuing
- Monitoring/alerting
- Horizontal scaling
Security
API Key Management
- All keys in environment variables
- Never committed to source control
- Validated on startup
Error Handling
- Sanitized error messages
- No internal details leaked
- Comprehensive logging
Input Validation
- Pydantic schemas for all inputs
- Type checking
- Length limits
- Sanitization
Performance
Response Times
- Simple queries: <5s
- Medium queries: <15s
- Complex queries: <30s
- Simulations: 60-120s
Optimization Strategies
- Connection pooling for external APIs
- Request timeouts (30s)
- Caching for market quotes (5min TTL)
- Rate limiting for external APIs
- Async/await throughout
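The market-quote cache with its 5-minute TTL could be as small as this; the class shape and method names are illustrative:

```python
import time


class TTLCache:
    """Tiny time-to-live cache; entries expire after ttl_s seconds."""

    def __init__(self, ttl_s: float = 300.0):  # 5-minute default TTL
        self.ttl_s = ttl_s
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if time.monotonic() - stored_at > self.ttl_s:
            del self._store[key]  # lazily drop expired entries
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic())
```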
Monitoring
Health Checks
- /health - Basic health check
- /health/deep - Detailed health with provider status
Metrics
- Case execution time
- Provider success/failure rates
- Domain pack usage
- Learning task completion
- Storage usage
Logging
- Structured logging
- Log levels: DEBUG, INFO, WARNING, ERROR
- Rotation and retention
- Provider fallback events
- Learning task events