# AETHER: A Self-Evolving Neuro-Symbolic Architecture for AGI
**v0.2.0 – Autonomous Mode**
AETHER (Adaptive Evolving Towards Higher-order Reasoning) is a unified self-evolving neuro-symbolic architecture that integrates symbolic and sub-symbolic computation within a dynamically self-modifying framework. This version runs **fully autonomously** with zero human-in-the-loop oversight: all safety, validation, and rollback decisions are handled by automated systems.
---
## Architecture Integration
AETHER synthesizes cutting-edge research from:
| Component | Source | Key Contribution |
|-----------|--------|-----------------|
| **Evolutionary Core** | AlphaEvolve (DeepMind, 2025) | MAP-Elites + island model + LLM code diffs for algorithm discovery |
| **Hierarchical Reasoning** | HiMAC (2026) | Macro-Policy / Micro-Policy co-evolution with iterative optimization |
| **Group Self-Evolution** | GEA (2026) | Performance-Novelty selection with experience sharing |
| **Tool Evolution** | Yunjue Agent (2026) | Manager/Executor/Developer/Integrator role decomposition + tool absorption |
| **AI Research Agent** | ASI-Evolve (2026) | 4-stage Researcher→Engineer→Analyzer→Database loop |
| **Co-Evolution** | CoMAS (2025) | Decentralized multi-agent co-evolution via interaction rewards |
| **Cognitive Architecture** | CoALA (2023) | Working/Episodic/Semantic/Procedural memory taxonomy |
| **Agentic Neural Networks** | ANN (2025) | Textual backpropagation across multi-agent layers |
| **Leader Training** | MLPO (2025) | Trains a single leader while peer agents stay frozen, for efficient multi-agent RL |
| **Task-Driven Agents** | BabyAGI | Task creation / prioritization / execution loop |
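As a concrete illustration of the GEA-style Performance-Novelty selection named in the table, here is a minimal sketch. All names (`novelty`, `select`, the `(candidate, fitness, behavior)` tuple layout) are hypothetical, not the actual AETHER API; candidates are ranked by a weighted sum of fitness and distance to previously archived behavior descriptors.

```python
def novelty(behavior, archive, k=3):
    """Mean distance to the k nearest archived behavior descriptors."""
    if not archive:
        return 1.0
    dists = sorted(abs(behavior - a) for a in archive)
    return sum(dists[:k]) / min(k, len(dists))

def select(population, archive, alpha=0.7):
    """Rank candidates by alpha * fitness + (1 - alpha) * novelty.

    `population` is a list of (candidate, fitness, behavior) tuples.
    """
    scored = [
        (alpha * fit + (1 - alpha) * novelty(behavior, archive), cand)
        for cand, fit, behavior in population
    ]
    scored.sort(key=lambda s: s[0], reverse=True)
    return [cand for _, cand in scored]
```

With `alpha = 0.7` a mediocre but behaviorally novel candidate can outrank a fitter one that merely repeats what the archive already contains, which is the point of quality-diversity selection.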
---
## System Components (v0.2.0)
```
AETHER
├── Core (AetherCore)
│   ├── Neuro-Symbolic Fusion Gate (learned attention weights)
│   ├── Recursive Evolution Loop (generate → evaluate → select → mutate → validate → integrate)
│   └── AutoOversight System (risk scoring + regression suite + auto-rollback)
├── Memory (CoALA-inspired)
│   ├── Working Memory (attention-based retrieval)
│   ├── Episodic Memory (experience buffer)
│   ├── Semantic Memory (world knowledge via KG)
│   └── Procedural Memory (learned tools/skills)
├── Knowledge (PyG-style)
│   ├── RGCN Encoder (relational graph convolution)
│   ├── ComplEx Scorer (link prediction)
│   └── Symbolic Rule Engine (forward chaining + multi-hop BFS)
├── Agents
│   ├── Hierarchical Agent (HiMAC: Macro + Micro policy)
│   ├── Agent Orchestrator (MLPO leader + dynamic routing)
│   ├── BabyAGI Loop (task-driven autonomy)
│   └── Textual Backpropagation (Agentic NN update)
└── Evolution
    ├── MAP-Elites Archive (quality-diversity)
    ├── Performance-Novelty Selection (GEA)
    ├── Constrained Mutation (AlphaEvolve)
    └── AutoOversight Gate (replaces human-in-the-loop)
```
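To make the "learned attention weights" of the Fusion Gate concrete, here is a minimal sketch in the project's own stack (torch). The class name `FusionGate` and its interface are assumptions for illustration, not AETHER's actual module: a linear layer produces one logit per branch, and a softmax turns those logits into mixing weights over the symbolic and neural representations.

```python
import torch
import torch.nn as nn

class FusionGate(nn.Module):
    """Blend symbolic and neural embeddings with learned attention weights."""

    def __init__(self, dim):
        super().__init__()
        self.attn = nn.Linear(2 * dim, 2)  # one logit per branch

    def forward(self, symbolic, neural):
        logits = self.attn(torch.cat([symbolic, neural], dim=-1))
        w = torch.softmax(logits, dim=-1)            # (batch, 2), rows sum to 1
        fused = w[..., :1] * symbolic + w[..., 1:] * neural
        return fused, w

gate = FusionGate(dim=8)
sym = torch.randn(4, 8)
neu = torch.randn(4, 8)
fused, weights = gate(sym, neu)
```

Because the weights come from a softmax, the two branch weights always sum to 1 per example, which is what the `symbolic_weight` / `neural_weight` fields in the Quick Start output suggest.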
---
## What's New in v0.2.0 (Autonomous)
- **AutoOversight System** – Replaces all human-in-the-loop safety gates with automated regression suites, risk scoring, and auto-rollback. The system evaluates its own candidates via synthetic benchmarks before any integration.
- **Self-Sustained Evolution** – The evolution loop runs end-to-end without external API calls or human checkpoints. Fitness is evaluated against internal reasoning, memory, and knowledge graph benchmarks.
- **Temporal Memory with Attention** – Recency-weighted retrieval for long-horizon context across evolution generations.
- **Four-Agent Orchestration** – Researcher, Engineer, Analyzer, Integrator roles with learned routing weights.
- **Fully Runnable Standalone** – Single-file executable with only `torch`, `numpy`, and `networkx` dependencies.
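The AutoOversight gate described above can be sketched as a small function. The name `auto_oversight` and its signature are illustrative assumptions, not the real implementation: it runs a regression suite against a candidate, derives a risk score from the failure rate, and rolls back to the baseline when the risk exceeds a threshold.

```python
def auto_oversight(candidate, baseline, regression_suite, risk_threshold=0.3):
    """Hypothetical gate: run regressions, score risk, auto-rollback on failure.

    Each test in `regression_suite` is a callable returning True on pass.
    Returns (selected_system, report).
    """
    failed = sum(1 for test in regression_suite if not test(candidate))
    risk = failed / len(regression_suite)
    accepted = risk <= risk_threshold
    selected = candidate if accepted else baseline   # rollback keeps the baseline live
    return selected, {"accepted": accepted, "risk": risk, "failed": failed}

# Illustrative suite: accuracy must not regress, latency must stay bounded
suite = [
    lambda c: c["acc"] >= 0.8,
    lambda c: c["latency"] <= 1.0,
]
```

A real gate would also weight tests by severity and log the rejected candidate for later analysis; this sketch only shows the accept/rollback decision.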
---
## Quick Start (Autonomous)
```bash
pip install torch numpy networkx
python aether_autonomous.py
```
```python
from aether_autonomous import AetherCore, AetherConfig
# Initialize autonomous AETHER
config = AetherConfig(
    population_size=6,
    generations=5,
    mutation_rate=0.12,
    macro_policy_dim=128,
    micro_policy_dim=64,
    num_agents=4,
    kg_embedding_dim=64,
    kg_num_relations=10,
)
aether = AetherCore(config)
# Seed knowledge
aether.knowledge.add_fact("Intelligence", "requires", "Reasoning")
aether.knowledge.add_fact("Reasoning", "requires", "Memory")
# Neuro-symbolic query
result = aether.forward("Intelligence requires")
print(f"Symbolic weight: {result['symbolic_weight']:.3f}")
print(f"Neural weight: {result['neural_weight']:.3f}")
# Fully autonomous evolution (no human oversight)
evolution_result = aether.evolve(num_generations=5)
print(f"Best fitness: {evolution_result['best_fitness']:.4f}")
print(f"Archive coverage: {evolution_result['archive_stats']['coverage']:.2%}")
```
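The recency-weighted retrieval mentioned under "Temporal Memory with Attention" can be sketched as follows. The function name, the `(timestamp, relevance, payload)` memory layout, and the half-life parameterization are assumptions for illustration: each memory's relevance is discounted by an exponential decay in its age, so recent experience dominates long-horizon context.

```python
import math

def recency_weighted_retrieve(memories, query_time, half_life=10.0, top_k=2):
    """Return the top_k memories by relevance * exponential recency decay.

    `memories` is a list of (timestamp, relevance, payload) tuples; timestamps
    and half_life share a unit (e.g. evolution generations). A memory exactly
    one half_life old keeps 50% of its relevance.
    """
    def score(item):
        t, relevance, _ = item
        decay = math.exp(-math.log(2) * (query_time - t) / half_life)
        return relevance * decay

    return sorted(memories, key=score, reverse=True)[:top_k]

memories = [
    (0, 1.0, "old but strong"),
    (9, 0.9, "recent"),
    (10, 0.1, "fresh but weak"),
]
top = recency_weighted_retrieve(memories, query_time=10)
```

Here the 10-generation-old memory is halved to an effective score of 0.5, so the recent high-relevance memory wins despite its slightly lower raw relevance.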
---
## Original Modular System (v0.1.0)
The original modular implementation remains available in the `aether/` directory:
```python
from aether.core import AetherCore, AetherConfig
config = AetherConfig(
    population_size=8,
    mutation_rate=0.15,
    num_agents=4,
    enable_self_modification=True,
)
aether = AetherCore(config, model_name="Qwen/Qwen2.5-0.5B-Instruct")
```
---
## Design Principles
1. **Neuro-Symbolic Fluidity**: Dynamic translation between symbolic and sub-symbolic representations
2. **Architectural Evolvability**: Structural components are subject to learning and refinement
3. **Parallel Agent Intelligence**: Intelligence emerges through coordinated multi-agent interaction
4. **Constrained Self-Modification**: All self-changes are sandboxed and validated by automated systems
5. **Automated Oversight**: Risk scoring, regression suites, and auto-rollback replace human gates
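Principle 4 above (sandboxed, validated self-changes) can be sketched in a few lines. The function name and signature are hypothetical, not AETHER's actual mechanism: the mutation is applied to a deep copy so the live system is never touched, and the candidate is integrated only if validation succeeds.

```python
import copy

def constrained_self_modify(system, mutate, validate):
    """Apply a mutation in a sandbox; integrate only if validation passes."""
    candidate = mutate(copy.deepcopy(system))  # never mutate the live system
    try:
        ok = validate(candidate)
    except Exception:
        ok = False                             # a crashing validator rejects
    return candidate if ok else system

live = {"mutation_rate": 0.1}
doubled = lambda s: {**s, "mutation_rate": s["mutation_rate"] * 2}
updated = constrained_self_modify(live, doubled, lambda c: c["mutation_rate"] < 1.0)
```

Treating validator exceptions as rejections matters here: a self-modification that breaks the validation harness itself must fail closed, not open.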
---
## Citation
```bibtex
@article{aether2026,
  title={AETHER: A Self-Evolving Neuro-Symbolic Architecture for Artificial General Intelligence},
  author={Anonymous},
  year={2026}
}
```
## License
MIT License - Open for research and development toward responsible AGI.