# AETHER: A Self-Evolving Neuro-Symbolic Architecture for AGI

v0.2.0 (Autonomous Mode)

AETHER (Adaptive Evolving Towards Higher-order Reasoning) is a unified self-evolving neuro-symbolic architecture that integrates symbolic and sub-symbolic computation within a dynamically self-modifying framework. This version runs fully autonomously, with no human-in-the-loop oversight: all safety, validation, and rollback decisions are handled by automated systems.
## Architecture Integration
AETHER synthesizes cutting-edge research from:
| Component | Source | Key Contribution |
|---|---|---|
| Evolutionary Core | AlphaEvolve (DeepMind, 2025) | MAP-Elites + island model + LLM code diffs for algorithm discovery |
| Hierarchical Reasoning | HiMAC (2026) | Macro-Policy / Micro-Policy co-evolution with iterative optimization |
| Group Self-Evolution | GEA (2026) | Performance-Novelty selection with experience sharing |
| Tool Evolution | Yunjue Agent (2026) | Manager/Executor/Developer/Integrator role decomposition + tool absorption |
| AI Research Agent | ASI-Evolve (2026) | Four-stage Researcher → Engineer → Analyzer → Database loop |
| Co-Evolution | CoMAS (2025) | Decentralized multi-agent co-evolution via interaction rewards |
| Cognitive Architecture | CoALA (2023) | Working/Episodic/Semantic/Procedural memory taxonomy |
| Agentic Neural Networks | ANN (2025) | Textual backpropagation across multi-agent layers |
| Leader Training | MLPO (2025) | Trains a single leader while peers remain untrained, for efficient multi-agent RL |
| Task-Driven Agents | BabyAGI | Task creation / prioritization / execution loop |
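As a rough illustration of the last row, a minimal BabyAGI-style task loop (all function names here are hypothetical, not AETHER's actual API) can be sketched as:

```python
from collections import deque

def run_task_loop(objective, execute, create_tasks, prioritize, max_steps=10):
    """Minimal BabyAGI-style cycle: execute -> create tasks -> prioritize."""
    queue = deque([objective])
    results = []
    for _ in range(max_steps):
        if not queue:
            break
        task = queue.popleft()
        result = execute(task)                    # execution step
        results.append((task, result))
        queue.extend(create_tasks(task, result))  # task-creation step
        queue = deque(prioritize(list(queue)))    # re-prioritization step
    return results

# Toy callbacks: the objective spawns one sub-task, which spawns none.
results = run_task_loop(
    "study memory",
    execute=lambda t: f"done:{t}",
    create_tasks=lambda t, r: [f"{t}/sub"] if "/" not in t else [],
    prioritize=sorted,
)
```

The loop terminates when the queue is empty or the step budget is exhausted; a real agent would back `execute` and `create_tasks` with an LLM.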
## System Components (v0.2.0)
```
AETHER
├── Core (AetherCore)
│   ├── Neuro-Symbolic Fusion Gate (learned attention weights)
│   ├── Recursive Evolution Loop (generate → evaluate → select → mutate → validate → integrate)
│   └── AutoOversight System (risk scoring + regression suite + auto-rollback)
├── Memory (CoALA-inspired)
│   ├── Working Memory (attention-based retrieval)
│   ├── Episodic Memory (experience buffer)
│   ├── Semantic Memory (world knowledge via KG)
│   └── Procedural Memory (learned tools/skills)
├── Knowledge (PyG-style)
│   ├── RGCN Encoder (relational graph convolution)
│   ├── ComplEx Scorer (link prediction)
│   └── Symbolic Rule Engine (forward chaining + multi-hop BFS)
├── Agents
│   ├── Hierarchical Agent (HiMAC: Macro + Micro policy)
│   ├── Agent Orchestrator (MLPO leader + dynamic routing)
│   ├── BabyAGI Loop (task-driven autonomy)
│   └── Textual Backpropagation (Agentic NN update)
└── Evolution
    ├── MAP-Elites Archive (quality-diversity)
    ├── Performance-Novelty Selection (GEA)
    ├── Constrained Mutation (AlphaEvolve)
    └── AutoOversight Gate (replaces human-in-the-loop)
```
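The fusion gate at the top of the tree can be read as a learned convex combination of symbolic and neural features. A minimal sketch in plain PyTorch (class name, shapes, and the two-way softmax are illustrative assumptions, not AETHER's code):

```python
import torch
import torch.nn as nn

class FusionGate(nn.Module):
    """Learned gate mixing symbolic and neural feature vectors."""
    def __init__(self, dim):
        super().__init__()
        self.attn = nn.Linear(2 * dim, 2)  # one logit per branch

    def forward(self, symbolic_feat, neural_feat):
        # Softmax over the two branches, so the weights sum to 1.
        logits = self.attn(torch.cat([symbolic_feat, neural_feat], dim=-1))
        w = torch.softmax(logits, dim=-1)
        fused = w[..., :1] * symbolic_feat + w[..., 1:] * neural_feat
        return fused, {"symbolic_weight": w[..., 0], "neural_weight": w[..., 1]}

gate = FusionGate(dim=8)
fused, weights = gate(torch.randn(1, 8), torch.randn(1, 8))
```

Exposing the two weights is what makes outputs like `result['symbolic_weight']` in the Quick Start below interpretable.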
## What's New in v0.2.0 (Autonomous)
- **AutoOversight System**: replaces all human-in-the-loop safety gates with automated regression suites, risk scoring, and auto-rollback. The system evaluates its own candidates against synthetic benchmarks before any integration.
- **Self-Sustained Evolution**: the evolution loop runs end-to-end without external API calls or human checkpoints. Fitness is evaluated against internal reasoning, memory, and knowledge-graph benchmarks.
- **Temporal Memory with Attention**: recency-weighted retrieval for long-horizon context across evolution generations.
- **Four-Agent Orchestration**: Researcher, Engineer, Analyzer, and Integrator roles with learned routing weights.
- **Fully Runnable Standalone**: a single-file script whose only dependencies are `torch`, `numpy`, and `networkx`.
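Recency-weighted retrieval can be sketched as cosine similarity damped by an exponential decay over memory age; the function name, the `(age, vector)` memory format, and the half-life decay are assumptions for illustration:

```python
import math

def recency_weighted_retrieve(query_vec, memories, half_life=4.0, top_k=2):
    """Score each memory by cosine similarity * 0.5 ** (age / half_life).

    `memories` is a list of (age_in_generations, vector) pairs.
    """
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    scored = [
        (cos(query_vec, vec) * 0.5 ** (age / half_life), age, vec)
        for age, vec in memories
    ]
    scored.sort(key=lambda t: t[0], reverse=True)
    return scored[:top_k]

# A fresh match outranks an identical but eight-generations-old one.
mems = [(0, [1.0, 0.0]), (8, [1.0, 0.0]), (0, [0.0, 1.0])]
top = recency_weighted_retrieve([1.0, 0.0], mems)
```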
## Quick Start (Autonomous)

```bash
pip install torch numpy networkx
python aether_autonomous.py
```
```python
from aether_autonomous import AetherCore, AetherConfig

# Initialize autonomous AETHER
config = AetherConfig(
    population_size=6,
    generations=5,
    mutation_rate=0.12,
    macro_policy_dim=128,
    micro_policy_dim=64,
    num_agents=4,
    kg_embedding_dim=64,
    kg_num_relations=10,
)
aether = AetherCore(config)

# Seed knowledge
aether.knowledge.add_fact("Intelligence", "requires", "Reasoning")
aether.knowledge.add_fact("Reasoning", "requires", "Memory")

# Neuro-symbolic query
result = aether.forward("Intelligence requires")
print(f"Symbolic weight: {result['symbolic_weight']:.3f}")
print(f"Neural weight: {result['neural_weight']:.3f}")

# Fully autonomous evolution (no human oversight)
evolution_result = aether.evolve(num_generations=5)
print(f"Best fitness: {evolution_result['best_fitness']:.4f}")
print(f"Archive coverage: {evolution_result['archive_stats']['coverage']:.2%}")
```
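The archive coverage reported above is the filled fraction of a MAP-Elites-style grid. One iteration of that quality-diversity scheme can be sketched as follows (a toy one-dimensional domain; none of this is AETHER's actual code):

```python
import random

def map_elites_step(archive, mutate, evaluate, descriptor, bins=10):
    """One MAP-Elites iteration: mutate a random elite, bin the child by
    its behavior descriptor, and keep it only if it beats the current
    occupant of that cell (or the cell is empty)."""
    parent = random.choice(list(archive.values()))
    child = mutate(parent)
    cell = min(int(descriptor(child) * bins), bins - 1)
    if cell not in archive or evaluate(archive[cell]) < evaluate(child):
        archive[cell] = child
    return archive

# Toy domain: a genome is a float in [0, 1); it is its own fitness
# and its own behavior descriptor.
random.seed(0)
archive = {0: 0.01}
for _ in range(200):
    map_elites_step(
        archive,
        mutate=lambda g: min(max(g + random.uniform(-0.2, 0.2), 0.0), 0.999),
        evaluate=lambda g: g,
        descriptor=lambda g: g,
    )
coverage = len(archive) / 10
```

Because elites are kept per cell rather than globally, the archive preserves diverse behaviors instead of collapsing to a single best genome.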
## Original Modular System (v0.1.0)

The original modular implementation remains available in the `aether/` directory:
```python
from aether.core import AetherCore, AetherConfig

config = AetherConfig(
    population_size=8,
    mutation_rate=0.15,
    num_agents=4,
    enable_self_modification=True,
)
aether = AetherCore(config, model_name="Qwen/Qwen2.5-0.5B-Instruct")
```
## Design Principles
- Neuro-Symbolic Fluidity: Dynamic translation between symbolic and sub-symbolic representations
- Architectural Evolvability: Structural components are subject to learning and refinement
- Parallel Agent Intelligence: Intelligence emerges through coordinated multi-agent interaction
- Constrained Self-Modification: All self-changes are sandboxed and validated by automated systems
- Automated Oversight: Risk scoring, regression suites, and auto-rollback replace human gates
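The last principle can be made concrete with a small sketch of an automated gate: score a candidate and the current baseline on a regression suite, compute a risk score, and roll back on excessive regressions. All names and the particular risk metric are illustrative assumptions:

```python
def auto_oversight_gate(candidate, baseline, suite, risk_threshold=0.3):
    """suite: list of scoring functions (higher is better).

    Risk score = fraction of benchmarks on which the candidate
    regresses relative to the baseline. If the risk is at or above
    the threshold, auto-rollback returns the baseline unchanged.
    """
    cand_scores = [score(candidate) for score in suite]
    base_scores = [score(baseline) for score in suite]
    risk = sum(c < b for c, b in zip(cand_scores, base_scores)) / len(suite)
    kept = candidate if risk < risk_threshold else baseline
    return kept, risk

# Toy suite: accuracy should rise, latency should fall.
suite = [lambda m: m["accuracy"], lambda m: -m["latency"]]
kept, risk = auto_oversight_gate(
    candidate={"accuracy": 0.9, "latency": 1.0},
    baseline={"accuracy": 0.8, "latency": 0.9},
    suite=suite,
)
```

Here the candidate improves accuracy but regresses latency, so it fails on half the suite and the gate keeps the baseline.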
## Citation

```bibtex
@article{aether2026,
  title={AETHER: A Self-Evolving Neuro-Symbolic Architecture for Artificial General Intelligence},
  author={Anonymous},
  year={2026}
}
```
## License

MIT License. Open for research and development toward responsible AGI.