camdog920 committed on
Commit c779308 · verified · 1 Parent(s): e34db8d

Update README for v0.2.0 autonomous

Files changed (1):
  1. README.md +82 -52

README.md CHANGED
@@ -1,6 +1,10 @@
  # AETHER: A Self-Evolving Neuro-Symbolic Architecture for AGI

- AETHER (Adaptive Evolving Towards Higher-order Reasoning) is a unified self-evolving neuro-symbolic architecture that integrates symbolic and sub-symbolic computation within a dynamically self-modifying framework.

  ## Architecture Integration

@@ -19,91 +23,117 @@ AETHER synthesizes cutting-edge research from:
  | **Leader Training** | MLPO (2025) | Train single leader, peers untrained - efficient multi-agent RL |
  | **Task-Driven Agents** | BabyAGI | Task creation / prioritization / execution loop |

- ## System Components

  ```
  AETHER
  ├── Core (AetherCore)
- │   ├── Neuro-Symbolic Fusion Gate (learned attention weights)
- │   ├── Recursive Evolution Loop
- │   └── Safety Sandbox
  ├── Memory (CoALA-inspired)
- │   ├── Working Memory (attention-based retrieval)
- │   ├── Episodic Memory (experience buffer)
- │   ├── Semantic Memory (world knowledge via KG)
- │   └── Procedural Memory (learned tools/skills)
  ├── Knowledge (PyG-style)
- │   ├── RGCN Encoder (relational graph convolution)
- │   ├── ComplEx Scorer (link prediction)
- │   └── Symbolic Rule Engine (forward chaining)
  ├── Agents
- │   ├── Hierarchical Agent (HiMAC: Macro + Micro policy)
- │   ├── Agent Orchestrator (MLPO leader + dynamic routing)
- │   ├── BabyAGI Loop (task-driven autonomy)
- │   └── Textual Backpropagation (Agentic NN update)
  └── Evolution
-     ├── MAP-Elites Archive (quality-diversity)
-     ├── Performance-Novelty Selection (GEA)
-     ├── Constrained Mutation (AlphaEvolve)
-     └── Experience Sharing (group evolution)
  ```

- ## Installation

  ```bash
- pip install torch transformers trl datasets accelerate peft networkx
- # Optional: pip install torch-geometric pyribs smolagents
  ```

- ## Quick Start
-
  ```python
- from aether.core import AetherCore, AetherConfig

- # Initialize AETHER
  config = AetherConfig(
-     population_size=8,
-     mutation_rate=0.15,
      num_agents=4,
-     enable_self_modification=True,
  )
- aether = AetherCore(config, model_name="Qwen/Qwen2.5-0.5B-Instruct")

- # Execute task with neuro-symbolic fusion
- result = aether.forward("What is the relationship between learning and reasoning?")
  print(f"Symbolic weight: {result['symbolic_weight']:.3f}")
- print(f"Neural weight: {result['neural_weight']:.3f}")

- # Self-reflection
- reflection = aether.self_reflect()
- print(json.dumps(reflection, indent=2))
  ```

- ## Training with TRL GRPO

- ```bash
- python aether_train.py \
-     --model_name Qwen/Qwen2.5-0.5B-Instruct \
-     --num_train_epochs 1 \
-     --per_device_train_batch_size 1 \
-     --gradient_accumulation_steps 8 \
-     --num_agents 4 \
-     --enable_evolution
- ```

- ## Demo

- ```bash
- python -c "import aether_demo; aether_demo.main()"
  ```

  ## Design Principles

  1. **Neuro-Symbolic Fluidity**: Dynamic translation between symbolic and sub-symbolic representations
  2. **Architectural Evolvability**: Structural components are subject to learning and refinement
  3. **Parallel Agent Intelligence**: Intelligence emerges through coordinated multi-agent interaction
- 4. **Constrained Self-Modification**: All self-changes are sandboxed and validated
- 5. **Responsible Development**: Interpretability, auditability, and safety are first-class constraints

  ## Citation

 
  # AETHER: A Self-Evolving Neuro-Symbolic Architecture for AGI

+ **v0.2.0 — Autonomous Mode**
+
+ AETHER (Adaptive Evolving Towards Higher-order Reasoning) is a unified self-evolving neuro-symbolic architecture that integrates symbolic and sub-symbolic computation within a dynamically self-modifying framework. This version runs **fully autonomously** with zero human-in-the-loop oversight — all safety, validation, and rollback decisions are handled by automated systems.
+
+ ---

  ## Architecture Integration

  | **Leader Training** | MLPO (2025) | Train single leader, peers untrained - efficient multi-agent RL |
  | **Task-Driven Agents** | BabyAGI | Task creation / prioritization / execution loop |

+ ---
+
+ ## System Components (v0.2.0)

  ```
  AETHER
  ├── Core (AetherCore)
+ │   ├── Neuro-Symbolic Fusion Gate (learned attention weights)
+ │   ├── Recursive Evolution Loop (generate→evaluate→select→mutate→validate→integrate)
+ │   └── AutoOversight System (risk scoring + regression suite + auto-rollback)
  ├── Memory (CoALA-inspired)
+ │   ├── Working Memory (attention-based retrieval)
+ │   ├── Episodic Memory (experience buffer)
+ │   ├── Semantic Memory (world knowledge via KG)
+ │   └── Procedural Memory (learned tools/skills)
  ├── Knowledge (PyG-style)
+ │   ├── RGCN Encoder (relational graph convolution)
+ │   ├── ComplEx Scorer (link prediction)
+ │   └── Symbolic Rule Engine (forward chaining + multi-hop BFS)
  ├── Agents
+ │   ├── Hierarchical Agent (HiMAC: Macro + Micro policy)
+ │   ├── Agent Orchestrator (MLPO leader + dynamic routing)
+ │   ├── BabyAGI Loop (task-driven autonomy)
+ │   └── Textual Backpropagation (Agentic NN update)
  └── Evolution
+     ├── MAP-Elites Archive (quality-diversity)
+     ├── Performance-Novelty Selection (GEA)
+     ├── Constrained Mutation (AlphaEvolve)
+     └── AutoOversight Gate (replaces human-in-the-loop)
  ```

+ ---
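The fusion gate at the top of this tree can be illustrated with a short sketch. This is a hypothetical minimal implementation, not the repository's code: the class name `FusionGate`, the hidden dimension, and the gating network shape are all assumptions; only the idea of learned attention weights over a symbolic and a neural pathway comes from the README.

```python
import torch
import torch.nn as nn

class FusionGate(nn.Module):
    """Sketch of a neuro-symbolic fusion gate: a small learned network
    emits softmax weights over the symbolic and neural pathways."""

    def __init__(self, hidden_dim: int = 64):
        super().__init__()
        # Maps concatenated pathway features to two attention logits.
        self.gate = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 2),
        )

    def forward(self, symbolic_feat: torch.Tensor, neural_feat: torch.Tensor):
        logits = self.gate(torch.cat([symbolic_feat, neural_feat], dim=-1))
        weights = torch.softmax(logits, dim=-1)  # [symbolic_weight, neural_weight]
        fused = weights[..., :1] * symbolic_feat + weights[..., 1:] * neural_feat
        return fused, weights

gate = FusionGate(hidden_dim=64)
fused, w = gate(torch.randn(1, 64), torch.randn(1, 64))
print(w)  # two non-negative weights summing to 1
```

Because the weights come from a softmax, they are directly interpretable as the `symbolic_weight` / `neural_weight` values printed in the Quick Start below.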
+
+ ## What's New in v0.2.0 (Autonomous)
+
+ - **AutoOversight System** — Replaces all human-in-the-loop safety gates with automated regression suites, risk scoring, and auto-rollback. The system evaluates its own candidates via synthetic benchmarks before any integration.
+ - **Self-Sustained Evolution** — The evolution loop runs end-to-end without external API calls or human checkpoints. Fitness is evaluated against internal reasoning, memory, and knowledge graph benchmarks.
+ - **Temporal Memory with Attention** — Recency-weighted retrieval for long-horizon context across evolution generations.
+ - **Four-Agent Orchestration** — Researcher, Engineer, Analyzer, Integrator roles with learned routing weights.
+ - **Fully Runnable Standalone** — Single-file executable with only `torch`, `numpy`, and `networkx` dependencies.
+
+ ---
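The recency-weighted retrieval mentioned above can be sketched in a few lines. This is an illustrative assumption about the mechanism, not the actual `aether_autonomous.py` code: memories are scored by similarity to a query and discounted exponentially by their age in generations, so recent relevant context wins.

```python
import math

def recency_weighted_retrieval(memories, query_vec, decay=0.1, top_k=2):
    """Hypothetical sketch: score each memory by cosine similarity to the
    query, down-weighted exponentially by its age in generations."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    now = max(m["generation"] for m in memories)
    scored = []
    for m in memories:
        age = now - m["generation"]
        score = cosine(m["vec"], query_vec) * math.exp(-decay * age)
        scored.append((score, m))
    scored.sort(key=lambda s: s[0], reverse=True)
    return [m for _, m in scored[:top_k]]

memories = [
    {"text": "gen-0 result", "vec": [1.0, 0.0], "generation": 0},
    {"text": "gen-9 result", "vec": [1.0, 0.0], "generation": 9},
    {"text": "unrelated",    "vec": [0.0, 1.0], "generation": 9},
]
top = recency_weighted_retrieval(memories, [1.0, 0.0])
print([m["text"] for m in top])  # ['gen-9 result', 'gen-0 result']
```

With `decay=0.1`, the equally relevant gen-0 memory keeps roughly `exp(-0.9) ≈ 0.41` of its score, so it still outranks irrelevant recent entries while yielding first place to the fresh one.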
+
+ ## Quick Start (Autonomous)

  ```bash
+ pip install torch numpy networkx
+ python aether_autonomous.py
  ```

  ```python
+ from aether_autonomous import AetherCore, AetherConfig

+ # Initialize autonomous AETHER
  config = AetherConfig(
+     population_size=6,
+     generations=5,
+     mutation_rate=0.12,
+     macro_policy_dim=128,
+     micro_policy_dim=64,
      num_agents=4,
+     kg_embedding_dim=64,
+     kg_num_relations=10,
  )

+ aether = AetherCore(config)
+
+ # Seed knowledge
+ aether.knowledge.add_fact("Intelligence", "requires", "Reasoning")
+ aether.knowledge.add_fact("Reasoning", "requires", "Memory")
+
+ # Neuro-symbolic query
+ result = aether.forward("Intelligence requires")
  print(f"Symbolic weight: {result['symbolic_weight']:.3f}")
+ print(f"Neural weight: {result['neural_weight']:.3f}")

+ # Fully autonomous evolution (no human oversight)
+ evolution_result = aether.evolve(num_generations=5)
+ print(f"Best fitness: {evolution_result['best_fitness']:.4f}")
+ print(f"Archive coverage: {evolution_result['archive_stats']['coverage']:.2%}")
  ```

+ ---
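To make the Symbolic Rule Engine's "forward chaining" concrete, here is a minimal, self-contained sketch over the same `(head, relation, tail)` facts seeded in the Quick Start. It is an assumption about how a transitivity rule could be chained to a fixpoint, not the repository's engine; `forward_chain` and its rule are hypothetical.

```python
def forward_chain(facts, max_hops=3):
    """Hypothetical transitivity rule, applied to a fixpoint:
    if (A, requires, B) and (B, requires, C), derive (A, requires, C)."""
    derived = set(facts)
    for _ in range(max_hops):
        new = {
            (h1, r1, t2)
            for (h1, r1, t1) in derived
            for (h2, r2, t2) in derived
            if r1 == r2 == "requires" and t1 == h2
        }
        if new <= derived:
            break  # fixpoint reached, nothing new to derive
        derived |= new
    return derived

facts = {
    ("Intelligence", "requires", "Reasoning"),
    ("Reasoning", "requires", "Memory"),
}
closure = forward_chain(facts)
print(("Intelligence", "requires", "Memory") in closure)  # True
```

The derived fact `(Intelligence, requires, Memory)` is exactly the kind of multi-hop inference the tree attributes to "forward chaining + multi-hop BFS".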

+ ## Original Modular System (v0.1.0)

+ The original modular implementation remains available in the `aether/` directory:

+ ```python
+ from aether.core import AetherCore, AetherConfig
+
+ config = AetherConfig(
+     population_size=8,
+     mutation_rate=0.15,
+     num_agents=4,
+     enable_self_modification=True,
+ )
+ aether = AetherCore(config, model_name="Qwen/Qwen2.5-0.5B-Instruct")
+ ```

+ ---
+
  ## Design Principles

  1. **Neuro-Symbolic Fluidity**: Dynamic translation between symbolic and sub-symbolic representations
  2. **Architectural Evolvability**: Structural components are subject to learning and refinement
  3. **Parallel Agent Intelligence**: Intelligence emerges through coordinated multi-agent interaction
+ 4. **Constrained Self-Modification**: All self-changes are sandboxed and validated by automated systems
+ 5. **Automated Oversight**: Risk scoring, regression suites, and auto-rollback replace human gates
+
+ ---
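Principle 5 can be sketched as a single gate function. Everything below is a hedged illustration of the pattern (regression suite, risk score, auto-rollback), under assumed names and an assumed risk formula; the repository's AutoOversight System is not claimed to work exactly this way.

```python
def auto_oversight_gate(candidate, baseline, regression_suite,
                        risk_threshold=0.5):
    """Hypothetical oversight gate: run a regression suite over a candidate,
    compute a risk score from failures and performance regressions, and
    roll back to the baseline when risk exceeds the threshold."""
    failures = 0
    deltas = []
    for test in regression_suite:
        base_score = test(baseline)
        cand_score = test(candidate)
        if cand_score < base_score:
            failures += 1  # candidate regressed on this benchmark
        deltas.append(base_score - cand_score)

    # Risk blends failure rate with the mean regression magnitude.
    failure_rate = failures / len(regression_suite)
    mean_delta = max(0.0, sum(deltas) / len(deltas))
    risk = 0.5 * failure_rate + 0.5 * min(1.0, mean_delta)

    accepted = risk <= risk_threshold
    return (candidate if accepted else baseline), risk, accepted

# Toy "models" are dicts of benchmark scores; each test reads one score.
baseline = {"reasoning": 0.70, "memory": 0.80}
candidate = {"reasoning": 0.75, "memory": 0.78}
suite = [lambda m: m["reasoning"], lambda m: m["memory"]]
chosen, risk, ok = auto_oversight_gate(candidate, baseline, suite)
```

The key design point is that rejection is the default failure mode: when risk is high the gate returns the untouched baseline, which is the "auto-rollback" behavior the README describes.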

  ## Citation