camdog920 committed · Commit 5fefc15 · verified · Parent(s): 93f6542

Upload README.md

Files changed (1): README.md (+120 −0)

# AETHER: A Self-Evolving Neuro-Symbolic Architecture for AGI

AETHER (Adaptive Evolving Towards Higher-order Reasoning) is a unified self-evolving neuro-symbolic architecture that integrates symbolic and sub-symbolic computation within a dynamically self-modifying framework.

## Architecture Integration

AETHER synthesizes cutting-edge research from:

| Component | Source | Key Contribution |
|-----------|--------|------------------|
| **Evolutionary Core** | AlphaEvolve (DeepMind, 2025) | MAP-Elites + island model + LLM code diffs for algorithm discovery |
| **Hierarchical Reasoning** | HiMAC (2026) | Macro-policy / micro-policy co-evolution with iterative optimization |
| **Group Self-Evolution** | GEA (2026) | Performance-novelty selection with experience sharing |
| **Tool Evolution** | Yunjue Agent (2026) | Manager/Executor/Developer/Integrator role decomposition + tool absorption |
| **AI Research Agent** | ASI-Evolve (2026) | Four-stage Researcher → Engineer → Analyzer → Database loop |
| **Co-Evolution** | CoMAS (2025) | Decentralized multi-agent co-evolution via interaction rewards |
| **Cognitive Architecture** | CoALA (2023) | Working/episodic/semantic/procedural memory taxonomy |
| **Agentic Neural Networks** | ANN (2025) | Textual backpropagation across multi-agent layers |
| **Leader Training** | MLPO (2025) | Trains a single leader while peers remain untrained, enabling efficient multi-agent RL |
| **Task-Driven Agents** | BabyAGI | Task creation / prioritization / execution loop |

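The Evolutionary Core row centers on MAP-Elites, a quality-diversity method: candidates are binned by a behavior descriptor, and each bin keeps only its best-scoring elite. A minimal sketch (class and method names here are illustrative, not AETHER's actual API):

```python
import random

class MapElitesArchive:
    """Toy MAP-Elites archive: one elite per behavior bin."""

    def __init__(self, num_bins=10):
        self.num_bins = num_bins
        self.elites = {}  # bin index -> (score, candidate)

    def add(self, candidate, score, descriptor):
        # descriptor in [0, 1) maps to a discrete behavior bin
        b = min(int(descriptor * self.num_bins), self.num_bins - 1)
        # keep the candidate only if its bin is empty or it beats the incumbent
        if b not in self.elites or score > self.elites[b][0]:
            self.elites[b] = (score, candidate)

    def sample_parent(self):
        # uniform choice over occupied bins preserves behavioral diversity
        return random.choice(list(self.elites.values()))[1]

archive = MapElitesArchive()
archive.add("variant-a", score=0.7, descriptor=0.12)
archive.add("variant-b", score=0.9, descriptor=0.15)  # same bin -> replaces variant-a
archive.add("variant-c", score=0.4, descriptor=0.85)
print(len(archive.elites))  # 2 occupied bins
```

In an AlphaEvolve-style loop, parents for the next round of LLM code diffs are drawn from occupied bins, so mutation pressure covers diverse behaviors rather than only the single current best.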
## System Components

```
AETHER
├── Core (AetherCore)
│   ├── Neuro-Symbolic Fusion Gate (learned attention weights)
│   ├── Recursive Evolution Loop
│   └── Safety Sandbox
├── Memory (CoALA-inspired)
│   ├── Working Memory (attention-based retrieval)
│   ├── Episodic Memory (experience buffer)
│   ├── Semantic Memory (world knowledge via KG)
│   └── Procedural Memory (learned tools/skills)
├── Knowledge (PyG-style)
│   ├── RGCN Encoder (relational graph convolution)
│   ├── ComplEx Scorer (link prediction)
│   └── Symbolic Rule Engine (forward chaining)
├── Agents
│   ├── Hierarchical Agent (HiMAC: macro + micro policy)
│   ├── Agent Orchestrator (MLPO leader + dynamic routing)
│   ├── BabyAGI Loop (task-driven autonomy)
│   └── Textual Backpropagation (Agentic NN update)
└── Evolution
    ├── MAP-Elites Archive (quality-diversity)
    ├── Performance-Novelty Selection (GEA)
    ├── Constrained Mutation (AlphaEvolve)
    └── Experience Sharing (group evolution)
```

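The Fusion Gate under Core can be pictured as a softmax over per-path scores. A minimal pure-Python sketch, assuming the two logits come from a learned attention layer (the `fusion_weights`/`fuse` names are hypothetical, not AETHER's API):

```python
import math

def fusion_weights(symbolic_logit, neural_logit):
    """Softmax two gate logits into (symbolic_weight, neural_weight)."""
    exps = [math.exp(symbolic_logit), math.exp(neural_logit)]
    z = sum(exps)
    return exps[0] / z, exps[1] / z

def fuse(symbolic_out, neural_out, symbolic_logit, neural_logit):
    """Convex combination of the symbolic and neural representations."""
    ws, wn = fusion_weights(symbolic_logit, neural_logit)
    return [ws * s + wn * n for s, n in zip(symbolic_out, neural_out)]

ws, wn = fusion_weights(0.0, 0.0)  # equal logits -> equal 0.5 / 0.5 split
print(f"Symbolic weight: {ws:.3f}")
print(f"Neural weight: {wn:.3f}")
```

Because the weights always sum to 1, they are directly interpretable as the `symbolic_weight` / `neural_weight` values reported by the core's output.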
## Installation

```bash
pip install torch transformers trl datasets accelerate peft networkx
# Optional: pip install torch-geometric pyribs smolagents
```

## Quick Start

```python
import json

from aether.core import AetherCore, AetherConfig

# Initialize AETHER
config = AetherConfig(
    population_size=8,
    mutation_rate=0.15,
    num_agents=4,
    enable_self_modification=True,
)
aether = AetherCore(config, model_name="Qwen/Qwen2.5-0.5B-Instruct")

# Execute task with neuro-symbolic fusion
result = aether.forward("What is the relationship between learning and reasoning?")
print(f"Symbolic weight: {result['symbolic_weight']:.3f}")
print(f"Neural weight: {result['neural_weight']:.3f}")

# Self-reflection
reflection = aether.self_reflect()
print(json.dumps(reflection, indent=2))
```

## Training with TRL GRPO

```bash
python aether_train.py \
    --model_name Qwen/Qwen2.5-0.5B-Instruct \
    --num_train_epochs 1 \
    --per_device_train_batch_size 1 \
    --gradient_accumulation_steps 8 \
    --num_agents 4 \
    --enable_evolution
```

## Demo

```bash
python -c "import aether_demo; aether_demo.main()"
```

## Design Principles

1. **Neuro-Symbolic Fluidity**: Dynamic translation between symbolic and sub-symbolic representations
2. **Architectural Evolvability**: Structural components are subject to learning and refinement
3. **Parallel Agent Intelligence**: Intelligence emerges through coordinated multi-agent interaction
4. **Constrained Self-Modification**: All self-changes are sandboxed and validated
5. **Responsible Development**: Interpretability, auditability, and safety are first-class constraints

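Principle 4 can be made concrete: execute a candidate self-modification in a namespace with restricted builtins and reject it on any failure. A hedged sketch only (the `mutate` convention and `sandboxed_validate` name are illustrative, not AETHER's actual sandbox, which would need stronger isolation than restricted builtins alone):

```python
def sandboxed_validate(candidate_src, test_cases):
    """Run candidate code with restricted builtins; accept only if all tests pass."""
    allowed = {"__builtins__": {"abs": abs, "min": min, "max": max,
                                "range": range, "len": len}}
    try:
        # no __import__, open, or exec is exposed to the candidate
        exec(candidate_src, allowed)
        fn = allowed.get("mutate")  # convention: candidate defines mutate()
        if fn is None:
            return False
        return all(fn(x) == y for x, y in test_cases)
    except Exception:
        return False  # any error (syntax, blocked import, bad output) rejects it

good = "def mutate(x):\n    return x * 2\n"
bad = "import os\ndef mutate(x):\n    return x\n"  # blocked import -> rejected
print(sandboxed_validate(good, [(1, 2), (3, 6)]))  # True
print(sandboxed_validate(bad, [(1, 1)]))           # False
```

Only modifications that both execute cleanly under the restricted namespace and pass their validation cases would be promoted into the running system.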
## Citation

```bibtex
@article{aether2026,
  title={AETHER: A Self-Evolving Neuro-Symbolic Architecture for Artificial General Intelligence},
  author={Anonymous},
  year={2026}
}
```

## License

MIT License. Open for research and development toward responsible AGI.