# HippocampAIF — End-to-End Codebase Guide

**A Biologically Grounded Cognitive Architecture for One-Shot Learning & Active Inference**

License: © 2026 Algorembrant, Rembrant Oyangoren Albeos

---

## Table of Contents

1. [What This Is](#what-this-is)
2. [Theoretical Foundations](#theoretical-foundations)
3. [Architecture Map](#architecture-map)
4. [Setup](#setup)
5. [Module Reference](#module-reference)
6. [How the Pipeline Works](#how-the-pipeline-works)
7. [Using the MNIST Agent](#using-the-mnist-agent)
8. [Using the Breakout Agent](#using-the-breakout-agent)
9. [Running Tests](#running-tests)
10. [Extending the Framework](#extending-the-framework)
11. [Design Decisions & Rationale](#design-decisions--rationale)
12. [File Map](#file-map)

---

## What This Is

HippocampAIF is a **complete cognitive architecture** implemented in pure Python (NumPy + SciPy only — no PyTorch, no TensorFlow, no JAX). Every module corresponds to a real brain structure, with citations to the computational neuroscience literature.

The framework does two things that conventional ML cannot:

1. **One-shot classification** — learn to recognize a new category from a single example (like humans do)
2.
   **Fast game mastery** — play Atari Breakout using innate physics priors (like infants understand gravity before they can walk)

### Key Innovation

Instead of POMDP/VI/MCMC (traditional AI approaches), HippocampAIF uses:

- **Free-Energy Minimization** (Friston) for perception and action
- **Hippocampal Fast-Binding** for instant one-shot memory
- **Spelke's Core Knowledge** systems as hardcoded innate priors
- **Distortable Canvas** for elastic image comparison

---

## Theoretical Foundations

### Three Source Papers

| Paper | What It Provides | Where in Code |
|-------|------------------|---------------|
| **Friston (2009)** "The free-energy principle: a rough guide to the brain" | Free energy F = Energy − Entropy, recognition dynamics, active inference | `core/free_energy.py`, `core/message_passing.py`, `neocortex/predictive_coding.py`, `action/active_inference.py` |
| **Lake et al. (2015)** "Human-level concept learning through probabilistic program induction" (BPL) | One-shot learning from single examples, compositional representations | `learning/one_shot_classifier.py`, `hippocampus/index_memory.py`, `agent/mnist_agent.py` |
| **Distortable Canvas** (oneandtrulyone) | Elastic canvas deformation, dual distance metric, AMGD optimization | `learning/distortable_canvas.py`, `learning/amgd.py`, `core_knowledge/geometry_system.py` |

### Core Equations

**Free Energy (Friston Box 1):**

```
F = −⟨ln p(y,ϑ|m)⟩_q + ⟨ln q(ϑ|μ)⟩_q
```

Under the Laplace approximation: `F ≈ −ln p(y,μ) + ½ ln|Π(μ)|`

**Recognition Dynamics (Friston Box 3):**

```
μ̇ = −∂F/∂μ   (perception: update internal model)
ȧ = −∂F/∂a   (action: change world to match predictions)
λ̇ = −∂F/∂λ   (attention: optimize precision)
```

**Dual Distance (Distortable Canvas):**

```
D(I₁, I₂) = min_u,v [ color_dist(warp(I₁, u, v), I₂) + λ × canvas_dist(u, v) ]
```

---

## Architecture Map

```
                ┌────────────────────────────┐
                │  Prefrontal Cortex (PFC)   │
                │  • Working memory (7±2)    │
                │  • Executive control       │
                │  • Goal stack              │
                └─────────────┬──────────────┘
                              │ top-down control
         ┌────────────────────┼─────────────────────┐
         │                    │                     │
         ▼                    ▼                     ▼
┌─────────────────┐  ┌──────────────────┐  ┌────────────────────┐
│ Temporal Cortex │  │ Predictive Coding│  │ Parietal Cortex    │
│ • Recognition   │  │ • Friston Box 3  │  │ • Priority maps    │
│ • Categories    │◄─│ • Free-energy min│─►│ • Coord. transforms│
│ • Semantic mem. │  │ • Error signals  │  │ • Sensorimotor     │
└────────┬────────┘  └────────┬─────────┘  └────────┬───────────┘
         │                    │                     │
         │     ┌──────────────┼───────────────┐     │
         │     ▼              ▼               ▼     │
         │  ┌───────┐  ┌───────────┐  ┌─────────┐   │
         │  │  SC   │  │ Precision │  │ Biased  │   │
         │  │Saccade│  │ Modulator │  │ Compete │   │
         │  └───┬───┘  └─────┬─────┘  └────┬────┘   │
         │      └────────────┼─────────────┘        │
         │                   │ attention            │
         │     ┌─────────────┼─────────────┐        │
         ▼     ▼             ▼             ▼        ▼
┌──────────────────────────────────────────────────────┐
│                H I P P O C A M P U S                 │
│  ┌────────┐   ┌────────┐   ┌─────┐  ┌──────────────┐ │
│  │   DG   │ → │  CA3   │ → │ CA1 │ →│ Index Memory │ │
│  │Separate│   │Complete│   │Match│  │ Fast-binding │ │
│  └────────┘   └────────┘   └─────┘  └──────────────┘ │
│  ┌───────────────┐   ┌───────────────┐               │
│  │ Entorhinal EC │   │ Replay Buffer │               │
│  │ Grid cells    │   │ Consolidation │               │
│  └───────────────┘   └───────────────┘               │
└──────────────────────────┬───────────────────────────┘
                           │ features
┌──────────────────────────┴───────────────────────────┐
│             V I S U A L   C O R T E X                │
│  ┌───────────┐   ┌──────────────┐  ┌───────────────┐ │
│  │ V1 Simple │ → │ V1 Complex   │ →│ HMAX Hierarchy│ │
│  │ Gabor     │   │ Max-pooling  │  │ V2→V4→IT      │ │
│  └───────────┘   └──────────────┘  └───────────────┘ │
└──────────────────────────┬───────────────────────────┘
                           │ ON/OFF sparse
┌──────────────────────────┴───────────────────────────┐
│                  R E T I N A                         │
│  ┌───────────────┐  ┌──────────┐  ┌────────────────┐ │
│  │ Photoreceptors│  │ Ganglion │  │ Spatiotemporal │ │
│  │ Adaptation    │  │ DoG      │  │ Motion energy  │ │
│  └───────────────┘  └──────────┘  └────────────────┘ │
└──────────────────────────┬───────────────────────────┘
                           │ raw image
                      ═════╧═════
                      │ SENSES  │
                      ═══════════
┌──────────────────────────────────────────────────────┐
│           C O R E   K N O W L E D G E                │
│  ┌────────┐  ┌────────┐  ┌────────┐  ┌────────┐      │
│  │Objects │  │Physics │  │Number  │  │Geometry│      │
│  │Perm/Coh│  │Gravity │  │ANS/Sub │  │Canvas  │      │
│  └────────┘  └────────┘  └────────┘  └────────┘      │
│  ┌────────┐  ┌────────┐                              │
│  │Agent   │  │Social  │   ← INNATE, NOT LEARNED      │
│  │Goals   │  │Helper  │                              │
│  └────────┘  └────────┘                              │
└──────────────────────────────────────────────────────┘
┌──────────────────────────────────────────────────────┐
│            A C T I O N   S Y S T E M                 │
│  ┌──────────────────┐  ┌────────────┐  ┌──────────┐  │
│  │ Active Inference │  │ Motor      │  │ Reflex   │  │
│  │ ȧ = −∂F/∂a       │  │ Primitives │  │ Arc      │  │
│  │ Expected FE min. │  │ L/R/Fire   │  │ Track    │  │
│  └──────────────────┘  └────────────┘  └──────────┘  │
└──────────────────────────────────────────────────────┘
```

---

## Setup

### Prerequisites

- Python ≥ 3.10
- NumPy ≥ 1.24
- SciPy ≥ 1.10
- Pillow ≥ 9.0

### Installation

```powershell
# 1. Clone or navigate to the project
cd c:\Users\User\Desktop\debugrem\clawd-one-and-only-one-shot

# 2. Create virtual environment
python -m venv .venv

# 3. Activate
.venv\Scripts\activate

# 4. Install dependencies
pip install -r requirements.txt

# 5.
# Set PYTHONPATH (REQUIRED — PowerShell syntax)
$env:PYTHONPATH = "c:\Users\User\Desktop\debugrem\clawd-one-and-only-one-shot"
```

> **CMD users:** Use `set PYTHONPATH=c:\Users\User\Desktop\debugrem\clawd-one-and-only-one-shot`
> **Linux/Mac users:** Use `export PYTHONPATH=$(pwd)`

### Verify Installation

```powershell
python -c "import hippocampaif; print(f'HippocampAIF v{hippocampaif.__version__}')"
# Expected: HippocampAIF v1.0.0
```

---

## Module Reference

### Phase 1: Core Infrastructure (`core/`)

| Module | Class | Purpose |
|--------|-------|---------|
| `tensor.py` | `SparseTensor` | Sparse ndarray wrapper — the brain is "lazy and sparse" |
| `free_energy.py` | `FreeEnergyEngine` | Variational free-energy computation and gradient descent |
| `message_passing.py` | `HierarchicalMessagePassing` | Forward (errors) + backward (predictions) message passing |
| `dynamics.py` | `ContinuousDynamics` | Euler integration of recognition dynamics |

**Usage:**

```python
from hippocampaif.core.free_energy import FreeEnergyEngine

fe = FreeEnergyEngine(learning_rate=0.01)
F = fe.compute_free_energy(sensory_input, prediction, precision)
new_state = fe.perception_update(state, sensory_input, generative_fn, precision)
```

### Phase 2: Retina (`retina/`)

| Module | Class | Purpose |
|--------|-------|---------|
| `photoreceptor.py` | `PhotoreceptorArray` | Luminance adaptation, Weber's law |
| `ganglion.py` | `GanglionCellLayer` | DoG center-surround → ON/OFF sparse channels |
| `spatiotemporal_energy.py` | `SpatiotemporalEnergyBank` | Adelson-Bergen motion energy |

**Usage:**

```python
from hippocampaif.retina.ganglion import GanglionCellLayer

retina = GanglionCellLayer(center_sigma=1.0, surround_sigma=3.0)
st_on, st_off = retina.process(image)  # Returns SparseTensors
on_array = st_on.data                  # Dense numpy array
```

### Phase 3: Visual Cortex (`v1_v5/`)

| Module | Class | Purpose |
|--------|-------|---------|
| `gabor_filters.py` | `V1SimpleCells` | 2D Gabor filter bank (multi-orientation, multi-scale) |
| `sparse_coding.py` | `V1ComplexCells` | Max-pooling for shift invariance + hypercolumn sparsity |
| `hmax_pooling.py` | `HMAXHierarchy` | S-cell/C-cell hierarchy: V1→V2→V4→IT |

**Usage:**

```python
from hippocampaif.v1_v5.gabor_filters import V1SimpleCells
from hippocampaif.v1_v5.sparse_coding import V1ComplexCells
from hippocampaif.v1_v5.hmax_pooling import HMAXHierarchy

v1 = V1SimpleCells(n_orientations=8, n_scales=2, kernel_size=11, frequency=0.25)
v1c = V1ComplexCells(pool_size=3)
hmax = HMAXHierarchy(pool_sizes=[2, 2])

simple = v1.process(on_center_image)    # (n_filters, H, W)
complex_maps = v1c.process(simple)      # list[SparseTensor]
hierarchy = hmax.process(complex_maps)  # list[list[SparseTensor]]
```

### Phase 4: Hippocampus (`hippocampus/`)

| Module | Class | Purpose |
|--------|-------|---------|
| `dg.py` | `DentateGyrus` | Pattern separation — sparse expansion coding |
| `ca3.py` | `CA3` | Pattern completion — attractor network |
| `ca1.py` | `CA1` | Match/mismatch detection → novelty signals |
| `entorhinal.py` | `EntorhinalCortex` | Grid cells, spatial coding |
| `index_memory.py` | `HippocampalIndex` | **One-shot fast-binding** — store and retrieve in 1 exposure |
| `replay.py` | `ReplayBuffer` | Memory consolidation via offline replay |

**Usage (one-shot memory):**

```python
from hippocampaif.hippocampus.index_memory import HippocampalIndex

mem = HippocampalIndex(cortical_size=128, index_size=256)
mem.store(features_vector)             # Instant! No training loops
result = mem.retrieve(query_features)  # Nearest match
```

### Phase 5: Core Knowledge (`core_knowledge/`)

These are **innate priors** — hardcoded "common sense" that constrains perception, NOT learned from data.
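To make "innate, not learned" concrete, here is a minimal pure-NumPy sketch of the number prior described in this phase — exact comparison in the subitizing range (counts ≤4), noisy Weber-scaled estimates beyond it. `compare_quantities` is a hypothetical illustration, not the library's `NumberSystem` API:

```python
import numpy as np

def compare_quantities(n_a, n_b, weber_fraction=0.15, rng=None):
    """Return +1 if n_a 'feels' larger than n_b, -1 if smaller, 0 if equal."""
    if n_a <= 4 and n_b <= 4:
        return int(np.sign(n_a - n_b))  # subitizing range: exact
    if rng is None:
        rng = np.random.default_rng(0)
    # Approximate Number System: multiplicative noise with a fixed Weber fraction
    est_a = n_a * (1 + weber_fraction * rng.standard_normal())
    est_b = n_b * (1 + weber_fraction * rng.standard_normal())
    return int(np.sign(est_a - est_b))

compare_quantities(3, 2)  # within the subitizing range → exact answer, 1
```

Note that nothing here is trained: the behavior is fixed by the prior's parameters (the subitizing limit and the Weber fraction), which is the same design stance the `core_knowledge/` systems take.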
| Module | Class | What It Encodes |
|--------|-------|-----------------|
| `object_system.py` | `ObjectSystem` | Objects persist when occluded, can't teleport, don't pass through each other |
| `physics_system.py` | `PhysicsSystem` | Gravity pulls down, objects bounce elastically, friction slows things |
| `number_system.py` | `NumberSystem` | Exact count ≤4 (subitizing), Weber ratio for larger sets |
| `geometry_system.py` | `GeometrySystem` | Spatial relations + Distortable Canvas deformation fields |
| `agent_system.py` | `AgentSystem` | Self-propelled entities with direction changes = intentional agents |
| `social_system.py` | `SocialSystem` | Helpers are preferred over hinderers |

**Usage (physics prediction for Breakout):**

```python
from hippocampaif.core_knowledge.physics_system import PhysicsSystem, PhysicsState

phys = PhysicsSystem(gravity=0.0, elasticity=1.0)
ball = PhysicsState(position=[50, 100], velocity=[3, -2])
trajectory = phys.predict_trajectory(ball, steps=50, bounds=([0, 0], [160, 210]))
# → Predicts the ball's path, including wall bounces
```

### Phase 6: Neocortex + Attention (`neocortex/`, `attention/`)

| Module | Class | Purpose |
|--------|-------|---------|
| `predictive_coding.py` | `PredictiveCodingHierarchy` | Hierarchical free-energy minimization (Friston Box 3) |
| `prefrontal.py` | `PrefrontalCortex` | Working memory (7±2 items), executive control |
| `temporal.py` | `TemporalCortex` | Object recognition, one-shot categories |
| `parietal.py` | `ParietalCortex` | Priority maps, coordinate transforms |
| `superior_colliculus.py` | `SuperiorColliculus` | Saccade target selection via WTA competition |
| `precision.py` | `PrecisionModulator` | Attention = precision weighting (attend/suppress channels) |
| `competition.py` | `BiasedCompetition` | Desimone & Duncan biased competition model |

### Phase 7: One-Shot Learning (`learning/`)

| Module | Class | Purpose |
|--------|-------|---------|
| `distortable_canvas.py` | `DistortableCanvas` | Elastic image warping + dual distance metric |
| `amgd.py` | `AMGD` | Coarse-to-fine deformation optimization |
| `one_shot_classifier.py` | `OneShotClassifier` | Full pipeline: features → match → canvas refine |
| `hebbian.py` | `HebbianLearning` | Basic/Oja/BCM/anti-Hebbian plasticity rules |

### Phase 8: Action (`action/`)

| Module | Class | Purpose |
|--------|-------|---------|
| `active_inference.py` | `ActiveInferenceController` | ȧ = −∂F/∂a — choose actions that minimize surprise |
| `motor_primitives.py` | `MotorPrimitives` | NOOP/FIRE/LEFT/RIGHT for Breakout |
| `reflex_arc.py` | `ReflexArc` | Tracking, withdrawal, orienting, intercept reflexes |

### Phase 9: Integrated Agent (`agent/`)

| Module | Class | Purpose |
|--------|-------|---------|
| `brain.py` | `Brain` | Wires ALL modules together: sense → remember → predict → attend → act |
| `mnist_agent.py` | `MNISTAgent` | One-shot MNIST: 1 exemplar per digit → classify |
| `breakout_agent.py` | `BreakoutAgent` | Breakout: physics priors + reflex tracking |

---

## How the Pipeline Works

### Perception Pipeline (seeing)

```
Raw Image (28×28 or 84×84)
  │
  ▼
GanglionCellLayer.process()
  ON/OFF SparseTensors (DoG filtered)
  │
  ▼
V1SimpleCells.process()
  Gabor responses (n_orientations × n_scales, H, W)
  │
  ▼
V1ComplexCells.process()
  Shift-invariant sparse maps: list[SparseTensor]
  │
  ▼
HMAXHierarchy.process()
  Hierarchical features: list[list[SparseTensor]]
  │
  ▼
Flatten + truncate to feature_size
  Feature vector (128-dim)
  │
  ├──► PredictiveCodingHierarchy.process() → free-energy minimization
  ├──► TemporalCortex.recognize()          → category label
  ├──► PrefrontalCortex.store()            → working memory
  └──► HippocampalIndex.store()            → one-shot binding
```

### Action Pipeline (doing)

```
Current internal state (from predictive coding)
  │
  ▼
ActiveInferenceController.select_action()
  Expected free energy G(a) for each action
  │
  ▼
softmax(−β × G)
  Action probabilities
  │
  ▼
argmax (or sample)
  Discrete action (0–3)
  │
  ▼
MotorPrimitives.get_action_name()
  "LEFT" / "RIGHT" / "FIRE" / "NOOP"
```

### One-Shot Learning Pipeline (classifying)

```
Test Image
  │
  ▼
Full perception pipeline
  Feature vector
  │
  ▼
OneShotClassifier.classify()
  │
  ├── Compare to all stored exemplar features
  ├── If confidence > threshold → return label
  └── If ambiguous → DistortableCanvas refinement:
        ├── AMGD optimizes the deformation field
        ├── Dual distance = color_dist + λ × canvas_dist
        └── Choose the exemplar with the lowest dual distance
```

---

## Using the MNIST Agent

### Quick Start

```python
import numpy as np
from hippocampaif.agent.mnist_agent import MNISTAgent

# Create agent (feature_size=128 is the default)
agent = MNISTAgent(feature_size=128, use_canvas=True)

# === TRAINING: Learn 1 exemplar per digit ===
# Load your MNIST data (10 training images, one per digit)
for digit in range(10):
    image = training_images[digit]  # 28×28 numpy array, values 0-255
    agent.learn_digit(image, label=digit)

print(f"Learned {agent.exemplars_stored} digits")

# === TESTING: Classify new images ===
result = agent.classify(test_image)
print(f"Predicted: {result['label_int']}, Confidence: {result['confidence']:.2f}")

# === EVALUATION: Batch accuracy ===
stats = agent.evaluate(test_images, test_labels)
print(f"Accuracy: {stats['accuracy']*100:.1f}%")
print(f"Per-class: {stats['per_class_accuracy']}")
```

### Loading MNIST Data

```python
import numpy as np

# Option 1: From sklearn
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784', version=1)
images = mnist.data.values.reshape(-1, 28, 28)
labels = mnist.target.values.astype(int)

# Option 2: From local .npy files
images = np.load('mnist_images.npy')
labels = np.load('mnist_labels.npy')

# Select 1 training exemplar per digit
train_indices = []
for d in range(10):
    idx = np.where(labels == d)[0][0]  # first occurrence of digit d
    train_indices.append(idx)

train_images = images[train_indices]
train_labels = labels[train_indices]
```

---

## Using the Breakout Agent

### Quick Start

```python
import numpy as np
from hippocampaif.agent.breakout_agent import BreakoutAgent
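# Optional smoke-test stand-in (an assumption, NOT part of HippocampAIF):
# a hypothetical _StubEnv with a gym-like step() so the loop below can run
# without installing gymnasium. Note: this reset() returns the frame alone,
# matching the loop below; gymnasium's reset() returns (obs, info) instead.
class _StubEnv:
    def reset(self):
        return np.zeros((210, 160), dtype=np.float32)  # blank grayscale frame

    def step(self, action):
        frame = np.zeros((210, 160), dtype=np.float32)
        return frame, 0.0, False, False, {}  # obs, reward, terminated, truncated, info

env = _StubEnv()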
# Create agent
agent = BreakoutAgent(screen_height=210, screen_width=160)

# === Game Loop ===
agent.new_episode()
observation = env.reset()  # your environment's reset (gymnasium example below)
reward = 0.0

for step in range(10000):
    action = agent.act(observation, reward=reward)  # feed back the last reward
    observation, reward, done, _, info = env.step(action)

    if done:
        print(f"Episode {agent.episode}: reward = {agent.episode_reward}")
        agent.new_episode()
        observation = env.reset()
        reward = 0.0
```

### With Gymnasium (requires optional deps)

```powershell
pip install gymnasium[atari] ale-py
```

```python
import gymnasium as gym
from hippocampaif.agent.breakout_agent import BreakoutAgent

env = gym.make('BreakoutNoFrameskip-v4', render_mode='human')
agent = BreakoutAgent()

for episode in range(5):
    agent.new_episode()
    obs, _ = env.reset()
    total_reward = 0
    while True:
        action = agent.act(obs)
        obs, reward, term, trunc, _ = env.step(action)
        total_reward += reward
        if term or trunc:
            break
    print(f"Episode {episode+1}: {total_reward} reward")
    print(agent.get_stats())

env.close()
```

---

## Running Tests

### All Phases

```powershell
# Set PYTHONPATH first!
$env:PYTHONPATH = "c:\Users\User\Desktop\debugrem\clawd-one-and-only-one-shot"

# Phase 1-4 (Core, Retina, Visual Cortex, Hippocampus)
python -m hippocampaif.tests.test_core
python -m hippocampaif.tests.test_retina
python -m hippocampaif.tests.test_v1_v5
python -m hippocampaif.tests.test_hippocampus

# Phase 5-8 (Core Knowledge, Neocortex, Learning, Action)
python -m hippocampaif.tests.test_core_knowledge
python -m hippocampaif.tests.test_neocortex_attention
python -m hippocampaif.tests.test_learning
python -m hippocampaif.tests.test_action
```

### What Each Test Suite Validates

| Test Suite | # Tests | What It Checks |
|------------|---------|----------------|
| `test_core` | — | Free-energy convergence, message-passing stability, sparse tensor ops |
| `test_retina` | — | DoG center-surround, motion energy detection |
| `test_v1_v5` | — | Gabor orientations, HMAX invariant features |
| `test_hippocampus` | — | Pattern separation orthogonality, completion from partial cues |
| `test_core_knowledge` | 11 | Object permanence, continuity, gravity, bounce, support, subitizing, Weber, geometry, deformation, agency, social |
| `test_neocortex_attention` | 10 | PC convergence, PC learning, WM capacity, WM decay, one-shot recognition, coord transforms, priority maps, saccades, precision, biased competition |
| `test_learning` | 7 | Canvas warp identity, dual distance, same-class distance, AMGD, Hebbian basic, Oja bounded, one-shot classifier |
| `test_action` | 6 | Active-inference goal-seeking, forward-model learning, motor primitives, reflex tracking, intercept, habituation |

---

## Extending the Framework

### Adding a New Core Knowledge System

```python
# hippocampaif/core_knowledge/my_new_system.py
import numpy as np

class TemporalSystem:
    """Core knowledge of time and causality."""

    def __init__(self):
        self.causal_chains = []

    def detect_causality(self, event_a, event_b, time_gap):
        """Innate prior: causes precede effects in time."""
        if 0 < time_gap < 2.0:  # temporal contiguity window
            return {'causal': True, 'strength': 1.0 / time_gap}
        return {'causal': False, 'strength': 0.0}
```

Then add to `core_knowledge/__init__.py`:

```python
from .my_new_system import TemporalSystem
```

### Adding a New Agent

```python
# hippocampaif/agent/my_agent.py
from hippocampaif.agent.brain import Brain

class MyAgent:
    def __init__(self):
        self.brain = Brain(image_height=64, image_width=64, n_actions=4)

    def act(self, observation):
        perception = self.brain.perceive(observation)
        return self.brain.act()

    def learn(self, image, label):
        self.brain.one_shot_learn(image, label)
```

### Adding Custom Reflexes

```python
import numpy as np
from hippocampaif.action.reflex_arc import ReflexArc

class CustomReflexArc(ReflexArc):
    def dodge_reflex(self, projectile_pos, projectile_vel, agent_pos):
        """Dodge an incoming projectile."""
        # Predict the collision point half a time-step ahead
        predicted = projectile_pos + projectile_vel * 0.5
        # Move perpendicular to the projectile's trajectory:
        # rotate the 2-D velocity by 90° (equivalent to cross(v, ẑ))
        direction = np.array([projectile_vel[1], -projectile_vel[0]])
        return self.reflex_gain * direction
```

---

## Design Decisions & Rationale

### Why No PyTorch/TensorFlow/JAX?

The framework is intentionally pure NumPy + SciPy because:

1. **Biological fidelity** — neural computations are local gradient updates, not backprop through a compute graph
2. **Interpretability** — every array corresponds to a neural population with known anatomy
3. **Minimal dependencies** — runs on any machine with Python and NumPy
4. **Educational value** — you can read every line and understand the neuroscience

### Why Hippocampal Fast-Binding Instead of MCMC?

MCMC sampling is computationally expensive and biologically implausible. The hippocampus stores new memories **instantly** via pattern separation (DG) + fast Hebbian binding (CA3) — no need for thousands of samples.

### Why Spelke's Core Knowledge Instead of Tabula Rasa?

Human infants are NOT blank slates.
They have innate expectations about:

- **Objects** — things persist when hidden
- **Physics** — dropped objects fall
- **Numbers** — small quantities are exact

These priors are hardcoded because they evolved over millions of years and shouldn't need to be learned from scratch by every agent.

### Why Distortable Canvas Instead of CNN Features?

CNNs require thousands of training images. The Distortable Canvas achieves 90% MNIST accuracy with just **4 examples** by treating image comparison as a smooth deformation problem — "how much do I need to warp image A to look like image B?"

---

## File Map

```
hippocampaif/                       # 59 Python files across 9 packages
├── __init__.py                     # v1.0.0, exports core classes
├── core/                           # Phase 1 — Foundation
│   ├── tensor.py                   # SparseTensor
│   ├── free_energy.py              # FreeEnergyEngine
│   ├── message_passing.py          # HierarchicalMessagePassing
│   └── dynamics.py                 # ContinuousDynamics
├── retina/                         # Phase 2 — Eye
│   ├── photoreceptor.py            # PhotoreceptorArray
│   ├── ganglion.py                 # GanglionCellLayer (DoG)
│   └── spatiotemporal_energy.py    # SpatiotemporalEnergyBank
├── v1_v5/                          # Phase 3 — Visual Cortex
│   ├── gabor_filters.py            # V1SimpleCells
│   ├── sparse_coding.py            # V1ComplexCells
│   └── hmax_pooling.py             # HMAXHierarchy
├── hippocampus/                    # Phase 4 — Memory
│   ├── dg.py                       # DentateGyrus
│   ├── ca3.py                      # CA3
│   ├── ca1.py                      # CA1
│   ├── entorhinal.py               # EntorhinalCortex
│   ├── index_memory.py             # HippocampalIndex
│   └── replay.py                   # ReplayBuffer
├── core_knowledge/                 # Phase 5 — Innate Priors
│   ├── object_system.py            # ObjectSystem
│   ├── physics_system.py           # PhysicsSystem
│   ├── number_system.py            # NumberSystem
│   ├── geometry_system.py          # GeometrySystem
│   ├── agent_system.py             # AgentSystem
│   └── social_system.py            # SocialSystem
├── neocortex/                      # Phase 6a — Higher Cognition
│   ├── predictive_coding.py        # PredictiveCodingHierarchy
│   ├── prefrontal.py               # PrefrontalCortex
│   ├── temporal.py                 # TemporalCortex
│   └── parietal.py                 # ParietalCortex
├── attention/                      # Phase 6b — Attention
│   ├── superior_colliculus.py      # SuperiorColliculus
│   ├── precision.py                # PrecisionModulator
│   └── competition.py              # BiasedCompetition
├── learning/                       # Phase 7 — One-Shot
│   ├── distortable_canvas.py       # DistortableCanvas
│   ├── amgd.py                     # AMGD
│   ├── one_shot_classifier.py      # OneShotClassifier
│   └── hebbian.py                  # HebbianLearning
├── action/                         # Phase 8 — Motor
│   ├── active_inference.py         # ActiveInferenceController
│   ├── motor_primitives.py         # MotorPrimitives
│   └── reflex_arc.py               # ReflexArc
├── agent/                          # Phase 9 — Integration
│   ├── brain.py                    # Brain (full pipeline)
│   ├── mnist_agent.py              # MNISTAgent
│   └── breakout_agent.py           # BreakoutAgent
└── tests/                          # 8 test suites, 34+ tests
    ├── test_core.py
    ├── test_retina.py
    ├── test_v1_v5.py
    ├── test_hippocampus.py
    ├── test_core_knowledge.py
    ├── test_neocortex_attention.py
    ├── test_learning.py
    └── test_action.py
```

---

## Citation

If you use this framework in research or production, please cite:

```bibtex
@software{hippocampaif2026,
  author      = {Albeos, Rembrant Oyangoren},
  title       = {HippocampAIF: Biologically Grounded Cognitive Architecture},
  year        = {2026},
  description = {Free-energy minimization + hippocampal fast-binding + Spelke's core knowledge for one-shot learning and active inference}
}
```

**References:**

- Friston, K. (2009). The free-energy principle: a rough guide to the brain. *Trends in Cognitive Sciences*, 13(7), 293–301.
- Lake, B. M., Salakhutdinov, R., & Tenenbaum, J. B. (2015). Human-level concept learning through probabilistic program induction. *Science*, 350(6266), 1332–1338.
- Spelke, E. S. (2000). Core knowledge. *American Psychologist*, 55(11), 1233–1243.