
HippocampAIF — End-to-End Codebase Guide

A Biologically Grounded Cognitive Architecture for One-Shot Learning & Active Inference

License: © 2026 Algorembrant, Rembrant Oyangoren Albeos


Table of Contents

  1. What This Is
  2. Theoretical Foundations
  3. Architecture Map
  4. Setup
  5. Module Reference
  6. How the Pipeline Works
  7. Using the MNIST Agent
  8. Using the Breakout Agent
  9. Running Tests
  10. Extending the Framework
  11. Design Decisions & Rationale
  12. File Map

What This Is

HippocampAIF is a complete cognitive architecture implemented in pure Python (NumPy + SciPy only — no PyTorch, no TensorFlow, no JAX). Every module corresponds to a real brain structure with citations to the computational neuroscience literature.

The framework does two things that conventional ML cannot:

  1. One-shot classification — learn to recognize a new category from a single example (like humans do)
  2. Fast game mastery — play Atari Breakout using innate physics priors (like infants understand gravity before they can walk)

Key Innovation

Instead of POMDP/VI/MCMC (traditional AI approaches), HippocampAIF uses:

  • Free-Energy Minimization (Friston) for perception and action
  • Hippocampal Fast-Binding for instant one-shot memory
  • Spelke's Core Knowledge systems as hardcoded innate priors
  • Distortable Canvas for elastic image comparison

Theoretical Foundations

Three Source Papers

| Paper | What It Provides | Where in Code |
|---|---|---|
| Friston (2009), "The free-energy principle: a rough guide to the brain" | Free energy F = Energy − Entropy, recognition dynamics, active inference | core/free_energy.py, core/message_passing.py, neocortex/predictive_coding.py, action/active_inference.py |
| Lake et al. (2015), "Human-level concept learning through probabilistic program induction" (BPL) | One-shot learning from single examples, compositional representations | learning/one_shot_classifier.py, hippocampus/index_memory.py, agent/mnist_agent.py |
| Distortable Canvas (oneandtrulyone) | Elastic canvas deformation, dual distance metric, AMGD optimization | learning/distortable_canvas.py, learning/amgd.py, core_knowledge/geometry_system.py |

Core Equations

Free Energy (Friston Box 1):

F = −⟨ln p(y,ϑ|m)⟩_q + ⟨ln q(ϑ|μ)⟩_q

Under Laplace approximation: F ≈ −ln p(y,μ) + ½ ln|Π(μ)|

Recognition Dynamics (Friston Box 3):

μ̇ = −∂F/∂μ   (perception: update internal model)
ȧ = −∂F/∂a   (action: change world to match predictions)
λ̇ = −∂F/∂λ   (attention: optimize precision)
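
These three flows are easiest to see in a toy example. The sketch below is illustrative only (not the FreeEnergyEngine API; all names here are made up): it Euler-integrates μ̇ = −∂F/∂μ for a single Gaussian latent state and recovers the exact precision-weighted posterior mean.

```python
import numpy as np

# Toy generative model: observation y ~ N(mu, 1/pi_y), prior mu ~ N(eta, 1/pi_p).
# Free energy (up to a constant): F = 0.5*pi_y*(y - mu)**2 + 0.5*pi_p*(mu - eta)**2
def recognize(y, eta, pi_y=4.0, pi_p=1.0, dt=0.05, steps=200):
    mu = eta                                        # start at the prior mean
    for _ in range(steps):
        dF_dmu = -pi_y * (y - mu) + pi_p * (mu - eta)
        mu += dt * (-dF_dmu)                        # mu_dot = -dF/dmu (Euler step)
    return mu

y, eta, pi_y, pi_p = 2.0, 0.0, 4.0, 1.0
mu_star = recognize(y, eta, pi_y, pi_p)
analytic = (pi_y * y + pi_p * eta) / (pi_y + pi_p)  # exact posterior mean
print(mu_star, analytic)  # both ~1.6
```

Perception here is literally gradient descent on F; the action and attention flows follow the same pattern with ∂F/∂a and ∂F/∂λ.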

Dual Distance (Distortable Canvas):

D(I₁, I₂) = min_u,v [ color_dist(warp(I₁, u, v), I₂) + λ × canvas_dist(u, v) ]
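
The idea fits in a few lines if the warp is restricted to integer translations. This is a minimal sketch, not the DistortableCanvas implementation (which optimizes a smooth per-pixel field u, v via AMGD); dual_distance, max_shift, and lam are hypothetical names.

```python
import numpy as np

def dual_distance(img_a, img_b, max_shift=3, lam=0.001):
    """Translation-only sketch of D(I1, I2): color term + deformation penalty."""
    best = np.inf
    for du in range(-max_shift, max_shift + 1):
        for dv in range(-max_shift, max_shift + 1):
            warped = np.roll(img_a, (du, dv), axis=(0, 1))  # warp(I1, u, v)
            color = float(np.mean((warped - img_b) ** 2))   # color_dist
            canvas = du ** 2 + dv ** 2                      # canvas_dist
            best = min(best, color + lam * canvas)
    return best

a = np.zeros((8, 8)); a[2, 2] = 1.0
b = np.roll(a, (1, 1), axis=(0, 1))        # same stroke, shifted one pixel
print(dual_distance(a, b))                 # small: a cheap warp explains b
print(dual_distance(a, np.zeros((8, 8))))  # larger: no warp removes the stroke
```

The λ knob trades off "how different the pixels look" against "how much deformation was needed", which is what makes two differently-drawn instances of the same digit come out close.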

Architecture Map

                        ┌────────────────────────────┐
                        │   Prefrontal Cortex (PFC)  │
                        │  • Working memory (7±2)    │
                        │  • Executive control       │
                        │  • Goal stack              │
                        └─────────────┬──────────────┘
                                      │ top-down control
          ┌───────────────────────────┼───────────────────────────┐
          │                           │                           │
          ▼                           ▼                           ▼
┌──────────────────┐        ┌────────────────────┐        ┌─────────────────────┐
│ Temporal Cortex  │        │ Predictive Coding  │        │  Parietal Cortex    │
│ • Recognition    │        │ • Friston Box 3    │        │  • Priority maps    │
│ • Categories     │◄───────│ • Free-energy min  │───────►│  • Coord. transforms│
│ • Semantic mem.  │        │ • Error signals    │        │  • Sensorimotor     │
└────────┬─────────┘        └─────────┬──────────┘        └──────────┬──────────┘
         │                            │                              │
         │      ┌─────────────────────┼─────────────────────┐        │
         │      ▼                     ▼                     ▼        │
         │  ┌─────────┐        ┌──────────────┐      ┌──────────┐    │
         │  │   SC    │        │  Precision   │      │  Biased  │    │
         │  │ Saccade │        │  Modulator   │      │ Compete  │    │
         │  └────┬────┘        └──────┬───────┘      └────┬─────┘    │
         │       └────────────────────┼───────────────────┘          │
         │                            │ attention                    │
         │     ┌──────────────────────┼────────────────┐             │
         ▼     ▼                      ▼                ▼             ▼
    ┌──────────────────────────────────────────────────────────────────────┐
    │                        H I P P O C A M P U S                         │
    │  ┌──────────┐   ┌──────────┐   ┌───────┐   ┌────────────────┐        │
    │  │    DG    │ → │   CA3    │ → │  CA1  │ → │  Index Memory  │        │
    │  │ Separate │   │ Complete │   │ Match │   │  Fast-binding  │        │
    │  └──────────┘   └──────────┘   └───────┘   └────────────────┘        │
    │  ┌───────────────┐   ┌───────────────┐                               │
    │  │ Entorhinal EC │   │ Replay Buffer │                               │
    │  │ Grid cells    │   │ Consolidation │                               │
    │  └───────────────┘   └───────────────┘                               │
    └──────────────────────────────────┬───────────────────────────────────┘
                                       │ features
    ┌──────────────────────────────────┴───────────────────────────────────┐
    │                  V I S U A L   C O R T E X                           │
    │  ┌───────────┐   ┌──────────────┐   ┌────────────────┐               │
    │  │ V1 Simple │ → │ V1 Complex   │ → │ HMAX Hierarchy │               │
    │  │ Gabor     │   │ Max-pooling  │   │ V2→V4→IT       │               │
    │  └───────────┘   └──────────────┘   └────────────────┘               │
    └──────────────────────────────────┬───────────────────────────────────┘
                                       │ ON/OFF sparse
    ┌──────────────────────────────────┴───────────────────────────────────┐
    │                         R E T I N A                                  │
    │  ┌────────────────┐   ┌──────────┐   ┌────────────────┐              │
    │  │ Photoreceptors │   │ Ganglion │   │ Spatiotemporal │              │
    │  │ Adaptation     │   │ DoG      │   │ Motion energy  │              │
    │  └────────────────┘   └──────────┘   └────────────────┘              │
    └──────────────────────────────────┬───────────────────────────────────┘
                                       │ raw image
                                  ═════╧═════
                                  │ SENSES  │
                                  ═══════════

    ┌──────────────────────────────────────────────────────────────────────┐
    │                 C O R E   K N O W L E D G E                          │
    │  ┌────────┐  ┌────────┐  ┌────────┐  ┌────────┐                      │
    │  │Objects │  │Physics │  │Number  │  │Geometry│                      │
    │  │Perm/Coh│  │Gravity │  │ANS/Sub │  │Canvas  │                      │
    │  └────────┘  └────────┘  └────────┘  └────────┘                      │
    │  ┌────────┐  ┌────────┐                                              │
    │  │Agent   │  │Social  │   ← INNATE, NOT LEARNED                      │
    │  │Goals   │  │Helper  │                                              │
    │  └────────┘  └────────┘                                              │
    └──────────────────────────────────────────────────────────────────────┘

    ┌──────────────────────────────────────────────────────────────────────┐
    │                  A C T I O N   S Y S T E M                           │
    │  ┌──────────────────┐   ┌────────────┐   ┌──────────┐                │
    │  │ Active Inference │   │ Motor      │   │ Reflex   │                │
    │  │ ȧ = −∂F/∂a       │   │ Primitives │   │ Arc      │                │
    │  │ Expected FE min. │   │ L/R/Fire   │   │ Track    │                │
    │  └──────────────────┘   └────────────┘   └──────────┘                │
    └──────────────────────────────────────────────────────────────────────┘

Setup

Prerequisites

  • Python β‰₯ 3.10
  • NumPy β‰₯ 1.24
  • SciPy β‰₯ 1.10
  • Pillow β‰₯ 9.0

Installation

# 1. Clone or navigate to the project
cd c:\Users\User\Desktop\debugrem\clawd-one-and-only-one-shot

# 2. Create virtual environment
python -m venv .venv

# 3. Activate
.venv\Scripts\activate

# 4. Install dependencies
pip install -r requirements.txt

# 5. Set PYTHONPATH (REQUIRED — PowerShell syntax)
$env:PYTHONPATH = "c:\Users\User\Desktop\debugrem\clawd-one-and-only-one-shot"

CMD users: Use set PYTHONPATH=c:\Users\User\Desktop\debugrem\clawd-one-and-only-one-shot

Linux/Mac users: Use export PYTHONPATH=$(pwd)

Verify Installation

python -c "import hippocampaif; print(f'HippocampAIF v{hippocampaif.__version__}')"
# Expected: HippocampAIF v1.0.0

Module Reference

Phase 1: Core Infrastructure (core/)

| Module | Class | Purpose |
|---|---|---|
| tensor.py | SparseTensor | Sparse ndarray wrapper — the brain is "lazy and sparse" |
| free_energy.py | FreeEnergyEngine | Variational free-energy computation and gradient descent |
| message_passing.py | HierarchicalMessagePassing | Forward (errors) + backward (predictions) message passing |
| dynamics.py | ContinuousDynamics | Euler integration of recognition dynamics |

Usage:

from hippocampaif.core.free_energy import FreeEnergyEngine

fe = FreeEnergyEngine(learning_rate=0.01)
F = fe.compute_free_energy(sensory_input, prediction, precision)
new_state = fe.perception_update(state, sensory_input, generative_fn, precision)

Phase 2: Retina (retina/)

| Module | Class | Purpose |
|---|---|---|
| photoreceptor.py | PhotoreceptorArray | Luminance adaptation, Weber's law |
| ganglion.py | GanglionCellLayer | DoG center-surround → ON/OFF sparse channels |
| spatiotemporal_energy.py | SpatiotemporalEnergyBank | Adelson-Bergen motion energy |

Usage:

from hippocampaif.retina.ganglion import GanglionCellLayer

retina = GanglionCellLayer(center_sigma=1.0, surround_sigma=3.0)
st_on, st_off = retina.process(image)  # Returns SparseTensors
on_array = st_on.data  # Dense numpy array

Phase 3: Visual Cortex (v1_v5/)

| Module | Class | Purpose |
|---|---|---|
| gabor_filters.py | V1SimpleCells | 2D Gabor filter bank (multi-orientation, multi-scale) |
| sparse_coding.py | V1ComplexCells | Max-pooling for shift invariance + hypercolumn sparsity |
| hmax_pooling.py | HMAXHierarchy | S-cell/C-cell hierarchy: V1→V2→V4→IT |

Usage:

from hippocampaif.v1_v5.gabor_filters import V1SimpleCells
from hippocampaif.v1_v5.sparse_coding import V1ComplexCells
from hippocampaif.v1_v5.hmax_pooling import HMAXHierarchy

v1 = V1SimpleCells(n_orientations=8, n_scales=2, kernel_size=11, frequency=0.25)
v1c = V1ComplexCells(pool_size=3)
hmax = HMAXHierarchy(pool_sizes=[2, 2])

simple = v1.process(on_center_image)       # (n_filters, H, W)
complex_maps = v1c.process(simple)          # list[SparseTensor]
hierarchy = hmax.process(complex_maps)      # list[list[SparseTensor]]

Phase 4: Hippocampus (hippocampus/)

| Module | Class | Purpose |
|---|---|---|
| dg.py | DentateGyrus | Pattern separation — sparse expansion coding |
| ca3.py | CA3 | Pattern completion — attractor network |
| ca1.py | CA1 | Match/mismatch detection → novelty signals |
| entorhinal.py | EntorhinalCortex | Grid cells, spatial coding |
| index_memory.py | HippocampalIndex | One-shot fast-binding — store and retrieve in 1 exposure |
| replay.py | ReplayBuffer | Memory consolidation via offline replay |

Usage (one-shot memory):

from hippocampaif.hippocampus.index_memory import HippocampalIndex

mem = HippocampalIndex(cortical_size=128, index_size=256)
mem.store(features_vector)                      # Instant! No training loops
result = mem.retrieve(query_features)            # Nearest match

Phase 5: Core Knowledge (core_knowledge/)

These are innate priors — hardcoded "common sense" that constrains perception, NOT learned from data.

| Module | Class | What It Encodes |
|---|---|---|
| object_system.py | ObjectSystem | Objects persist when occluded, can't teleport, don't pass through each other |
| physics_system.py | PhysicsSystem | Gravity pulls down, objects bounce elastically, friction slows things |
| number_system.py | NumberSystem | Exact count ≤4 (subitizing), Weber ratio for larger sets |
| geometry_system.py | GeometrySystem | Spatial relations + Distortable Canvas deformation fields |
| agent_system.py | AgentSystem | Self-propelled entities with direction changes = intentional agents |
| social_system.py | SocialSystem | Helpers are preferred over hinderers |

Usage (physics prediction for Breakout):

from hippocampaif.core_knowledge.physics_system import PhysicsSystem, PhysicsState

phys = PhysicsSystem(gravity=0.0, elasticity=1.0)
ball = PhysicsState(position=[50, 100], velocity=[3, -2])
trajectory = phys.predict_trajectory(ball, steps=50, bounds=([0,0], [160,210]))
# → Predicts ball path with wall bounces
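
For intuition, the rollout behind a call like this amounts to constant-velocity integration with elastic reflection at the walls. A self-contained sketch of that idea (not the PhysicsSystem implementation; `predict` is a hypothetical name):

```python
import numpy as np

def predict(pos, vel, steps, lo, hi):
    """Roll a point mass forward with elastic bounces off axis-aligned walls."""
    pos, vel = np.asarray(pos, float), np.asarray(vel, float)
    path = []
    for _ in range(steps):
        pos = pos + vel
        for k in range(2):                        # reflect each axis independently
            if pos[k] < lo[k] or pos[k] > hi[k]:
                vel[k] = -vel[k]                  # elasticity = 1.0: no energy loss
                pos[k] = np.clip(pos[k], lo[k], hi[k])
        path.append(pos.copy())
    return np.array(path)

path = predict([50, 100], [3, -2], steps=60, lo=[0, 0], hi=[160, 210])
print(path.shape)  # (60, 2), every point inside the bounds
```

Because this is an innate prior rather than a learned model, the agent can anticipate the ball's path from the very first frame.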

Phase 6: Neocortex + Attention (neocortex/, attention/)

| Module | Class | Purpose |
|---|---|---|
| predictive_coding.py | PredictiveCodingHierarchy | Hierarchical free-energy minimization (Friston Box 3) |
| prefrontal.py | PrefrontalCortex | Working memory (7±2 items), executive control |
| temporal.py | TemporalCortex | Object recognition, one-shot categories |
| parietal.py | ParietalCortex | Priority maps, coordinate transforms |
| superior_colliculus.py | SuperiorColliculus | Saccade target selection via WTA competition |
| precision.py | PrecisionModulator | Attention = precision weighting (attend/suppress channels) |
| competition.py | BiasedCompetition | Desimone & Duncan biased competition model |
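
The working-memory constraint is the classic 7±2 capacity limit: storing a new item past capacity displaces the oldest one. A self-contained sketch of that idea (the real PrefrontalCortex in neocortex/prefrontal.py also models decay and gating; the WorkingMemory class below is illustrative):

```python
from collections import deque

class WorkingMemory:
    """Capacity-limited buffer: storing past capacity evicts the oldest item."""
    def __init__(self, capacity=7):
        self.items = deque(maxlen=capacity)

    def store(self, item):
        self.items.append(item)   # deque(maxlen=...) drops the oldest for us

    def recall(self):
        return list(self.items)

wm = WorkingMemory(capacity=7)
for i in range(10):
    wm.store(i)
print(wm.recall())  # [3, 4, 5, 6, 7, 8, 9]: items 0-2 were displaced
```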

Phase 7: One-Shot Learning (learning/)

| Module | Class | Purpose |
|---|---|---|
| distortable_canvas.py | DistortableCanvas | Elastic image warping + dual distance metric |
| amgd.py | AMGD | Coarse-to-fine deformation optimization |
| one_shot_classifier.py | OneShotClassifier | Full pipeline: features → match → canvas refine |
| hebbian.py | HebbianLearning | Basic/Oja/BCM/anti-Hebbian plasticity rules |
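
The difference between the basic Hebbian rule and Oja's rule is worth seeing numerically: plain Hebb (Δw = η·pre·post) grows without bound, while Oja's decay term self-normalizes the weight vector. A sketch of the two update rules, illustrative rather than the HebbianLearning API:

```python
import numpy as np

def hebb_step(w, x, y, lr=0.01):
    return w + lr * y * x                # basic Hebb: norm grows without bound

def oja_step(w, x, y, lr=0.01):
    return w + lr * y * (x - y * w)      # Oja: decay term keeps |w| near 1

rng = np.random.default_rng(1)
w = rng.normal(size=4) * 0.1
for _ in range(2000):
    x = rng.normal(size=4)
    w = oja_step(w, x, float(w @ x))
print(np.linalg.norm(w))  # hovers near 1.0
```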

Phase 8: Action (action/)

| Module | Class | Purpose |
|---|---|---|
| active_inference.py | ActiveInferenceController | ȧ = −∂F/∂a — choose actions that minimize surprise |
| motor_primitives.py | MotorPrimitives | NOOP/FIRE/LEFT/RIGHT for Breakout |
| reflex_arc.py | ReflexArc | Tracking, withdrawal, orienting, intercept reflexes |

Phase 9: Integrated Agent (agent/)

| Module | Class | Purpose |
|---|---|---|
| brain.py | Brain | Wires ALL modules together: sense → remember → predict → attend → act |
| mnist_agent.py | MNISTAgent | One-shot MNIST: 1 exemplar per digit → classify |
| breakout_agent.py | BreakoutAgent | Breakout: physics priors + reflex tracking |

How the Pipeline Works

Perception Pipeline (seeing)

Raw Image (28×28 or 84×84)
    │
    ▼  GanglionCellLayer.process()
ON/OFF SparseTensors (DoG filtered)
    │
    ▼  V1SimpleCells.process()
Gabor responses (n_orientations × n_scales, H, W)
    │
    ▼  V1ComplexCells.process()
Shift-invariant sparse maps: list[SparseTensor]
    │
    ▼  HMAXHierarchy.process()
Hierarchical features: list[list[SparseTensor]]
    │
    ▼  Flatten + truncate to feature_size
Feature vector (128-dim)
    │
    ├──► PredictiveCodingHierarchy.process() → free-energy minimization
    ├──► TemporalCortex.recognize() → category label
    ├──► PrefrontalCortex.store() → working memory
    └──► HippocampalIndex.store() → one-shot binding

Action Pipeline (doing)

Current internal state (from predictive coding)
    │
    ▼  ActiveInferenceController.select_action()
Expected free energy G(a) for each action
    │
    ▼  softmax(−β × G)
Action probabilities
    │
    ▼  argmin or sample
Discrete action (0-3)
    │
    ▼  MotorPrimitives.get_action_name()
"LEFT" / "RIGHT" / "FIRE" / "NOOP"

One-Shot Learning Pipeline (classifying)

Test Image
    │
    ▼  Full perception pipeline
Feature vector
    │
    ▼  OneShotClassifier.classify()
    │
    ├── Compare to all stored exemplar features
    ├── If confidence > threshold → return label
    └── If ambiguous → DistortableCanvas refinement:
        ├── AMGD optimizes deformation field
        ├── Dual distance = color_dist + λ × canvas_dist
        └── Choose exemplar with lowest dual distance
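
The decision logic can be sketched without the full feature pipeline: nearest exemplar by cosine similarity, with a refinement hook for the ambiguous case. Everything below (classify, threshold, refine) is a hypothetical stand-in for OneShotClassifier:

```python
import numpy as np

def classify(features, exemplars, labels, threshold=0.8, refine=None):
    """Nearest exemplar by cosine similarity; fall back to canvas refinement."""
    sims = np.array([
        float(e @ features) /
        (np.linalg.norm(e) * np.linalg.norm(features) + 1e-12)
        for e in exemplars
    ])
    best = int(np.argmax(sims))
    if sims[best] >= threshold or refine is None:
        return labels[best], float(sims[best])
    return refine(features, exemplars, labels)  # DistortableCanvas step

exemplars = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
label, conf = classify(np.array([0.9, 0.1]), exemplars, ["0", "1"])
print(label, round(conf, 3))  # confident match to exemplar "0"
```

The expensive elastic comparison only runs when the cheap feature match is inconclusive, which keeps the common case fast.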

Using the MNIST Agent

Quick Start

import numpy as np
from hippocampaif.agent.mnist_agent import MNISTAgent

# Create agent (feature_size=128 is the default)
agent = MNISTAgent(feature_size=128, use_canvas=True)

# === TRAINING: Learn 1 exemplar per digit ===
# Load your MNIST data (10 training images, one per digit)
for digit in range(10):
    image = training_images[digit]  # 28×28 numpy array, values 0-255
    agent.learn_digit(image, label=digit)

print(f"Learned {agent.exemplars_stored} digits")

# === TESTING: Classify new images ===
result = agent.classify(test_image)
print(f"Predicted: {result['label_int']}, Confidence: {result['confidence']:.2f}")

# === EVALUATION: Batch accuracy ===
stats = agent.evaluate(test_images, test_labels)
print(f"Accuracy: {stats['accuracy']*100:.1f}%")
print(f"Per-class: {stats['per_class_accuracy']}")

Loading MNIST Data

# Option 1: From sklearn
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784', version=1, as_frame=False)
images = mnist.data.reshape(-1, 28, 28)
labels = mnist.target.astype(int)

# Option 2: From local .npy files
images = np.load('mnist_images.npy')
labels = np.load('mnist_labels.npy')

# Select 1 training exemplar per digit
train_indices = []
for d in range(10):
    idx = np.where(labels == d)[0][0]
    train_indices.append(idx)

train_images = images[train_indices]
train_labels = labels[train_indices]

Using the Breakout Agent

Quick Start

import numpy as np
from hippocampaif.agent.breakout_agent import BreakoutAgent

# Create agent
agent = BreakoutAgent(screen_height=210, screen_width=160)

# === Game Loop ===
agent.new_episode()
observation, info = env.reset()  # gymnasium's reset returns (obs, info)

for step in range(10000):
    action = agent.act(observation, reward=0.0)
    observation, reward, terminated, truncated, info = env.step(action)

    if terminated or truncated:
        print(f"Episode {agent.episode}: reward = {agent.episode_reward}")
        agent.new_episode()
        observation, info = env.reset()

With Gymnasium (requires optional deps)

pip install gymnasium[atari] ale-py

import gymnasium as gym
from hippocampaif.agent.breakout_agent import BreakoutAgent

env = gym.make('BreakoutNoFrameskip-v4', render_mode='human')
agent = BreakoutAgent()

for episode in range(5):
    agent.new_episode()
    obs, _ = env.reset()
    total_reward = 0
    
    while True:
        action = agent.act(obs)
        obs, reward, term, trunc, _ = env.step(action)
        total_reward += reward
        if term or trunc:
            break
    
    print(f"Episode {episode+1}: {total_reward} reward")
    print(agent.get_stats())

env.close()

Running Tests

All Phases

# Set PYTHONPATH first!
$env:PYTHONPATH = "c:\Users\User\Desktop\debugrem\clawd-one-and-only-one-shot"

# Phase 1-4 (Core, Retina, Visual Cortex, Hippocampus)
python -m hippocampaif.tests.test_core
python -m hippocampaif.tests.test_retina
python -m hippocampaif.tests.test_v1_v5
python -m hippocampaif.tests.test_hippocampus

# Phase 5-8 (Core Knowledge, Neocortex, Learning, Action)
python -m hippocampaif.tests.test_core_knowledge
python -m hippocampaif.tests.test_neocortex_attention
python -m hippocampaif.tests.test_learning
python -m hippocampaif.tests.test_action

What Each Test Suite Validates

| Test Suite | # Tests | What It Checks |
|---|---|---|
| test_core | — | Free-energy convergence, message passing stability, sparse tensor ops |
| test_retina | — | DoG center-surround, motion energy detection |
| test_v1_v5 | — | Gabor orientations, HMAX invariant features |
| test_hippocampus | — | Pattern separation orthogonality, completion from partial cues |
| test_core_knowledge | 11 | Object permanence, continuity, gravity, bounce, support, subitizing, Weber, geometry, deformation, agency, social |
| test_neocortex_attention | 10 | PC convergence, PC learning, WM capacity, WM decay, one-shot recognition, coord transforms, priority maps, saccades, precision, biased competition |
| test_learning | 7 | Canvas warp identity, dual distance, same-class distance, AMGD, Hebbian basic, Oja bounded, one-shot classifier |
| test_action | 6 | Active inference goal-seeking, forward model learning, motor primitives, reflex tracking, intercept, habituation |

Extending the Framework

Adding a New Core Knowledge System

# hippocampaif/core_knowledge/my_new_system.py
import numpy as np

class TemporalSystem:
    """Core knowledge of time and causality."""
    
    def __init__(self):
        self.causal_chains = []
    
    def detect_causality(self, event_a, event_b, time_gap):
        """Innate prior: causes precede effects in time."""
        if time_gap > 0 and time_gap < 2.0:  # Temporal contiguity
            return {'causal': True, 'strength': 1.0 / time_gap}
        return {'causal': False, 'strength': 0.0}

Then add to core_knowledge/__init__.py:

from .my_new_system import TemporalSystem

Adding a New Agent

# hippocampaif/agent/my_agent.py
from hippocampaif.agent.brain import Brain

class MyAgent:
    def __init__(self):
        self.brain = Brain(image_height=64, image_width=64, n_actions=4)
    
    def act(self, observation):
        perception = self.brain.perceive(observation)
        return self.brain.act()
    
    def learn(self, image, label):
        self.brain.one_shot_learn(image, label)

Adding Custom Reflexes

import numpy as np

from hippocampaif.action.reflex_arc import ReflexArc

class CustomReflexArc(ReflexArc):
    def dodge_reflex(self, projectile_pos, projectile_vel, agent_pos):
        """Dodge an incoming projectile."""
        # Move perpendicular to the projectile trajectory:
        # in 2D, the perpendicular of (vx, vy) is (vy, -vx)
        direction = np.array([projectile_vel[1], -projectile_vel[0]], dtype=float)
        norm = np.linalg.norm(direction)
        if norm > 0:
            direction /= norm
        return self.reflex_gain * direction

Design Decisions & Rationale

Why No PyTorch/TensorFlow/JAX?

The framework is intentionally pure NumPy + SciPy because:

  1. Biological fidelity — neural computations are local gradient updates, not backprop through a compute graph
  2. Interpretability — every array corresponds to a neural population with known anatomy
  3. Minimal dependencies — runs on any machine with Python and NumPy
  4. Educational value — you can read every line and understand the neuroscience

Why Hippocampal Fast-Binding Instead of MCMC?

MCMC sampling is computationally expensive and biologically implausible. The hippocampus stores new memories instantly via pattern separation (DG) + fast Hebbian binding (CA3) — no need for thousands of samples.
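
The contrast is easy to demonstrate: one Hebbian outer-product update stores a pattern, and one matrix-vector pass recalls it from a degraded cue. This is a sketch of the principle behind DG sparse codes feeding CA3 binding, not the HippocampalIndex API:

```python
import numpy as np

def store(W, pattern):
    return W + np.outer(pattern, pattern)  # single Hebbian update, no training loop

def retrieve(W, cue):
    return np.sign(W @ cue)                # one-step attractor recall

rng = np.random.default_rng(0)
pattern = rng.choice([-1.0, 1.0], size=64)
W = store(np.zeros((64, 64)), pattern)

cue = pattern.copy()
cue[:16] = 0.0                             # knock out a quarter of the cue
recalled = retrieve(W, cue)
print(np.array_equal(recalled, pattern))   # True: full pattern from a partial cue
```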

Why Spelke's Core Knowledge Instead of Tabula Rasa?

Human infants are NOT blank slates. They have innate expectations about:

  • Objects β€” things persist when hidden
  • Physics β€” dropped objects fall
  • Numbers β€” small quantities are exact

These priors are hardcoded because they evolved over millions of years and shouldn't need to be learned from scratch by every agent.

Why Distortable Canvas Instead of CNN Features?

CNNs require thousands of training images. The Distortable Canvas achieves 90% MNIST accuracy with just 4 examples by treating image comparison as a smooth deformation problem — "how much do I need to warp image A to look like image B?"


File Map

hippocampaif/                          # 59 Python files across 9 packages
├── __init__.py                        # v1.0.0, exports core classes
├── core/                              # Phase 1 — Foundation
│   ├── tensor.py                      # SparseTensor
│   ├── free_energy.py                 # FreeEnergyEngine
│   ├── message_passing.py             # HierarchicalMessagePassing
│   └── dynamics.py                    # ContinuousDynamics
├── retina/                            # Phase 2 — Eye
│   ├── photoreceptor.py               # PhotoreceptorArray
│   ├── ganglion.py                    # GanglionCellLayer (DoG)
│   └── spatiotemporal_energy.py       # SpatiotemporalEnergyBank
├── v1_v5/                             # Phase 3 — Visual Cortex
│   ├── gabor_filters.py               # V1SimpleCells
│   ├── sparse_coding.py               # V1ComplexCells
│   └── hmax_pooling.py                # HMAXHierarchy
├── hippocampus/                       # Phase 4 — Memory
│   ├── dg.py                          # DentateGyrus
│   ├── ca3.py                         # CA3
│   ├── ca1.py                         # CA1
│   ├── entorhinal.py                  # EntorhinalCortex
│   ├── index_memory.py                # HippocampalIndex
│   └── replay.py                      # ReplayBuffer
├── core_knowledge/                    # Phase 5 — Innate Priors
│   ├── object_system.py               # ObjectSystem
│   ├── physics_system.py              # PhysicsSystem
│   ├── number_system.py               # NumberSystem
│   ├── geometry_system.py             # GeometrySystem
│   ├── agent_system.py                # AgentSystem
│   └── social_system.py               # SocialSystem
├── neocortex/                         # Phase 6a — Higher Cognition
│   ├── predictive_coding.py           # PredictiveCodingHierarchy
│   ├── prefrontal.py                  # PrefrontalCortex
│   ├── temporal.py                    # TemporalCortex
│   └── parietal.py                    # ParietalCortex
├── attention/                         # Phase 6b — Attention
│   ├── superior_colliculus.py         # SuperiorColliculus
│   ├── precision.py                   # PrecisionModulator
│   └── competition.py                 # BiasedCompetition
├── learning/                          # Phase 7 — One-Shot
│   ├── distortable_canvas.py          # DistortableCanvas
│   ├── amgd.py                        # AMGD
│   ├── one_shot_classifier.py         # OneShotClassifier
│   └── hebbian.py                     # HebbianLearning
├── action/                            # Phase 8 — Motor
│   ├── active_inference.py            # ActiveInferenceController
│   ├── motor_primitives.py            # MotorPrimitives
│   └── reflex_arc.py                  # ReflexArc
├── agent/                             # Phase 9 — Integration
│   ├── brain.py                       # Brain (full pipeline)
│   ├── mnist_agent.py                 # MNISTAgent
│   └── breakout_agent.py              # BreakoutAgent
└── tests/                             # 8 test suites, 34+ tests
    ├── test_core.py
    ├── test_retina.py
    ├── test_visual_cortex.py
    ├── test_hippocampus.py
    ├── test_core_knowledge.py
    ├── test_neocortex_attention.py
    ├── test_learning.py
    └── test_action.py

Citation

If you use this framework in research or production, please cite:

@software{hippocampaif2026,
  author = {Albeos, Rembrant Oyangoren},
  title = {HippocampAIF: Biologically Grounded Cognitive Architecture},
  year = {2026},
  description = {Free-energy minimization + hippocampal fast-binding + 
                 Spelke's core knowledge for one-shot learning and active inference}
}

References:

  • Friston, K. (2009). The free-energy principle: a rough guide to the brain. Trends in Cognitive Sciences, 13(7), 293-301.
  • Lake, B. M., Salakhutdinov, R., & Tenenbaum, J. B. (2015). Human-level concept learning through probabilistic program induction. Science, 350(6266), 1332-1338.
  • Spelke, E. S. (2000). Core knowledge. American Psychologist, 55(11), 1233-1243.