HippocampAIF – Fully Biological Sub-Symbolic Cognitive Framework
A brain-inspired cognitive architecture built from computational neuroscience first principles, grounded in three papers: Lake et al.'s BPL (Science, 2015), the Distortable Canvas one-shot learning paper (oneandtrulyone), and Friston's free-energy principle (Trends Cogn Sci, 2009).
User Review Required
Scale & Scope: This is an 80+ component biological framework. The plan is phased – each phase produces tested, working code before moving on. Given the constraint of no PyTorch/TF/JAX, everything uses NumPy + SciPy only.
Performance Targets:
- MNIST: >90% accuracy with ONE sample per digit (10 total training images). The Distortable Canvas paper achieves 90% with just 4 examples.
- Breakout: Master the game in under 5 episodes. This is extremely ambitious and requires strong innate priors (Spelke's physics core knowledge) plus hippocampal fast-learning.
No POMDP / VI Active Inference / MCMC: Per user directive, we replace these with biologically grounded gradient-descent free-energy minimization (Friston-style), hippocampal index memory, and Spelke's core knowledge priors. The "common sense" stack replaces MCMC sampling.
Architecture Overview
hippocampaif/
├── __init__.py
├── core/                       # Phase 1: Core infrastructure
│   ├── __init__.py
│   ├── tensor.py               # Lightweight ndarray wrapper with sparse ops
│   ├── free_energy.py          # Variational free-energy engine (Friston)
│   ├── message_passing.py      # Hierarchical prediction-error message passing
│   └── dynamics.py             # Continuous-state dynamics & gradient descent
│
├── retina/                     # Phase 2: Retinal processing
│   ├── __init__.py
│   ├── photoreceptor.py        # Center-surround, ON/OFF channels
│   ├── ganglion.py             # Magno/Parvo/Konio pathways
│   └── spatiotemporal_energy.py  # Adelson-Bergen energy model
│
├── visual_cortex/              # Phase 3: V1-V5 visual hierarchy
│   ├── __init__.py
│   ├── v1_gabor.py             # 2D Gabor filter bank + simple/complex cells
│   ├── v1_disparity.py         # Binocular disparity energy model
│   ├── v2_contour.py           # Contour integration, border-ownership
│   ├── v3_shape.py             # Shape-from-contour, curvature
│   ├── v3a_motion.py           # Motion processing (dorsal link)
│   ├── v4_color_form.py        # Color constancy + intermediate form
│   ├── v5_mt_flow.py           # Optic flow, motion integration
│   └── hmax.py                 # HMAX model (S1-C1-S2-C2 hierarchy)
│
├── hippocampus/                # Phase 4: Hippocampal complex
│   ├── __init__.py
│   ├── dentate_gyrus.py        # Pattern separation (sparse coding)
│   ├── ca3_autoassociation.py  # Pattern completion (attractor network)
│   ├── ca1_comparator.py       # Match/mismatch detection
│   ├── entorhinal_cortex.py    # Grid cells, spatial representation
│   ├── index_memory.py         # Fast one-shot index-based memory (BPL replacement)
│   └── replay.py               # Memory consolidation replay
│
├── core_knowledge/             # Phase 5: Spelke's core knowledge systems
│   ├── __init__.py
│   ├── object_system.py        # Object permanence, cohesion, contact
│   ├── agent_system.py         # Intentional agency, goal-directedness
│   ├── number_system.py        # Approximate number system, subitizing
│   ├── geometry_system.py      # Geometric/spatial relations + Distortable Canvas
│   ├── social_system.py        # Social evaluation, in-group preference
│   └── physics_system.py       # Gravity, friction, mass priors (believed, not computed)
│
├── neocortex/                  # Phase 6: Neocortical processing
│   ├── __init__.py
│   ├── prefrontal.py           # Working memory, executive control
│   ├── temporal.py             # Object recognition, semantic memory
│   ├── parietal.py             # Spatial attention, sensorimotor integration
│   └── predictive_coding.py    # Hierarchical predictive coding (Friston Box 3)
│
├── attention/                  # Phase 6b: Attention & salience
│   ├── __init__.py
│   ├── superior_colliculus.py  # Saccade control, salience map
│   ├── precision_modulation.py # Synaptic gain / precision (Friston attention)
│   └── competition.py          # Hemifield competition, biased competition
│
├── learning/                   # Phase 7: One-shot & fast learning
│   ├── __init__.py
│   ├── distortable_canvas.py   # From oneandtrulyone paper
│   ├── amgd.py                 # Abstracted Multi-level Gradient Descent
│   ├── one_shot_classifier.py  # One-shot classification pipeline
│   └── hebbian.py              # Hebbian/anti-Hebbian learning rules
│
├── action/                     # Phase 8: Action & motor control
│   ├── __init__.py
│   ├── motor_primitives.py     # Motor primitive library
│   ├── active_inference.py     # Action as free-energy minimization (NOT VI/POMDP)
│   └── reflex_arc.py           # Innate reflexive behaviors
│
├── agent/                      # Phase 9: Integrated agent
│   ├── __init__.py
│   ├── brain.py                # Full brain integration (all modules)
│   ├── mnist_agent.py          # MNIST one-shot benchmark agent
│   └── breakout_agent.py       # Breakout game agent
│
└── tests/                      # All phases: Component tests
    ├── test_core.py
    ├── test_retina.py
    ├── test_visual_cortex.py
    ├── test_hippocampus.py
    ├── test_core_knowledge.py
    ├── test_neocortex.py
    ├── test_learning.py
    ├── test_action.py
    ├── test_mnist.py           # MNIST >90% one-shot benchmark
    └── test_breakout.py        # Breakout mastery <5 episodes
Proposed Changes
Phase 1: Core Infrastructure (core/)
The foundation: lightweight tensor operations, the free-energy engine, and hierarchical message passing. Everything else builds on this.
[NEW] tensor.py
- Sparse ndarray wrapper over NumPy – supports lazy computation, sparsity masks
- The brain is "lazy and sparse" – this is computationally modeled here
- Key ops: sparse dot, threshold activation, top-k sparsification
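The top-k sparsification op can be sketched minimally; `topk_sparsify` is a hypothetical helper name, and the real tensor.py wrapper would add lazy evaluation and sparse dot products on top:

```python
import numpy as np

def topk_sparsify(x, k):
    """Keep the k largest-magnitude entries of x, zero out the rest."""
    x = np.asarray(x, dtype=float)
    if k >= x.size:
        return x.copy()
    # magnitude of the k-th largest entry is the survival threshold
    thresh = np.partition(np.abs(x).ravel(), -k)[-k]
    return np.where(np.abs(x) >= thresh, x, 0.0)
```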
[NEW] free_energy.py
- Implements Friston's variational free energy: F = Energy − Entropy
- F = −⟨ln p(y, ϑ|m)⟩_q + ⟨ln q(ϑ|μ)⟩_q
- Laplace approximation: q specified by mean μ and conditional precision Π(μ)
- Gradient descent on F w.r.t. internal states (perception) and action parameters
- NOT variational inference in the ML sense – this is biological FEP
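As a minimal sketch of the engine's core loop – a single-level Gaussian model with scalar states; the function name and defaults are illustrative, not the module's real API:

```python
import numpy as np

def minimize_free_energy(y, mu_prior, g=lambda m: m, dg=lambda m: 1.0,
                         pi_y=1.0, pi_mu=1.0, lr=0.1, steps=200):
    """Gradient descent of F on the internal state mu (perception).

    F = 0.5 * [pi_y * (y - g(mu))**2 + pi_mu * (mu - mu_prior)**2]
    """
    mu = float(mu_prior)
    for _ in range(steps):
        eps_y = y - g(mu)        # sensory prediction error
        eps_mu = mu - mu_prior   # prior prediction error
        dF = -pi_y * eps_y * dg(mu) + pi_mu * eps_mu
        mu -= lr * dF
    return mu
```

With equal precisions and an identity mapping, the minimum of F is the midpoint between data and prior, which makes convergence easy to test.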
[NEW] message_passing.py
- Hierarchical prediction-error scheme (Friston Box 3, Figure I)
- Forward (bottom-up): prediction errors Ξ΅ from superficial pyramidal cells
- Backward (top-down): predictions ΞΌ from deep pyramidal cells
- Lateral: precision-weighted error at same level
- ε⁽ⁱ⁾ = μ⁽ⁱ⁻¹⁾ − g(μ⁽ⁱ⁾) − Π(μ⁽ⁱ⁾)ε⁽ⁱ⁾ (recognition dynamics)
[NEW] dynamics.py
- Continuous-state generalized coordinates of motion (Friston Box 2, Eq. I)
- y(t) = g(x⁽¹⁾, v⁽¹⁾, θ⁽¹⁾) + z⁽¹⁾
- Hierarchical state transitions with random fluctuations
- Euler integration of recognition dynamics
[NEW] test_core.py
- Tests sparse ops, free-energy computation convergence, message passing stability
Phase 2: Retinal Processing (retina/)
The eye's computational front-end: center-surround antagonism, ON/OFF channels, and motion energy.
[NEW] photoreceptor.py
- Difference-of-Gaussians (DoG) center-surround
- ON-center/OFF-surround and OFF-center/ON-surround channels
- Luminance adaptation (Weber's law)
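A minimal DoG kernel sketch (sizes and sigmas are illustrative; the OFF channel is just the negation):

```python
import numpy as np

def dog_kernel(size=15, sigma_c=1.0, sigma_s=3.0):
    """ON-center difference-of-Gaussians kernel (center minus surround)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    center = np.exp(-r2 / (2 * sigma_c**2)) / (2 * np.pi * sigma_c**2)
    surround = np.exp(-r2 / (2 * sigma_s**2)) / (2 * np.pi * sigma_s**2)
    return center - surround
```

Because both Gaussians are normalized, the kernel is approximately zero-sum: uniform illumination produces (almost) no response, which is the point of center-surround antagonism.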
[NEW] ganglion.py
- Magnocellular (motion/flicker), Parvocellular (color/detail), Koniocellular (blue-yellow) pathways
- Temporal filtering: transient vs sustained responses
[NEW] spatiotemporal_energy.py
- Adelson-Bergen spatio-temporal energy model for local motion detection
- Oriented space-time filters (quadrature pairs)
- Motion energy = sum of squared quadrature pair outputs
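An illustrative opponent-energy sketch in one spatial dimension – a simplification of the full Adelson-Bergen separable-filter construction; the function name and parameters are ours:

```python
import numpy as np

def opponent_motion_energy(stimulus, lam=8.0, tf=0.1):
    """Opponent motion energy from a quadrature pair of space-time filters.

    stimulus: 2D array indexed [time, space]. Returns > 0 for rightward motion.
    """
    t_len, x_len = stimulus.shape
    tt, xx = np.mgrid[0:t_len, 0:x_len].astype(float)

    def energy(direction):
        # oriented space-time carrier; quadrature pair = cos/sin projections
        phase = 2 * np.pi * (xx / lam - direction * tf * tt)
        even = np.sum(stimulus * np.cos(phase))
        odd = np.sum(stimulus * np.sin(phase))
        return even**2 + odd**2   # sum of squared quadrature outputs

    return energy(+1) - energy(-1)
```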
[NEW] test_retina.py
- Tests DoG produces expected center-surround, motion energy detects drifting gratings
Phase 3: Visual Cortex V1βV5 + HMAX (visual_cortex/)
The ventral "what" and dorsal "where/how" streams, modeled with standard computational-neuroscience formalisms.
[NEW] v1_gabor.py
- 2D Gabor filter bank: G(x, y) = exp(−(x′² + γ²y′²)/2σ²) × cos(2πx′/λ + φ)
- Multiple orientations (0°, 45°, 90°, 135°, ...), spatial frequencies, phases
- Simple cells: linear filtering. Complex cells: energy model (sum of squared quadrature)
- Half-wave rectification + normalization (divisive normalization)
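The Gabor equation above transcribes directly into code; the complex-cell energy is then the squared quadrature pair (φ = 0 and π/2). Parameter values are illustrative:

```python
import numpy as np

def gabor(size=21, lam=8.0, theta=0.0, sigma=4.0, gamma=0.5, phi=0.0):
    """2D Gabor: oriented Gaussian envelope times a cosine carrier."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    xp = x * np.cos(theta) + y * np.sin(theta)    # rotate into x', y'
    yp = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xp**2 + gamma**2 * yp**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xp / lam + phi))

def complex_energy(patch, **kw):
    """Complex cell: energy over a quadrature pair of simple cells."""
    even = np.sum(patch * gabor(phi=0.0, **kw))
    odd = np.sum(patch * gabor(phi=np.pi / 2, **kw))
    return even**2 + odd**2
```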
[NEW] v1_disparity.py
- Binocular disparity energy model (Ohzawa et al.)
- Left/right eye Gabor responses → phase-difference disparity tuning
- Position and phase disparity computation
[NEW] v2_contour.py
- Contour integration via association fields
- Border-ownership signals
- Texture boundary detection
[NEW] v3_shape.py
- Shape-from-contour: curvature computation
- Medial axis / skeleton extraction
[NEW] v3a_motion.py
- Motion processing bridging V1βV5 (MT)
- Pattern motion vs component motion selectivity
[NEW] v4_color_form.py
- Color constancy (von Kries adaptation)
- Intermediate form representation (curvature-selective)
[NEW] v5_mt_flow.py
- Optic flow computation (Lucas-Kanade style with biological plausibility)
- Motion integration / intersection of constraints
[NEW] hmax.py
- HMAX hierarchy: S1 (Gabor) → C1 (MaxPool) → S2 (learned patches) → C2 (MaxPool)
- Position/scale invariance through max-pooling
- Crucial for the MNIST one-shot pipeline
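The C1 stage's position invariance is just a local max over S1 responses; a sketch (pool size illustrative, no scale pooling shown):

```python
import numpy as np

def c1_max_pool(s1_response, pool=2):
    """Local max over non-overlapping spatial neighborhoods (C1 invariance)."""
    h, w = s1_response.shape
    h2, w2 = h // pool, w // pool
    trimmed = s1_response[:h2 * pool, :w2 * pool]   # drop ragged edges
    return trimmed.reshape(h2, pool, w2, pool).max(axis=(1, 3))
```

Max pooling (rather than averaging) is what gives HMAX tolerance to small shifts: the strongest S1 response in each neighborhood survives regardless of its exact position.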
[NEW] test_visual_cortex.py
- Tests Gabor filter orientations, HMAX produces invariant features, disparity tuning curves
Phase 4: Hippocampal Complex (hippocampus/)
The fast-learning, index-memory, pattern-differentiation engine. This replaces MCMC by providing rapid one-shot binding and retrieval.
[NEW] dentate_gyrus.py
- Pattern separation via sparse expansion coding
- Input → high-dimensional sparse representation (expansion ratio ~5–10×)
- Winner-take-all competitive inhibition
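A sketch of sparse expansion with k-winner-take-all; the random projection stands in for perforant-path connectivity, and all names are ours:

```python
import numpy as np

def pattern_separate(x, W, k):
    """Project the input into a larger space, then keep only the top-k winners."""
    h = W @ x                      # expansion coding: dim(h) >> dim(x)
    winners = np.argsort(h)[-k:]   # competitive inhibition: top-k survive
    out = np.zeros_like(h)
    out[winners] = h[winners]
    return out
```

With k small relative to the expanded dimension, two similar inputs tend to activate largely disjoint winner sets, which is the pattern-separation effect.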
[NEW] ca3_autoassociation.py
- Attractor network for pattern completion
- Recurrent connections with Hebbian learning
- Given partial input, settles to stored pattern
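A minimal attractor sketch in the Hopfield style (binary ±1 units; the real module would use graded firing rates):

```python
import numpy as np

def store_patterns(patterns):
    """Hebbian outer-product weights for a binary (+/-1) attractor network."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)   # no self-connections
    return W / n

def complete(W, cue, steps=10):
    """Iterate sign dynamics from a partial or noisy cue toward a stored pattern."""
    s = cue.copy().astype(float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s
```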
[NEW] ca1_comparator.py
- Match/mismatch detection between CA3 recall and direct entorhinal input
- Novelty signal generation
- Drives encoding vs retrieval mode switching
[NEW] entorhinal_cortex.py
- Grid-cell-like spatial coding (hexagonal pattern formation via self-organization)
- Conjunctive representations (space Γ item)
[NEW] index_memory.py
- Key innovation for BPL replacement: one-shot binding of cortical representations
- Store: bind HMAX feature vector → label in a single exposure
- Retrieve: given new input, find nearest stored representation
- "Good enough" threshold (~60%) + gap filling from core knowledge priors
- No MCMC – just direct hippocampal fast-mapping
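The store/retrieve cycle can be sketched with cosine similarity as the match score; the class name, normalization, and 0.6 default are our illustrative choices for the "good enough" threshold described above:

```python
import numpy as np

class IndexMemory:
    """One-shot binding: store a feature vector with its label, retrieve nearest."""

    def __init__(self):
        self.keys, self.labels = [], []

    def store(self, features, label):
        f = np.asarray(features, dtype=float)
        self.keys.append(f / (np.linalg.norm(f) + 1e-12))  # unit-norm key
        self.labels.append(label)

    def retrieve(self, features, threshold=0.6):
        f = np.asarray(features, dtype=float)
        f = f / (np.linalg.norm(f) + 1e-12)
        sims = np.array([k @ f for k in self.keys])        # cosine similarities
        best = int(np.argmax(sims))
        if sims[best] < threshold:
            return None, sims[best]   # below "good enough": defer to canvas/priors
        return self.labels[best], sims[best]
```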
[NEW] replay.py
- Memory consolidation via offline replay
- Strengthens hippocampal→cortical transfer
[NEW] test_hippocampus.py
- Tests pattern separation orthogonality, pattern completion from partial cues, one-shot store/retrieve accuracy
Phase 5: Spelke's Core Knowledge (core_knowledge/)
Innate priors – not tabula rasa. These are "believed, not computed."
[NEW] object_system.py
- Object permanence: objects persist when occluded
- Cohesion: objects move as bounded wholes
- Contact: objects don't pass through each other
- Continuity: objects trace continuous paths
- Implemented as hard constraint priors on object state transitions
[NEW] agent_system.py
- Goal-directedness detection: efficient action toward goals
- Contingency: agents respond to other agents
- Self-propulsion: agents can initiate motion
[NEW] number_system.py
- Approximate Number System (ANS): Weber ratio-based numerosity
- Subitizing: exact enumeration for ≤4 items
- Ordinal comparison
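The ANS discrimination rule is a one-liner; the 0.15 Weber fraction is an illustrative adult-like value, not a claim about the module's actual constant:

```python
def discriminable(n1, n2, weber_fraction=0.15):
    """Two numerosities are discriminable if their ratio difference
    exceeds the Weber fraction (ratio-dependent, per the ANS)."""
    return abs(n1 - n2) / max(n1, n2) > weber_fraction
```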
[NEW] geometry_system.py
- Geometric/spatial relations (left, right, above, below, inside, outside)
- Boosted by Distortable Canvas from oneandtrulyone paper
- Smooth deformations as canvas-based geometric transformations
- Surface layout representations
[NEW] social_system.py
- Social evaluation: helper vs hinderer distinction
- In-group preference priors
[NEW] physics_system.py
- Believed, not computed – these are hardcoded priors on world dynamics:
- Gravity: objects fall downward (constant downward acceleration prior)
- Friction: moving objects slow down without force
- Mass: heavier objects are harder to move
- Elasticity: objects bounce on collision
- Support: unsupported objects fall
- Critical for Breakout: ball trajectory prediction, paddle physics understanding
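For Breakout, these priors reduce to a constant-velocity rollout with elastic wall reflection (gravity-free, as noted above). A sketch with illustrative names:

```python
import numpy as np

def predict_ball(pos, vel, steps, bounds):
    """Roll the ball forward under the elasticity prior: constant velocity,
    with velocity components reflected at the walls."""
    x, y = pos
    vx, vy = vel
    (x0, x1), (y0, y1) = bounds
    for _ in range(steps):
        x += vx
        y += vy
        if x < x0 or x > x1:          # elastic bounce off side walls
            vx = -vx
            x = np.clip(x, x0, x1)
        if y < y0 or y > y1:          # elastic bounce off top/bottom
            vy = -vy
            y = np.clip(y, y0, y1)
    return (x, y), (vx, vy)
```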
[NEW] test_core_knowledge.py
- Tests object permanence tracking, numerosity discrimination (Weber ratio), physics predictions match intuition
Phase 6: Neocortex + Attention (neocortex/, attention/)
Higher cognitive processing, predictive coding, and precision-based attention.
[NEW] predictive_coding.py
- Full hierarchical predictive coding (Friston Box 3)
- SG layer: prediction errors (superficial pyramidal)
- L4: state estimation
- IG layer: predictions (deep pyramidal)
- Recognition dynamics via gradient descent on free-energy
[NEW] prefrontal.py
- Working memory buffer (limited capacity, ~7±2 items)
- Executive control: task switching, inhibition
- Goal maintenance
[NEW] temporal.py
- Object recognition pathway (ventral "what" stream terminus)
- Semantic memory / category formation
[NEW] parietal.py
- Spatial attention, sensorimotor integration
- Coordinate transformations (retinotopic → egocentric → allocentric)
[NEW] superior_colliculus.py
- Bottom-up salience map (intensity, color, orientation contrasts)
- Saccade target selection
[NEW] precision_modulation.py
- Attention as precision optimization (Friston): precisions follow gradient descent on free energy, λ̇ ∝ −∂F/∂λ
- Synaptic gain control per hierarchical level
- Top-down precision weighting of prediction errors
[NEW] competition.py
- Hemifield competition (visual field rivalry)
- Biased competition model (Desimone & Duncan)
Phase 7: One-Shot Learning (learning/)
The Distortable Canvas + hippocampal fast-mapping pipeline for one-shot classification.
[NEW] distortable_canvas.py
- From the oneandtrulyone paper:
- Images as smooth functions on elastic 2D canvas
- Canvas deformation field u(x,y), v(x,y) – kept smooth via Gaussian regularization
- Color distortion: pixel-wise intensity distance
- Canvas distortion: geometric warping energy (Jacobian penalty)
- Dual distance = color_dist + λ × canvas_dist
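The two terms can be sketched as follows; this is our reading of the dual distance, not the paper's exact formulation, and the Jacobian penalty is approximated by the squared gradients of the deformation field:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def dual_distance(img_a, img_b, u, v, lam=0.5, smooth=1.0):
    """Color distance after warping img_a by (u, v), plus warp-energy penalty."""
    u = gaussian_filter(u, smooth)    # Gaussian regularization keeps the
    v = gaussian_filter(v, smooth)    # deformation smooth
    h, w = img_a.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    warped = map_coordinates(img_a, [yy + v, xx + u], order=1, mode='nearest')
    color_dist = np.mean((warped - img_b) ** 2)          # pixel-wise intensity
    du_y, du_x = np.gradient(u)
    dv_y, dv_x = np.gradient(v)
    canvas_dist = np.mean(du_x**2 + du_y**2 + dv_x**2 + dv_y**2)  # warp energy
    return color_dist + lam * canvas_dist
```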
[NEW] amgd.py
- Abstracted Multi-level Gradient Descent from oneandtrulyone
- Coarse-to-fine optimization of canvas deformation
- Multiple resolution levels, warm-starting from coarser solutions
- Step size adaptation
[NEW] one_shot_classifier.py
- Full pipeline: Retina → V1 Gabor → HMAX → Hippocampal Index Memory → Classify
- For each test image: extract HMAX features, compare to stored prototypes
- Distortable Canvas distance as similarity metric for ambiguous cases
- "Good enough" (>60%) confidence β classify; otherwise β refine with canvas
[NEW] hebbian.py
- Hebbian learning: Δw = η × pre × post
- Anti-Hebbian for decorrelation
- BCM rule for selectivity
- Used for online adaptation within cortical layers
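The Hebbian and anti-Hebbian rules are one outer product each (the BCM threshold dynamics are omitted from this sketch):

```python
import numpy as np

def hebbian_update(W, pre, post, eta=0.01):
    """Hebbian: strengthen weights between co-active pre/post units."""
    return W + eta * np.outer(post, pre)

def anti_hebbian_update(W, pre, post, eta=0.01):
    """Anti-Hebbian: weaken weights between co-active units (decorrelation)."""
    return W - eta * np.outer(post, pre)
```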
Phase 8: Action & Active Inference (action/)
Action as free-energy minimization – NOT POMDP/VI.
[NEW] active_inference.py
- Action selection via ȧ = −∂F/∂a (Friston Box 1)
- Action changes sensory input to fulfill predictions
- Prior expectations about desired states → action to reach them
- For Breakout: prior = "ball stays in play" → paddle moves to predicted ball position
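With a discrete action set, the gradient step degenerates to picking the action whose predicted outcome best fulfills the prior. A sketch of the Breakout case (the function and its linear ball projection are our illustrative choices):

```python
import numpy as np

def select_action(paddle_x, ball_pos, ball_vel, paddle_y, actions=(-1, 0, 1)):
    """Choose the action minimizing the predicted error between the paddle
    and the ball's projected x at the paddle's row (prior: ball stays in play)."""
    bx, by = ball_pos
    vx, vy = ball_vel
    if vy <= 0:
        return 0                      # ball moving away: hold position
    t = (paddle_y - by) / vy          # time until the ball reaches the paddle
    target_x = bx + vx * t            # straight-line projection (no walls here)
    errors = [abs((paddle_x + a) - target_x) for a in actions]
    return actions[int(np.argmin(errors))]
```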
[NEW] motor_primitives.py
- Library of basic motor actions (move left, move right, stay, fire)
- Motor commands mapped from continuous action signals
[NEW] reflex_arc.py
- Innate reflexive behaviors (e.g., tracking moving objects)
- Fast pathway bypassing full cortical processing
Phase 9: Integrated Agent (agent/)
Wire everything together for benchmarks.
[NEW] brain.py
- Full brain integration: all modules connected
- Processing pipeline: Retina → V1-V5 → Hippocampus → Neocortex → Action
- Free-energy minimization loop running across all levels
- Sparse "lazy" processing β only activates needed pathways
[NEW] mnist_agent.py
- One-shot MNIST classification agent
- Stores 1 exemplar per digit (10 total)
- Pipeline: raw image → retinal processing → V1 Gabor → HMAX features → hippocampal matching + Distortable Canvas refinement
[NEW] breakout_agent.py
- Breakout game agent using gymnasium[atari] + ale-py
- Physics core knowledge: predicts ball trajectory (gravity-free, elastic bouncing)
- Visual tracking: retina + V1 motion energy → ball/paddle/brick detection
- Hippocampal fast-learning: after first 1-2 episodes, learns brick patterns and optimal strategies
- Active inference: prior = "keep ball alive" + "maximize brick destruction"
Phase 10: Dependencies & Setup
[NEW] setup.py
- Package setup with minimal dependencies: numpy, scipy, Pillow
- Optional: gymnasium[atari], ale-py (for the Breakout benchmark only)
[NEW] requirements.txt
numpy>=1.24
scipy>=1.10
Pillow>=9.0
gymnasium[atari]>=1.0  # Breakout only
ale-py>=0.9  # Breakout only
Verification Plan
Automated Tests
Each phase includes unit tests that verify real functionality (not stubs):
# Run all tests
python -m pytest hippocampaif/tests/ -v
# Phase-by-phase
python -m pytest hippocampaif/tests/test_core.py -v # Free-energy convergence, message passing
python -m pytest hippocampaif/tests/test_retina.py -v # DoG, motion energy
python -m pytest hippocampaif/tests/test_visual_cortex.py -v # Gabor orientations, HMAX invariance
python -m pytest hippocampaif/tests/test_hippocampus.py -v # Pattern separation/completion, index memory
python -m pytest hippocampaif/tests/test_core_knowledge.py -v # Object permanence, physics, numerosity
python -m pytest hippocampaif/tests/test_neocortex.py -v # Predictive coding convergence
python -m pytest hippocampaif/tests/test_learning.py -v # Distortable Canvas, AMGD, one-shot
python -m pytest hippocampaif/tests/test_action.py -v # Active inference action selection
Benchmark Tests (End-to-End)
# MNIST one-shot (target: >90% accuracy with 1 sample per digit)
python -m pytest hippocampaif/tests/test_mnist.py -v -s
# Breakout mastery (target: master under 5 episodes)
python -m pytest hippocampaif/tests/test_breakout.py -v -s
Manual Verification
- Inspect HMAX feature visualizations to confirm Gabor filters look biologically plausible
- Review Distortable Canvas deformation fields to confirm smooth warping
- Monitor free-energy curves during perception to confirm they decrease (convergence)
- Watch Breakout agent play to verify it tracks the ball and learns brick patterns
Implementation Order & Dependencies
| Phase | Component | Depends On | Estimated Effort |
|---|---|---|---|
| 1 | Core infrastructure | Nothing | Foundation |
| 2 | Retina | Core | Small |
| 3 | Visual Cortex V1-V5 + HMAX | Core, Retina | Large |
| 4 | Hippocampus | Core | Medium |
| 5 | Core Knowledge | Core | Medium |
| 6 | Neocortex + Attention | Core, Visual Cortex | Medium |
| 7 | One-Shot Learning | Visual Cortex, Hippocampus, Core Knowledge | Medium |
| 8 | Action | Core, Core Knowledge | Small |
| 9 | Integrated Agent | All above | Medium |
| 10 | Setup & packaging | All above | Small |
Build-then-verify loop: Each phase ends with passing tests before moving to the next. This prevents cascading errors and ensures each biological component genuinely works.