# HippocampAIF — Fully Biological Sub-Symbolic Cognitive Framework

A brain-inspired cognitive architecture built from computational-neuroscience first principles, grounded in three papers: **Lake et al. BPL** (Science 2015), **Distortable Canvas one-shot learning** (oneandtrulyone), and **Friston's Free-Energy Principle** (Trends Cogn Sci, 2009).

## User Review Required

> [!IMPORTANT]
> **Scale & Scope:** This is an 80+ component biological framework. The plan is phased — each phase produces tested, working code before moving on. Given the constraint of no PyTorch/TF/JAX, everything uses NumPy + SciPy only.

> [!WARNING]
> **Performance Targets:**
> - **MNIST**: >90% accuracy with ONE sample per digit (10 total training images). The Distortable Canvas paper achieves 90% with just 4 examples.
> - **Breakout**: Master the game in under 5 episodes. This is extremely ambitious and requires strong innate priors (Spelke's physics core knowledge) plus hippocampal fast learning.

> [!CAUTION]
> **No POMDP / VI Active Inference / MCMC:** Per user directive, we replace these with biologically grounded gradient-descent free-energy minimization (Friston-style) + hippocampal index memory + Spelke's core-knowledge priors. The "common sense" stack replaces MCMC sampling.
--- ## Architecture Overview ``` hippocampaif/ ├── __init__.py ├── core/ # Phase 1: Core infrastructure │ ├── __init__.py │ ├── tensor.py # Lightweight ndarray wrapper with sparse ops │ ├── free_energy.py # Variational free-energy engine (Friston) │ ├── message_passing.py # Hierarchical prediction-error message passing │ └── dynamics.py # Continuous-state dynamics & gradient descent │ ├── retina/ # Phase 2: Retinal processing │ ├── __init__.py │ ├── photoreceptor.py # Center-surround, ON/OFF channels │ ├── ganglion.py # Magno/Parvo/Konio pathways │ └── spatiotemporal_energy.py # Adelson-Bergen energy model │ ├── visual_cortex/ # Phase 3: V1-V5 visual hierarchy │ ├── __init__.py │ ├── v1_gabor.py # 2D Gabor filter bank + simple/complex cells │ ├── v1_disparity.py # Binocular disparity energy model │ ├── v2_contour.py # Contour integration, border-ownership │ ├── v3_shape.py # Shape-from-contour, curvature │ ├── v3a_motion.py # Motion processing (dorsal link) │ ├── v4_color_form.py # Color constancy + intermediate form │ ├── v5_mt_flow.py # Optic flow, motion integration │ └── hmax.py # HMAX model (S1-C1-S2-C2 hierarchy) │ ├── hippocampus/ # Phase 4: Hippocampal complex │ ├── __init__.py │ ├── dentate_gyrus.py # Pattern separation (sparse coding) │ ├── ca3_autoassociation.py # Pattern completion (attractor network) │ ├── ca1_comparator.py # Match/mismatch detection │ ├── entorhinal_cortex.py # Grid cells, spatial representation │ ├── index_memory.py # Fast one-shot index-based memory (BPL replacement) │ └── replay.py # Memory consolidation replay │ ├── core_knowledge/ # Phase 5: Spelke's core knowledge systems │ ├── __init__.py │ ├── object_system.py # Object permanence, cohesion, contact │ ├── agent_system.py # Intentional agency, goal-directedness │ ├── number_system.py # Approximate number system, subitizing │ ├── geometry_system.py # Geometric/spatial relations + Distortable Canvas │ ├── social_system.py # Social evaluation, in-group preference │ └── 
physics_system.py # Gravity, friction, mass priors (believed, not computed) │ ├── neocortex/ # Phase 6: Neocortical processing │ ├── __init__.py │ ├── prefrontal.py # Working memory, executive control │ ├── temporal.py # Object recognition, semantic memory │ ├── parietal.py # Spatial attention, sensorimotor integration │ └── predictive_coding.py # Hierarchical predictive coding (Friston Box 3) │ ├── attention/ # Phase 6b: Attention & salience │ ├── __init__.py │ ├── superior_colliculus.py # Saccade control, salience map │ ├── precision_modulation.py # Synaptic gain / precision (Friston attention) │ └── competition.py # Hemifield competition, biased competition │ ├── learning/ # Phase 7: One-shot & fast learning │ ├── __init__.py │ ├── distortable_canvas.py # From oneandtrulyone paper │ ├── amgd.py # Abstracted Multi-level Gradient Descent │ ├── one_shot_classifier.py # One-shot classification pipeline │ └── hebbian.py # Hebbian/anti-Hebbian learning rules │ ├── action/ # Phase 8: Action & motor control │ ├── __init__.py │ ├── motor_primitives.py # Motor primitive library │ ├── active_inference.py # Action as free-energy minimization (NOT VI/POMDP) │ └── reflex_arc.py # Innate reflexive behaviors │ ├── agent/ # Phase 9: Integrated agent │ ├── __init__.py │ ├── brain.py # Full brain integration (all modules) │ ├── mnist_agent.py # MNIST one-shot benchmark agent │ └── breakout_agent.py # Breakout game agent │ └── tests/ # All phases: Component tests ├── test_core.py ├── test_retina.py ├── test_visual_cortex.py ├── test_hippocampus.py ├── test_core_knowledge.py ├── test_neocortex.py ├── test_learning.py ├── test_action.py ├── test_mnist.py # MNIST >90% one-shot benchmark └── test_breakout.py # Breakout mastery <5 episodes ``` --- ## Proposed Changes ### Phase 1: Core Infrastructure (`core/`) The foundation: lightweight tensor operations, the free-energy engine, and hierarchical message passing. Everything else builds on this. 
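To make the core engine concrete before the file-by-file breakdown, here is a minimal sketch of perception as gradient descent on variational free energy, for a toy one-level Gaussian model (the model, function names, precisions, and learning rate are illustrative choices, not part of the plan):

```python
import numpy as np

def free_energy(mu, y, eta, pi_y=1.0, pi_p=1.0):
    """Variational free energy (up to constants) for a one-level
    Gaussian model: y = tanh(mu) + noise, prior mu ~ N(eta, 1/pi_p)."""
    eps_y = y - np.tanh(mu)   # sensory prediction error
    eps_p = mu - eta          # prior prediction error
    return 0.5 * (pi_y * eps_y**2 + pi_p * eps_p**2)

def perceive(y, eta, steps=200, lr=0.1):
    """Perception as gradient descent on F w.r.t. the internal state mu."""
    mu = eta  # start from the prior mean
    for _ in range(steps):
        # dF/dmu, using tanh'(mu) = 1 - tanh(mu)**2
        eps_y = y - np.tanh(mu)
        grad = -eps_y * (1 - np.tanh(mu)**2) + (mu - eta)
        mu -= lr * grad
    return mu

# The posterior estimate settles between the prior (0) and the data (0.8)
mu_hat = perceive(y=0.8, eta=0.0)
```

The same descent-on-F loop generalizes to hierarchies: each level's state is updated by the precision-weighted errors it receives from below and the predictions it receives from above.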
#### [NEW] [tensor.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/core/tensor.py) - Sparse ndarray wrapper over NumPy — supports lazy computation, sparsity masks - The brain is "lazy and sparse" — this is computationally modeled here - Key ops: sparse dot, threshold activation, top-k sparsification #### [NEW] [free_energy.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/core/free_energy.py) - Implements Friston's variational free energy: **F = Energy − Entropy** - `F = −⟨ln p(y,ϑ|m)⟩_q + ⟨ln q(ϑ|μ)⟩_q` - Laplace approximation: q specified by mean μ and conditional precision Π(μ) - Gradient descent on F w.r.t. internal states (perception) and action parameters - **NOT** variational inference in the ML sense — this is biological FEP #### [NEW] [message_passing.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/core/message_passing.py) - Hierarchical prediction-error scheme (Friston Box 3, Figure I) - Forward (bottom-up): prediction errors ε from superficial pyramidal cells - Backward (top-down): predictions μ from deep pyramidal cells - Lateral: precision-weighted error at same level - ε⁽ⁱ⁾ = μ⁽ⁱ⁻¹⁾ − g(μ⁽ⁱ⁾) − Λ(μ⁽ⁱ⁾)ε⁽ⁱ⁾ (recognition dynamics) #### [NEW] [dynamics.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/core/dynamics.py) - Continuous-state generalized coordinates of motion (Friston Box 2, Eq. I) - y(t) = g(x⁽¹⁾,v⁽¹⁾,θ⁽¹⁾) + z⁽¹⁾ - Hierarchical state transitions with random fluctuations - Euler integration of recognition dynamics #### [NEW] [test_core.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/tests/test_core.py) - Tests sparse ops, free-energy computation convergence, message passing stability --- ### Phase 2: Retinal Processing (`retina/`) The eye's computational front-end: center-surround antagonism, ON/OFF channels, and motion energy. 
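As a flavor of this front-end, a minimal difference-of-Gaussians center-surround with rectified ON/OFF channels might look like this (the sigma values and helper names are our own illustrative choices, not the plan's API):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_response(image, sigma_c=1.0, sigma_s=3.0):
    """Difference-of-Gaussians center-surround: narrow center minus
    wide surround. Positive output ~ ON-center, negative ~ OFF-center."""
    center = gaussian_filter(image.astype(float), sigma_c)
    surround = gaussian_filter(image.astype(float), sigma_s)
    return center - surround

def on_off_channels(image):
    """Split the DoG response into half-wave-rectified ON and OFF channels."""
    r = dog_response(image)
    return np.maximum(r, 0), np.maximum(-r, 0)

# A bright spot on a dark background drives the ON channel at its center
img = np.zeros((32, 32))
img[16, 16] = 1.0
on, off = on_off_channels(img)
```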
#### [NEW] [photoreceptor.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/retina/photoreceptor.py)
- Difference-of-Gaussians (DoG) center-surround
- ON-center/OFF-surround and OFF-center/ON-surround channels
- Luminance adaptation (Weber's law)

#### [NEW] [ganglion.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/retina/ganglion.py)
- Magnocellular (motion/flicker), Parvocellular (color/detail), and Koniocellular (blue-yellow) pathways
- Temporal filtering: transient vs sustained responses

#### [NEW] [spatiotemporal_energy.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/retina/spatiotemporal_energy.py)
- Adelson-Bergen spatio-temporal energy model for local motion detection
- Oriented space-time filters (quadrature pairs)
- Motion energy = sum of squared quadrature-pair outputs

#### [NEW] [test_retina.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/tests/test_retina.py)
- Tests that DoG produces the expected center-surround profile and that motion energy detects drifting gratings

---

### Phase 3: Visual Cortex V1–V5 + HMAX (`visual_cortex/`)

The ventral "what" and dorsal "where/how" streams, modeled with standard computational-neuroscience formalisms.

#### [NEW] [v1_gabor.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/visual_cortex/v1_gabor.py)
- 2D Gabor filter bank: G(x,y) = exp(−(x′² + γ²y′²)/(2σ²)) × cos(2πx′/λ + ψ)
- Multiple orientations (0°, 45°, 90°, 135° ...), spatial frequencies, and phases
- Simple cells: linear filtering. Complex cells: energy model (sum of squared quadrature responses)
- Half-wave rectification + divisive normalization

#### [NEW] [v1_disparity.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/visual_cortex/v1_disparity.py)
- Binocular disparity energy model (Ohzawa et al.)
- Left/right eye Gabor responses → phase-difference disparity tuning - Position and phase disparity computation #### [NEW] [v2_contour.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/visual_cortex/v2_contour.py) - Contour integration via association fields - Border-ownership signals - Texture boundary detection #### [NEW] [v3_shape.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/visual_cortex/v3_shape.py) - Shape-from-contour: curvature computation - Medial axis / skeleton extraction #### [NEW] [v3a_motion.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/visual_cortex/v3a_motion.py) - Motion processing bridging V1→V5 (MT) - Pattern motion vs component motion selectivity #### [NEW] [v4_color_form.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/visual_cortex/v4_color_form.py) - Color constancy (von Kries adaptation) - Intermediate form representation (curvature-selective) #### [NEW] [v5_mt_flow.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/visual_cortex/v5_mt_flow.py) - Optic flow computation (Lucas-Kanade style with biological plausibility) - Motion integration / intersection of constraints #### [NEW] [hmax.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/visual_cortex/hmax.py) - HMAX hierarchy: S1 (Gabor) → C1 (MaxPool) → S2 (learned patches) → C2 (MaxPool) - Position/scale invariance through max-pooling - Crucial for the MNIST one-shot pipeline #### [NEW] [test_visual_cortex.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/tests/test_visual_cortex.py) - Tests Gabor filter orientations, HMAX produces invariant features, disparity tuning curves --- ### Phase 4: Hippocampal Complex (`hippocampus/`) The fast-learning, index-memory, pattern-differentiation engine. 
This replaces MCMC by providing rapid one-shot binding and retrieval. #### [NEW] [dentate_gyrus.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/hippocampus/dentate_gyrus.py) - Pattern separation via sparse expansion coding - Input → high-dimensional sparse representation (expansion ratio ~5-10×) - Winner-take-all competitive inhibition #### [NEW] [ca3_autoassociation.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/hippocampus/ca3_autoassociation.py) - Attractor network for pattern completion - Recurrent connections with Hebbian learning - Given partial input, settles to stored pattern #### [NEW] [ca1_comparator.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/hippocampus/ca1_comparator.py) - Match/mismatch detection between CA3 recall and direct entorhinal input - Novelty signal generation - Drives encoding vs retrieval mode switching #### [NEW] [entorhinal_cortex.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/hippocampus/entorhinal_cortex.py) - Grid-cell-like spatial coding (hexagonal pattern formation via self-organization) - Conjunctive representations (space × item) #### [NEW] [index_memory.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/hippocampus/index_memory.py) - **Key innovation for BPL replacement:** one-shot binding of cortical representations - Store: bind HMAX feature vector ↔ label in single exposure - Retrieve: given new input, find nearest stored representation - "Good enough" threshold (~60%) + gap filling from core knowledge priors - No MCMC — just direct hippocampal fast-mapping #### [NEW] [replay.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/hippocampus/replay.py) - Memory consolidation via offline replay - Strengthens hippocampal→cortical transfer #### [NEW] 
[test_hippocampus.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/tests/test_hippocampus.py) - Tests pattern separation orthogonality, pattern completion from partial cues, one-shot store/retrieve accuracy --- ### Phase 5: Spelke's Core Knowledge (`core_knowledge/`) Innate priors — not tabula rasa. These are "believed, not computed." #### [NEW] [object_system.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/core_knowledge/object_system.py) - Object permanence: objects persist when occluded - Cohesion: objects move as bounded wholes - Contact: objects don't pass through each other - Continuity: objects trace continuous paths - Implemented as hard constraint priors on object state transitions #### [NEW] [agent_system.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/core_knowledge/agent_system.py) - Goal-directedness detection: efficient action toward goals - Contingency: agents respond to other agents - Self-propulsion: agents can initiate motion #### [NEW] [number_system.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/core_knowledge/number_system.py) - Approximate Number System (ANS): Weber ratio-based numerosity - Subitizing: exact enumeration for ≤4 items - Ordinal comparison #### [NEW] [geometry_system.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/core_knowledge/geometry_system.py) - Geometric/spatial relations (left, right, above, below, inside, outside) - **Boosted by Distortable Canvas** from oneandtrulyone paper - Smooth deformations as canvas-based geometric transformations - Surface layout representations #### [NEW] [social_system.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/core_knowledge/social_system.py) - Social evaluation: helper vs hinderer distinction - In-group preference priors #### [NEW] 
[physics_system.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/core_knowledge/physics_system.py) - **Believed, not computed** — these are hardcoded priors on world dynamics: - Gravity: objects fall downward (constant downward acceleration prior) - Friction: moving objects slow down without force - Mass: heavier objects are harder to move - Elasticity: objects bounce on collision - Support: unsupported objects fall - Critical for Breakout: ball trajectory prediction, paddle physics understanding #### [NEW] [test_core_knowledge.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/tests/test_core_knowledge.py) - Tests object permanence tracking, numerosity discrimination (Weber ratio), physics predictions match intuition --- ### Phase 6: Neocortex + Attention (`neocortex/`, `attention/`) Higher cognitive processing, predictive coding, and precision-based attention. #### [NEW] [predictive_coding.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/neocortex/predictive_coding.py) - Full hierarchical predictive coding (Friston Box 3) - SG layer: prediction errors (superficial pyramidal) - L4: state estimation - IG layer: predictions (deep pyramidal) - Recognition dynamics via gradient descent on free-energy #### [NEW] [prefrontal.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/neocortex/prefrontal.py) - Working memory buffer (limited capacity ~7±2) - Executive control: task switching, inhibition - Goal maintenance #### [NEW] [temporal.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/neocortex/temporal.py) - Object recognition pathway (ventral "what" stream terminus) - Semantic memory / category formation #### [NEW] [parietal.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/neocortex/parietal.py) - Spatial attention, sensorimotor integration - Coordinate transformations 
(retinotopic → egocentric → allocentric)

#### [NEW] [superior_colliculus.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/attention/superior_colliculus.py)
- Bottom-up salience map (intensity, color, and orientation contrasts)
- Saccade target selection

#### [NEW] [precision_modulation.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/attention/precision_modulation.py)
- Attention as precision optimization (Friston): precisions λ descend the free-energy gradient, λ̇ ∝ −∂F/∂λ
- Synaptic gain control per hierarchical level
- Top-down precision weighting of prediction errors

#### [NEW] [competition.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/attention/competition.py)
- Hemifield competition (visual-field rivalry)
- Biased competition model (Desimone & Duncan)

---

### Phase 7: One-Shot Learning (`learning/`)

The Distortable Canvas + hippocampal fast-mapping pipeline for one-shot classification.

#### [NEW] [distortable_canvas.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/learning/distortable_canvas.py)
- From the oneandtrulyone paper:
- Images as smooth functions on an elastic 2D canvas
- Canvas deformation field u(x,y), v(x,y) — kept smooth via Gaussian regularization
- Color distortion: pixel-wise intensity distance
- Canvas distortion: geometric warping energy (Jacobian penalty)
- Dual distance = color_dist + λ × canvas_dist

#### [NEW] [amgd.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/learning/amgd.py)
- Abstracted Multi-level Gradient Descent from oneandtrulyone
- Coarse-to-fine optimization of the canvas deformation
- Multiple resolution levels, warm-starting from coarser solutions
- Step-size adaptation

#### [NEW] [one_shot_classifier.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/learning/one_shot_classifier.py)
- Full pipeline: Retina → V1 Gabor → HMAX → Hippocampal Index Memory → Classify
- For each
test image: extract HMAX features, compare to stored prototypes - Distortable Canvas distance as similarity metric for ambiguous cases - "Good enough" (>60%) confidence → classify; otherwise → refine with canvas #### [NEW] [hebbian.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/learning/hebbian.py) - Hebbian learning: Δw = η × pre × post - Anti-Hebbian for decorrelation - BCM rule for selectivity - Used for online adaptation within cortical layers --- ### Phase 8: Action & Active Inference (`action/`) Action as free-energy minimization — NOT POMDP/VI. #### [NEW] [active_inference.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/action/active_inference.py) - Action selection via ȧ = −∂F/∂a (Friston Box 1) - Action changes sensory input to fulfill predictions - Prior expectations about desired states → action to reach them - For Breakout: prior = "ball stays in play" → paddle moves to predicted ball position #### [NEW] [motor_primitives.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/action/motor_primitives.py) - Library of basic motor actions (move left, move right, stay, fire) - Motor commands mapped from continuous action signals #### [NEW] [reflex_arc.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/action/reflex_arc.py) - Innate reflexive behaviors (e.g., tracking moving objects) - Fast pathway bypassing full cortical processing --- ### Phase 9: Integrated Agent (`agent/`) Wire everything together for benchmarks. 
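To ground the Phase 8 idea of action as free-energy minimization, here is a toy sketch for a Breakout-like paddle: a physics prior rolls the ball forward in a straight line with elastic wall bounces (no gravity), and the action descends the gradient of a quadratic free energy encoding the prior "paddle sits under the predicted impact point". All names, the gain, and the geometry are hypothetical:

```python
def predict_ball_x(pos, vel, paddle_y, width):
    """Core-knowledge physics prior: straight-line motion with elastic
    side-wall bounces, rolled forward to the paddle's height."""
    x, y = pos
    vx, vy = vel
    if vy <= 0:
        return x  # ball moving away from the paddle; hold position
    t = (paddle_y - y) / vy
    x_hit = x + vx * t
    # fold reflections off the side walls back into [0, width]
    x_hit = x_hit % (2 * width)
    return 2 * width - x_hit if x_hit > width else x_hit

def paddle_action(paddle_x, ball_pos, ball_vel, paddle_y=1.0, width=1.0, gain=0.5):
    """Active inference: a_dot = -dF/da with F = (paddle_x - target)^2 / 2,
    so the action moves the paddle toward the predicted impact point."""
    target = predict_ball_x(ball_pos, ball_vel, paddle_y, width)
    return -gain * (paddle_x - target)

# Ball heading down-right from mid-screen: the paddle is pushed rightward
a = paddle_action(paddle_x=0.2, ball_pos=(0.5, 0.5), ball_vel=(0.3, 0.5))
```

Note there is no reward signal here: the "goal" lives entirely in the prior expectation, which is exactly how the plan frames "keep ball alive" for the Breakout agent.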
#### [NEW] [brain.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/agent/brain.py) - Full brain integration: all modules connected - Processing pipeline: Retina → V1-V5 → Hippocampus ↔ Neocortex → Action - Free-energy minimization loop running across all levels - Sparse "lazy" processing — only activates needed pathways #### [NEW] [mnist_agent.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/agent/mnist_agent.py) - One-shot MNIST classification agent - Stores 1 exemplar per digit (10 total) - Pipeline: raw image → retinal processing → V1 Gabor → HMAX features → hippocampal matching + Distortable Canvas refinement #### [NEW] [breakout_agent.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/agent/breakout_agent.py) - Breakout game agent using gymnasium[atari] + ale-py - Physics core knowledge: predicts ball trajectory (gravity-free, elastic bouncing) - Visual tracking: retina + V1 motion energy → ball/paddle/brick detection - Hippocampal fast-learning: after first 1-2 episodes, learns brick patterns and optimal strategies - Active inference: prior = "keep ball alive" + "maximize brick destruction" --- ### Phase 10: Dependencies & Setup #### [NEW] [setup.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/setup.py) - Package setup with minimal dependencies: `numpy`, `scipy`, `Pillow` - Optional: `gymnasium[atari]`, `ale-py` for Breakout benchmark only #### [NEW] [requirements.txt](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/requirements.txt) - `numpy>=1.24`, `scipy>=1.10`, `Pillow>=9.0` - `gymnasium[atari]>=1.0`, `ale-py>=0.9` (Breakout only) --- ## Verification Plan ### Automated Tests Each phase includes unit tests that verify **real** functionality (not stubs): ```bash # Run all tests python -m pytest hippocampaif/tests/ -v # Phase-by-phase python -m pytest hippocampaif/tests/test_core.py -v # Free-energy convergence, 
message passing python -m pytest hippocampaif/tests/test_retina.py -v # DoG, motion energy python -m pytest hippocampaif/tests/test_visual_cortex.py -v # Gabor orientations, HMAX invariance python -m pytest hippocampaif/tests/test_hippocampus.py -v # Pattern separation/completion, index memory python -m pytest hippocampaif/tests/test_core_knowledge.py -v # Object permanence, physics, numerosity python -m pytest hippocampaif/tests/test_neocortex.py -v # Predictive coding convergence python -m pytest hippocampaif/tests/test_learning.py -v # Distortable Canvas, AMGD, one-shot python -m pytest hippocampaif/tests/test_action.py -v # Active inference action selection ``` ### Benchmark Tests (End-to-End) ```bash # MNIST one-shot (target: >90% accuracy with 1 sample per digit) python -m pytest hippocampaif/tests/test_mnist.py -v -s # Breakout mastery (target: master under 5 episodes) python -m pytest hippocampaif/tests/test_breakout.py -v -s ``` ### Manual Verification - Inspect HMAX feature visualizations to confirm Gabor filters look biologically plausible - Review Distortable Canvas deformation fields to confirm smooth warping - Monitor free-energy curves during perception to confirm they decrease (convergence) - Watch Breakout agent play to verify it tracks the ball and learns brick patterns --- ## Implementation Order & Dependencies | Phase | Component | Depends On | Estimated Effort | |-------|-----------|------------|-----------------| | 1 | Core infrastructure | Nothing | Foundation | | 2 | Retina | Core | Small | | 3 | Visual Cortex V1-V5 + HMAX | Core, Retina | Large | | 4 | Hippocampus | Core | Medium | | 5 | Core Knowledge | Core | Medium | | 6 | Neocortex + Attention | Core, Visual Cortex | Medium | | 7 | One-Shot Learning | Visual Cortex, Hippocampus, Core Knowledge | Medium | | 8 | Action | Core, Core Knowledge | Small | | 9 | Integrated Agent | All above | Medium | | 10 | Setup & packaging | All above | Small | > [!TIP] > **Build-then-verify loop**: Each 
phase ends with passing tests before moving to the next. This prevents cascading errors and ensures each biological component genuinely works.
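As a final concrete anchor for the Phase 7 metric, a stripped-down version of the Distortable Canvas dual distance is sketched below. Nearest-neighbour warping and a simple gradient-energy penalty stand in for the paper's Gaussian-regularized deformation and Jacobian penalty; all names are ours:

```python
import numpy as np

def dual_distance(template, target, u, v, lam=0.1):
    """Dual distance sketch: color term = pixel-wise intensity distance
    after warping the template by the deformation field (u, v);
    canvas term = warping energy (squared gradients of the field)."""
    h, w = template.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # nearest-neighbour warp of the template by (u, v)
    yw = np.clip((ys + v).round().astype(int), 0, h - 1)
    xw = np.clip((xs + u).round().astype(int), 0, w - 1)
    warped = template[yw, xw]
    color_dist = np.mean((warped - target) ** 2)
    canvas_dist = sum(np.mean(g ** 2) for f in (u, v) for g in np.gradient(f))
    return color_dist + lam * canvas_dist

# Identity deformation on identical images costs nothing
a = np.zeros((8, 8)); a[2:6, 2:6] = 1.0
d_same = dual_distance(a, a, np.zeros((8, 8)), np.zeros((8, 8)))
```

Classification then reduces to finding, per stored exemplar, the deformation that minimizes this quantity (the AMGD coarse-to-fine search) and picking the class with the smallest minimum.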