# HippocampAIF

Create a framework called **HippocampAIF**: fully biological, sub-symbolic (no symbolic or hardcoded domain-specific knowledge), universal, with no heavy dependencies (no torch/torchvision/tensorflow/jax), and no POMDP- or variational-inference-based Active Inference.

**Biological components.** Implement formalized computational models of the visual pathway and beyond:

- Retina: Linear-Nonlinear (LN) model.
- V1: 2D Gabor functions + max-pooling.
- Binocular Disparity Energy Model.
- Hierarchical Model and X (HMAX).
- Spatio-Temporal Energy Model (Adelson-Bergen) for motion.
- Cover the full hierarchy: Retina, V1, V2, V3, V3A, V4, V5.
- Hippocampus: needed for practically everything — fast learning / index memory, pattern separation (differentiation), pattern completion, and more; implement all of it.
- All other important brain systems: neocortex, superior colliculus, hemifield organization and competition, and anything else you can justify. Remember that the brain has 80+ such components, so aim for broad coverage.

**Design principles.**

- The brain is lazy and sparse, and that is exactly what gives it common sense: it only needs to know roughly >60% of a pattern and then fills in the gaps.
- Each component must be an established, formalized computational model (as in the Retina and V1–V5 examples above).
- Humans are not tabula rasa — they have built-in core knowledge. So implement computational models of all five (or more) of Spelke's core knowledge systems: objects, agents, number, geometry (the "From one and only one" paper covers this; boost it with the "Distortable Canvas" paper), social, and physics (gravity, friction, mass, etc.). These priors should not be computed but *believed* — that is what real priors do.
- For BPL (Bayesian Program Learning), throw away the MCMC: this stack already covers it — the hippocampus handles fast mapping / index memory, common sense lets BPL learn until "good enough" and fill in the rest, the Retina and V1–V5 provide strong vision and tracking, and Spelke's object knowledge keeps it from being fooled by a single moved pixel.

**Engineering requirements.**

- Every component lives in its own file.
- Every component must have real tests that verify its logic works (no stub tests).
- Benchmark 1: MNIST with one sample per digit — accuracy must exceed 90% (the "From one and only one" paper reaches 90% with just 4 examples).
- Benchmark 2: Breakout — master the game in under 5 episodes. For Breakout, just `pip install gymnasium[atari] ale-py`; no AutoROM or ROM-license acceptance is needed, since gymnasium >1.0 and ale-py >0.9 no longer require it.
- Don't implement everything at once: each time you build a component, verify all of its logic actually works before moving on — not just stub tests.

Happy implementing!
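To make the Retina requirement concrete, the Linear-Nonlinear (LN) cascade can be sketched as a difference-of-Gaussians linear filter (ON-center / OFF-surround) followed by a static half-wave rectification. This is a minimal NumPy sketch; the function names, kernel size, and sigma values are illustrative choices, not part of the spec:

```python
import numpy as np

def dog_kernel(size=9, sigma_c=1.0, sigma_s=2.5):
    """Difference-of-Gaussians: ON-center / OFF-surround receptive field."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r2 = x**2 + y**2
    center = np.exp(-r2 / (2 * sigma_c**2)) / (2 * np.pi * sigma_c**2)
    surround = np.exp(-r2 / (2 * sigma_s**2)) / (2 * np.pi * sigma_s**2)
    k = center - surround
    return k - k.mean()  # zero-mean: a uniform scene evokes no response

def ln_retina(image, kernel, gain=1.0):
    """Linear stage (same-size correlation, zero padding) then a static
    rectifying nonlinearity -- the Linear-Nonlinear (LN) cascade."""
    kh = kernel.shape[0]
    half = kh // 2
    padded = np.pad(image, half)
    linear = np.empty_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            linear[i, j] = np.sum(padded[i:i + kh, j:j + kh] * kernel)
    return np.maximum(0.0, gain * linear)  # half-wave rectification

# A bright spot drives the ON-center cell at its location; uniform input does not.
spot = np.zeros((16, 16)); spot[8, 8] = 1.0
flat = np.ones((16, 16))
k = dog_kernel()
r_spot = ln_retina(spot, k)
r_flat = ln_retina(flat, k)
```

Zero-meaning the kernel is the simplest way to encode the retina's insensitivity to uniform luminance; a full model would add temporal filtering and gain control.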
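Likewise, the V1 requirement (2D Gabors + max-pooling) can be sketched as a bank of oriented Gabor filters (simple cells) whose rectified responses are max-pooled (complex cells, HMAX C1-style). A hedged NumPy sketch — all parameter values here are arbitrary demo choices:

```python
import numpy as np

def gabor_kernel(size=11, theta=0.0, sigma=2.0, wavelength=4.0, phase=0.0):
    """2D Gabor: a Gaussian envelope times an oriented sinusoid (V1 simple cell)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates by theta
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength + phase)
    g = envelope * carrier
    return g - g.mean()                          # zero-mean: flat patches give 0

def convolve2d_valid(image, kernel):
    """Naive 'valid' 2D correlation (keeps the sketch dependency-free)."""
    kh, kw = kernel.shape
    out = np.empty((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, pool=2):
    """Complex-cell style max-pooling over non-overlapping pool x pool blocks."""
    h, w = (x.shape[0] // pool) * pool, (x.shape[1] // pool) * pool
    x = x[:h, :w]
    return x.reshape(h // pool, pool, w // pool, pool).max(axis=(1, 3))

# Simple-cell responses at 4 orientations, then complex-cell pooling.
image = np.zeros((32, 32))
image[:, 16] = 1.0                               # a vertical line
responses = [max_pool(np.abs(convolve2d_valid(image, gabor_kernel(theta=t))))
             for t in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]
```

The vertically oriented filter (theta = 0, carrier varying along x) responds far more to the vertical line than the horizontal one does — the orientation selectivity that HMAX's S1/C1 layers build on.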
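The Adelson-Bergen spatio-temporal energy model named above can be sketched in one spatial dimension plus time: a quadrature pair of space-time-oriented Gabors, with motion energy as the sum of their squared responses, so the output is phase-invariant and direction-selective. A sketch under assumed toy parameters (grid size, frequencies, sigma are illustrative):

```python
import numpy as np

def st_gabor(nx=16, nt=16, fx=0.125, ft=0.125, direction=+1, phase=0.0, sigma=4.0):
    """Space-time Gabor tuned to motion: direction=+1 rightward, -1 leftward."""
    xs = np.arange(nx) - nx / 2
    ts = np.arange(nt) - nt / 2
    T, X = np.meshgrid(ts, xs, indexing="ij")          # shape (nt, nx)
    envelope = np.exp(-(X**2 + T**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * (fx * X - direction * ft * T) + phase)
    return envelope * carrier

def motion_energy(stimulus, direction):
    """Adelson-Bergen energy: squared responses of a quadrature filter pair."""
    even = st_gabor(direction=direction, phase=0.0)
    odd = st_gabor(direction=direction, phase=np.pi / 2)
    return np.sum(stimulus * even) ** 2 + np.sum(stimulus * odd) ** 2

# Rightward-drifting grating: the x position of the peaks increases with t.
T, X = np.meshgrid(np.arange(16), np.arange(16), indexing="ij")
rightward = np.cos(2 * np.pi * (0.125 * X - 0.125 * T))
e_right = motion_energy(rightward, +1)
e_left = motion_energy(rightward, -1)
```

The same quadrature-pair-and-square construction, applied across two eyes instead of across time, gives the Binocular Disparity Energy Model on the component list.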
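The hippocampal "fast learning / index memory with pattern separation" role can be sketched as: a fixed sparse random projection plays the dentate gyrus (pattern separation via k-winners-take-all codes), and one-shot storage plus overlap-based recall plays the CA3 index. A toy sketch only — the class name, dimensions, and sparsity level are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

class HippocampalIndex:
    """One-shot index memory: DG-style pattern separation, CA3-style recall."""

    def __init__(self, input_dim, code_dim=256, k_active=12):
        # A fixed random projection stands in for the dentate gyrus.
        self.proj = rng.standard_normal((code_dim, input_dim))
        self.k = k_active
        self.keys, self.values = [], []

    def _separate(self, x):
        """k-winners-take-all sparse code (pattern separation)."""
        u = self.proj @ x
        code = np.zeros_like(u)
        code[np.argsort(u)[-self.k:]] = 1.0   # only the top-k units fire
        return code

    def store(self, x, label):
        """One-shot learning: a single exposure writes the index entry."""
        self.keys.append(self._separate(x))
        self.values.append(label)

    def recall(self, x):
        """Return the value whose stored code overlaps the cue's code most."""
        code = self._separate(x)
        overlaps = [code @ key for key in self.keys]
        return self.values[int(np.argmax(overlaps))]

# One exposure per pattern, then recall from a noisy cue.
mem = HippocampalIndex(input_dim=64)
patterns = [rng.standard_normal(64) for _ in range(5)]
for i, p in enumerate(patterns):
    mem.store(p, label=i)
noisy = patterns[2] + 0.1 * rng.standard_normal(64)
```

Because the codes are sparse, distinct inputs collide rarely (separation), yet the top-k winners are stable under small perturbations, so a noisy cue still hits the right index — the fast-mapping behavior the spec leans on to replace BPL's MCMC.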
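The ">60% then fill in the gaps" principle is classic pattern completion, and a Hopfield-style attractor network is one standard formalized model of it: Hebbian outer-product storage, then recurrent dynamics that settle a partial cue into the nearest stored pattern. A minimal sketch with illustrative sizes (200 units, 3 patterns, 35% of the cue deleted):

```python
import numpy as np

rng = np.random.default_rng(1)

def hebbian_weights(patterns):
    """Hebbian outer-product storage (Hopfield-style attractor memory)."""
    n = patterns[0].size
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0.0)      # no self-connections
    return w / len(patterns)

def complete(w, cue, steps=10):
    """Fill in the gaps: recurrent sign dynamics settle into an attractor."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(w @ s)
        s[s == 0] = 1.0           # break ties deterministically
    return s

n = 200
stored = [rng.choice([-1.0, 1.0], size=n) for _ in range(3)]
w = hebbian_weights(stored)

# Knock out 35% of the pattern (keep ~65%) and let the network fill the gaps.
cue = stored[0].copy()
cue[rng.choice(n, size=70, replace=False)] = 0.0
recovered = complete(w, cue)
```

With only 3 patterns in 200 units the network is far below capacity, so a ~65% cue reliably completes — the computational cash-out of "know >60% and fill out the rest."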
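The Breakout setup can be sketched as the following shell fragment. This assumes gymnasium >= 1.0 and ale-py >= 0.9 (which bundle the Atari ROMs, so the old AutoROM / license-acceptance step is unnecessary); the explicit `register_envs` call is how gymnasium 1.x picks up the ALE namespace:

```shell
# ROMs ship inside ale-py >= 0.9, so no AutoROM / license-acceptance step.
pip install "gymnasium[atari]" ale-py

# Quick smoke test (assumes the install above succeeded):
python - <<'EOF'
import gymnasium as gym
import ale_py

gym.register_envs(ale_py)            # register ALE/* envs under gymnasium >= 1.0
env = gym.make("ALE/Breakout-v5")
obs, info = env.reset(seed=0)
print(obs.shape)                     # raw RGB frames, (210, 160, 3)
env.close()
EOF
```

The "master the game in under 5 episodes" criterion would then be evaluated by looping `env.step(...)` over episodes and tracking the per-episode reward.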