Research notes/GPT5.4pro_research_framing_metis_spikenaut.md
# Research Framing: Metis, SAAQ 2.0, and Spikenaut
## Core thesis
Metis is best framed not just as a hybrid model, but as a **scientific instrument for discovering sparse routing laws** in dense Mixture-of-Experts systems. It uses MoE latent telemetry, hardware-aware fatigue dynamics, and SNN routing to expose structured, attractor-like basins; the routing laws discovered this way are then intended to transfer into **Spikenaut**, a pure native SNN built from scratch. The current foundation of SAAQ 2.0 is an L2-normalized voltage bound,
\[
V_{\text{norm}} = \frac{V}{\sqrt{\sum V_i^2 + \epsilon}}
\]
which acts as a global pressure limiter on latent voltage and prevents runaway domination by a single route.
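As an illustration (a minimal sketch, not the engine's actual code), the bound amounts to rescaling the latent voltage vector to unit L2 norm:

```rust
/// L2-normalize a latent voltage vector:
/// V_norm = V / sqrt(sum_i V_i^2 + eps).
/// The epsilon keeps the operation well defined when all voltages are near zero.
fn l2_normalize(v: &[f32], eps: f32) -> Vec<f32> {
    let norm = (v.iter().map(|x| x * x).sum::<f32>() + eps).sqrt();
    v.iter().map(|x| x / norm).collect()
}
```

Because the output always has (near-)unit L2 norm, no single route's voltage can grow without bound relative to the others, which is the "pressure limiter" behavior described above.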
## AI / neuromorphic framing
### Short abstract
We present a hardware-grounded neuromorphic distillation framework in which a spiking routing engine is used to probe the latent structure of dense Mixture-of-Experts language models. Rather than distilling weights into a smaller ANN, the system records route-level telemetry under bounded voltage and fatigue-like constraints, then uses those traces to discover reusable equations for sparse computation. In early experiments with OLMoE, L2 normalization breaks routing collapse and reveals prompt-dependent routing neighborhoods: abstract English concentrates in a high-index basin, while Rust syntax and mathematical statements cluster into a lower shared logic band. The resulting telemetry is intended to drive symbolic regression and transfer the discovered laws into Spikenaut, a pure SNN implementation.
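At its simplest, the route-level telemetry described above reduces to per-prompt-family histograms over expert indices. A hypothetical sketch (the trace shape, helper names, and expert count are placeholders, not the actual OLMoE pipeline):

```rust
/// Count how often each expert index appears in a family's routing traces.
/// `routes` holds, per prompt, the sequence of expert indices selected.
fn route_histogram(routes: &[Vec<usize>], num_experts: usize) -> Vec<u32> {
    let mut hist = vec![0u32; num_experts];
    for prompt in routes {
        for &e in prompt {
            hist[e] += 1;
        }
    }
    hist
}

/// Index of the most-used expert, i.e. the center of a family's basin.
fn dominant_expert(hist: &[u32]) -> usize {
    hist.iter()
        .enumerate()
        .max_by_key(|&(_, c)| *c)
        .map(|(i, _)| i)
        .unwrap_or(0)
}
```

Comparing `dominant_expert` (and the surrounding histogram mass) across prompt families is the basin-level claim in its most elementary form: abstract English concentrating in one index range, code and math in another.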
### One-sentence contribution
This work treats **quantization as a discovery instrument**: a way to extract equation-level sparse routing laws from dense MoE systems and port them into a native SNN.
### Why this matters to AI
The research matters because it shifts the goal from “compress a model” to “discover the dynamical laws that make sparse computation work under real hardware pressure.” Your docs already show a chronological progression from routing collapse, to stabilized attractor formation under L2 normalization, to prompt-family-specific routing separation. That means the quantized SNN layer is functioning as an analytic probe, not just an inference wrapper.
### Why this matters to neuromorphic engineering
The framework is naturally relevant to neuromorphic research because it emphasizes sparse event-like routing, bounded activity, fatigue-like adaptation, and hardware-in-the-loop constraints. The RE4 path-tracing telemetry is especially unusual and interesting here: your own notes say it supplied the dynamic thermal and power fluctuations that informed the fatigue limits used in the `tick_gpu_temporal` loops of the Rust/CUDA engine.
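As a toy illustration of fatigue-like adaptation (the constants and update rule below are invented for exposition; the actual limits in `tick_gpu_temporal` come from the RE4 thermal and power telemetry):

```rust
/// Toy fatigue rule: firing a route raises its fatigue, every tick all
/// fatigue decays, and a route above the limit is suppressed for that tick.
struct FatigueState {
    fatigue: Vec<f32>,
    limit: f32, // in the real engine this bound is telemetry-derived
    rise: f32,
    decay: f32,
}

impl FatigueState {
    fn new(n: usize, limit: f32) -> Self {
        Self { fatigue: vec![0.0; n], limit, rise: 1.0, decay: 0.9 }
    }

    /// Advance one tick; returns whether the requested route actually fired.
    fn tick(&mut self, route: usize) -> bool {
        for f in &mut self.fatigue {
            *f *= self.decay; // passive recovery each tick
        }
        if self.fatigue[route] < self.limit {
            self.fatigue[route] += self.rise; // use incurs fatigue
            true
        } else {
            false // route is too fatigued this tick
        }
    }
}
```

Hammering one route under a rule like this produces a fatigue cadence: bursts of firing interleaved with forced rest, rather than permanent domination by a single route.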
## Neuroscience-respectful framing
### Short abstract
This project is best described as a **bio-inspired dynamical systems testbed**, not as a literal model of the brain. A global normalization law and fatigue-like adaptation are imposed on a spike-routed latent system derived from dense language-model embeddings. Under those constraints, different task families appear to settle into different routing neighborhoods, suggesting attractor-like partitioning of state space. The scientific value is not that the model “is biological,” but that it provides a controllable system for studying how normalization, activity bounds, and repeated-use fatigue shape structured routing dynamics.
### Language to use with neuroscientists
Use phrases like **attractor-like basin**, **fatigue-like adaptation**, **homeostatic normalization**, and **task-conditioned state-space partitioning**. That language is strong but still disciplined. It invites comparison to attractor networks and adaptive routing without claiming that code and math literally live in fixed brain regions.
### Language to avoid
Avoid saying that the system has discovered “how the brain routes semantics,” or that the routing neighborhoods are direct analogs of cortical areas. The clean claim is that this is a hardware-grounded, bio-inspired model in which structured inputs generate reproducible basin-level routing patterns under normalization and fatigue constraints.
## How original the work is
Methodologically, the work is highly original because it combines three ingredients that are rarely unified in one pipeline: MoE latent telemetry, hardware-derived fatigue priors, and symbolic regression intended to recover closed-form SNN equations for transfer into a native spiking model. Empirically, it is still early-stage: the major next step is to show that the same basin grammar and fatigue laws survive across multiple MoE families and remain useful once implanted into Spikenaut.
## Why the Metis -> Spikenaut pipeline matters
The strongest opportunity is not ordinary weight distillation. It is **dynamical distillation**. Metis can serve as the teacher/probe layer that reveals routing basins, transition rules, and fatigue cadence; Spikenaut can then test whether those laws can be implemented directly in a native SNN without carrying over the dense teacher. Your dataset README already frames this relationship clearly: Metis proves the math, while Spikenaut implements it natively.
## What would fascinate scientists
A computational neuroscientist would likely find the project interesting if you present it as a sparse dynamical system with attractor-like partitioning and fatigue-like adaptation. A neuromorphic engineer would likely care about the hardware-in-the-loop aspect, especially the use of telemetry-derived heartbeat constraints. A biologist may also be interested, but usually only if the bio claims remain modest and testable. The strongest hook is that the system may reveal a general principle: structured tasks can settle into adjacent routing neighborhoods when activity is globally bounded and repeated use incurs fatigue.
## Copy-paste positioning statements
### AI / neuromorphic version
Metis is a hardware-grounded spiking Mixture-of-Experts discovery framework that uses SNN routing telemetry to uncover sparse laws in dense MoE models. Rather than distilling weights into another black-box network, it captures route-level dynamics under SAAQ 2.0 constraints and uses those traces to recover reusable equations for native spiking systems. Those equations are then transferred into Spikenaut, a pure SNN built from scratch.
### Neuroscience-respectful version
This project studies how global normalization and fatigue-like adaptation shape task-dependent routing in a spike-routed latent system. The goal is not to claim a biological replica of cortex, but to build a controllable model system in which structured inputs produce reproducible attractor-like routing neighborhoods that may inform sparse computation and neuromorphic design.
## Suggested next-step claims to validate
1. Basin structure replicates across different MoE families.
2. Basin transitions and dwell times can be captured by symbolic regression.
3. The discovered equations transfer into Spikenaut and reproduce the same routing grammar.
4. The hardware-derived fatigue cadence improves stability or efficiency relative to naive quantization.
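For claim 2, the raw material is straightforward: collapse a per-tick basin-label sequence into runs before fitting anything. A minimal run-length sketch (the basin labels here are placeholders):

```rust
/// Collapse a per-tick basin-label sequence into (basin, dwell_time) runs,
/// the input one would hand to symbolic regression over basin transitions
/// and dwell times.
fn dwell_runs(labels: &[u32]) -> Vec<(u32, usize)> {
    let mut runs: Vec<(u32, usize)> = Vec::new();
    for &b in labels {
        match runs.last_mut() {
            Some((last, len)) if *last == b => *len += 1, // extend current run
            _ => runs.push((b, 1)),                       // enter a new basin
        }
    }
    runs
}
```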