---
license: apache-2.0
tags:
- vindex
- mechanistic-interpretability
- larql
- knowledge-graph
- gemma
---
| |
# Gemma 3 4B IT — Vindex (f16)
|
|
FFN knowledge index extracted from `google/gemma-3-4b-it` using LARQL.
|
|
Treats the transformer's FFN weights as a queryable knowledge graph: retrieval is performed via dot-product graph walks against gate vectors, with no matrix multiplication.
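The retrieval idea can be sketched in a few lines. This is a hypothetical illustration, not the LARQL implementation: each FFN feature has a gate vector, a query embedding is scored against every gate vector with a dot product, and the top-scoring features are returned.

```python
import numpy as np

def top_k_features(query, gate_vectors, k=3):
    """Score features by dot product against the query; return top-k indices.

    gate_vectors has one row per FFN feature. Scoring is one dot product
    per feature (written as a matvec here only for brevity) -- no dense
    forward pass through the FFN is needed.
    """
    scores = gate_vectors @ query
    order = np.argsort(scores)[::-1]
    return order[:k], scores

# Toy data: 348 random gate vectors standing in for a slice of one layer.
rng = np.random.default_rng(0)
gates = rng.normal(size=(348, 64))
q = gates[7] + 0.01 * rng.normal(size=64)  # query close to feature 7

idx, scores = top_k_features(q, gates, k=3)
print(idx[0])  # feature 7 should score highest
```

The dimensions and the `top_k_features` helper are illustrative assumptions; the real index operates over the extracted gate vectors listed under Contents.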
|
|
## Usage
|
|
```sql
larql> USE "hf://chrishayuk/gemma-3-4b-it-vindex";
larql> DESCRIBE "France";
```
|
|
## Contents
|
|
- 34 layers, 348.2K features
- Gate vectors, embeddings, down features/weights
- Attention weights, norms, tokenizer
- Probe-confirmed feature labels
- f16 precision
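The feature count lines up with one gate feature per FFN hidden unit. Assuming Gemma 3 4B's config uses 34 layers with an FFN intermediate size of 10,240 (an assumption about the base model, not stated in this card), a quick check:

```python
# Sanity check under assumed config values: one feature per FFN hidden unit.
layers = 34          # layer count, as listed above
ffn_hidden = 10_240  # assumed FFN intermediate size for Gemma 3 4B

total = layers * ffn_hidden
print(total)  # 348,160 -- i.e. the ~348.2K features listed above
```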
|
|
## What is a vindex?
|
|
A vindex decouples a model's knowledge from its inference machinery. The FFN weights become a queryable graph: `DESCRIBE` returns typed knowledge edges, `WALK` traces activation paths, and `INFER` runs graph-walk inference at 31 tok/sec on CPU.
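A WALK-style traversal can be sketched as follows. This is a toy illustration under assumed mechanics, not the LARQL engine: a feature's down-projection vector in layer L is scored against the gate vectors of layer L+1, and the walk hops to the best-matching next-layer feature.

```python
import numpy as np

rng = np.random.default_rng(1)
n_layers, n_feat, dim = 4, 16, 32
gates = rng.normal(size=(n_layers, n_feat, dim))  # gate vectors per layer (toy)
downs = rng.normal(size=(n_layers, n_feat, dim))  # down-feature vectors (toy)

def walk(layer, feature):
    """Trace an activation path from (layer, feature) to the final layer.

    At each hop, the current feature's down vector is dotted against the
    next layer's gate vectors, and the strongest edge is followed.
    """
    path = [(layer, feature)]
    while layer + 1 < n_layers:
        out = downs[layer, feature]      # what this feature writes out
        scores = gates[layer + 1] @ out  # dot against next layer's gates
        feature = int(np.argmax(scores)) # follow the strongest edge
        layer += 1
        path.append((layer, feature))
    return path

path = walk(0, 3)
print(path)  # one (layer, feature) hop per remaining layer
```

Layer counts, dimensions, and the greedy single-edge hop are all assumptions made for the sketch; the actual edge typing and scoring live in the LARQL engine linked below.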
|
|
See [LARQL](https://github.com/chrishayuk/larql) for the full engine.
|
|