Gemma 3 4B IT — Vindex (f16)

FFN knowledge index extracted from google/gemma-3-4b-it using LARQL.

Treats transformer FFN weights as a queryable knowledge graph: retrieval is performed via dot-product graph walks against gate vectors, with no matrix multiplication.
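As a rough illustration of gate-vector retrieval, the sketch below scores a query against every feature's gate vector with a single dot product each and keeps the top matches. All names, shapes, and sizes here are hypothetical toys, not the actual vindex layout:

```python
import numpy as np

# Hypothetical setup: each FFN feature owns one gate vector stored in f16.
# Real dimensions and storage differ; this only shows the scoring step.
rng = np.random.default_rng(0)
d_model, n_features = 8, 16
gate_vectors = rng.standard_normal((n_features, d_model)).astype(np.float16)

def top_features(query: np.ndarray, k: int = 3) -> np.ndarray:
    """Score every feature via a dot product and return the top-k indices."""
    scores = gate_vectors.astype(np.float32) @ query.astype(np.float32)
    return np.argsort(scores)[::-1][:k]

query = rng.standard_normal(d_model)
best = top_features(query)  # indices of the k best-matching features
```

The point of the approach is that selecting features this way needs only vector dot products per candidate, not a full dense matrix multiply through the FFN.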

Usage

larql> USE "hf://chrishayuk/gemma-3-4b-it-vindex";
larql> DESCRIBE "France";

Contents

  • 34 layers, 348.2K features
  • Gate vectors, embeddings, down features/weights
  • Attention weights, norms, tokenizer
  • Probe-confirmed feature labels
  • f16 precision

What is a vindex?

A vindex decouples a model's knowledge from its inference machinery. The FFN weights become a queryable graph: DESCRIBE returns typed knowledge edges, WALK traces activation paths, and INFER runs graph-walk inference at 31 tok/sec on CPU.
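A WALK-style traversal can be sketched as repeatedly picking the strongest-activating feature per layer and carrying its down-projection forward as the next query state. Everything below (layer count, feature count, the use of down rows as the next state) is an assumed toy model for illustration, not the engine's actual algorithm:

```python
import numpy as np

# Toy per-layer gate vectors and down-projection rows, both hypothetical.
rng = np.random.default_rng(1)
n_layers, n_feats, d = 4, 8, 6
gates_by_layer = [rng.standard_normal((n_feats, d)).astype(np.float16)
                  for _ in range(n_layers)]
down_by_layer = [rng.standard_normal((n_feats, d)).astype(np.float16)
                 for _ in range(n_layers)]

def walk(state: np.ndarray) -> list[int]:
    """Trace the strongest-matching feature layer by layer."""
    path = []
    for gates, down in zip(gates_by_layer, down_by_layer):
        scores = gates.astype(np.float32) @ state.astype(np.float32)
        best = int(np.argmax(scores))
        path.append(best)
        # Assumption: the chosen feature's down row becomes the next state.
        state = down[best].astype(np.float32)
    return path

path = walk(rng.standard_normal(d))  # one feature index per layer
```

Each hop costs one dot-product scan over a layer's gate vectors, which is why this style of traversal stays cheap enough to run on CPU.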

See LARQL for the full engine.
