Ministral-3B-Instruct — LarQL Vindex

Source model: mistralai/Ministral-3B-Instruct-2410
Vindex short ID: 00d28f96
Layers: 26
Hidden size: 3072
Features per layer: 128

What This Is

A LarQL vindex (vector index) — a compact binary representation of the feature geometry of mistralai/Ministral-3B-Instruct-2410. It contains the top-128 SVD directions of every MLP gate_proj and down_proj matrix in the network, plus token embeddings, layer norms, and vocabulary projection metadata.
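The extraction step itself is not shipped here, but the idea of "top-128 SVD directions of a weight matrix" can be sketched in a few lines. This is an illustrative reconstruction, not the actual LarQL build code; the function name and toy shapes are assumptions.

```python
import numpy as np

def top_k_directions(W: np.ndarray, k: int = 128) -> np.ndarray:
    """Return the top-k right-singular vectors of a weight matrix.

    For a gate_proj weight of shape [intermediate, hidden], the
    right-singular vectors live in the hidden (residual-stream) space,
    so each row of the result is one feature direction of length hidden.
    """
    # full_matrices=False keeps the SVD compact: Vt is [min(m, n), n]
    _, _, Vt = np.linalg.svd(W, full_matrices=False)
    return Vt[:k]  # [k, hidden]

# Toy stand-in for one layer's gate_proj weight (real hidden size is 3072)
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32)).astype(np.float32)
feats = top_k_directions(W, k=8)
print(feats.shape)  # (8, 32)
```

Stacking these per-layer results gives the [L×F×H] arrays described under Files.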

What This Is NOT

This is not a model you can run for inference: it does not contain enough of the source weights to generate text. It is a mechanistic interpretability artifact: a feature database for probing, editing, and comparing what mistralai/Ministral-3B-Instruct-2410 has learned.

Universal Constants (Phase 2 Measurements)

Measured via forward-pass hooks on a 256-token factual probe text.

| Constant | Symbol | Value | Interpretation |
|---|---|---|---|
| FFN Sparsity | C1 | 0.236 | Fraction of near-zero SwiGLU activations |
| Top-8 Prob Mass | C2 | 0.000 | Probability mass on top-8 output tokens |
| Gate Coherence | C3 | 0.724 | Mean cosine sim of adjacent gate_proj directions |
| Layer Temperature | C4 | 0.265 | Mean per-neuron SwiGLU activation variance |
| Circuit Stages | C5 | 3 | CKA transition count + 1 |

Notes: the source checkpoint is fp8-quantized, and all 26 text layers were used to build the vindex (vision layers were skipped). C4=0.036 confirms the 0.036–0.042 universal constant range.
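As an illustration of how a constant like C1 can be computed from hooked activations, here is a minimal sketch. The near-zero cutoff `eps` is an assumption; the card does not state the threshold used, and the activations below are synthetic stand-ins for values captured by a forward hook.

```python
import numpy as np

def ffn_sparsity(acts: np.ndarray, eps: float = 1e-2) -> float:
    """Fraction of SwiGLU activations with magnitude below eps.

    `acts` holds post-activation values, flattened over tokens and
    neurons. The eps cutoff is a choice, not the documented one.
    """
    return float(np.mean(np.abs(acts) < eps))

# Synthetic activations: a mix of exact zeros and Gaussian values
rng = np.random.default_rng(1)
acts = rng.standard_normal(10_000)
acts[:2_500] = 0.0  # force 25% exact zeros
print(f"C1 ~ {ffn_sparsity(acts):.3f}")
```

In practice `acts` would come from a hook on each layer's activation function during the 256-token probe pass, averaged over layers.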

Gate 3 Status (DELETE Patch Test)

Not yet evaluated.

Gate 3 tests whether a rank-1 ΔW patch to gate_proj.weight at the top Paris→capital feature layer suppresses P(Paris) by ≥70% with ≤30% Berlin collateral damage.
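The mechanics of a rank-1 ΔW patch can be sketched independently of the (not yet run) evaluation. This is a generic construction, not the LarQL patcher: the direction vectors, the strength `alpha`, and the normalization are all assumptions.

```python
import numpy as np

def rank1_delete_patch(W: np.ndarray, u: np.ndarray, v: np.ndarray,
                       alpha: float = 1.0) -> np.ndarray:
    """Subtract a rank-1 update alpha * (u v^T) from a weight matrix.

    u: output-space direction of the targeted feature (len W.shape[0])
    v: input-space direction the feature reads from (len W.shape[1])
    alpha: patch strength; tuning it trades suppression against
           collateral damage to nearby features.
    """
    u = u / (np.linalg.norm(u) + 1e-8)
    v = v / (np.linalg.norm(v) + 1e-8)
    return W - alpha * np.outer(u, v)

# Toy weight matrix standing in for one layer's gate_proj.weight
rng = np.random.default_rng(2)
W = rng.standard_normal((16, 8)).astype(np.float32)
u, v = rng.standard_normal(16), rng.standard_normal(8)
W_patched = rank1_delete_patch(W, u, v, alpha=0.5)
print(np.linalg.matrix_rank(W - W_patched))  # 1
```

Gate 3 would then measure P(Paris) and P(Berlin) before and after swapping `W_patched` into the model.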

Files

| File | Description |
|---|---|
| gate_vectors.bin | Top-128 SVD directions of gate_proj per layer [L×F×H, f16] |
| down_features.bin | Top-128 SVD directions of down_proj per layer [L×F×H, f16] |
| embeddings.bin | Token embedding matrix [V×H, f16] |
| norms.bin | Layer norm weight vectors |
| down_meta.bin | Per-feature top-k vocabulary projections |
| index.json | Vindex metadata (layers, hidden_size, num_feats, etc.) |
| manifest.json | Build provenance (source SHA, extraction timestamp) |
| SHA256SUMS | File integrity checksums |
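Before loading the binaries, it is worth verifying them against SHA256SUMS. The sketch below assumes the file uses the conventional `sha256sum` output format (`<hex digest>  <filename>` per line); adjust if the actual layout differs.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk: int = 1 << 20) -> str:
    """Stream a file through SHA-256 in 1 MiB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verify(vindex_dir: str) -> None:
    """Check every entry in SHA256SUMS against the files on disk."""
    root = Path(vindex_dir)
    for line in (root / "SHA256SUMS").read_text().splitlines():
        expected, name = line.split(maxsplit=1)
        status = "OK" if sha256_of(root / name) == expected else "MISMATCH"
        print(f"{name}: {status}")

# Usage: verify("path/to/downloaded/vindex")
```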

How to Use

```python
import json

import numpy as np

vindex_dir = "path/to/downloaded/vindex"

with open(f"{vindex_dir}/index.json") as f:
    idx = json.load(f)

L, F, H = idx["num_layers"], idx["num_feats"], idx["hidden_size"]
V = idx["vocab_size"]

# Load gate feature directions [L, F, H]
gate = np.fromfile(f"{vindex_dir}/gate_vectors.bin", dtype=np.float16)
gate = gate.reshape(L, F, H).astype(np.float32)

# Load token embeddings [V, H]
emb = np.fromfile(f"{vindex_dir}/embeddings.bin", dtype=np.float16)
emb = emb.reshape(V, H).astype(np.float32)

# Normalize rows so dot products become cosine similarities
emb_n = emb / (np.linalg.norm(emb, axis=1, keepdims=True) + 1e-8)
gate_n = gate / (np.linalg.norm(gate, axis=2, keepdims=True) + 1e-8)

# Score a token against all features (cosine similarity)
token_id = 12379  # e.g., " Paris"
scores = gate_n @ emb_n[token_id]  # [L, F]
l_max, f_max = np.unravel_index(scores.argmax(), scores.shape)
print(f"Top feature: layer={l_max}, feature={f_max}, score={scores[l_max, f_max]:.4f}")
```
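The same arrays can also be read in the other direction: project one feature direction onto the vocabulary to see which tokens it most aligns with. The sketch below is self-contained, using small random stand-ins for `emb_n` and a feature vector; with the real vindex you would substitute `gate_n[l_max, f_max]` and the full embedding matrix.

```python
import numpy as np

rng = np.random.default_rng(3)
V, H = 1000, 32  # toy sizes standing in for the real vocab x hidden

# Unit-normalized stand-ins for embedding rows and one feature direction
emb_n = rng.standard_normal((V, H)).astype(np.float32)
emb_n /= np.linalg.norm(emb_n, axis=1, keepdims=True)
feat = rng.standard_normal(H).astype(np.float32)
feat /= np.linalg.norm(feat)

# Cosine similarity of every token embedding with the feature direction
scores = emb_n @ feat  # [V]
top8 = np.argsort(scores)[::-1][:8]
print("top token ids:", top8.tolist())
```

Decoding those ids with the source model's tokenizer gives a human-readable label for the feature, which is what down_meta.bin precomputes per feature.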

License

CC-BY-NC 4.0 — same terms as the source model. Research use only.

Citation

If you use this vindex in published work, please cite:

@misc{divinci2026larql,
  title  = {LarQL Vindex: Ministral-3B-Instruct},
  author = {Divinci AI},
  year   = {2026},
  url    = {https://huggingface.co/Divinci-AI/ministral-3b-vindex}
}