---
title: Divinci AI
emoji: 🧠
colorFrom: green
colorTo: yellow
sdk: static
pinned: false
short_description: Feature-level interpretability for open transformers
---

# Divinci AI

Feature-level interpretability artifacts for open transformers — built openly, validated empirically.

A **vindex** is a transformer's weights decompiled into a queryable feature database. It exposes the entity associations, circuit structure, and knowledge-editing surfaces that live inside a model's FFN layers — without requiring GPU inference for most operations. Think of it as the model's index: the thing you search before you run it.

---

## Interactive viewer

**[→ Open the interactive viewer](https://huggingface.co/spaces/Divinci-AI/vindex-viewer)**

Pick any of the 9 models from the dropdown. Toggle between the 3D cylinder spiral and the flat 2D circuit/network view. Hit **⇌ Compare** to render the current model side by side with Bonsai 1-bit — the contrast between fp16 structure (organized rings) and 1-bit dissolution (scattered cloud) is the most direct picture we know how to render of what 1-bit training does to a transformer's internal organization.

Search for entity features (`?q=paris&model=gemma-4-e2b`) to see real probe-derived activations light up across the layer stack, backed by a 5,000-token search index built offline. A usage sketch for the query parameters and the published vindexes follows the table below.

---

## Published vindexes

Cross-family evidence in hand: **Gemma**, **Qwen3**, **Mistral**, **Llama**, **OpenAI MoE**, **Moonshot MoE**, **DeepSeek-V4 MoE**, plus two 1-bit controls.
| MODEL | ARCHITECTURE | PARAMS | VINDEX | C4 / var@64 | STATUS | NOTES |
|---|---|---|---|---|---|---|
| Gemma 4 E2B-it | Dense (Gemma 4) | 2B | gemma-4-e2b-vindex | 0.0407 ± 0.0004 ✓ | Complete | 3-seed validated; headline universal-constant model |
| Qwen3-0.6B | Dense (Qwen 3) | 0.6B | qwen3-0.6b-vindex | 0.411 | Complete | Smallest published; Qwen3 family-elevated C4 |
| Qwen3-8B bf16 | Dense (Qwen 3) | 8B | qwen3-8b-vindex | 0.804 | Complete | Architecture control for Bonsai |
| Qwen3.6-35B-A3B | MoE (Qwen 3.6) | 35B / 3B active | qwen3.6-35b-a3b-vindex | — | Complete | 256 experts, 40 layers |
| Ministral-3B | Dense (Mistral 3) | 3B | ministral-3b-vindex | 0.265 | Complete | Post-quant fp8 → bf16; non-dissolved spectrum |
| Llama 3.1-8B | Dense (Llama 3.1) | 8B | llama-3.1-8b-vindex | 0.012 ✓ | Complete | Llama family signature |
| MedGemma 1.5-4B | Dense (Gemma multimodal) | 4B | medgemma-1.5-4b-vindex | 1.898 ⚠ | Complete | 45× cohort anomaly — under investigation |
| GPT-OSS 120B | MoE (OpenAI) | 120B | gpt-oss-120b-vindex | — | Complete | S[0] grows 117× with depth (L0=111 → final=13,056) |
| Kimi-K2-Instruct | MoE fp8-native (DeepSeek-V3 style) | 1T / 32B active | kimi-k2-instruct-vindex | 0.0938 (MoE median) ‡ | Complete | 60 MoE layers; 42.28 GB gate_proj binary; broader L52–L60 secondary rise than initial dome SVD suggested |
| DeepSeek-V4-Flash | MoE MXFP4 (DeepSeek-V4) | 43L / 256 experts / 6 active | publishing soon | — | Phase 1B running | 43-layer all-MoE; first-peak L17 + double-bend profile (distinct from Kimi’s smooth dome); MXFP4 unpacker added to builder |
| DeepSeek-V4-Pro | MoE MXFP4 (DeepSeek-V4) | 61L / 384 experts / 6 active | queued | — | Queued | Same scale as Kimi-K2 (60–61 layers × 384 experts × 7168 hidden); MXFP4 expert weights |
| Bonsai 8B | 1-bit (Qwen 3 base, post-quantized) | 8B | vindex pending publish | 0.093 (var@64) | Phase 1 complete | C5 = 1 (circuit dissolved); n=1 of 1-bit dissolution |
| BitNet b1.58-2B-4T | 1-bit (Microsoft, native) | 2B | vindex pending publish | 0.111 (var@64) | Phase 1 complete | n=2 dissolution confirmation; native 1-bit training |
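
As referenced above, here is a minimal Python sketch of both entry points: building a viewer deep-link with the `?q=…&model=…` query parameters, and pulling a published vindex down from the Hub for offline inspection. The `Divinci-AI/gemma-4-e2b-vindex` repo id and the file layout of the downloaded snapshot are assumptions, not confirmed naming; substitute the VINDEX name from the table for the model you want.

```python
from pathlib import Path
from urllib.parse import urlencode

from huggingface_hub import snapshot_download

# Viewer deep-link matching the ?q=paris&model=gemma-4-e2b example above.
VIEWER_URL = "https://huggingface.co/spaces/Divinci-AI/vindex-viewer"
params = urlencode({"q": "paris", "model": "gemma-4-e2b"})
print(f"{VIEWER_URL}?{params}")

# Download a published vindex for local inspection.
# ASSUMPTION: the org/repo naming below is illustrative; use the exact
# vindex repo id from the table above.
local_dir = snapshot_download(repo_id="Divinci-AI/gemma-4-e2b-vindex")

# List the files shipped with the vindex (contents are repo-specific).
for path in sorted(Path(local_dir).rglob("*")):
    if path.is_file():
        print(path.relative_to(local_dir))
```

No GPU is needed for either step, consistent with the point above that most vindex operations are offline queries rather than inference.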