mikeumus-divincian committed
Commit bf8897e · verified · 1 Parent(s): 0f7f10c

Update org card: Kimi-K2 vindex complete (Phase 1+1B+2), DeepSeek-V4-Flash 1B running, DeepSeek-V4-Pro queued

Files changed (1): README.md +5 -3
README.md CHANGED
@@ -30,7 +30,7 @@ Pick any of 9 models from the dropdown. Toggle between the 3D cylinder spiral an
 
 ## Published vindexes
 
- Cross-family evidence in hand: **Gemma**, **Qwen3**, **Mistral**, **Llama**, **OpenAI MoE**, **Moonshot MoE**, plus two 1-bit controls.
+ Cross-family evidence in hand: **Gemma**, **Qwen3**, **Mistral**, **Llama**, **OpenAI MoE**, **Moonshot MoE**, **DeepSeek-V4 MoE**, plus two 1-bit controls.
 
 <table>
 <tbody>
@@ -43,13 +43,15 @@ Cross-family evidence in hand: **Gemma**, **Qwen3**, **Mistral**, **Llama**, **O
 <tr><td>Llama 3.1-8B</td><td>Dense (Llama 3.1)</td><td>8B</td><td><a href="https://huggingface.co/Divinci-AI/llama-3.1-8b-vindex">llama-3.1-8b-vindex</a></td><td><strong>0.012</strong> ✓</td><td>Complete</td><td>Llama family signature</td></tr>
 <tr><td>MedGemma 1.5-4B</td><td>Dense (Gemma multimodal)</td><td>4B</td><td><a href="https://huggingface.co/Divinci-AI/medgemma-1.5-4b-vindex">medgemma-1.5-4b-vindex</a></td><td><strong>1.898 ⚠</strong></td><td>Complete</td><td>45× cohort anomaly — under investigation</td></tr>
 <tr><td>GPT-OSS 120B</td><td>MoE (OpenAI)</td><td>120B</td><td><a href="https://huggingface.co/Divinci-AI/gpt-oss-120b-vindex">gpt-oss-120b-vindex</a></td><td>—</td><td>Complete</td><td>S[0] grows 117× with depth (L0=111 → final=13,056)</td></tr>
- <tr><td><strong>Kimi-K2-Instruct</strong></td><td>MoE fp8-native (DeepSeek-V3 style)</td><td>1T / 32B active</td><td><a href="https://huggingface.co/Divinci-AI/kimi-k2-vindex">kimi-k2-vindex</a></td><td><strong>0.088</strong> (MoE median) ‡</td><td><strong>Phase 1 running</strong> (6/61 layers)</td><td>3rd fp8-native dissolution datapoint; var@64 in the same class as 1-bit models</td></tr>
+ <tr><td><strong>Kimi-K2-Instruct</strong></td><td>MoE fp8-native (DeepSeek-V3 style)</td><td>1T / 32B active</td><td><a href="https://huggingface.co/Divinci-AI/kimi-k2-instruct-vindex">kimi-k2-instruct-vindex</a></td><td><strong>0.0938</strong> (MoE median) ‡</td><td>Complete</td><td>60 MoE layers; 42.28 GB gate_proj binary; broader L52–L60 secondary rise than the initial dome SVD suggested</td></tr>
+ <tr><td><strong>DeepSeek-V4-Flash</strong></td><td>MoE MXFP4 (DeepSeek-V4)</td><td>43L / 256 experts / 6 active</td><td><em>publishing soon</em></td><td>—</td><td><strong>Phase 1B running</strong></td><td>43-layer all-MoE; first-peak L17 + double-bend profile (distinct from Kimi’s smooth dome); MXFP4 unpacker added to builder</td></tr>
+ <tr><td><strong>DeepSeek-V4-Pro</strong></td><td>MoE MXFP4 (DeepSeek-V4)</td><td>61L / 384 experts / 6 active</td><td><em>queued</em></td><td>—</td><td>Queued</td><td>Same scale as Kimi-K2 (60–61 layers × 384 experts × 7168 hidden); MXFP4 expert weights</td></tr>
 <tr><td><strong>Bonsai 8B</strong></td><td>1-bit (Qwen 3 base, post-quantized)</td><td>8B</td><td><em>vindex pending publish</em></td><td>0.093 (var@64)</td><td>Phase 1 complete</td><td><strong>C5 = 1</strong> (circuit dissolved); n=1 of 1-bit dissolution</td></tr>
 <tr><td><strong>BitNet b1.58-2B-4T</strong></td><td>1-bit (Microsoft, native)</td><td>2B</td><td><em>vindex pending publish</em></td><td>0.111 (var@64)</td><td>Phase 1 complete</td><td>n=2 dissolution confirmation; native 1-bit training</td></tr>
 </tbody>
 </table>
 
- ‡*Kimi-K2 spot-check: L00 dense var@64=0.037; MoE layers L01–L04 median=0.088. Full 61-layer Phase 1 completing ~2026-04-23. Card updates in-place as phases land.*
+ ‡*Kimi-K2 final: 60 MoE layers (L01–L60), gate_proj SVD, median var@64=0.0938 (range 0.083–0.108). Phase 1 + Phase 1B + Phase 2 all complete 2026-04-24; 42.28 GB binary published. DeepSeek-V4 series builds use the MXFP4 unpacker (V4-Flash Phase 1B in progress 2026-04-25; V4-Pro queued). Card updates in-place as phases land.*
 
 ---
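
The footnote reports a per-layer spectral statistic (var@64 over a gate_proj SVD), but the builder code is not part of this commit. The sketch below is only one plausible reading, assuming var@64 means the variance of the top-64 singular values after normalizing them to sum to 1; the actual definition used by the vindex builder may differ.

```python
# Hypothetical sketch of a per-layer var@64 spectral statistic.
# Assumption: var@64 = variance of the top-64 singular values after
# normalizing them to a distribution. The vindex builder's actual
# definition is not shown in this commit.
import numpy as np

def var_at_k(weight: np.ndarray, k: int = 64) -> float:
    """Spectral statistic over the top-k singular values of one matrix."""
    s = np.linalg.svd(weight, compute_uv=False)  # singular values, descending
    top = s[:k] / s[:k].sum()                    # normalize top-k to sum to 1
    return float(np.var(top))

# Example: a random stand-in for one MoE layer's gate_proj weights.
rng = np.random.default_rng(0)
w = rng.standard_normal((1024, 256)).astype(np.float32)
print(f"var@64 = {var_at_k(w):.6f}")
```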
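
The DeepSeek-V4 rows mention an MXFP4 unpacker added to the builder; that code is also not in this commit. The sketch below follows the published OCP Microscaling FP4 layout (two E2M1 nibbles per byte, one shared E8M0 power-of-two scale per 32-element block) and is an assumption about how such an unpacker might look, not the builder's implementation.

```python
# Sketch of an MXFP4 (OCP Microscaling FP4) unpacker, assuming the usual
# layout: two E2M1 codes per byte (low nibble first) and one E8M0
# power-of-two scale per 32-element block.
import numpy as np

# The 16 representable E2M1 values: ±{0, 0.5, 1, 1.5, 2, 3, 4, 6}.
E2M1_LUT = np.array(
    [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0,
     -0.0, -0.5, -1.0, -1.5, -2.0, -3.0, -4.0, -6.0],
    dtype=np.float32,
)

def unpack_mxfp4(packed: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """packed: uint8 array, 2 FP4 codes per byte; scales: uint8 E8M0
    exponent per 32-element block. Returns float32 values."""
    lo = packed & 0x0F                       # first code of each pair
    hi = packed >> 4                         # second code
    codes = np.stack([lo, hi], axis=-1).reshape(-1)
    vals = E2M1_LUT[codes]
    # E8M0: biased power-of-two exponent (bias 127), no sign, no mantissa.
    block_scale = np.exp2(scales.astype(np.float32) - 127.0)
    return (vals.reshape(-1, 32) * block_scale[:, None]).reshape(-1)

# One 32-element block: code 0x2 (= 1.0) everywhere, scale 2**1.
packed = np.full(16, 0x22, dtype=np.uint8)
scales = np.array([128], dtype=np.uint8)
print(unpack_mxfp4(packed, scales))  # 32 values of 2.0
```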