mikeumus-divincian committed on
Commit c01058f · verified · 1 Parent(s): ba09059

Update org card: V4-Pro vindex now live (3 MoE vindexes complete)

Files changed (1): index.html (+1 −1)
index.html CHANGED
@@ -68,7 +68,7 @@ controls.</p>
  <tr><td>GPT-OSS 120B</td><td>MoE (OpenAI)</td><td>120B</td><td><a href="https://huggingface.co/Divinci-AI/gpt-oss-120b-vindex">gpt-oss-120b-vindex</a></td><td>—</td><td>S[0] grows 117× with depth (L0=111 → final=13,056)</td></tr>
  <tr><td><strong>Kimi-K2-Instruct</strong></td><td>MoE fp8-native (DeepSeek-V3 style)</td><td>1T / 32B active</td><td><a href="https://huggingface.co/Divinci-AI/kimi-k2-instruct-vindex">kimi-k2-instruct-vindex</a></td><td><strong>0.0938</strong> (MoE median)</td><td>60 MoE layers; 42.28 GB gate_proj binary; broader L52–L60 secondary rise than initial dome SVD suggested</td></tr>
  <tr><td><strong>DeepSeek-V4-Flash</strong></td><td>MoE MXFP4 (DeepSeek-V4)</td><td>43L / 256 experts / 6 active</td><td><a href="https://huggingface.co/Divinci-AI/deepseek-v4-flash-vindex">deepseek-v4-flash-vindex</a></td><td><strong>0.108</strong> (MoE median)</td><td>43-layer all-MoE; 11.54 GB gate_proj binary; first-peak L18 + double-bend profile (distinct from Kimi smooth dome); MXFP4 expert unpacking</td></tr>
- <tr><td><strong>DeepSeek-V4-Pro</strong></td><td>MoE MXFP4 (DeepSeek-V4)</td><td>61L / 384 experts / 6 active</td><td><em>queued</em></td><td></td><td>Queued; same scale as Kimi-K2 (60–61 layers × 384 experts × 7168 hidden); MXFP4 expert weights</td></tr>
+ <tr><td><strong>DeepSeek-V4-Pro</strong></td><td>MoE MXFP4 (DeepSeek-V4)</td><td>61L / 384 experts / 6 active</td><td><a href="https://huggingface.co/Divinci-AI/deepseek-v4-pro-vindex">deepseek-v4-pro-vindex</a></td><td><strong>0.0653</strong> (MoE median)</td><td>61-layer all-MoE; 42.98 GB gate_proj binary; lowest var@64 of 3 published MoE vindexes (V4-Pro 0.065 < Kimi 0.094 < V4-Flash 0.108) — V4-Pro experts are most shared/redundant; late secondary rise L53–L60</td></tr>
  <tr><td><strong>Bonsai 8B</strong></td><td>1-bit (Qwen 3 base, post-quantized)</td><td>8B</td><td><em>vindex pending publish</em></td><td>0.429</td><td><strong>C5 = 1</strong> (circuit dissolved); var@64 = 0.093</td></tr>
  <tr><td><strong>BitNet b1.58-2B-4T</strong></td><td>1-bit (Microsoft, native)</td><td>2B</td><td><em>vindex pending publish</em></td><td>(Phase 2 pending)</td><td><strong>var@64 = 0.111</strong> mean across 30 layers — n=2 confirmation of dissolution</td></tr>
  </tbody>