Update org card: V4-Flash vindex now live
index.html (+1 −1)

@@ -67,7 +67,7 @@ controls.</p>
 <tr><td>MedGemma 1.5-4B</td><td>Dense (Gemma multimodal)</td><td>4B</td><td><a href="https://huggingface.co/Divinci-AI/medgemma-1.5-4b-vindex">medgemma-1.5-4b-vindex</a></td><td><strong>1.898 ⚠</strong></td><td>45× cohort anomaly; under investigation</td></tr>
 <tr><td>GPT-OSS 120B</td><td>MoE (OpenAI)</td><td>120B</td><td><a href="https://huggingface.co/Divinci-AI/gpt-oss-120b-vindex">gpt-oss-120b-vindex</a></td><td>–</td><td>S[0] grows 117× with depth (L0=111 → final=13,056)</td></tr>
 <tr><td><strong>Kimi-K2-Instruct</strong></td><td>MoE fp8-native (DeepSeek-V3 style)</td><td>1T / 32B active</td><td><a href="https://huggingface.co/Divinci-AI/kimi-k2-instruct-vindex">kimi-k2-instruct-vindex</a></td><td><strong>0.0938</strong> (MoE median)</td><td>60 MoE layers; 42.28 GB gate_proj binary; broader L52–L60 secondary rise than initial dome SVD suggested</td></tr>
-<tr><td><strong>DeepSeek-V4-Flash</strong></td><td>MoE MXFP4 (DeepSeek-V4)</td><td>43L / 256 experts / 6 active</td><td><
+<tr><td><strong>DeepSeek-V4-Flash</strong></td><td>MoE MXFP4 (DeepSeek-V4)</td><td>43L / 256 experts / 6 active</td><td><a href="https://huggingface.co/Divinci-AI/deepseek-v4-flash-vindex">deepseek-v4-flash-vindex</a></td><td><strong>0.108</strong> (MoE median)</td><td>43-layer all-MoE; 11.54 GB gate_proj binary; first-peak L18 + double-bend profile (distinct from Kimi smooth dome); MXFP4 expert unpacking</td></tr>
 <tr><td><strong>DeepSeek-V4-Pro</strong></td><td>MoE MXFP4 (DeepSeek-V4)</td><td>61L / 384 experts / 6 active</td><td><em>queued</em></td><td>–</td><td>Queued; same scale as Kimi-K2 (60–61 layers × 384 experts × 7168 hidden); MXFP4 expert weights</td></tr>
 <tr><td><strong>Bonsai 8B</strong></td><td>1-bit (Qwen 3 base, post-quantized)</td><td>8B</td><td><em>vindex pending publish</em></td><td>0.429</td><td><strong>C5 = 1</strong> (circuit dissolved); var@64 = 0.093</td></tr>
 <tr><td><strong>BitNet b1.58-2B-4T</strong></td><td>1-bit (Microsoft, native)</td><td>2B</td><td><em>vindex pending publish</em></td><td>(Phase 2 pending)</td><td><strong>var@64 = 0.111</strong> mean across 30 layers; n=2 confirmation of dissolution</td></tr>
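Both DeepSeek-V4 rows reference MXFP4 expert weights, and the new V4-Flash entry notes "MXFP4 expert unpacking". For context, here is a minimal decode sketch following the OCP Microscaling (MX) spec, which defines MXFP4 as blocks of 32 FP4 (E2M1) elements sharing one E8M0 power-of-two scale; the nibble order and the function itself are illustrative assumptions, not the card's actual pipeline:

```python
import numpy as np

# FP4 E2M1 magnitude table, indexed by the 3 low bits of each 4-bit code
# (the high bit is the sign). These eight values are fixed by the OCP MX spec.
E2M1 = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0], dtype=np.float32)

def unpack_mxfp4(packed: np.ndarray, scales: np.ndarray, block: int = 32) -> np.ndarray:
    """Decode MXFP4: packed uint8 nibbles (two FP4 codes per byte) plus one
    E8M0 scale byte per 32-element block. Low-nibble-first order is an
    assumption; checkpoint layouts differ."""
    lo = packed & 0x0F
    hi = packed >> 4
    codes = np.stack([lo, hi], axis=-1).reshape(-1)       # interleave nibbles
    sign = np.where(codes & 0x8, -1.0, 1.0).astype(np.float32)
    vals = sign * E2M1[codes & 0x7]
    # An E8M0 scale is a bare biased exponent: scale = 2**(byte - 127)
    scale = np.exp2(scales.astype(np.float32) - 127.0)
    return (vals.reshape(-1, block) * scale[:, None]).reshape(-1)

# Example: one 32-element block = 16 packed bytes + 1 scale byte
packed = np.arange(16, dtype=np.uint8)
scales = np.array([127], dtype=np.uint8)                  # scale = 2**0 = 1
print(unpack_mxfp4(packed, scales)[:8])
```

Because E8M0 scales are bare powers of two, the decode is exact in float32; the only layout choice that varies between checkpoints is which nibble of each byte comes first.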
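The GPT-OSS note "S[0] grows 117× with depth" reads as the leading singular value of a per-layer weight matrix, consistent with the "dome SVD" mentioned for Kimi-K2. A sketch of how such a depth profile could be computed; the layer iteration and matrix choice here are assumptions (real use would read checkpoint shards), only the SVD step is implied by the card:

```python
import numpy as np

def leading_singular_values(layers):
    """Top singular value per layer; S[0] in numpy's descending-order SVD.
    `layers` is any iterable of 2-D weight matrices."""
    out = []
    for w in layers:
        s = np.linalg.svd(w, compute_uv=False)  # singular values, descending
        out.append(float(s[0]))
    return out

# Toy stand-in for per-layer weights, growing in magnitude with depth
rng = np.random.default_rng(0)
layers = [rng.normal(scale=1.0 + i, size=(64, 64)) for i in range(4)]
s0 = leading_singular_values(layers)
print([round(v, 1) for v in s0], "growth:", round(s0[-1] / s0[0], 2), "x")
```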