# Supermix Omni Collective V4 Frontier
Custom PyTorch checkpoint for the omni_collective_v4 frontier model.
## Included files

- omni_collective_v4_frontier.pth
- omni_collective_v4_frontier_meta.json
- omni_collective_v4_frontier_summary.json
- omni_collective_v4_model.py
- omni_collective_model.py
- image_feature_utils.py
- image_recognition_model.py
- math_equation_model.py
- train_omni_collective_v4.py
## Model summary

- Parameters: 19,032,281
- Stage 1 rows: 8589
- Stage 2 rows: 9447
- Stage 2 validation score: 0.5176
- Stage 2 intent accuracy: 0.8195
- Stage 2 response accuracy: 0.1402
- Stage 2 vision accuracy: 0.9020
- Stage 2 domain accuracy: 0.7765
## Training sources
- 3d: 190
- bible: 180
- books: 620
- coding: 380
- creative: 940
- creative_100k: 280
- delta_anchor: 171
- delta_official: 32
- dictionary: 180
- finnegans: 120
- hybrid: 860
- quality_anchor: 2
- qwen_teacher_league_total: 36
- reasoning: 1400
- science: 160
- science_image: 420
- science_novel: 120
- supermix_plus: 2200
- video_contact: 248
- world_events: 66
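As a quick sanity check, the per-source row counts above can be totaled in a few lines of Python. The dictionary below simply copies the numbers from the list; note the total need not match either stage row count exactly, since the stages may filter, deduplicate, or augment rows.

```python
# Row counts copied verbatim from the "Training sources" list above.
training_sources = {
    "3d": 190, "bible": 180, "books": 620, "coding": 380,
    "creative": 940, "creative_100k": 280, "delta_anchor": 171,
    "delta_official": 32, "dictionary": 180, "finnegans": 120,
    "hybrid": 860, "quality_anchor": 2, "qwen_teacher_league_total": 36,
    "reasoning": 1400, "science": 160, "science_image": 420,
    "science_novel": 120, "supermix_plus": 2200, "video_contact": 248,
    "world_events": 66,
}

# Total source rows across all datasets.
total = sum(training_sources.values())
print(total)  # → 8605
```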
## Notes

This is a custom checkpoint, not a standard Transformers `from_pretrained` model.
It adds sparse expert routing, per-depth modality-context routing, a deeper memory, broader multimodal training data, and Qwen teacher-league repair rows on top of the earlier omni line.
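Because this is a plain PyTorch state dict rather than a Transformers checkpoint, loading would follow the usual `torch.save` / `torch.load` / `load_state_dict` pattern. The sketch below uses a tiny stand-in module so it is self-contained; in the real repo you would instead instantiate the model class defined in omni_collective_v4_model.py and load `omni_collective_v4_frontier.pth` into it.

```python
import os
import tempfile

import torch
import torch.nn as nn


# Hypothetical stand-in: the real architecture lives in
# omni_collective_v4_model.py; this tiny module just makes the
# save/load round trip runnable on its own.
class TinyStandIn(nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(4, 4)


# Simulate the shipped checkpoint: a .pth file holding a state dict.
path = os.path.join(tempfile.mkdtemp(), "omni_collective_v4_frontier.pth")
torch.save(TinyStandIn().state_dict(), path)

# Loading: map_location="cpu" avoids requiring a GPU. With the real
# checkpoint, replace TinyStandIn with the class from omni_collective_v4_model.py.
state = torch.load(path, map_location="cpu")
restored = TinyStandIn()
restored.load_state_dict(state)
restored.eval()

print(sorted(state.keys()))  # → ['proj.bias', 'proj.weight']
```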