Supermix Omni Collective V4 Frontier

Custom PyTorch checkpoint for the omni_collective_v4 frontier model.

Included files

  • omni_collective_v4_frontier.pth
  • omni_collective_v4_frontier_meta.json
  • omni_collective_v4_frontier_summary.json
  • omni_collective_v4_model.py
  • omni_collective_model.py
  • image_feature_utils.py
  • image_recognition_model.py
  • math_equation_model.py
  • train_omni_collective_v4.py
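The .pth file above is a plain PyTorch checkpoint rather than a Transformers repo. A minimal sketch of the state-dict save/load pattern such a file typically follows; the TinyNet class and the temp path are illustrative assumptions, since the real architecture lives in omni_collective_v4_model.py:

```python
import os
import tempfile

import torch
import torch.nn as nn

# TinyNet is a stand-in for the real model class defined in
# omni_collective_v4_model.py; only the save/load pattern is the point.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

path = os.path.join(tempfile.gettempdir(), "tiny_ckpt.pth")

# Save: analogous to producing omni_collective_v4_frontier.pth.
model = TinyNet()
torch.save(model.state_dict(), path)

# Load: instantiate the class from its .py file, then restore weights.
restored = TinyNet()
restored.load_state_dict(torch.load(path, map_location="cpu"))
restored.eval()
out = restored(torch.zeros(1, 4))
```

Because the weights are a bare state dict, the matching model class must be importable before loading; `from_pretrained` conveniences do not apply.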

Model summary

  • Parameters: 19,032,281
  • Stage 1 rows: 8589
  • Stage 2 rows: 9447
  • Stage 2 validation score: 0.5176
  • Stage 2 intent accuracy: 0.8195
  • Stage 2 response accuracy: 0.1402
  • Stage 2 vision accuracy: 0.9020
  • Stage 2 domain accuracy: 0.7765

Training sources

  • 3d: 190
  • bible: 180
  • books: 620
  • coding: 380
  • creative: 940
  • creative_100k: 280
  • delta_anchor: 171
  • delta_official: 32
  • dictionary: 180
  • finnegans: 120
  • hybrid: 860
  • quality_anchor: 2
  • qwen_teacher_league_total: 36
  • reasoning: 1400
  • science: 160
  • science_image: 420
  • science_novel: 120
  • supermix_plus: 2200
  • video_contact: 248
  • world_events: 66

Notes

This is a custom checkpoint, not a standard Transformers from_pretrained model. Relative to the earlier omni line, it adds sparse expert routing, per-depth modality-context routing, a deeper memory, a wider multimodal data mix, and Qwen teacher-league repair rows.
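The sparse expert routing mentioned above can be sketched as a top-k mixture-of-experts gate; this is a generic illustration of the technique, not the repo's actual implementation, and the class name, sizes, and k are all assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative top-k sparse routing: each input activates only k of the
# experts, weighted by a softmax over the selected gate logits.
class SparseRouter(nn.Module):
    def __init__(self, dim=8, n_experts=4, k=2):
        super().__init__()
        self.gate = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_experts))
        self.k = k

    def forward(self, x):  # x: (batch, dim)
        logits = self.gate(x)
        topv, topi = logits.topk(self.k, dim=-1)  # keep k experts per row
        weights = F.softmax(topv, dim=-1)         # renormalise over the chosen k
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for b in range(x.size(0)):
                e = topi[b, slot].item()
                out[b] += weights[b, slot] * self.experts[e](x[b])
        return out

torch.manual_seed(0)
y = SparseRouter()(torch.randn(3, 8))
```

The per-row loop is written for clarity; production MoE layers batch tokens per expert instead.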
