# Supermix Omni Collective V5 Frontier
Custom PyTorch checkpoint for the omni_collective_v5 frontier model.
## Included files
- omni_collective_v5_frontier.pth
- omni_collective_v5_frontier_meta.json
- omni_collective_v5_frontier_summary.json
- omni_collective_v5_model.py
- omni_collective_v4_model.py
- omni_collective_model.py
- image_feature_utils.py
- image_recognition_model.py
- math_equation_model.py
- train_omni_collective_v5.py
## Model summary
- Parameters: 25370219
- Stage 1 rows: 8808
- Stage 2 rows: 9675
- Stage 2 validation score: 0.5270
- Stage 2 intent accuracy: 0.8252
- Stage 2 response accuracy: 0.1508
- Stage 2 vision accuracy: 0.8846
- Stage 2 domain accuracy: 0.8072
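At roughly 25.4M parameters this is a small checkpoint by current standards; a quick back-of-envelope estimate of its fp32 weight footprint (a sketch, not a measurement of the actual `.pth` file, which may also contain metadata):

```python
params = 25_370_219           # parameter count from the summary above
bytes_fp32 = params * 4       # 4 bytes per float32 weight
print(f"{bytes_fp32 / 1e6:.1f} MB")  # ~101.5 MB of raw weights
```

Optimizer state and activation memory during training would add to this, so the training footprint is several times larger.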
## Training sources
- 3d: 190
- bible: 180
- books: 620
- coding: 380
- coding_delta_jsonl_v5: 180
- coding_delta_v5: 14
- creative: 940
- creative_100k: 280
- delta_anchor: 169
- delta_official: 32
- dictionary: 180
- finnegans: 120
- hybrid: 859
- openscad_v5: 10
- prompt_understanding_v5: 8
- quality_anchor: 6
- reasoning: 1400
- science: 160
- science_image: 429
- science_novel: 120
- supermix_plus: 2200
- video_contact: 248
- world_events: 66
- qwen_teacher_league_total: 40
## Notes
This is a custom checkpoint, not a standard Transformers from_pretrained model.
v5 continues the v4 omni line with:
- a slightly larger multimodal backbone
- extra coding and OpenSCAD continuation data
- prompt-understanding deltas
- multi-pass deliberative consensus at inference time
- retention of text, math, coding, and image-recognition behavior in one checkpoint
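The checkpoint's actual inference loop isn't reproduced here, but "multi-pass deliberative consensus" is commonly implemented by sampling several candidate answers and keeping the one most passes agree on. A minimal sketch under that assumption (the `generate` callable and the exact-match voting rule are illustrative stand-ins, not the real V5 logic):

```python
from collections import Counter
from typing import Callable, List


def consensus_answer(generate: Callable[[str], str], prompt: str, passes: int = 5) -> str:
    """Run the generator several times and return the most frequent answer.

    `generate` stands in for whatever sampling call the engine exposes;
    exact-match majority voting is the simplest possible consensus rule.
    """
    candidates: List[str] = [generate(prompt) for _ in range(passes)]
    winner, _count = Counter(candidates).most_common(1)[0]
    return winner
```

Real implementations usually vote on a normalized form of the answer (or score candidates with a judge) rather than on raw strings.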
## Minimal local usage
```python
from pathlib import Path

from omni_collective_v5_model import OmniCollectiveEngineV5

engine = OmniCollectiveEngineV5(
    weights_path=Path("omni_collective_v5_frontier.pth"),
    meta_path=Path("omni_collective_v5_frontier_meta.json"),
)

print(engine.answer("Write a simple OpenSCAD example for a hollow box with 2 mm walls."))
```
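If you only want to sanity-check the downloaded files without importing the engine, the metadata JSON can be read directly. A small sketch (the keys stored in the file are not documented here, so inspect them yourself rather than assuming any particular schema):

```python
import json
from pathlib import Path


def read_meta(path: Path) -> dict:
    """Load the checkpoint's metadata JSON as a plain dict."""
    with path.open("r", encoding="utf-8") as f:
        return json.load(f)


# meta = read_meta(Path("omni_collective_v5_frontier_meta.json"))
# print(sorted(meta.keys()))  # discover which fields are actually stored
```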