120B model for agent orchestration
#214
by O96a - opened
The 120B parameter scale is interesting for complex reasoning tasks. We've been investigating where the sweet spot lies between model size and latency for multi-agent systems; at some point, the inference cost outweighs the reasoning gains for production workloads.

Curious if anyone has benchmarked gpt-oss-120b on multi-step agentic tasks against smaller models with chain-of-thought prompting. For RAG pipelines, we've found that 70B models often match 120B performance when the retrieval is good.

The 4616 likes suggest strong interest, but deployment stories would be valuable for teams considering the infrastructure investment.
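For anyone wanting to run this comparison themselves, a minimal harness sketch is below. It is hypothetical, not tied to any real endpoint: the stub callables stand in for whatever inference clients you'd wrap (a 120B and a smaller baseline behind the same interface), and the agentic loop is simplified to "model proposes an action per step until it signals completion." The point is just to measure mean latency, success rate, and step count under identical conditions.

```python
import time
from dataclasses import dataclass, field

@dataclass
class BenchResult:
    model: str
    latencies: list = field(default_factory=list)  # wall-clock seconds per task
    successes: int = 0
    steps: int = 0

def run_agent_task(call_model, task, max_steps=5):
    """Drive a toy multi-step agentic loop: the model proposes an action
    each step until it emits "DONE" or exhausts the step budget."""
    context = task
    for step in range(max_steps):
        action = call_model(context)
        if action == "DONE":
            return True, step + 1
        context = context + " -> " + action
    return False, max_steps

def benchmark(models, tasks):
    """Time each model over the same task set; collect latency/success/steps."""
    results = []
    for name, call_model in models.items():
        r = BenchResult(model=name)
        for task in tasks:
            t0 = time.perf_counter()
            ok, steps = run_agent_task(call_model, task)
            r.latencies.append(time.perf_counter() - t0)
            r.successes += ok
            r.steps += steps
        results.append(r)
    return results

# Placeholder "models" for illustration only; real use would wrap inference
# endpoints (e.g. gpt-oss-120b vs a 70B baseline) behind the same callable.
def stub_big(ctx):
    return "DONE" if ctx.count("->") >= 2 else "step"

def stub_small(ctx):
    return "DONE" if ctx.count("->") >= 3 else "step"

if __name__ == "__main__":
    tasks = ["plan trip", "summarize docs", "triage bug"]
    for r in benchmark({"big-stub": stub_big, "small-stub": stub_small}, tasks):
        rate = r.successes / len(tasks)
        mean_ms = sum(r.latencies) / len(r.latencies) * 1e3
        print(f"{r.model}: success={rate:.0%} "
              f"mean_latency={mean_ms:.2f}ms mean_steps={r.steps/len(tasks):.1f}")
```

In practice you'd swap the stubs for real clients, use tasks with verifiable end states, and plot latency against success rate to find where the extra parameters stop paying for themselves.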