arxiv:2603.04816

Scaling Laws for Cross-Encoder Reranking

Published on Apr 18

Abstract

Scaling laws are well studied for language models and first-stage retrieval, but not for reranking. We present the first systematic study of scaling laws for cross-encoder rerankers across pointwise, pairwise, and listwise objectives. Across model size and training exposure, ranking quality follows predictable power laws, enabling larger rerankers to be forecast from smaller runs. Using models up to 150M parameters, we forecast 400M and 1B rerankers on MSMARCO-dev and TREC DL. Beyond forecasting, we derive compute-allocation rules from the fitted joint scaling law and compare them with equal-compute checkpoints, showing that retrieval metrics often favor data-heavy scaling, though the recommendation depends on the training objective. The forecasts are accurate and typically conservative, making them useful for planning expensive large-model training. These results provide practical scaling principles for industrial reranking systems, and we will release code and evaluation protocols.

AI-generated summary

Scaling laws for cross-encoder rerankers follow predictable power-law relationships across objectives and model sizes, enabling accurate forecasts for larger models and providing compute-allocation guidelines.
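
As a rough illustration of the fit-and-forecast recipe the abstract describes, the sketch below fits an assumed Chinchilla-style joint law loss(N, D) = E + A/N^alpha + B/D^beta to hypothetical small-run measurements, extrapolates it to a larger model, and derives the compute split that minimizes the fitted law under a budget proportional to N*D. The functional form, the data points, and the cost model are all assumptions for illustration; the paper's exact parameterization is not reproduced here.

import numpy as np
from scipy.optimize import curve_fit

def joint_law(X, E, A, alpha, B, beta):
    # Assumed joint scaling law: loss(N, D) = E + A / N**alpha + B / D**beta,
    # where N is parameter count and D is training examples seen.
    N, D = X
    return E + A / N**alpha + B / D**beta

# Hypothetical dev losses from small runs up to 150M parameters.
N = np.array([1e7, 3e7, 1e8, 1.5e8, 1e7, 3e7, 1e8, 1.5e8])
D = np.array([1e6, 1e6, 1e6, 1e6, 1e7, 1e7, 1e7, 1e7])
loss = np.array([0.42, 0.39, 0.37, 0.36, 0.38, 0.35, 0.33, 0.32])

popt, _ = curve_fit(joint_law, (N, D), loss,
                    p0=[0.25, 5.0, 0.3, 5.0, 0.3],
                    bounds=(0, np.inf), maxfev=20000)
E, A, alpha, B, beta = popt

# Forecast a larger model from the small-run fit (illustrative numbers).
print("forecast 400M params, 1e8 examples:", joint_law((4e8, 1e8), *popt))

# Compute-allocation rule: with cost proportional to N * D, minimizing the
# fitted law under that budget gives N* ~ C**(beta / (alpha + beta)) and
# D* ~ C**(alpha / (alpha + beta)); alpha > beta favors data-heavy scaling.
a = beta / (alpha + beta)
print(f"optimal split: N* ~ C^{a:.2f}, D* ~ C^{1 - a:.2f}")

In practice one would fit a separate law per training objective, since the abstract notes that the data-heavy recommendation depends on the objective.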

Get this paper in your agent:

hf papers read 2603.04816

Don't have the latest CLI?

curl -LsSf https://hf.co/cli/install.sh | bash
