Papers
arxiv:2604.05083

Beyond LLM-as-a-Judge: Deterministic Metrics for Multilingual Generative Text Evaluation

Published on Apr 6
Authors:

Abstract

OmniScore is a family of deterministic learned metrics built on small (<1B parameter) models for text evaluation, offering a consistent and scalable alternative to large language model judges across multiple languages and tasks.

AI-generated summary

While Large Language Models (LLMs) are increasingly adopted as automated judges for evaluating generated text, their outputs are often costly and highly sensitive to prompt design, language, and aggregation strategies, which severely limits reproducibility. To address these challenges, we propose OmniScore, a family of complementary, deterministic learned metrics developed using small (<1B parameter) models. OmniScore approximates LLM-judge behavior while preserving the low latency and consistency of traditional model-based scoring. We trained the models with large-scale synthetic supervision (~564k instances in 107 languages) and evaluated them on 8,617 manually annotated instances. The OmniScore family supports reliable, multi-dimensional scores across a variety of settings, including reference-based, source-grounded, and hybrid evaluations. We evaluate these models across question answering (QA), translation, and summarization in 6 languages. Our results demonstrate that lightweight, deterministic learned metrics provide a highly practical and scalable alternative to frontier LLMs. Our models and datasets can be found at https://huggingface.co/collections/QCRI/omniscore


Get this paper in your agent:

hf papers read 2604.05083
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash

