# Model Card: Phi-3.5-mini Medical Judge (SFT + GRPO)
## Model Description
This model extends the SFT version of Phi-3.5-mini-instruct with reinforcement alignment using Group Relative Policy Optimization (GRPO).
It is designed as a lightweight LLM judge for evaluating semantic equivalence in French medical open-ended question answering (OEQA).
## Task Definition
Same as the SFT model:

- Input: (question, reference answer, candidate answer)
- Output: `1` (equivalent) or `0` (not equivalent)
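The I/O contract above can be wrapped in two small helpers. This is a minimal sketch: the exact prompt wording is an assumption (the card only specifies the three-part input and the binary output), and `parse_judgment` simply reads the leading digit of the generation.

```python
from typing import Optional

# Prompt wording below is illustrative, not the card's actual template.
def build_judge_prompt(question: str, reference: str, candidate: str) -> str:
    """Assemble the (question, reference, candidate) input for the judge."""
    return (
        "Question: " + question + "\n"
        "Reference answer: " + reference + "\n"
        "Candidate answer: " + candidate + "\n"
        "Are the two answers semantically equivalent? Reply with 1 or 0."
    )

def parse_judgment(generation: str) -> Optional[int]:
    """Map raw model text to 1 (equivalent), 0 (not), or None if malformed."""
    text = generation.strip()
    if text.startswith("1"):
        return 1
    if text.startswith("0"):
        return 0
    return None  # malformed output; earns no format bonus under GRPO
```

A `None` return lets downstream evaluation distinguish format failures from genuine disagreements with the expert label.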
## Training Data

- Same dataset as the SFT model (~184 instances)
- Expert-labeled equivalence annotations
## Training Procedure
### Stage 1: SFT

- Same setup as the SFT model
### Stage 2: GRPO

- Epochs: 2
- Reward function:
  - +1 for a correct prediction, -1 for an incorrect one
  - +0.5 for correct output format (a single `0` or `1`)
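The reward scheme above can be written as one small scoring function. A sketch under one assumption not stated in the card: a malformed output (anything other than a bare `0` or `1`) is treated as an incorrect prediction.

```python
def grpo_reward(generation: str, gold_label: int) -> float:
    """Composite GRPO reward for one completion.

    +0.5 if the output is exactly '0' or '1' (format bonus);
    +1.0 if the predicted label matches the expert label, else -1.0.
    """
    text = generation.strip()
    reward = 0.0
    if text in ("0", "1"):
        reward += 0.5                                   # format bonus
        reward += 1.0 if int(text) == gold_label else -1.0
    else:
        reward -= 1.0  # assumption: malformed output counts as incorrect
    return reward
```

So a correct, well-formatted answer scores 1.5, a wrong but well-formatted one -0.5, and a malformed one -1.0.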
GRPO is applied after SFT convergence to refine decision boundaries.
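The "group relative" step that gives GRPO its name normalizes each completion's reward against the other completions sampled for the same prompt. This sketch shows the standard normalization from the GRPO literature; the card does not detail this step, so treat it as background rather than the exact training code.

```python
import statistics

def group_relative_advantages(rewards):
    """Advantage of each sampled completion relative to its group:
    A_i = (r_i - mean(r)) / std(r), using population std over the group."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    if std == 0:  # all completions in the group scored identically
        return [0.0 for _ in rewards]
    return [(r - mean) / std for r in rewards]
```

Completions scoring above the group mean get positive advantages and are reinforced; those below are pushed down, sharpening the 0/1 decision boundary without a learned value model.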
## Citation
```bibtex
@inproceedings{belmadani-etal-2026-judges,
    title = "Who Judges the Judge? Evaluating {LLM}-as-a-Judge for {F}rench Medical open-ended {QA}",
    author = "Belmadani, Ikram and
      El Khettari, Oumaima and
      Constant dit Beaufils, Pac{\^o}me and
      Dufour, Richard and
      Favre, Benoit",
    editor = {Danilova, Vera and
      Kurfal{\i}, Murathan and
      S{\"o}derfeldt, Ylva and
      Reed, Julia and
      Burchell, Andrew},
    booktitle = "Proceedings of the 1st Workshop on Linguistic Analysis for Health ({H}ea{L}ing 2026)",
    month = mar,
    year = "2026",
    address = "Rabat, Morocco",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2026.healing-1.12/",
    pages = "142--157",
    ISBN = "979-8-89176-367-8",
    abstract = "Automatic evaluation of open-ended question answering in specialized domains remains challenging mainly because it relies on manual annotations from domain experts. In this work, we assess the ability of several large language models (LLMs), including closed-access (GPT-5.1, Gemini-2.5-Pro), open-source general-purpose (Qwen-80B), and biomedical domain-adapted models (MedGemma-27B, Phi-3.5-mini variants), to act as automatic evaluators of semantic equivalence in French medical open-ended QA. Our analysis reveals that LLM-based judgments are sensitive to the source of answer generation: judgement correlation varies substantially across different generator models. Among the judges, MedGemma-27B and Qwen-80B achieve the highest agreement with expert annotations in terms of F1 score and Pearson correlation. We further explore lightweight adaptation strategies on Phi-3.5-mini using supervised fine-tuning (SFT) and Group Relative Policy Optimization (GRPO). Even with 184 training instances, these adaptations significantly improve Phi-3.5{'}s results and reduce variability across answer generators, achieving performance comparable to larger domain-adapted models. Our results highlight the importance of generator-aware evaluation, the limitations of general-purpose LLMs in domain-specific settings, and the effectiveness of lightweight adaptation for compact models in low-resource scenarios."
}
```
## Base Model

ik-ram28/phi-judge-sft-grpo is fine-tuned from microsoft/Phi-3.5-mini-instruct.