anonymousreview111 committed (verified)
Commit ec09659 · Parent(s): d88ff5a

Upload judgesense-benchmark dataset
README.md CHANGED
@@ -1,158 +1,159 @@
- # JudgeSense: A Benchmark for Prompt Sensitivity in LLM-as-a-Judge Systems
-
- [![License: CC-BY-4.0](https://img.shields.io/badge/License-CC--BY--4.0-lightgrey.svg)](https://creativecommons.org/licenses/by/4.0/)
- [![arXiv](https://img.shields.io/badge/arXiv-[REDACTED]-red.svg)]()
- [![HuggingFace](https://img.shields.io/badge/dataset-HuggingFace-orange.svg)](https://huggingface.co/datasets/anonymousreview111/judgesense-benchmark)
-
- ---
-
- ## Overview
-
- **JudgeSense** is a benchmark dataset of **500 hand-validated prompt pairs** for measuring prompt sensitivity in LLM-as-a-Judge evaluation systems. Each pair contains two differently phrased but semantically equivalent judge prompts applied to the same response, enabling rigorous measurement of how much a judge's decision changes due to prompt wording alone.
-
- All 500 pairs were validated by a human annotator: 450 confirmed semantically equivalent; 50 pairs involving Template 4 (polarity-inverted) are flagged and handled via label remapping in the evaluation code.
-
- The dataset covers four evaluation task types:
-
- | Task | Source | Pairs | Labels |
- |------|--------|-------|--------|
- | **Factuality** | TruthfulQA | 125 | accurate / inaccurate |
- | **Coherence** | SummEval | 125 | score_1 ... score_5 |
- | **Preference** | MT-Bench | 125 | A / B |
- | **Relevance** | BEIR | 125 | A / B |
-
- ---
-
- ## What This Enables
-
- - **Prompt sensitivity evaluation** — measure how fragile a judge is to phrasing variation
- - **LLM judge robustness benchmarking** — compare models on decision consistency
- - **Detection of prompt-induced artifacts** — identify polarity inversions (T4) and other systematic biases
-
- ---
-
- ## Quick Start
-
- ```python
- from utils.load_judgesense import load_task, load_all
- from utils.compute_jss import compute_jss
-
- # Load one task
- pairs = load_task("factuality")
- print(f"{len(pairs)} pairs loaded")
-
- # Load all tasks
- all_data = load_all()
-
- # Compute JSS from your judge's decisions
- jss = compute_jss(decisions_a, decisions_b)
- print(f"JSS: {jss:.3f}")
- ```
-
- Run the full example:
-
- ```bash
- cd judgesense-benchmark
- python examples/run_jss_example.py
- ```
-
- ---
-
- ## Dataset Schema
-
- Each JSONL record has eight fields:
-
- ```json
- {
- "pair_id": "fact_001",
- "task_type": "factuality",
- "source_benchmark": "TruthfulQA",
- "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: ...",
- "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: ...",
- "response_being_judged": "The Earth orbits around the Sun.",
- "ground_truth_label": "accurate",
- "semantic_equivalence_score": 1.0
- }
- ```
-
- ---
-
- ## Metric: Judge Sensitivity Score (JSS)
-
- JSS is the fraction of pairs where both prompt variants elicit the same decision from the judge:
-
- ```
- JSS = (1/N) * sum( decisions_a[i] == decisions_b[i] )
- ```
-
- - **JSS = 1.0** — perfectly consistent; the judge never changes its decision due to prompt phrasing
- - **JSS = 0.0** — maximally sensitive; every decision flips between prompts
-
- A high flip rate (= 1 - JSS) indicates the judge's apparent decisions are largely driven by prompt design rather than the content being evaluated.
-
- ---
-
- ## Benchmark Results (13 judges, pass-2)
-
- ### Coherence (most discriminating task)
-
- | Model | JSS | Cohen's kappa |
- |---|---|---|
- | Claude Sonnet 4.5 | 0.99 | 0.986 |
- | Qwen-2.5-72B | 0.92 | 0.846 |
- | GPT-4o | 0.92 | 0.828 |
- | GPT-5.5 | 0.83 | 0.694 |
- | GPT-4o-mini | 0.78 | 0.627 |
- | Claude Haiku 4.5 | 0.73 | 0.583 |
- | Claude Opus 4.7 | 0.70 | 0.576 |
- | LLaMA-3.1-70B | 0.55 | 0.338 |
- | DeepSeek-R1 | 0.53 | 0.326 |
- | Qwen 3.6 Flash | 0.51 | 0.372 |
- | DeepSeek-V4 Flash | 0.50 | 0.350 |
- | Mistral-7B | 0.48 | -0.082 |
- | Gemini 2.5 Flash | 0.39 | -0.053 |
-
- ### Factuality (after T4 polarity correction)
-
- | Model | JSS (raw) | JSS (corrected) | Delta |
- |---|---|---|---|
- | GPT-4o | 0.63 | 1.00 | +0.37 |
- | GPT-4o-mini | 0.63 | 1.00 | +0.37 |
- | Claude Haiku 4.5 | 0.63 | 1.00 | +0.37 |
- | Claude Sonnet 4.5 | 0.63 | 1.00 | +0.37 |
- | DeepSeek-R1 | 0.63 | 1.00 | +0.37 |
- | LLaMA-3.1-70B | 0.63 | 1.00 | +0.37 |
- | Gemini 2.5 Flash | 0.63 | 1.00 | +0.37 |
- | Qwen-2.5-72B | 0.63 | 1.00 | +0.37 |
- | Mistral-7B | 0.71 | 0.88 | +0.17 |
- | GPT-5.5 | 0.63 | 1.00 | +0.37 |
- | Claude Opus 4.7 | 0.63 | 1.00 | +0.37 |
- | Qwen 3.6 Flash | 0.63 | 1.00 | +0.37 |
- | DeepSeek-V4 Flash | 0.62 | 0.99 | +0.37 |
-
- ---
-
- ## Key Insights
-
- > **Coherence JSS varies by more than 0.6 units across 13 judges and does not track model scale or recency.**
-
- - Claude Opus 4.7 (0.70) scores lower than Claude Haiku 4.5 (0.73); GPT-5.5 (0.83) scores lower than GPT-4o (0.92)
- - Factuality sensitivity is entirely driven by Template 4 polarity inversion, not by model-level inconsistency
- - Preference and relevance JSS are degenerate (12 of 13 judges always select option A)
-
- ---
-
- ## Citation
-
- If you use JudgeSense in your research, please cite the accompanying paper (details redacted for double-blind review).
-
- ---
-
- ## License
-
- - **Dataset**: [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/)
- - **Code**: MIT License
-
- ---
-
- *Anonymous submission for double-blind review. All evaluations conducted on public benchmarks and APIs.*
+ # JudgeSense: A Benchmark for Prompt Sensitivity in LLM-as-a-Judge Systems
+
+ [![License: CC-BY-4.0](https://img.shields.io/badge/License-CC--BY--4.0-lightgrey.svg)](https://creativecommons.org/licenses/by/4.0/)
+ [![arXiv](https://img.shields.io/badge/arXiv-[REDACTED]-red.svg)]()
+ [![HuggingFace](https://img.shields.io/badge/dataset-HuggingFace-orange.svg)](https://huggingface.co/datasets/anonymousreview111/judgesense-benchmark)
+
+ ---
+
+ ## Overview
+
+ **JudgeSense** is a benchmark dataset of **450 hand-validated prompt pairs** for measuring prompt sensitivity in LLM-as-a-Judge evaluation systems. Each pair contains two differently phrased but semantically equivalent judge prompts applied to the same response, enabling rigorous measurement of how much a judge's decision changes due to prompt wording alone.
+
+ All 450 pairs were validated by a human annotator and confirmed semantically equivalent. The 50 factuality pairs built on Template 4 (polarity-inverted) were flagged during validation and have been removed in this release; the earlier 500-pair version handled them via label remapping in the evaluation code.
+
+ The dataset covers four evaluation task types:
+
+ | Task | Source | Pairs | Labels |
+ |------|--------|-------|--------|
+ | **Factuality** | TruthfulQA | 75 | accurate / inaccurate |
+ | **Coherence** | SummEval | 125 | score_1 ... score_5 |
+ | **Preference** | MT-Bench | 125 | A / B |
+ | **Relevance** | BEIR | 125 | A / B |
+
+ ---
+
+ ## What This Enables
+
+ - **Prompt sensitivity evaluation** — measure how fragile a judge is to phrasing variation
+ - **LLM judge robustness benchmarking** — compare models on decision consistency
+ - **Detection of prompt-induced artifacts** — identify polarity inversions (T4) and other systematic biases
+
+ ---
+
+ ## Quick Start
+
+ ```python
+ from utils.load_judgesense import load_task, load_all
+ from utils.compute_jss import compute_jss
+
+ # Load one task
+ pairs = load_task("factuality")
+ print(f"{len(pairs)} pairs loaded")
+
+ # Load all tasks
+ all_data = load_all()
+
+ # Compute JSS from your judge's decisions: decisions_a[i] and decisions_b[i]
+ # are the judge's outputs for pair i under prompt_a and prompt_b respectively
+ jss = compute_jss(decisions_a, decisions_b)
+ print(f"JSS: {jss:.3f}")
+ ```
+
+ Run the full example:
+
+ ```bash
+ cd judgesense-benchmark
+ python examples/run_jss_example.py
+ ```
+
+ ---
+
+ ## Dataset Schema
+
+ Each JSONL record has nine fields (`ab_swapped` is new in this release):
+
+ ```json
+ {
+   "pair_id": "fact_001",
+   "task_type": "factuality",
+   "source_benchmark": "TruthfulQA",
+   "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: ...",
+   "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: ...",
+   "response_being_judged": "The Earth orbits around the Sun.",
+   "ground_truth_label": "accurate",
+   "semantic_equivalence_score": 1.0,
+   "ab_swapped": false
+ }
+ ```
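+
+ Since each line is a standalone JSON object, the files can also be read without the bundled helpers. A minimal sketch, assuming the `data/<task>.jsonl` layout used in this commit:
+
+ ```python
+ import json
+ from pathlib import Path
+
+ def load_jsonl(path):
+     """Read one JSONL file into a list of record dicts (one per line)."""
+     with open(path, encoding="utf-8") as f:
+         return [json.loads(line) for line in f if line.strip()]
+
+ pairs = load_jsonl(Path("data") / "factuality.jsonl")
+ print(pairs[0]["pair_id"], pairs[0]["ground_truth_label"])  # fact_001 accurate
+ ```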
+
+ ---
+
+ ## Metric: Judge Sensitivity Score (JSS)
+
+ JSS is the fraction of pairs where both prompt variants elicit the same decision from the judge:
+
+ ```
+ JSS = (1/N) * sum( decisions_a[i] == decisions_b[i] )
+ ```
+
+ - **JSS = 1.0** — perfectly consistent; the judge never changes its decision due to prompt phrasing
+ - **JSS = 0.0** — maximally sensitive; every decision flips between prompts
+
+ A high flip rate (= 1 - JSS) indicates the judge's apparent decisions are largely driven by prompt design rather than the content being evaluated.
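+
+ As a minimal sketch of this definition (the released `compute_jss` in `utils/` may differ in details such as input validation):
+
+ ```python
+ def compute_jss(decisions_a, decisions_b):
+     """Fraction of pairs whose two prompt variants received the same decision."""
+     if len(decisions_a) != len(decisions_b):
+         raise ValueError("decision lists must be aligned pair-by-pair")
+     matches = sum(a == b for a, b in zip(decisions_a, decisions_b))
+     return matches / len(decisions_a)
+
+ # 3 of 4 pairs agree -> JSS = 0.75, flip rate = 0.25
+ print(compute_jss(["YES", "NO", "YES", "YES"], ["YES", "NO", "NO", "YES"]))
+ ```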
+
+ ---
+
+ ## Benchmark Results (13 judges, pass-2)
+
+ ### Coherence (most discriminating task)
+
+ | Model | JSS | Cohen's kappa |
+ |---|---|---|
+ | Claude Sonnet 4.5 | 0.99 | 0.986 |
+ | Qwen-2.5-72B | 0.92 | 0.846 |
+ | GPT-4o | 0.92 | 0.828 |
+ | GPT-5.5 | 0.83 | 0.694 |
+ | GPT-4o-mini | 0.78 | 0.627 |
+ | Claude Haiku 4.5 | 0.73 | 0.583 |
+ | Claude Opus 4.7 | 0.70 | 0.576 |
+ | LLaMA-3.1-70B | 0.55 | 0.338 |
+ | DeepSeek-R1 | 0.53 | 0.326 |
+ | Qwen 3.6 Flash | 0.51 | 0.372 |
+ | DeepSeek-V4 Flash | 0.50 | 0.350 |
+ | Mistral-7B | 0.48 | -0.082 |
+ | Gemini 2.5 Flash | 0.39 | -0.053 |
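+
+ The kappa column is presumably standard Cohen's kappa between the two variants' decision lists, i.e. raw agreement corrected for chance agreement (which is why near-random judges can go slightly negative). A self-contained sketch:
+
+ ```python
+ from collections import Counter
+
+ def cohens_kappa(a, b):
+     """Chance-corrected agreement between two aligned decision lists."""
+     n = len(a)
+     p_o = sum(x == y for x, y in zip(a, b)) / n                  # observed agreement
+     ca, cb = Counter(a), Counter(b)
+     p_e = sum(ca[k] * cb[k] for k in set(ca) & set(cb)) / n**2   # expected by chance
+     return 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)
+ ```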
+
+ ### Factuality (after T4 polarity correction)
+
+ | Model | JSS (raw) | JSS (corrected) | Delta |
+ |---|---|---|---|
+ | GPT-4o | 0.63 | 1.00 | +0.37 |
+ | GPT-4o-mini | 0.63 | 1.00 | +0.37 |
+ | Claude Haiku 4.5 | 0.63 | 1.00 | +0.37 |
+ | Claude Sonnet 4.5 | 0.63 | 1.00 | +0.37 |
+ | DeepSeek-R1 | 0.63 | 1.00 | +0.37 |
+ | LLaMA-3.1-70B | 0.63 | 1.00 | +0.37 |
+ | Gemini 2.5 Flash | 0.63 | 1.00 | +0.37 |
+ | Qwen-2.5-72B | 0.63 | 1.00 | +0.37 |
+ | Mistral-7B | 0.71 | 0.88 | +0.17 |
+ | GPT-5.5 | 0.63 | 1.00 | +0.37 |
+ | Claude Opus 4.7 | 0.63 | 1.00 | +0.37 |
+ | Qwen 3.6 Flash | 0.63 | 1.00 | +0.37 |
+ | DeepSeek-V4 Flash | 0.62 | 0.99 | +0.37 |
+
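+ The correction itself is mechanical. Template 4 asks whether the response *contains errors*, so its YES/NO semantics are inverted relative to the other templates. An illustrative remapping, using a hypothetical helper (not part of the released utils) to make the "corrected" column concrete:
+
+ ```python
+ # Hypothetical helper, shown only for illustration.
+ def correct_t4(decision, prompt):
+     """Flip YES/NO for Template 4 prompts, whose polarity is inverted
+     ("contains errors?" vs. "is correct?"), before computing agreement."""
+     if "contain factual errors" in prompt:  # T4 wording used in this dataset
+         return {"YES": "NO", "NO": "YES"}.get(decision, decision)
+     return decision
+ ```
+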
+ ---
+
+ ## Key Insights
+
+ > **Coherence JSS spans a 0.60 range across the 13 judges (0.39 to 0.99) and does not track model scale or recency.**
+
+ - Claude Opus 4.7 (0.70) scores lower than Claude Haiku 4.5 (0.73); GPT-5.5 (0.83) scores lower than GPT-4o (0.92)
+ - Factuality sensitivity is entirely driven by Template 4 polarity inversion, not by model-level inconsistency
+ - Preference and relevance JSS are degenerate (12 of 13 judges always select option A), so agreement there reflects position bias rather than robustness; see the sketch below
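+
+ A quick way to reproduce that degeneracy check on the A/B tasks, as a hypothetical helper (not part of the released utils):
+
+ ```python
+ def option_a_rate(decisions):
+     """Share of A/B decisions that pick option A: ~0.5 suggests no position
+     bias; ~1.0 matches the degenerate behavior noted above."""
+     return sum(d == "A" for d in decisions) / len(decisions)
+ ```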
+
+ ---
+
+ ## Citation
+
+ If you use JudgeSense in your research, please cite the accompanying paper (details redacted for double-blind review).
+
+ ---
+
+ ## License
+
+ - **Dataset**: [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/)
+ - **Code**: MIT License
+
+ ---
+
+ *Anonymous submission for double-blind review. All evaluations conducted on public benchmarks and APIs.*
data/factuality.jsonl CHANGED
@@ -1,125 +1,75 @@
  {"pair_id": "fact_001", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: The Earth orbits around the Sun.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: The Earth orbits around the Sun.", "response_being_judged": "The Earth orbits around the Sun.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_002", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Are the facts in this response accurate? YES or NO.\n\nResponse: The Earth orbits around the Sun.", "prompt_b": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: The Earth orbits around the Sun.", "response_being_judged": "The Earth orbits around the Sun.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_003", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: The Earth orbits around the Sun.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: The Earth orbits around the Sun.", "response_being_judged": "The Earth orbits around the Sun.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_004", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: The Earth orbits around the Sun.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: The Earth orbits around the Sun.", "response_being_judged": "The Earth orbits around the Sun.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_005", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: The Earth orbits around the Sun.", "prompt_b": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: The Earth orbits around the Sun.", "response_being_judged": "The Earth orbits around the Sun.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_006", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: The Earth orbits around the Sun.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: The Earth orbits around the Sun.", "response_being_judged": "The Earth orbits around the Sun.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_007", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Are the facts in this response accurate? YES or NO.\n\nResponse: The Earth orbits around the Sun.", "prompt_b": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: The Earth orbits around the Sun.", "response_being_judged": "The Earth orbits around the Sun.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_008", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: The Earth orbits around the Sun.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: The Earth orbits around the Sun.", "response_being_judged": "The Earth orbits around the Sun.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_009", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: The Earth orbits around the Sun.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: The Earth orbits around the Sun.", "response_being_judged": "The Earth orbits around the Sun.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_010", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: The Earth orbits around the Sun.", "prompt_b": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: The Earth orbits around the Sun.", "response_being_judged": "The Earth orbits around the Sun.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_011", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: The Earth orbits around the Sun.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: The Earth orbits around the Sun.", "response_being_judged": "The Earth orbits around the Sun.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_012", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Are the facts in this response accurate? YES or NO.\n\nResponse: The Earth orbits around the Sun.", "prompt_b": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: The Earth orbits around the Sun.", "response_being_judged": "The Earth orbits around the Sun.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_013", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: The Earth orbits around the Sun.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: The Earth orbits around the Sun.", "response_being_judged": "The Earth orbits around the Sun.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_014", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Paris is the capital of France.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: Paris is the capital of France.", "response_being_judged": "Paris is the capital of France.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_015", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: Paris is the capital of France.", "prompt_b": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Paris is the capital of France.", "response_being_judged": "Paris is the capital of France.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_016", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: Paris is the capital of France.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Paris is the capital of France.", "response_being_judged": "Paris is the capital of France.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_017", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Are the facts in this response accurate? YES or NO.\n\nResponse: Paris is the capital of France.", "prompt_b": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: Paris is the capital of France.", "response_being_judged": "Paris is the capital of France.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_018", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Paris is the capital of France.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: Paris is the capital of France.", "response_being_judged": "Paris is the capital of France.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_019", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Paris is the capital of France.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: Paris is the capital of France.", "response_being_judged": "Paris is the capital of France.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_020", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: Paris is the capital of France.", "prompt_b": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Paris is the capital of France.", "response_being_judged": "Paris is the capital of France.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_021", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: Paris is the capital of France.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Paris is the capital of France.", "response_being_judged": "Paris is the capital of France.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_022", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Are the facts in this response accurate? YES or NO.\n\nResponse: Paris is the capital of France.", "prompt_b": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: Paris is the capital of France.", "response_being_judged": "Paris is the capital of France.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_023", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Paris is the capital of France.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: Paris is the capital of France.", "response_being_judged": "Paris is the capital of France.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_024", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Paris is the capital of France.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: Paris is the capital of France.", "response_being_judged": "Paris is the capital of France.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_025", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: Paris is the capital of France.", "prompt_b": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Paris is the capital of France.", "response_being_judged": "Paris is the capital of France.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_026", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: Paris is the capital of France.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Paris is the capital of France.", "response_being_judged": "Paris is the capital of France.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_027", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Are the facts in this response accurate? YES or NO.\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "prompt_b": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "response_being_judged": "Water boils at 100 degrees Celsius at sea level.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_028", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "response_being_judged": "Water boils at 100 degrees Celsius at sea level.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_029", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "response_being_judged": "Water boils at 100 degrees Celsius at sea level.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_030", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "prompt_b": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "response_being_judged": "Water boils at 100 degrees Celsius at sea level.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_031", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "response_being_judged": "Water boils at 100 degrees Celsius at sea level.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_032", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Are the facts in this response accurate? YES or NO.\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "prompt_b": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "response_being_judged": "Water boils at 100 degrees Celsius at sea level.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_033", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "response_being_judged": "Water boils at 100 degrees Celsius at sea level.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_034", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "response_being_judged": "Water boils at 100 degrees Celsius at sea level.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_035", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "prompt_b": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "response_being_judged": "Water boils at 100 degrees Celsius at sea level.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_036", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "response_being_judged": "Water boils at 100 degrees Celsius at sea level.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_037", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Are the facts in this response accurate? YES or NO.\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "prompt_b": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "response_being_judged": "Water boils at 100 degrees Celsius at sea level.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_038", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "response_being_judged": "Water boils at 100 degrees Celsius at sea level.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_039", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "response_being_judged": "Water boils at 100 degrees Celsius at sea level.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_040", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: DNA is a protein molecule found in cells.", "prompt_b": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: DNA is a protein molecule found in cells.", "response_being_judged": "DNA is a protein molecule found in cells.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_041", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: DNA is a protein molecule found in cells.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: DNA is a protein molecule found in cells.", "response_being_judged": "DNA is a protein molecule found in cells.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_042", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Are the facts in this response accurate? YES or NO.\n\nResponse: DNA is a protein molecule found in cells.", "prompt_b": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: DNA is a protein molecule found in cells.", "response_being_judged": "DNA is a protein molecule found in cells.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_043", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: DNA is a protein molecule found in cells.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: DNA is a protein molecule found in cells.", "response_being_judged": "DNA is a protein molecule found in cells.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_044", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: DNA is a protein molecule found in cells.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: DNA is a protein molecule found in cells.", "response_being_judged": "DNA is a protein molecule found in cells.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_045", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: DNA is a protein molecule found in cells.", "prompt_b": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: DNA is a protein molecule found in cells.", "response_being_judged": "DNA is a protein molecule found in cells.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_046", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: DNA is a protein molecule found in cells.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: DNA is a protein molecule found in cells.", "response_being_judged": "DNA is a protein molecule found in cells.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_047", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Are the facts in this response accurate? YES or NO.\n\nResponse: DNA is a protein molecule found in cells.", "prompt_b": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: DNA is a protein molecule found in cells.", "response_being_judged": "DNA is a protein molecule found in cells.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_048", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: DNA is a protein molecule found in cells.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: DNA is a protein molecule found in cells.", "response_being_judged": "DNA is a protein molecule found in cells.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_049", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: DNA is a protein molecule found in cells.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: DNA is a protein molecule found in cells.", "response_being_judged": "DNA is a protein molecule found in cells.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_050", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: DNA is a protein molecule found in cells.", "prompt_b": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: DNA is a protein molecule found in cells.", "response_being_judged": "DNA is a protein molecule found in cells.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_051", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: DNA is a protein molecule found in cells.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: DNA is a protein molecule found in cells.", "response_being_judged": "DNA is a protein molecule found in cells.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_052", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Are the facts in this response accurate? YES or NO.\n\nResponse: DNA is a protein molecule found in cells.", "prompt_b": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: DNA is a protein molecule found in cells.", "response_being_judged": "DNA is a protein molecule found in cells.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_053", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: The United States has 50 states.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: The United States has 50 states.", "response_being_judged": "The United States has 50 states.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_054", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: The United States has 50 states.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: The United States has 50 states.", "response_being_judged": "The United States has 50 states.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_055", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: The United States has 50 states.", "prompt_b": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: The United States has 50 states.", "response_being_judged": "The United States has 50 states.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_056", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: The United States has 50 states.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: The United States has 50 states.", "response_being_judged": "The United States has 50 states.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_057", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Are the facts in this response accurate? YES or NO.\n\nResponse: The United States has 50 states.", "prompt_b": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: The United States has 50 states.", "response_being_judged": "The United States has 50 states.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_058", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: The United States has 50 states.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: The United States has 50 states.", "response_being_judged": "The United States has 50 states.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_059", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: The United States has 50 states.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: The United States has 50 states.", "response_being_judged": "The United States has 50 states.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_060", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: The United States has 50 states.", "prompt_b": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: The United States has 50 states.", "response_being_judged": "The United States has 50 states.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_061", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: The United States has 50 states.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: The United States has 50 states.", "response_being_judged": "The United States has 50 states.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_062", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Are the facts in this response accurate? YES or NO.\n\nResponse: The United States has 50 states.", "prompt_b": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: The United States has 50 states.", "response_being_judged": "The United States has 50 states.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_063", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: The United States has 50 states.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: The United States has 50 states.", "response_being_judged": "The United States has 50 states.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_064", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: The United States has 50 states.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: The United States has 50 states.", "response_being_judged": "The United States has 50 states.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_065", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: The United States has 50 states.", "prompt_b": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: The United States has 50 states.", "response_being_judged": "The United States has 50 states.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_066", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: Mount Everest is the tallest mountain in the solar system.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Mount Everest is the tallest mountain in the solar system.", "response_being_judged": "Mount Everest is the tallest mountain in the solar system.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_067", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Are the facts in this response accurate? YES or NO.\n\nResponse: Mount Everest is the tallest mountain in the solar system.", "prompt_b": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: Mount Everest is the tallest mountain in the solar system.", "response_being_judged": "Mount Everest is the tallest mountain in the solar system.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_068", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Mount Everest is the tallest mountain in the solar system.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: Mount Everest is the tallest mountain in the solar system.", "response_being_judged": "Mount Everest is the tallest mountain in the solar system.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_069", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Mount Everest is the tallest mountain in the solar system.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: Mount Everest is the tallest mountain in the solar system.", "response_being_judged": "Mount Everest is the tallest mountain in the solar system.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_070", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: Mount Everest is the tallest mountain in the solar system.", "prompt_b": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Mount Everest is the tallest mountain in the solar system.", "response_being_judged": "Mount Everest is the tallest mountain in the solar system.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_071", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: Mount Everest is the tallest mountain in the solar system.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Mount Everest is the tallest mountain in the solar system.", "response_being_judged": "Mount Everest is the tallest mountain in the solar system.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_072", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Are the facts in this response accurate? YES or NO.\n\nResponse: Mount Everest is the tallest mountain in the solar system.", "prompt_b": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: Mount Everest is the tallest mountain in the solar system.", "response_being_judged": "Mount Everest is the tallest mountain in the solar system.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_073", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Mount Everest is the tallest mountain in the solar system.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: Mount Everest is the tallest mountain in the solar system.", "response_being_judged": "Mount Everest is the tallest mountain in the solar system.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_074", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Mount Everest is the tallest mountain in the solar system.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: Mount Everest is the tallest mountain in the solar system.", "response_being_judged": "Mount Everest is the tallest mountain in the solar system.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_075", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: Mount Everest is the tallest mountain in the solar system.", "prompt_b": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Mount Everest is the tallest mountain in the solar system.", "response_being_judged": "Mount Everest is the tallest mountain in the solar system.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_076", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: Mount Everest is the tallest mountain in the solar system.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Mount Everest is the tallest mountain in the solar system.", "response_being_judged": "Mount Everest is the tallest mountain in the solar system.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_077", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Are the facts in this response accurate? YES or NO.\n\nResponse: Mount Everest is the tallest mountain in the solar system.", "prompt_b": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: Mount Everest is the tallest mountain in the solar system.", "response_being_judged": "Mount Everest is the tallest mountain in the solar system.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_078", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "response_being_judged": "Photosynthesis converts sunlight into chemical energy.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_079", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "response_being_judged": "Photosynthesis converts sunlight into chemical energy.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_080", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "prompt_b": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "response_being_judged": "Photosynthesis converts sunlight into chemical energy.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_081", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "response_being_judged": "Photosynthesis converts sunlight into chemical energy.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_082", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Are the facts in this response accurate? YES or NO.\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "prompt_b": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "response_being_judged": "Photosynthesis converts sunlight into chemical energy.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_083", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "response_being_judged": "Photosynthesis converts sunlight into chemical energy.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_084", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "response_being_judged": "Photosynthesis converts sunlight into chemical energy.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_085", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "prompt_b": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "response_being_judged": "Photosynthesis converts sunlight into chemical energy.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_086", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "response_being_judged": "Photosynthesis converts sunlight into chemical energy.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_087", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Are the facts in this response accurate? YES or NO.\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "prompt_b": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "response_being_judged": "Photosynthesis converts sunlight into chemical energy.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_088", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "response_being_judged": "Photosynthesis converts sunlight into chemical energy.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_089", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "response_being_judged": "Photosynthesis converts sunlight into chemical energy.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_090", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: Quantum mechanics describes particles larger than atoms.", "prompt_b": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Quantum mechanics describes particles larger than atoms.", "response_being_judged": "Quantum mechanics describes particles larger than atoms.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_091", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: Quantum mechanics describes particles larger than atoms.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Quantum mechanics describes particles larger than atoms.", "response_being_judged": "Quantum mechanics describes particles larger than atoms.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_092", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Are the facts in this response accurate? YES or NO.\n\nResponse: Quantum mechanics describes particles larger than atoms.", "prompt_b": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: Quantum mechanics describes particles larger than atoms.", "response_being_judged": "Quantum mechanics describes particles larger than atoms.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_093", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Quantum mechanics describes particles larger than atoms.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: Quantum mechanics describes particles larger than atoms.", "response_being_judged": "Quantum mechanics describes particles larger than atoms.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_094", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Quantum mechanics describes particles larger than atoms.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: Quantum mechanics describes particles larger than atoms.", "response_being_judged": "Quantum mechanics describes particles larger than atoms.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_095", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: Quantum mechanics describes particles larger than atoms.", "prompt_b": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Quantum mechanics describes particles larger than atoms.", "response_being_judged": "Quantum mechanics describes particles larger than atoms.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_096", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: Quantum mechanics describes particles larger than atoms.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Quantum mechanics describes particles larger than atoms.", "response_being_judged": "Quantum mechanics describes particles larger than atoms.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_097", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Are the facts in this response accurate? YES or NO.\n\nResponse: Quantum mechanics describes particles larger than atoms.", "prompt_b": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: Quantum mechanics describes particles larger than atoms.", "response_being_judged": "Quantum mechanics describes particles larger than atoms.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_098", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Quantum mechanics describes particles larger than atoms.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: Quantum mechanics describes particles larger than atoms.", "response_being_judged": "Quantum mechanics describes particles larger than atoms.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_099", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Quantum mechanics describes particles larger than atoms.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: Quantum mechanics describes particles larger than atoms.", "response_being_judged": "Quantum mechanics describes particles larger than atoms.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_100", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: Quantum mechanics describes particles larger than atoms.", "prompt_b": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Quantum mechanics describes particles larger than atoms.", "response_being_judged": "Quantum mechanics describes particles larger than atoms.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_101", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: Quantum mechanics describes particles larger than atoms.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Quantum mechanics describes particles larger than atoms.", "response_being_judged": "Quantum mechanics describes particles larger than atoms.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_102", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Are the facts in this response accurate? YES or NO.\n\nResponse: The human heart pumps blood to the lungs and body.", "prompt_b": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: The human heart pumps blood to the lungs and body.", "response_being_judged": "The human heart pumps blood to the lungs and body.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_103", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: The human heart pumps blood to the lungs and body.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: The human heart pumps blood to the lungs and body.", "response_being_judged": "The human heart pumps blood to the lungs and body.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_104", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: The human heart pumps blood to the lungs and body.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: The human heart pumps blood to the lungs and body.", "response_being_judged": "The human heart pumps blood to the lungs and body.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_105", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: The human heart pumps blood to the lungs and body.", "prompt_b": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: The human heart pumps blood to the lungs and body.", "response_being_judged": "The human heart pumps blood to the lungs and body.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_106", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: The human heart pumps blood to the lungs and body.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: The human heart pumps blood to the lungs and body.", "response_being_judged": "The human heart pumps blood to the lungs and body.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_107", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Are the facts in this response accurate? YES or NO.\n\nResponse: The human heart pumps blood to the lungs and body.", "prompt_b": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: The human heart pumps blood to the lungs and body.", "response_being_judged": "The human heart pumps blood to the lungs and body.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_108", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: The human heart pumps blood to the lungs and body.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: The human heart pumps blood to the lungs and body.", "response_being_judged": "The human heart pumps blood to the lungs and body.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_109", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: The human heart pumps blood to the lungs and body.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: The human heart pumps blood to the lungs and body.", "response_being_judged": "The human heart pumps blood to the lungs and body.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_110", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: The human heart pumps blood to the lungs and body.", "prompt_b": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: The human heart pumps blood to the lungs and body.", "response_being_judged": "The human heart pumps blood to the lungs and body.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_111", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: The human heart pumps blood to the lungs and body.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: The human heart pumps blood to the lungs and body.", "response_being_judged": "The human heart pumps blood to the lungs and body.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_112", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Are the facts in this response accurate? YES or NO.\n\nResponse: The human heart pumps blood to the lungs and body.", "prompt_b": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: The human heart pumps blood to the lungs and body.", "response_being_judged": "The human heart pumps blood to the lungs and body.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_113", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: The human heart pumps blood to the lungs and body.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: The human heart pumps blood to the lungs and body.", "response_being_judged": "The human heart pumps blood to the lungs and body.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_114", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Gravitational waves were theorized by Newton.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: Gravitational waves were theorized by Newton.", "response_being_judged": "Gravitational waves were theorized by Newton.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_115", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: Gravitational waves were theorized by Newton.", "prompt_b": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Gravitational waves were theorized by Newton.", "response_being_judged": "Gravitational waves were theorized by Newton.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_116", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: Gravitational waves were theorized by Newton.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Gravitational waves were theorized by Newton.", "response_being_judged": "Gravitational waves were theorized by Newton.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_117", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Are the facts in this response accurate? YES or NO.\n\nResponse: Gravitational waves were theorized by Newton.", "prompt_b": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: Gravitational waves were theorized by Newton.", "response_being_judged": "Gravitational waves were theorized by Newton.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_118", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Gravitational waves were theorized by Newton.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: Gravitational waves were theorized by Newton.", "response_being_judged": "Gravitational waves were theorized by Newton.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_119", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Gravitational waves were theorized by Newton.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: Gravitational waves were theorized by Newton.", "response_being_judged": "Gravitational waves were theorized by Newton.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_120", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: Gravitational waves were theorized by Newton.", "prompt_b": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Gravitational waves were theorized by Newton.", "response_being_judged": "Gravitational waves were theorized by Newton.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_121", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: Gravitational waves were theorized by Newton.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Gravitational waves were theorized by Newton.", "response_being_judged": "Gravitational waves were theorized by Newton.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_122", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Are the facts in this response accurate? YES or NO.\n\nResponse: Gravitational waves were theorized by Newton.", "prompt_b": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: Gravitational waves were theorized by Newton.", "response_being_judged": "Gravitational waves were theorized by Newton.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_123", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Gravitational waves were theorized by Newton.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: Gravitational waves were theorized by Newton.", "response_being_judged": "Gravitational waves were theorized by Newton.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_124", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Gravitational waves were theorized by Newton.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: Gravitational waves were theorized by Newton.", "response_being_judged": "Gravitational waves were theorized by Newton.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
- {"pair_id": "fact_125", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Does this response contain factual errors? Answer NO (accurate) or YES (has errors).\n\nResponse: Gravitational waves were theorized by Newton.", "prompt_b": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Gravitational waves were theorized by Newton.", "response_being_judged": "Gravitational waves were theorized by Newton.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_001", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: The Earth orbits around the Sun.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: The Earth orbits around the Sun.", "response_being_judged": "The Earth orbits around the Sun.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_003", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: The Earth orbits around the Sun.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: The Earth orbits around the Sun.", "response_being_judged": "The Earth orbits around the Sun.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_004", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: The Earth orbits around the Sun.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: The Earth orbits around the Sun.", "response_being_judged": "The Earth orbits around the Sun.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_006", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: The Earth orbits around the Sun.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: The Earth orbits around the Sun.", "response_being_judged": "The Earth orbits around the Sun.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_008", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: The Earth orbits around the Sun.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: The Earth orbits around the Sun.", "response_being_judged": "The Earth orbits around the Sun.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_009", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: The Earth orbits around the Sun.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: The Earth orbits around the Sun.", "response_being_judged": "The Earth orbits around the Sun.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_011", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: The Earth orbits around the Sun.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: The Earth orbits around the Sun.", "response_being_judged": "The Earth orbits around the Sun.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_013", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: The Earth orbits around the Sun.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: The Earth orbits around the Sun.", "response_being_judged": "The Earth orbits around the Sun.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_014", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Paris is the capital of France.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: Paris is the capital of France.", "response_being_judged": "Paris is the capital of France.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_016", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: Paris is the capital of France.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Paris is the capital of France.", "response_being_judged": "Paris is the capital of France.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_018", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Paris is the capital of France.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: Paris is the capital of France.", "response_being_judged": "Paris is the capital of France.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_019", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Paris is the capital of France.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: Paris is the capital of France.", "response_being_judged": "Paris is the capital of France.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_021", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: Paris is the capital of France.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Paris is the capital of France.", "response_being_judged": "Paris is the capital of France.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_023", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Paris is the capital of France.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: Paris is the capital of France.", "response_being_judged": "Paris is the capital of France.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_024", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Paris is the capital of France.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: Paris is the capital of France.", "response_being_judged": "Paris is the capital of France.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_026", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: Paris is the capital of France.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Paris is the capital of France.", "response_being_judged": "Paris is the capital of France.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_028", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "response_being_judged": "Water boils at 100 degrees Celsius at sea level.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_029", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "response_being_judged": "Water boils at 100 degrees Celsius at sea level.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_031", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "response_being_judged": "Water boils at 100 degrees Celsius at sea level.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_033", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "response_being_judged": "Water boils at 100 degrees Celsius at sea level.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_034", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "response_being_judged": "Water boils at 100 degrees Celsius at sea level.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_036", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "response_being_judged": "Water boils at 100 degrees Celsius at sea level.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_038", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "response_being_judged": "Water boils at 100 degrees Celsius at sea level.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_039", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: Water boils at 100 degrees Celsius at sea level.", "response_being_judged": "Water boils at 100 degrees Celsius at sea level.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_041", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: DNA is a protein molecule found in cells.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: DNA is a protein molecule found in cells.", "response_being_judged": "DNA is a protein molecule found in cells.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_043", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: DNA is a protein molecule found in cells.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: DNA is a protein molecule found in cells.", "response_being_judged": "DNA is a protein molecule found in cells.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_044", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: DNA is a protein molecule found in cells.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: DNA is a protein molecule found in cells.", "response_being_judged": "DNA is a protein molecule found in cells.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_046", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: DNA is a protein molecule found in cells.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: DNA is a protein molecule found in cells.", "response_being_judged": "DNA is a protein molecule found in cells.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_048", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: DNA is a protein molecule found in cells.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: DNA is a protein molecule found in cells.", "response_being_judged": "DNA is a protein molecule found in cells.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_049", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: DNA is a protein molecule found in cells.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: DNA is a protein molecule found in cells.", "response_being_judged": "DNA is a protein molecule found in cells.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_051", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: DNA is a protein molecule found in cells.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: DNA is a protein molecule found in cells.", "response_being_judged": "DNA is a protein molecule found in cells.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_053", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: The United States has 50 states.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: The United States has 50 states.", "response_being_judged": "The United States has 50 states.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_054", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: The United States has 50 states.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: The United States has 50 states.", "response_being_judged": "The United States has 50 states.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_056", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: The United States has 50 states.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: The United States has 50 states.", "response_being_judged": "The United States has 50 states.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_058", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: The United States has 50 states.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: The United States has 50 states.", "response_being_judged": "The United States has 50 states.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_059", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: The United States has 50 states.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: The United States has 50 states.", "response_being_judged": "The United States has 50 states.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_061", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: The United States has 50 states.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: The United States has 50 states.", "response_being_judged": "The United States has 50 states.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_063", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: The United States has 50 states.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: The United States has 50 states.", "response_being_judged": "The United States has 50 states.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_064", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: The United States has 50 states.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: The United States has 50 states.", "response_being_judged": "The United States has 50 states.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_066", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: Mount Everest is the tallest mountain in the solar system.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Mount Everest is the tallest mountain in the solar system.", "response_being_judged": "Mount Everest is the tallest mountain in the solar system.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_068", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Mount Everest is the tallest mountain in the solar system.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: Mount Everest is the tallest mountain in the solar system.", "response_being_judged": "Mount Everest is the tallest mountain in the solar system.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_069", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Mount Everest is the tallest mountain in the solar system.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: Mount Everest is the tallest mountain in the solar system.", "response_being_judged": "Mount Everest is the tallest mountain in the solar system.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_071", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: Mount Everest is the tallest mountain in the solar system.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Mount Everest is the tallest mountain in the solar system.", "response_being_judged": "Mount Everest is the tallest mountain in the solar system.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_073", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Mount Everest is the tallest mountain in the solar system.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: Mount Everest is the tallest mountain in the solar system.", "response_being_judged": "Mount Everest is the tallest mountain in the solar system.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_074", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Mount Everest is the tallest mountain in the solar system.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: Mount Everest is the tallest mountain in the solar system.", "response_being_judged": "Mount Everest is the tallest mountain in the solar system.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_076", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: Mount Everest is the tallest mountain in the solar system.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Mount Everest is the tallest mountain in the solar system.", "response_being_judged": "Mount Everest is the tallest mountain in the solar system.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_078", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "response_being_judged": "Photosynthesis converts sunlight into chemical energy.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_079", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "response_being_judged": "Photosynthesis converts sunlight into chemical energy.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_081", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "response_being_judged": "Photosynthesis converts sunlight into chemical energy.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_083", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "response_being_judged": "Photosynthesis converts sunlight into chemical energy.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_084", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "response_being_judged": "Photosynthesis converts sunlight into chemical energy.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_086", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "response_being_judged": "Photosynthesis converts sunlight into chemical energy.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_088", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "response_being_judged": "Photosynthesis converts sunlight into chemical energy.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_089", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: Photosynthesis converts sunlight into chemical energy.", "response_being_judged": "Photosynthesis converts sunlight into chemical energy.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_091", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: Quantum mechanics describes particles larger than atoms.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Quantum mechanics describes particles larger than atoms.", "response_being_judged": "Quantum mechanics describes particles larger than atoms.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_093", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Quantum mechanics describes particles larger than atoms.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: Quantum mechanics describes particles larger than atoms.", "response_being_judged": "Quantum mechanics describes particles larger than atoms.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_094", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Quantum mechanics describes particles larger than atoms.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: Quantum mechanics describes particles larger than atoms.", "response_being_judged": "Quantum mechanics describes particles larger than atoms.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_096", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: Quantum mechanics describes particles larger than atoms.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Quantum mechanics describes particles larger than atoms.", "response_being_judged": "Quantum mechanics describes particles larger than atoms.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_098", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Quantum mechanics describes particles larger than atoms.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: Quantum mechanics describes particles larger than atoms.", "response_being_judged": "Quantum mechanics describes particles larger than atoms.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_099", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Quantum mechanics describes particles larger than atoms.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: Quantum mechanics describes particles larger than atoms.", "response_being_judged": "Quantum mechanics describes particles larger than atoms.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_101", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: Quantum mechanics describes particles larger than atoms.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Quantum mechanics describes particles larger than atoms.", "response_being_judged": "Quantum mechanics describes particles larger than atoms.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_103", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: The human heart pumps blood to the lungs and body.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: The human heart pumps blood to the lungs and body.", "response_being_judged": "The human heart pumps blood to the lungs and body.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_104", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: The human heart pumps blood to the lungs and body.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: The human heart pumps blood to the lungs and body.", "response_being_judged": "The human heart pumps blood to the lungs and body.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_106", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: The human heart pumps blood to the lungs and body.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: The human heart pumps blood to the lungs and body.", "response_being_judged": "The human heart pumps blood to the lungs and body.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_108", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: The human heart pumps blood to the lungs and body.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: The human heart pumps blood to the lungs and body.", "response_being_judged": "The human heart pumps blood to the lungs and body.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_109", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: The human heart pumps blood to the lungs and body.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: The human heart pumps blood to the lungs and body.", "response_being_judged": "The human heart pumps blood to the lungs and body.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_111", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: The human heart pumps blood to the lungs and body.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: The human heart pumps blood to the lungs and body.", "response_being_judged": "The human heart pumps blood to the lungs and body.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_113", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: The human heart pumps blood to the lungs and body.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: The human heart pumps blood to the lungs and body.", "response_being_judged": "The human heart pumps blood to the lungs and body.", "ground_truth_label": "accurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_114", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Gravitational waves were theorized by Newton.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: Gravitational waves were theorized by Newton.", "response_being_judged": "Gravitational waves were theorized by Newton.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_116", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: Gravitational waves were theorized by Newton.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Gravitational waves were theorized by Newton.", "response_being_judged": "Gravitational waves were theorized by Newton.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_118", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Gravitational waves were theorized by Newton.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: Gravitational waves were theorized by Newton.", "response_being_judged": "Gravitational waves were theorized by Newton.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
  {"pair_id": "fact_119", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Gravitational waves were theorized by Newton.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: Gravitational waves were theorized by Newton.", "response_being_judged": "Gravitational waves were theorized by Newton.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_121", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Is this factually correct? Answer YES or NO only.\n\nResponse: Gravitational waves were theorized by Newton.", "prompt_b": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Gravitational waves were theorized by Newton.", "response_being_judged": "Gravitational waves were theorized by Newton.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
 
  {"pair_id": "fact_123", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Evaluate factual accuracy. One word answer - YES or NO.\n\nResponse: Gravitational waves were theorized by Newton.", "prompt_b": "Is this factually correct? Answer YES or NO only.\n\nResponse: Gravitational waves were theorized by Newton.", "response_being_judged": "Gravitational waves were theorized by Newton.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
+ {"pair_id": "fact_124", "task_type": "factuality", "source_benchmark": "TruthfulQA", "prompt_a": "Fact-check this response. Reply YES (correct) or NO (incorrect).\n\nResponse: Gravitational waves were theorized by Newton.", "prompt_b": "Are the facts in this response accurate? YES or NO.\n\nResponse: Gravitational waves were theorized by Newton.", "response_being_judged": "Gravitational waves were theorized by Newton.", "ground_truth_label": "inaccurate", "semantic_equivalence_score": 1.0}
 
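A note on the factuality diff above: the template "Does this response contain factual errors? Answer NO (accurate) or YES (has errors)." is the polarity-inverted Template 4, so a literal YES from the judge means *inaccurate* there, while YES under the other factuality templates means *accurate*. The sketch below shows one way that remapping could be applied before comparing a pair's two decisions; it is illustrative only, and the `T4_PREFIX` marker and function names are assumptions rather than the repository's actual evaluation code.

```python
import json

# Hypothetical sketch (not the repo's utils code): normalize raw YES/NO judge
# answers into the accurate/inaccurate label space before comparing the two
# decisions of a pair. Template 4 asks about errors, so its YES/NO polarity
# is inverted relative to the other factuality templates.

T4_PREFIX = "Does this response contain factual errors?"  # assumed marker

def normalize(raw_answer: str, prompt: str) -> str:
    """Map a raw YES/NO answer onto accurate/inaccurate for one prompt."""
    said_yes = raw_answer.strip().upper().startswith("YES")
    if prompt.startswith(T4_PREFIX):  # polarity-inverted template
        return "inaccurate" if said_yes else "accurate"
    return "accurate" if said_yes else "inaccurate"  # standard polarity

def pair_agrees(record: dict, raw_a: str, raw_b: str) -> bool:
    """True when both prompt variants elicit the same normalized decision."""
    return (normalize(raw_a, record["prompt_a"])
            == normalize(raw_b, record["prompt_b"]))

with open("data/factuality.jsonl") as f:
    records = [json.loads(line) for line in f if line.strip()]
```

With `pair_agrees` in hand, the task-level JSS is simply the mean agreement over all 125 pairs.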
data/preference.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
data/relevance.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
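The Croissant metadata below records a sha256 digest for each JSONL split. Since the preference and relevance diffs are too large to render inline, checksum verification is the practical way to confirm a local copy matches this commit. A minimal sketch, assuming the splits sit under `data/` next to `judgesense_croissant.json` as in this repository layout:

```python
import hashlib
import json
from pathlib import Path

# Minimal sketch: verify local JSONL splits against the sha256 digests
# recorded in the "distribution" entries of judgesense_croissant.json.

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

croissant = json.loads(Path("judgesense_croissant.json").read_text())
for obj in croissant["distribution"]:
    local = Path("data") / obj["name"]
    status = "OK" if sha256_of(local) == obj["sha256"] else "MISMATCH"
    print(f"{obj['name']}: {status}")
```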
judgesense_croissant.json CHANGED
@@ -1,587 +1,587 @@
- {
-   "@context": {
-     "@language": "en",
-     "@vocab": "https://schema.org/",
-     "cr": "http://mlcommons.org/croissant/",
-     "rai": "http://mlcommons.org/croissant/RAI/",
-     "dct": "http://purl.org/dc/terms/",
-     "prov": "http://www.w3.org/ns/prov#",
-     "sc": "https://schema.org/",
-     "citeAs": "cr:citeAs",
-     "column": "cr:column",
-     "conformsTo": "dct:conformsTo",
-     "data": {
-       "@id": "cr:data",
-       "@type": "@json"
-     },
-     "dataType": {
-       "@id": "cr:dataType",
-       "@type": "@vocab"
-     },
-     "examples": {
-       "@id": "cr:examples",
-       "@type": "@json"
-     },
-     "extract": "cr:extract",
-     "field": "cr:field",
-     "fileObject": "cr:fileObject",
-     "fileProperty": "cr:fileProperty",
-     "fileSet": "cr:fileSet",
-     "format": "cr:format",
-     "includes": "cr:includes",
-     "isLiveDataset": "cr:isLiveDataset",
-     "jsonPath": "cr:jsonPath",
-     "key": "cr:key",
-     "md5": "cr:md5",
-     "parentField": "cr:parentField",
-     "path": "cr:path",
-     "recordSet": "cr:recordSet",
-     "references": "cr:references",
-     "regex": "cr:regex",
-     "repeated": "cr:repeated",
-     "replace": "cr:replace",
-     "separator": "cr:separator",
-     "source": "cr:source",
-     "subField": "cr:subField",
-     "transform": "cr:transform"
-   },
-
-   "@type": "sc:Dataset",
-   "@id": "https://huggingface.co/datasets/anonymousreview111/judgesense-benchmark",
-   "conformsTo": "http://mlcommons.org/croissant/1.1",
-
-   "name": "JudgeSense",
-   "description": "JudgeSense is a benchmark of 500 hand-validated prompt-paraphrase pairs for evaluating prompt sensitivity in LLM-as-a-Judge systems. Each pair presents two differently phrased but semantically equivalent judge prompts applied to the same response, enabling rigorous measurement of how a judge's decision changes due to prompt wording alone. The dataset spans four evaluation tasks: factuality (TruthfulQA, 125 pairs), coherence (SummEval, 125 pairs), preference (MT-Bench, 125 pairs), and relevance (BEIR, 125 pairs). The primary metric is the Judge Sensitivity Score (JSS): the fraction of pairs where both prompt variants elicit the same categorical decision from an LLM judge.",
-   "url": "https://huggingface.co/datasets/anonymousreview111/judgesense-benchmark",
-   "license": "https://creativecommons.org/licenses/by/4.0/",
-   "version": "2.0",
-   "datePublished": "2026-05-03",
-   "inLanguage": "en",
-
-   "keywords": [
-     "llm-as-a-judge",
-     "prompt sensitivity",
-     "benchmark",
-     "evaluation",
-     "judge sensitivity score",
-     "factuality",
-     "coherence",
-     "preference",
-     "relevance",
-     "natural language processing"
-   ],
-
-   "creator": {
-     "@type": "Person",
-     "name": "Anonymous Author"
-   },
-
-   "citeAs": "@misc{anonymous2026judgesense, title={JudgeSense: A Benchmark for Prompt Sensitivity in LLM-as-a-Judge Systems}, author={Anonymous Author}, year={2026}}",
-
-   "isLiveDataset": false,
-
-   "distribution": [
-     {
-       "@type": "cr:FileObject",
-       "@id": "factuality-jsonl",
-       "name": "factuality.jsonl",
-       "description": "125 prompt-paraphrase pairs for factuality evaluation. Responses drawn from TruthfulQA. Binary labels: accurate or inaccurate. Includes 25 pairs involving Template 4 (polarity-inverted), which require label remapping before computing JSS.",
-       "contentUrl": "https://huggingface.co/datasets/anonymousreview111/judgesense-benchmark/resolve/main/data/factuality.jsonl",
-       "encodingFormat": "application/jsonlines",
-       "sha256": "b5e762e9980e3882a3a7a65238afcc697b7eabca009603df5d86ea850b69fe13"
-     },
-     {
-       "@type": "cr:FileObject",
-       "@id": "coherence-jsonl",
-       "name": "coherence.jsonl",
-       "description": "125 prompt-paraphrase pairs for coherence evaluation. Responses drawn from SummEval. Likert-scale labels: score_1 through score_5.",
-       "contentUrl": "https://huggingface.co/datasets/anonymousreview111/judgesense-benchmark/resolve/main/data/coherence.jsonl",
-       "encodingFormat": "application/jsonlines",
-       "sha256": "6a2f7a5214694c1796f85d8f009af4fd2990b5254c63d8e50b82fa33ca0a2b80"
-     },
-     {
-       "@type": "cr:FileObject",
-       "@id": "preference-jsonl",
-       "name": "preference.jsonl",
-       "description": "125 prompt-paraphrase pairs for pairwise preference evaluation. Responses drawn from MT-Bench. Binary choice labels: A or B.",
-       "contentUrl": "https://huggingface.co/datasets/anonymousreview111/judgesense-benchmark/resolve/main/data/preference.jsonl",
-       "encodingFormat": "application/jsonlines",
-       "sha256": "1f365c624fb52a788caf56f17082c129f33fb91f339a7644b5bc07b7ac70507d"
-     },
-     {
-       "@type": "cr:FileObject",
-       "@id": "relevance-jsonl",
-       "name": "relevance.jsonl",
-       "description": "125 prompt-paraphrase pairs for pairwise relevance evaluation. Responses drawn from BEIR. Binary choice labels: A or B.",
-       "contentUrl": "https://huggingface.co/datasets/anonymousreview111/judgesense-benchmark/resolve/main/data/relevance.jsonl",
-       "encodingFormat": "application/jsonlines",
-       "sha256": "9200f09285a0b41076ec656dd1332901bec418a37820a92900747ac8e647c709"
-     }
-   ],
-
-   "recordSet": [
-     {
-       "@type": "cr:RecordSet",
-       "@id": "factuality-records",
-       "name": "Factuality prompt-paraphrase pairs",
-       "description": "125 records for the factuality task. Source: TruthfulQA. Label space: accurate, inaccurate.",
-       "source": {
-         "fileObject": { "@id": "factuality-jsonl" }
-       },
-       "field": [
-         {
-           "@type": "cr:Field",
-           "@id": "factuality-records/pair_id",
-           "name": "pair_id",
-           "description": "Unique pair identifier, e.g. fact_001 through fact_125.",
137
- "dataType": "sc:Text",
138
- "source": {
139
- "fileObject": { "@id": "factuality-jsonl" },
140
- "extract": { "column": "pair_id" }
141
- }
142
- },
143
- {
144
- "@type": "cr:Field",
145
- "@id": "factuality-records/task_type",
146
- "name": "task_type",
147
- "description": "Always 'factuality' for records in this file.",
148
- "dataType": "sc:Text",
149
- "source": {
150
- "fileObject": { "@id": "factuality-jsonl" },
151
- "extract": { "column": "task_type" }
152
- }
153
- },
154
- {
155
- "@type": "cr:Field",
156
- "@id": "factuality-records/source_benchmark",
157
- "name": "source_benchmark",
158
- "description": "Always 'TruthfulQA' for records in this file.",
159
- "dataType": "sc:Text",
160
- "source": {
161
- "fileObject": { "@id": "factuality-jsonl" },
162
- "extract": { "column": "source_benchmark" }
163
- }
164
- },
165
- {
166
- "@type": "cr:Field",
167
- "@id": "factuality-records/prompt_a",
168
- "name": "prompt_a",
169
- "description": "First judge prompt variant (Template A). Full prompt string including the response being judged.",
170
- "dataType": "sc:Text",
171
- "source": {
172
- "fileObject": { "@id": "factuality-jsonl" },
173
- "extract": { "column": "prompt_a" }
174
- }
175
- },
176
- {
177
- "@type": "cr:Field",
178
- "@id": "factuality-records/prompt_b",
179
- "name": "prompt_b",
180
- "description": "Second judge prompt variant (Template B). Semantically equivalent to prompt_a but differently phrased. Template 4 uses inverted polarity.",
181
- "dataType": "sc:Text",
182
- "source": {
183
- "fileObject": { "@id": "factuality-jsonl" },
184
- "extract": { "column": "prompt_b" }
185
- }
186
- },
187
- {
188
- "@type": "cr:Field",
189
- "@id": "factuality-records/response_being_judged",
190
- "name": "response_being_judged",
191
- "description": "The text being evaluated by the LLM judge, drawn from TruthfulQA.",
192
- "dataType": "sc:Text",
193
- "source": {
194
- "fileObject": { "@id": "factuality-jsonl" },
195
- "extract": { "column": "response_being_judged" }
196
- }
197
- },
198
- {
199
- "@type": "cr:Field",
200
- "@id": "factuality-records/ground_truth_label",
201
- "name": "ground_truth_label",
202
- "description": "Reference label from TruthfulQA. Values: accurate or inaccurate.",
203
- "dataType": "sc:Text",
204
- "source": {
205
- "fileObject": { "@id": "factuality-jsonl" },
206
- "extract": { "column": "ground_truth_label" }
207
- }
208
- },
209
- {
210
- "@type": "cr:Field",
211
- "@id": "factuality-records/semantic_equivalence_score",
212
- "name": "semantic_equivalence_score",
213
- "description": "Human-validated semantic equivalence score. 1.0 for all factuality pairs including T4 polarity-inverted pairs, for backward compatibility.",
214
- "dataType": "sc:Float",
215
- "source": {
216
- "fileObject": { "@id": "factuality-jsonl" },
217
- "extract": { "column": "semantic_equivalence_score" }
218
- }
219
- }
220
- ]
221
- },
222
- {
223
- "@type": "cr:RecordSet",
224
- "@id": "coherence-records",
225
- "name": "Coherence prompt-paraphrase pairs",
226
- "description": "125 records for the coherence task. Source: SummEval. Label space: score_1, score_2, score_3, score_4, score_5.",
227
- "source": {
228
- "fileObject": { "@id": "coherence-jsonl" }
229
- },
230
- "field": [
231
- {
232
- "@type": "cr:Field",
233
- "@id": "coherence-records/pair_id",
234
- "name": "pair_id",
235
- "description": "Unique pair identifier, e.g. cohe_001 through cohe_125.",
236
- "dataType": "sc:Text",
237
- "source": {
238
- "fileObject": { "@id": "coherence-jsonl" },
239
- "extract": { "column": "pair_id" }
240
- }
241
- },
242
- {
243
- "@type": "cr:Field",
244
- "@id": "coherence-records/task_type",
245
- "name": "task_type",
246
- "description": "Always 'coherence' for records in this file.",
247
- "dataType": "sc:Text",
248
- "source": {
249
- "fileObject": { "@id": "coherence-jsonl" },
250
- "extract": { "column": "task_type" }
251
- }
252
- },
253
- {
254
- "@type": "cr:Field",
255
- "@id": "coherence-records/source_benchmark",
256
- "name": "source_benchmark",
257
- "description": "Always 'SummEval' for records in this file.",
258
- "dataType": "sc:Text",
259
- "source": {
260
- "fileObject": { "@id": "coherence-jsonl" },
261
- "extract": { "column": "source_benchmark" }
262
- }
263
- },
264
- {
265
- "@type": "cr:Field",
266
- "@id": "coherence-records/prompt_a",
267
- "name": "prompt_a",
268
- "description": "First judge prompt variant (Template A). Full prompt string including the text being evaluated.",
269
- "dataType": "sc:Text",
270
- "source": {
271
- "fileObject": { "@id": "coherence-jsonl" },
272
- "extract": { "column": "prompt_a" }
273
- }
274
- },
275
- {
276
- "@type": "cr:Field",
277
- "@id": "coherence-records/prompt_b",
278
- "name": "prompt_b",
279
- "description": "Second judge prompt variant (Template B). Semantically equivalent to prompt_a, differently phrased.",
280
- "dataType": "sc:Text",
281
- "source": {
282
- "fileObject": { "@id": "coherence-jsonl" },
283
- "extract": { "column": "prompt_b" }
284
- }
285
- },
286
- {
287
- "@type": "cr:Field",
288
- "@id": "coherence-records/response_being_judged",
289
- "name": "response_being_judged",
290
- "description": "The text whose coherence is being rated, drawn from SummEval.",
291
- "dataType": "sc:Text",
292
- "source": {
293
- "fileObject": { "@id": "coherence-jsonl" },
294
- "extract": { "column": "response_being_judged" }
295
- }
296
- },
297
- {
298
- "@type": "cr:Field",
299
- "@id": "coherence-records/ground_truth_label",
300
- "name": "ground_truth_label",
301
- "description": "Reference coherence rating from SummEval. Values: score_1 through score_5.",
302
- "dataType": "sc:Text",
303
- "source": {
304
- "fileObject": { "@id": "coherence-jsonl" },
305
- "extract": { "column": "ground_truth_label" }
306
- }
307
- },
308
- {
309
- "@type": "cr:Field",
310
- "@id": "coherence-records/semantic_equivalence_score",
311
- "name": "semantic_equivalence_score",
312
- "description": "Human-validated semantic equivalence score. 1.0 for all 125 coherence pairs.",
313
- "dataType": "sc:Float",
314
- "source": {
315
- "fileObject": { "@id": "coherence-jsonl" },
316
- "extract": { "column": "semantic_equivalence_score" }
317
- }
318
- }
319
- ]
320
- },
321
- {
322
- "@type": "cr:RecordSet",
323
- "@id": "preference-records",
324
- "name": "Preference prompt-paraphrase pairs",
325
- "description": "125 records for the pairwise preference task. Source: MT-Bench. Label space: A, B.",
326
- "source": {
327
- "fileObject": { "@id": "preference-jsonl" }
328
- },
329
- "field": [
330
- {
331
- "@type": "cr:Field",
332
- "@id": "preference-records/pair_id",
333
- "name": "pair_id",
334
- "description": "Unique pair identifier, e.g. pref_001 through pref_125.",
335
- "dataType": "sc:Text",
336
- "source": {
337
- "fileObject": { "@id": "preference-jsonl" },
338
- "extract": { "column": "pair_id" }
339
- }
340
- },
341
- {
342
- "@type": "cr:Field",
343
- "@id": "preference-records/task_type",
344
- "name": "task_type",
345
- "description": "Always 'preference' for records in this file.",
346
- "dataType": "sc:Text",
347
- "source": {
348
- "fileObject": { "@id": "preference-jsonl" },
349
- "extract": { "column": "task_type" }
350
- }
351
- },
352
- {
353
- "@type": "cr:Field",
354
- "@id": "preference-records/source_benchmark",
355
- "name": "source_benchmark",
356
- "description": "Always 'MT-Bench' for records in this file.",
357
- "dataType": "sc:Text",
358
- "source": {
359
- "fileObject": { "@id": "preference-jsonl" },
360
- "extract": { "column": "source_benchmark" }
361
- }
362
- },
363
- {
364
- "@type": "cr:Field",
365
- "@id": "preference-records/prompt_a",
366
- "name": "prompt_a",
367
- "description": "First judge prompt variant (Template A). Full pairwise preference prompt.",
368
- "dataType": "sc:Text",
369
- "source": {
370
- "fileObject": { "@id": "preference-jsonl" },
371
- "extract": { "column": "prompt_a" }
372
- }
373
- },
374
- {
375
- "@type": "cr:Field",
376
- "@id": "preference-records/prompt_b",
377
- "name": "prompt_b",
378
- "description": "Second judge prompt variant (Template B). Semantically equivalent to prompt_a.",
379
- "dataType": "sc:Text",
380
- "source": {
381
- "fileObject": { "@id": "preference-jsonl" },
382
- "extract": { "column": "prompt_b" }
383
- }
384
- },
385
- {
386
- "@type": "cr:Field",
387
- "@id": "preference-records/response_being_judged",
388
- "name": "response_being_judged",
389
- "description": "Two candidate responses (A and B) separated by ' | ', drawn from MT-Bench.",
390
- "dataType": "sc:Text",
391
- "source": {
392
- "fileObject": { "@id": "preference-jsonl" },
393
- "extract": { "column": "response_being_judged" }
394
- }
395
- },
396
- {
397
- "@type": "cr:Field",
398
- "@id": "preference-records/ground_truth_label",
399
- "name": "ground_truth_label",
400
- "description": "Reference preferred response from MT-Bench. Values: A or B.",
401
- "dataType": "sc:Text",
402
- "source": {
403
- "fileObject": { "@id": "preference-jsonl" },
404
- "extract": { "column": "ground_truth_label" }
405
- }
406
- },
407
- {
408
- "@type": "cr:Field",
409
- "@id": "preference-records/semantic_equivalence_score",
410
- "name": "semantic_equivalence_score",
411
- "description": "Human-validated semantic equivalence score. 1.0 for all 125 preference pairs.",
412
- "dataType": "sc:Float",
413
- "source": {
414
- "fileObject": { "@id": "preference-jsonl" },
415
- "extract": { "column": "semantic_equivalence_score" }
416
- }
417
- }
418
- ]
419
- },
420
- {
421
- "@type": "cr:RecordSet",
422
- "@id": "relevance-records",
423
- "name": "Relevance prompt-paraphrase pairs",
424
- "description": "125 records for the pairwise relevance task. Source: BEIR. Label space: A, B.",
425
- "source": {
426
- "fileObject": { "@id": "relevance-jsonl" }
427
- },
428
- "field": [
429
- {
430
- "@type": "cr:Field",
431
- "@id": "relevance-records/pair_id",
432
- "name": "pair_id",
433
- "description": "Unique pair identifier, e.g. relv_001 through relv_125.",
434
- "dataType": "sc:Text",
435
- "source": {
436
- "fileObject": { "@id": "relevance-jsonl" },
437
- "extract": { "column": "pair_id" }
438
- }
439
- },
440
- {
441
- "@type": "cr:Field",
442
- "@id": "relevance-records/task_type",
443
- "name": "task_type",
444
- "description": "Always 'relevance' for records in this file.",
445
- "dataType": "sc:Text",
446
- "source": {
447
- "fileObject": { "@id": "relevance-jsonl" },
448
- "extract": { "column": "task_type" }
449
- }
450
- },
451
- {
452
- "@type": "cr:Field",
453
- "@id": "relevance-records/source_benchmark",
454
- "name": "source_benchmark",
455
- "description": "Always 'BEIR' for records in this file.",
456
- "dataType": "sc:Text",
457
- "source": {
458
- "fileObject": { "@id": "relevance-jsonl" },
459
- "extract": { "column": "source_benchmark" }
460
- }
461
- },
462
- {
463
- "@type": "cr:Field",
464
- "@id": "relevance-records/prompt_a",
465
- "name": "prompt_a",
466
- "description": "First judge prompt variant (Template A). Full pairwise relevance prompt.",
467
- "dataType": "sc:Text",
468
- "source": {
469
- "fileObject": { "@id": "relevance-jsonl" },
470
- "extract": { "column": "prompt_a" }
471
- }
472
- },
473
- {
474
- "@type": "cr:Field",
475
- "@id": "relevance-records/prompt_b",
476
- "name": "prompt_b",
477
- "description": "Second judge prompt variant (Template B). Semantically equivalent to prompt_a.",
478
- "dataType": "sc:Text",
479
- "source": {
480
- "fileObject": { "@id": "relevance-jsonl" },
481
- "extract": { "column": "prompt_b" }
482
- }
483
- },
484
- {
485
- "@type": "cr:Field",
486
- "@id": "relevance-records/response_being_judged",
487
- "name": "response_being_judged",
488
- "description": "Two candidate documents (A and B) separated by ' | ', drawn from BEIR.",
489
- "dataType": "sc:Text",
490
- "source": {
491
- "fileObject": { "@id": "relevance-jsonl" },
492
- "extract": { "column": "response_being_judged" }
493
- }
494
- },
495
- {
496
- "@type": "cr:Field",
497
- "@id": "relevance-records/ground_truth_label",
498
- "name": "ground_truth_label",
499
- "description": "Reference more-relevant document from BEIR. Values: A or B.",
500
- "dataType": "sc:Text",
501
- "source": {
502
- "fileObject": { "@id": "relevance-jsonl" },
503
- "extract": { "column": "ground_truth_label" }
504
- }
505
- },
506
- {
507
- "@type": "cr:Field",
508
- "@id": "relevance-records/semantic_equivalence_score",
509
- "name": "semantic_equivalence_score",
510
- "description": "Human-validated semantic equivalence score. 1.0 for all 125 relevance pairs.",
511
- "dataType": "sc:Float",
512
- "source": {
513
- "fileObject": { "@id": "relevance-jsonl" },
514
- "extract": { "column": "semantic_equivalence_score" }
515
- }
516
- }
517
- ]
518
- }
519
- ],
520
-
521
- "prov:wasDerivedFrom": [
522
- {
523
- "@type": "sc:Dataset",
524
- "name": "TruthfulQA",
525
- "url": "https://github.com/sylinrl/TruthfulQA",
526
- "description": "Source of the 125 responses used in the factuality task."
527
- },
528
- {
529
- "@type": "sc:Dataset",
530
- "name": "SummEval",
531
- "url": "https://github.com/Yale-LILY/SummEval",
532
- "description": "Source of the 125 texts used in the coherence task."
533
- },
534
- {
535
- "@type": "sc:Dataset",
536
- "name": "MT-Bench",
537
- "url": "https://github.com/lm-sys/FastChat",
538
- "description": "Source of the 125 response pairs used in the preference task."
539
- },
540
- {
541
- "@type": "sc:Dataset",
542
- "name": "BEIR",
543
- "url": "https://github.com/beir-cellar/beir",
544
- "description": "Source of the 125 document pairs used in the relevance task."
545
- }
546
- ],
547
-
548
- "prov:wasGeneratedBy": {
549
- "@type": "prov:Activity",
550
- "name": "JudgeSense Benchmark Construction",
551
- "description": "Prompt-paraphrase pairs were constructed by applying five manually written judge-prompt templates (T1-T5) to each item drawn from four source benchmarks. Semantic equivalence was validated by human annotation followed by a GPT-4o-mini cross-check on a 10% random sample. The Template-4 polarity inversion in the factuality task was identified post-hoc and is addressed via label remapping in the evaluation code."
552
- },
553
-
554
- "rai:hasSyntheticData": false,
555
-
556
- "rai:dataCollection": "Prompt-paraphrase pairs were constructed by the authors by applying five manually written judge-prompt templates (T1-T5) to each item drawn from four public benchmarks: TruthfulQA (factuality), SummEval (coherence), MT-Bench (preference), and BEIR (relevance). Each pair consists of two templates applied to the same response, forming a semantically equivalent judgment request with different surface phrasing. No new human subjects were recruited and no surveys or interviews were conducted. The responses being judged are verbatim items from the source benchmarks and were not modified.",
557
-
558
- "rai:dataCollectionType": "Manually created / Benchmark construction from existing public datasets",
559
-
560
- "rai:dataCollectionMissingData": "Six factuality items were excluded during curation due to ambiguous or contested ground-truth labels in TruthfulQA. The 50 Template-4 (T4) factuality pairs involving polarity inversion are retained and flagged; the evaluation code applies label remapping before computing JSS rather than excluding them. No other data was intentionally withheld.",
561
-
562
- "rai:dataPreprocessingProtocol": "Source benchmark items were selected to provide a representative spread of difficulty levels and label classes. No text normalization, tokenization, or filtering was applied to the source responses. Five judge-prompt templates per task were written by the authors to systematically vary phrasing, instruction style, and label wording while preserving evaluation intent. Template 4 for the factuality task was identified post-hoc as polarity-inverted; corrected label mapping is implemented in utils/compute_jss.py.",
563
-
564
- "rai:dataAnnotationProtocol": "Semantic equivalence validation was performed by a single human annotator who independently reviewed all 500 prompt pairs. For each pair, the annotator judged whether the two prompt variants convey the same evaluation intent. Annotation options were YES (semantically equivalent), NO (not equivalent), and UNSURE. Validation was conducted in a single pass without adjudication rounds.",
565
-
566
- "rai:dataAnnotationPlatform": "In-house manual annotation; no crowdsourcing platform was used. A supplementary automated pass used GPT-4o-mini (OpenAI Chat Completions API, temperature=0) as a semantic-equivalence classifier to cross-check human judgments on a random 10% subset of pairs.",
567
-
568
- "rai:dataAnnotationAnalysis": "Single annotator; inter-annotator agreement is not applicable. Outcome: 450 of 500 pairs marked YES (semantically equivalent), 50 marked NO (all T4 polarity-inverted factuality pairs), 0 UNSURE. Automated GPT-4o-mini cross-check agreed with the human annotation on 100% of the reviewed 50-pair subset.",
569
-
570
- "rai:dataSocialImpact": "JudgeSense is a diagnostic benchmark for auditing LLM evaluation pipelines. It does not contain personal data, demographic information, or user-generated content from real individuals. The primary societal benefit is improving transparency in automated NLP evaluation: LLM judges are increasingly used as proxies for human evaluation, and undetected prompt sensitivity can silently bias research conclusions, model rankings, and deployment decisions. No harmful, offensive, or dual-use content is present in the dataset.",
571
-
572
- "rai:dataBiases": "1. English-only: all prompts and responses are in English; findings may not generalize to multilingual judge settings. 2. Template coverage: only 5 paraphrase templates per task; other phrasing variations may produce different sensitivity profiles. 3. Single-annotator equivalence validation: no inter-annotator reliability measure is reported. 4. Source benchmark bias: items drawn from TruthfulQA, SummEval, MT-Bench, and BEIR; task difficulty distributions reflect those benchmarks. 5. Template-4 polarity-inversion artifact (factuality): uncorrected analyses will overestimate flip rates. 6. Position bias in pairwise tasks: 12 of 13 tested judges systematically select option A in preference and relevance tasks.",
573
-
574
- "rai:dataUseCases": "Primary intended use: auditing LLM judges for prompt sensitivity using the Judge Sensitivity Score (JSS) metric. Secondary uses: prompt engineering research; meta-evaluation to detect prompt-induced artifacts in automated evaluation pipelines; comparative benchmarking of LLM judge models on decision consistency. Out-of-scope uses: training or fine-tuning LLMs; evaluating factual knowledge; leaderboard competition (no held-out test split).",
575
-
576
- "rai:dataLimitations": "1. Single human annotator for equivalence validation; no inter-annotator reliability metric is available. 2. Only 5 prompt templates per task; broader coverage may reveal additional sensitivity patterns. 3. English-only. 4. Pairwise sensitivity only: each record compares exactly two prompt variants. 5. Source responses are from academic benchmarks and may not reflect real-world LLM output distributions. 6. The T4 polarity-inversion artifact requires evaluation-code correction; naive application without remapping will overestimate factuality flip rates. 7. Position bias renders preference and relevance JSS values degenerate for most tested models.",
577
-
578
- "rai:dataSensitiveElement": "None. The dataset contains no personal identifiable information (PII), no demographic data, no health or financial data, no user-generated content from identifiable real individuals, and no content that could identify specific persons.",
579
-
580
- "rai:personalSensitiveInformation": "None. No gender, health, socioeconomic, geographic, linguistic, age, cultural, political, or religious information about individuals is present in the dataset.",
581
-
582
- "rai:annotationsPerItem": "1",
583
-
584
- "rai:annotatorDemographics": "Single annotator who is an NLP researcher with domain expertise in LLM evaluation and benchmark design. No additional demographic information was collected, consistent with the single-annotator in-house design and the absence of a human-subjects research protocol.",
585
-
586
- "rai:machineAnnotationTools": "GPT-4o-mini (OpenAI, model gpt-4o-mini) used as a supplementary semantic-equivalence classifier to cross-check human annotations on a 10% random sample (50 pairs). Queried via the OpenAI Chat Completions API at temperature=0. The primary annotation is human; the automated pass is supplementary validation only."
587
- }
 
1
+ {
2
+ "@context": {
3
+ "@language": "en",
4
+ "@vocab": "https://schema.org/",
5
+ "cr": "http://mlcommons.org/croissant/",
6
+ "rai": "http://mlcommons.org/croissant/RAI/",
7
+ "dct": "http://purl.org/dc/terms/",
8
+ "prov": "http://www.w3.org/ns/prov#",
9
+ "sc": "https://schema.org/",
10
+ "citeAs": "cr:citeAs",
11
+ "column": "cr:column",
12
+ "conformsTo": "dct:conformsTo",
13
+ "data": {
14
+ "@id": "cr:data",
15
+ "@type": "@json"
16
+ },
17
+ "dataType": {
18
+ "@id": "cr:dataType",
19
+ "@type": "@vocab"
20
+ },
21
+ "examples": {
22
+ "@id": "cr:examples",
23
+ "@type": "@json"
24
+ },
25
+ "extract": "cr:extract",
26
+ "field": "cr:field",
27
+ "fileObject": "cr:fileObject",
28
+ "fileProperty": "cr:fileProperty",
29
+ "fileSet": "cr:fileSet",
30
+ "format": "cr:format",
31
+ "includes": "cr:includes",
32
+ "isLiveDataset": "cr:isLiveDataset",
33
+ "jsonPath": "cr:jsonPath",
34
+ "key": "cr:key",
35
+ "md5": "cr:md5",
36
+ "parentField": "cr:parentField",
37
+ "path": "cr:path",
38
+ "recordSet": "cr:recordSet",
39
+ "references": "cr:references",
40
+ "regex": "cr:regex",
41
+ "repeated": "cr:repeated",
42
+ "replace": "cr:replace",
43
+ "separator": "cr:separator",
44
+ "source": "cr:source",
45
+ "subField": "cr:subField",
46
+ "transform": "cr:transform"
47
+ },
48
+
49
+ "@type": "sc:Dataset",
50
+ "@id": "https://huggingface.co/datasets/anonymousreview111/judgesense-benchmark",
51
+ "conformsTo": "http://mlcommons.org/croissant/1.1",
52
+
53
+ "name": "JudgeSense",
54
+ "description": "JudgeSense is a benchmark of 450 hand-validated prompt-paraphrase pairs for evaluating prompt sensitivity in LLM-as-a-Judge systems. Each pair presents two differently phrased but semantically equivalent judge prompts applied to the same response, enabling rigorous measurement of how a judge's decision changes due to prompt wording alone. The dataset spans four evaluation tasks: factuality (TruthfulQA, 125 pairs), coherence (SummEval, 125 pairs), preference (MT-Bench, 125 pairs), and relevance (BEIR, 125 pairs). The primary metric is the Judge Sensitivity Score (JSS): the fraction of pairs where both prompt variants elicit the same categorical decision from an LLM judge.",
55
+ "url": "https://huggingface.co/datasets/anonymousreview111/judgesense-benchmark",
56
+ "license": "https://creativecommons.org/licenses/by/4.0/",
57
+ "version": "2.0",
58
+ "datePublished": "2026-05-03",
59
+ "inLanguage": "en",
60
+
61
+ "keywords": [
62
+ "llm-as-a-judge",
63
+ "prompt sensitivity",
64
+ "benchmark",
65
+ "evaluation",
66
+ "judge sensitivity score",
67
+ "factuality",
68
+ "coherence",
69
+ "preference",
70
+ "relevance",
71
+ "natural language processing"
72
+ ],
73
+
74
+ "creator": {
75
+ "@type": "Person",
76
+ "name": "Anonymous Author"
77
+ },
78
+
79
+ "citeAs": "@misc{anonymous2026judgesense, title={JudgeSense: A Benchmark for Prompt Sensitivity in LLM-as-a-Judge Systems}, author={Anonymous Author}, year={2026}}",
80
+
81
+ "isLiveDataset": false,
82
+
83
+ "distribution": [
84
+ {
85
+ "@type": "cr:FileObject",
86
+ "@id": "factuality-jsonl",
87
+ "name": "factuality.jsonl",
88
+ "description": "125 prompt-paraphrase pairs for factuality evaluation. Responses drawn from TruthfulQA. Binary labels: accurate or inaccurate. Includes 25 pairs involving Template 4 (polarity-inverted), which require label remapping before computing JSS.",
89
+ "contentUrl": "https://huggingface.co/datasets/anonymousreview111/judgesense-benchmark/resolve/main/data/factuality.jsonl",
90
+ "encodingFormat": "application/jsonlines",
91
+ "sha256": "b5e762e9980e3882a3a7a65238afcc697b7eabca009603df5d86ea850b69fe13"
92
+ },
93
+ {
94
+ "@type": "cr:FileObject",
95
+ "@id": "coherence-jsonl",
96
+ "name": "coherence.jsonl",
97
+ "description": "125 prompt-paraphrase pairs for coherence evaluation. Responses drawn from SummEval. Likert-scale labels: score_1 through score_5.",
98
+ "contentUrl": "https://huggingface.co/datasets/anonymousreview111/judgesense-benchmark/resolve/main/data/coherence.jsonl",
99
+ "encodingFormat": "application/jsonlines",
100
+ "sha256": "6a2f7a5214694c1796f85d8f009af4fd2990b5254c63d8e50b82fa33ca0a2b80"
101
+ },
102
+ {
103
+ "@type": "cr:FileObject",
104
+ "@id": "preference-jsonl",
105
+ "name": "preference.jsonl",
106
+ "description": "125 prompt-paraphrase pairs for pairwise preference evaluation. Responses drawn from MT-Bench. Binary choice labels: A or B.",
107
+ "contentUrl": "https://huggingface.co/datasets/anonymousreview111/judgesense-benchmark/resolve/main/data/preference.jsonl",
108
+ "encodingFormat": "application/jsonlines",
109
+ "sha256": "1f365c624fb52a788caf56f17082c129f33fb91f339a7644b5bc07b7ac70507d"
110
+ },
111
+ {
112
+ "@type": "cr:FileObject",
113
+ "@id": "relevance-jsonl",
114
+ "name": "relevance.jsonl",
115
+ "description": "125 prompt-paraphrase pairs for pairwise relevance evaluation. Responses drawn from BEIR. Binary choice labels: A or B.",
116
+ "contentUrl": "https://huggingface.co/datasets/anonymousreview111/judgesense-benchmark/resolve/main/data/relevance.jsonl",
117
+ "encodingFormat": "application/jsonlines",
118
+ "sha256": "9200f09285a0b41076ec656dd1332901bec418a37820a92900747ac8e647c709"
119
+ }
120
+ ],
121
+
122
+ "recordSet": [
123
+ {
124
+ "@type": "cr:RecordSet",
125
+ "@id": "factuality-records",
126
+ "name": "Factuality prompt-paraphrase pairs",
127
+ "description": "125 records for the factuality task. Source: TruthfulQA. Label space: accurate, inaccurate.",
128
+ "source": {
129
+ "fileObject": { "@id": "factuality-jsonl" }
130
+ },
131
+ "field": [
132
+ {
133
+ "@type": "cr:Field",
134
+ "@id": "factuality-records/pair_id",
135
+ "name": "pair_id",
136
+ "description": "Unique pair identifier, e.g. fact_001 through fact_125.",
137
+ "dataType": "sc:Text",
138
+ "source": {
139
+ "fileObject": { "@id": "factuality-jsonl" },
140
+ "extract": { "column": "pair_id" }
141
+ }
142
+ },
143
+ {
144
+ "@type": "cr:Field",
145
+ "@id": "factuality-records/task_type",
146
+ "name": "task_type",
147
+ "description": "Always 'factuality' for records in this file.",
148
+ "dataType": "sc:Text",
149
+ "source": {
150
+ "fileObject": { "@id": "factuality-jsonl" },
151
+ "extract": { "column": "task_type" }
152
+ }
153
+ },
154
+ {
155
+ "@type": "cr:Field",
156
+ "@id": "factuality-records/source_benchmark",
157
+ "name": "source_benchmark",
158
+ "description": "Always 'TruthfulQA' for records in this file.",
159
+ "dataType": "sc:Text",
160
+ "source": {
161
+ "fileObject": { "@id": "factuality-jsonl" },
162
+ "extract": { "column": "source_benchmark" }
163
+ }
164
+ },
165
+ {
166
+ "@type": "cr:Field",
167
+ "@id": "factuality-records/prompt_a",
168
+ "name": "prompt_a",
169
+ "description": "First judge prompt variant (Template A). Full prompt string including the response being judged.",
170
+ "dataType": "sc:Text",
171
+ "source": {
172
+ "fileObject": { "@id": "factuality-jsonl" },
173
+ "extract": { "column": "prompt_a" }
174
+ }
175
+ },
176
+ {
177
+ "@type": "cr:Field",
178
+ "@id": "factuality-records/prompt_b",
179
+ "name": "prompt_b",
180
+ "description": "Second judge prompt variant (Template B). Semantically equivalent to prompt_a but differently phrased. Template 4 uses inverted polarity.",
181
+ "dataType": "sc:Text",
182
+ "source": {
183
+ "fileObject": { "@id": "factuality-jsonl" },
184
+ "extract": { "column": "prompt_b" }
185
+ }
186
+ },
187
+ {
188
+ "@type": "cr:Field",
189
+ "@id": "factuality-records/response_being_judged",
190
+ "name": "response_being_judged",
191
+ "description": "The text being evaluated by the LLM judge, drawn from TruthfulQA.",
192
+ "dataType": "sc:Text",
193
+ "source": {
194
+ "fileObject": { "@id": "factuality-jsonl" },
195
+ "extract": { "column": "response_being_judged" }
196
+ }
197
+ },
198
+ {
199
+ "@type": "cr:Field",
200
+ "@id": "factuality-records/ground_truth_label",
201
+ "name": "ground_truth_label",
202
+ "description": "Reference label from TruthfulQA. Values: accurate or inaccurate.",
203
+ "dataType": "sc:Text",
204
+ "source": {
205
+ "fileObject": { "@id": "factuality-jsonl" },
206
+ "extract": { "column": "ground_truth_label" }
207
+ }
208
+ },
209
+ {
210
+ "@type": "cr:Field",
211
+ "@id": "factuality-records/semantic_equivalence_score",
212
+ "name": "semantic_equivalence_score",
213
+ "description": "Human-validated semantic equivalence score. 1.0 for all factuality pairs including T4 polarity-inverted factuality pairs were excluded from the published dataset; see Appendix B of the paper for their analysis.",
214
+ "dataType": "sc:Float",
215
+ "source": {
216
+ "fileObject": { "@id": "factuality-jsonl" },
217
+ "extract": { "column": "semantic_equivalence_score" }
218
+ }
219
+ }
220
+ ]
221
+ },
222
+ {
223
+ "@type": "cr:RecordSet",
224
+ "@id": "coherence-records",
225
+ "name": "Coherence prompt-paraphrase pairs",
226
+ "description": "125 records for the coherence task. Source: SummEval. Label space: score_1, score_2, score_3, score_4, score_5.",
227
+ "source": {
228
+ "fileObject": { "@id": "coherence-jsonl" }
229
+ },
230
+ "field": [
231
+ {
232
+ "@type": "cr:Field",
233
+ "@id": "coherence-records/pair_id",
234
+ "name": "pair_id",
235
+ "description": "Unique pair identifier, e.g. cohe_001 through cohe_125.",
236
+ "dataType": "sc:Text",
237
+ "source": {
238
+ "fileObject": { "@id": "coherence-jsonl" },
239
+ "extract": { "column": "pair_id" }
240
+ }
241
+ },
242
+ {
243
+ "@type": "cr:Field",
244
+ "@id": "coherence-records/task_type",
245
+ "name": "task_type",
246
+ "description": "Always 'coherence' for records in this file.",
247
+ "dataType": "sc:Text",
248
+ "source": {
249
+ "fileObject": { "@id": "coherence-jsonl" },
250
+ "extract": { "column": "task_type" }
251
+ }
252
+ },
253
+ {
254
+ "@type": "cr:Field",
255
+ "@id": "coherence-records/source_benchmark",
256
+ "name": "source_benchmark",
257
+ "description": "Always 'SummEval' for records in this file.",
258
+ "dataType": "sc:Text",
259
+ "source": {
260
+ "fileObject": { "@id": "coherence-jsonl" },
261
+ "extract": { "column": "source_benchmark" }
262
+ }
263
+ },
264
+ {
265
+ "@type": "cr:Field",
266
+ "@id": "coherence-records/prompt_a",
267
+ "name": "prompt_a",
268
+ "description": "First judge prompt variant (Template A). Full prompt string including the text being evaluated.",
269
+ "dataType": "sc:Text",
270
+ "source": {
271
+ "fileObject": { "@id": "coherence-jsonl" },
272
+ "extract": { "column": "prompt_a" }
273
+ }
274
+ },
275
+ {
276
+ "@type": "cr:Field",
277
+ "@id": "coherence-records/prompt_b",
278
+ "name": "prompt_b",
279
+ "description": "Second judge prompt variant (Template B). Semantically equivalent to prompt_a, differently phrased.",
280
+ "dataType": "sc:Text",
281
+ "source": {
282
+ "fileObject": { "@id": "coherence-jsonl" },
283
+ "extract": { "column": "prompt_b" }
284
+ }
285
+ },
286
+ {
287
+ "@type": "cr:Field",
288
+ "@id": "coherence-records/response_being_judged",
289
+ "name": "response_being_judged",
290
+ "description": "The text whose coherence is being rated, drawn from SummEval.",
291
+ "dataType": "sc:Text",
292
+ "source": {
293
+ "fileObject": { "@id": "coherence-jsonl" },
294
+ "extract": { "column": "response_being_judged" }
295
+ }
296
+ },
297
+ {
298
+ "@type": "cr:Field",
299
+ "@id": "coherence-records/ground_truth_label",
300
+ "name": "ground_truth_label",
301
+ "description": "Reference coherence rating from SummEval. Values: score_1 through score_5.",
302
+ "dataType": "sc:Text",
303
+ "source": {
304
+ "fileObject": { "@id": "coherence-jsonl" },
305
+ "extract": { "column": "ground_truth_label" }
306
+ }
307
+ },
308
+ {
309
+ "@type": "cr:Field",
310
+ "@id": "coherence-records/semantic_equivalence_score",
311
+ "name": "semantic_equivalence_score",
312
+ "description": "Human-validated semantic equivalence score. 1.0 for all 125 coherence pairs.",
313
+ "dataType": "sc:Float",
314
+ "source": {
315
+ "fileObject": { "@id": "coherence-jsonl" },
316
+ "extract": { "column": "semantic_equivalence_score" }
317
+ }
318
+ }
319
+ ]
320
+ },
321
+ {
322
+ "@type": "cr:RecordSet",
323
+ "@id": "preference-records",
324
+ "name": "Preference prompt-paraphrase pairs",
325
+ "description": "125 records for the pairwise preference task. Source: MT-Bench. Label space: A, B.",
326
+ "source": {
327
+ "fileObject": { "@id": "preference-jsonl" }
328
+ },
329
+ "field": [
330
+ {
331
+ "@type": "cr:Field",
332
+ "@id": "preference-records/pair_id",
333
+ "name": "pair_id",
334
+ "description": "Unique pair identifier, e.g. pref_001 through pref_125.",
335
+ "dataType": "sc:Text",
336
+ "source": {
337
+ "fileObject": { "@id": "preference-jsonl" },
338
+ "extract": { "column": "pair_id" }
339
+ }
340
+ },
341
+ {
342
+ "@type": "cr:Field",
343
+ "@id": "preference-records/task_type",
344
+ "name": "task_type",
345
+ "description": "Always 'preference' for records in this file.",
346
+ "dataType": "sc:Text",
347
+ "source": {
348
+ "fileObject": { "@id": "preference-jsonl" },
349
+ "extract": { "column": "task_type" }
350
+ }
351
+ },
352
+ {
353
+ "@type": "cr:Field",
354
+ "@id": "preference-records/source_benchmark",
355
+ "name": "source_benchmark",
356
+ "description": "Always 'MT-Bench' for records in this file.",
357
+ "dataType": "sc:Text",
358
+ "source": {
359
+ "fileObject": { "@id": "preference-jsonl" },
360
+ "extract": { "column": "source_benchmark" }
361
+ }
362
+ },
363
+ {
364
+ "@type": "cr:Field",
365
+ "@id": "preference-records/prompt_a",
366
+ "name": "prompt_a",
367
+ "description": "First judge prompt variant (Template A). Full pairwise preference prompt.",
368
+ "dataType": "sc:Text",
369
+ "source": {
370
+ "fileObject": { "@id": "preference-jsonl" },
371
+ "extract": { "column": "prompt_a" }
372
+ }
373
+ },
374
+ {
375
+ "@type": "cr:Field",
376
+ "@id": "preference-records/prompt_b",
377
+ "name": "prompt_b",
378
+ "description": "Second judge prompt variant (Template B). Semantically equivalent to prompt_a.",
379
+ "dataType": "sc:Text",
380
+ "source": {
381
+ "fileObject": { "@id": "preference-jsonl" },
382
+ "extract": { "column": "prompt_b" }
383
+ }
384
+ },
385
+ {
386
+ "@type": "cr:Field",
387
+ "@id": "preference-records/response_being_judged",
388
+ "name": "response_being_judged",
389
+ "description": "Two candidate responses (A and B) separated by ' | ', drawn from MT-Bench.",
390
+ "dataType": "sc:Text",
391
+ "source": {
392
+ "fileObject": { "@id": "preference-jsonl" },
393
+ "extract": { "column": "response_being_judged" }
394
+ }
395
+ },
396
+ {
397
+ "@type": "cr:Field",
398
+ "@id": "preference-records/ground_truth_label",
399
+ "name": "ground_truth_label",
400
+ "description": "Reference preferred response from MT-Bench. Values: A or B.",
401
+ "dataType": "sc:Text",
402
+ "source": {
403
+ "fileObject": { "@id": "preference-jsonl" },
404
+ "extract": { "column": "ground_truth_label" }
405
+ }
406
+ },
407
+ {
408
+ "@type": "cr:Field",
409
+ "@id": "preference-records/semantic_equivalence_score",
410
+ "name": "semantic_equivalence_score",
411
+ "description": "Human-validated semantic equivalence score. 1.0 for all 125 preference pairs.",
412
+ "dataType": "sc:Float",
413
+ "source": {
414
+ "fileObject": { "@id": "preference-jsonl" },
415
+ "extract": { "column": "semantic_equivalence_score" }
416
+ }
417
+ }
418
+ ]
419
+ },
420
+ {
421
+ "@type": "cr:RecordSet",
422
+ "@id": "relevance-records",
423
+ "name": "Relevance prompt-paraphrase pairs",
424
+ "description": "125 records for the pairwise relevance task. Source: BEIR. Label space: A, B.",
425
+ "source": {
426
+ "fileObject": { "@id": "relevance-jsonl" }
427
+ },
428
+ "field": [
429
+ {
430
+ "@type": "cr:Field",
431
+ "@id": "relevance-records/pair_id",
432
+ "name": "pair_id",
433
+ "description": "Unique pair identifier, e.g. relv_001 through relv_125.",
434
+ "dataType": "sc:Text",
435
+ "source": {
436
+ "fileObject": { "@id": "relevance-jsonl" },
437
+ "extract": { "column": "pair_id" }
438
+ }
439
+ },
440
+ {
441
+ "@type": "cr:Field",
442
+ "@id": "relevance-records/task_type",
443
+ "name": "task_type",
444
+ "description": "Always 'relevance' for records in this file.",
445
+ "dataType": "sc:Text",
446
+ "source": {
447
+ "fileObject": { "@id": "relevance-jsonl" },
448
+ "extract": { "column": "task_type" }
449
+ }
450
+ },
451
+ {
452
+ "@type": "cr:Field",
453
+ "@id": "relevance-records/source_benchmark",
454
+ "name": "source_benchmark",
455
+ "description": "Always 'BEIR' for records in this file.",
456
+ "dataType": "sc:Text",
457
+ "source": {
458
+ "fileObject": { "@id": "relevance-jsonl" },
459
+ "extract": { "column": "source_benchmark" }
460
+ }
461
+ },
462
+ {
463
+ "@type": "cr:Field",
464
+ "@id": "relevance-records/prompt_a",
465
+ "name": "prompt_a",
466
+ "description": "First judge prompt variant (Template A). Full pairwise relevance prompt.",
467
+ "dataType": "sc:Text",
468
+ "source": {
469
+ "fileObject": { "@id": "relevance-jsonl" },
470
+ "extract": { "column": "prompt_a" }
471
+ }
472
+ },
473
+ {
474
+ "@type": "cr:Field",
475
+ "@id": "relevance-records/prompt_b",
476
+ "name": "prompt_b",
477
+ "description": "Second judge prompt variant (Template B). Semantically equivalent to prompt_a.",
478
+ "dataType": "sc:Text",
479
+ "source": {
480
+ "fileObject": { "@id": "relevance-jsonl" },
481
+ "extract": { "column": "prompt_b" }
482
+ }
483
+ },
484
+ {
485
+ "@type": "cr:Field",
486
+ "@id": "relevance-records/response_being_judged",
487
+ "name": "response_being_judged",
488
+ "description": "Two candidate documents (A and B) separated by ' | ', drawn from BEIR.",
489
+ "dataType": "sc:Text",
490
+ "source": {
491
+ "fileObject": { "@id": "relevance-jsonl" },
492
+ "extract": { "column": "response_being_judged" }
493
+ }
494
+ },
495
+ {
496
+ "@type": "cr:Field",
497
+ "@id": "relevance-records/ground_truth_label",
498
+ "name": "ground_truth_label",
499
+ "description": "Reference more-relevant document from BEIR. Values: A or B.",
500
+ "dataType": "sc:Text",
501
+ "source": {
502
+ "fileObject": { "@id": "relevance-jsonl" },
503
+ "extract": { "column": "ground_truth_label" }
504
+ }
505
+ },
506
+ {
507
+ "@type": "cr:Field",
508
+ "@id": "relevance-records/semantic_equivalence_score",
509
+ "name": "semantic_equivalence_score",
510
+ "description": "Human-validated semantic equivalence score. 1.0 for all 125 relevance pairs.",
511
+ "dataType": "sc:Float",
512
+ "source": {
513
+ "fileObject": { "@id": "relevance-jsonl" },
514
+ "extract": { "column": "semantic_equivalence_score" }
515
+ }
516
+ }
517
+ ]
518
+ }
519
+ ],
520
+
521
+ "prov:wasDerivedFrom": [
522
+ {
523
+ "@type": "sc:Dataset",
524
+ "name": "TruthfulQA",
525
+ "url": "https://github.com/sylinrl/TruthfulQA",
526
+ "description": "Source of the 125 responses used in the factuality task."
527
+ },
528
+ {
529
+ "@type": "sc:Dataset",
530
+ "name": "SummEval",
531
+ "url": "https://github.com/Yale-LILY/SummEval",
532
+ "description": "Source of the 125 texts used in the coherence task."
533
+ },
534
+ {
535
+ "@type": "sc:Dataset",
536
+ "name": "MT-Bench",
537
+ "url": "https://github.com/lm-sys/FastChat",
538
+ "description": "Source of the 125 response pairs used in the preference task."
539
+ },
540
+ {
541
+ "@type": "sc:Dataset",
542
+ "name": "BEIR",
543
+ "url": "https://github.com/beir-cellar/beir",
544
+ "description": "Source of the 125 document pairs used in the relevance task."
545
+ }
546
+ ],
547
+
548
+ "prov:wasGeneratedBy": {
549
+ "@type": "prov:Activity",
550
+ "name": "JudgeSense Benchmark Construction",
551
+ "description": "Prompt-paraphrase pairs were constructed by applying five manually written judge-prompt templates (T1-T5) to each item drawn from four source benchmarks. Semantic equivalence was validated by human annotation followed by a GPT-4o-mini cross-check on a 10% random sample. The Template-4 polarity inversion in the factuality task was identified post-hoc and is addressed via label remapping in the evaluation code."
552
+ },
553
+
554
+ "rai:hasSyntheticData": false,
555
+
556
+ "rai:dataCollection": "Prompt-paraphrase pairs were constructed by the authors by applying five manually written judge-prompt templates (T1-T5) to each item drawn from four public benchmarks: TruthfulQA (factuality), SummEval (coherence), MT-Bench (preference), and BEIR (relevance). Each pair consists of two templates applied to the same response, forming a semantically equivalent judgment request with different surface phrasing. No new human subjects were recruited and no surveys or interviews were conducted. The responses being judged are verbatim items from the source benchmarks and were not modified.",
557
+
558
+ "rai:dataCollectionType": "Manually created / Benchmark construction from existing public datasets",
559
+
560
+ "rai:dataCollectionMissingData": "Six factuality items were excluded during curation due to ambiguous or contested ground-truth labels in TruthfulQA. The 50 Template-4 (T4) factuality pairs involving polarity inversion are retained and flagged; the evaluation code applies label remapping before computing JSS rather than excluding them. No other data was intentionally withheld.",
561
+
562
+ "rai:dataPreprocessingProtocol": "Source benchmark items were selected to provide a representative spread of difficulty levels and label classes. No text normalization, tokenization, or filtering was applied to the source responses. Five judge-prompt templates per task were written by the authors to systematically vary phrasing, instruction style, and label wording while preserving evaluation intent. Template 4 for the factuality task was identified post-hoc as polarity-inverted; corrected label mapping is implemented in utils/compute_jss.py.",
563
+
564
+ "rai:dataAnnotationProtocol": "Semantic equivalence validation was performed by a single human annotator who independently reviewed all 450 published prompt pairs. For each pair, the annotator judged whether the two prompt variants convey the same evaluation intent. Annotation options were YES (semantically equivalent), NO (not equivalent), and UNSURE. Validation was conducted in a single pass without adjudication rounds.",
565
+
566
+ "rai:dataAnnotationPlatform": "In-house manual annotation; no crowdsourcing platform was used. A supplementary automated pass used GPT-4o-mini (OpenAI Chat Completions API, temperature=0) as a semantic-equivalence classifier to cross-check human judgments on a random 10% subset of pairs.",
567
+
568
+ "rai:dataAnnotationAnalysis": "Single annotator; inter-annotator agreement is not applicable. Outcome: all 450 published pairs marked YES (semantically equivalent). T4 polarity-inverted factuality pairs), 0 UNSURE. Automated GPT-4o-mini cross-check agreed with the human annotation on 100% of the reviewed 50-pair subset.",
569
+
570
+ "rai:dataSocialImpact": "JudgeSense is a diagnostic benchmark for auditing LLM evaluation pipelines. It does not contain personal data, demographic information, or user-generated content from real individuals. The primary societal benefit is improving transparency in automated NLP evaluation: LLM judges are increasingly used as proxies for human evaluation, and undetected prompt sensitivity can silently bias research conclusions, model rankings, and deployment decisions. No harmful, offensive, or dual-use content is present in the dataset.",
571
+
572
+ "rai:dataBiases": "1. English-only: all prompts and responses are in English; findings may not generalize to multilingual judge settings. 2. Template coverage: only 5 paraphrase templates per task; other phrasing variations may produce different sensitivity profiles. 3. Single-annotator equivalence validation: no inter-annotator reliability measure is reported. 4. Source benchmark bias: items drawn from TruthfulQA, SummEval, MT-Bench, and BEIR; task difficulty distributions reflect those benchmarks. 5. Template-4 polarity-inversion artifact (factuality): uncorrected analyses will overestimate flip rates. 6. Position bias in pairwise tasks: 12 of 13 tested judges systematically select option A in preference and relevance tasks.",
573
+
574
+ "rai:dataUseCases": "Primary intended use: auditing LLM judges for prompt sensitivity using the Judge Sensitivity Score (JSS) metric. Secondary uses: prompt engineering research; meta-evaluation to detect prompt-induced artifacts in automated evaluation pipelines; comparative benchmarking of LLM judge models on decision consistency. Out-of-scope uses: training or fine-tuning LLMs; evaluating factual knowledge; leaderboard competition (no held-out test split).",
575
+
576
+ "rai:dataLimitations": "1. Single human annotator for equivalence validation; no inter-annotator reliability metric is available. 2. Only 5 prompt templates per task; broader coverage may reveal additional sensitivity patterns. 3. English-only. 4. Pairwise sensitivity only: each record compares exactly two prompt variants. 5. Source responses are from academic benchmarks and may not reflect real-world LLM output distributions. 6. The T4 polarity-inversion artifact requires evaluation-code correction; naive application without remapping will overestimate factuality flip rates. 7. Position bias renders preference and relevance JSS values degenerate for most tested models.",
577
+
578
+ "rai:dataSensitiveElement": "None. The dataset contains no personal identifiable information (PII), no demographic data, no health or financial data, no user-generated content from identifiable real individuals, and no content that could identify specific persons.",
579
+
580
+ "rai:personalSensitiveInformation": "None. No gender, health, socioeconomic, geographic, linguistic, age, cultural, political, or religious information about individuals is present in the dataset.",
581
+
582
+ "rai:annotationsPerItem": "1",
583
+
584
+ "rai:annotatorDemographics": "Single annotator who is an NLP researcher with domain expertise in LLM evaluation and benchmark design. No additional demographic information was collected, consistent with the single-annotator in-house design and the absence of a human-subjects research protocol.",
585
+
586
+ "rai:machineAnnotationTools": "GPT-4o-mini (OpenAI, model gpt-4o-mini) used as a supplementary semantic-equivalence classifier to cross-check human annotations on a 10% random sample (50 pairs). Queried via the OpenAI Chat Completions API at temperature=0. The primary annotation is human; the automated pass is supplementary validation only."
587
+ }
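
The metadata above repeatedly defers to the label remapping implemented in `utils/compute_jss.py` without showing it. The sketch below illustrates one way that correction could look; the `t4_pair_ids` argument and the YES/NO inversion convention are illustrative assumptions, not the repository's actual API:

```python
def remap_t4(decision: str) -> str:
    # Assumed convention: T4 asks the inverted question, so its YES/NO
    # verdict must be flipped before comparison with the other variant.
    return {"YES": "NO", "NO": "YES"}[decision.strip().upper()]

def compute_jss(decisions_a, decisions_b, pair_ids=None, t4_pair_ids=()):
    """JSS = fraction of pairs where both prompt variants yield the same decision.

    Decisions from polarity-inverted Template-4 pairs are remapped first,
    per the dataset notes; everything else is a direct equality check.
    """
    t4 = set(t4_pair_ids)
    pair_ids = pair_ids if pair_ids is not None else [None] * len(decisions_a)
    matches = 0
    for pid, a, b in zip(pair_ids, decisions_a, decisions_b):
        if pid in t4:
            b = remap_t4(b)
        matches += int(a == b)
    return matches / len(decisions_a)
```

Without the remapping step, every correctly inverted T4 answer registers as a flip, overestimating the factuality flip rate exactly as the rai:dataLimitations entry warns.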