# Cross-Query Contradiction Benchmark

**Quantifying Cross-Query Contradictions in Multi-Query LLM Reasoning**

Accepted at the ICLR 2026 Workshop on Logical Reasoning of Large Language Models

Rohit Kumar Salla (Virginia Tech), Ramya Manasa Amancherla (Columbia University), Manoj Saravanan (Virginia Tech)

## Overview
Large language models frequently produce mutually inconsistent answers when reasoning over multiple related queries derived from the same premises. This benchmark evaluates case-file logical consistency: whether a model can maintain a globally satisfiable belief state across a bundle of interdependent queries.
The benchmark contains 390 case files across four reasoning domains, with 2,515 queries organized into bundles of 5–8 interdependent questions. Every label is machine-verified via Z3/CaDiCaL solvers, and each case includes formal representations and cross-query dependency annotations.
## Key Features

- Z3-verified labels: All ENTAILED / CONTRADICTED / UNKNOWN labels are verified by SAT/SMT solvers, not just human annotation
- Cross-query dependency annotations: Each query is annotated with which other queries in its bundle it logically interacts with
- Formal representations: SMT-LIB and CNF formal encodings included for every case file
- 82.4% unique queries: Minimal duplication across bundles (vs. typical synthetic benchmarks)
- Four reasoning domains: Relational (SAT), Temporal (SMT), Policy/Rules, and Underspecified/Abductive
- Detailed reasoning traces: Average 95 characters per query (not trivial one-liners)
## Benchmark Statistics

| Domain | Cases | Bundles | Queries | Logic Fragment |
|---|---|---|---|---|
| Relational (SAT) | 120 | 120 | 787 | Propositional (seating, assignment, coloring) |
| Temporal (SMT) | 100 | 100 | 637 | Linear arithmetic (scheduling, ordering) |
| Policy / Rules | 80 | 80 | 506 | Ground first-order (access control, eligibility) |
| Underspecified | 90 | 90 | 585 | Partial information (diagnosis, investigation, fault) |
| Total | 390 | 390 | 2,515 | |
### Label Distribution
| Domain | ENTAILED | CONTRADICTED | UNKNOWN |
|---|---|---|---|
| Relational | 16% | 30% | 54% |
| Temporal | 11% | 32% | 57% |
| Policy | 6% | 32% | 62% |
| Underspecified | 18% | 36% | 46% |
| Overall | 13% | 32% | 55% |
## Splits
| Split | Cases | Bundles |
|---|---|---|
| Train | 312 | 312 |
| Dev | 39 | 39 |
| Test | 39 | 39 |
Splits are stratified by domain at the case-file level to prevent leakage.
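The released splits should be used as shipped, but the stratification scheme can be sketched as follows; `make_stratified_split` and its fraction arguments are illustrative helpers, not part of the benchmark tooling:

```python
import random
from collections import defaultdict

def make_stratified_split(cases, fractions=(0.8, 0.1, 0.1), seed=0):
    """Illustrative re-implementation of domain-stratified splitting.

    Cases are grouped by their `domain` field and split within each group,
    so every split preserves the domain mix, and no case file (hence no
    bundle) straddles two splits.
    """
    by_domain = defaultdict(list)
    for case in cases:
        by_domain[case["domain"]].append(case)

    rng = random.Random(seed)
    train, dev, test = [], [], []
    for group in by_domain.values():
        rng.shuffle(group)
        n_train = int(fractions[0] * len(group))
        n_dev = int(fractions[1] * len(group))
        train += group[:n_train]
        dev += group[n_train:n_train + n_dev]
        test += group[n_train + n_dev:]
    return train, dev, test
```

Splitting at the case-file level (rather than the bundle level) is what prevents leakage: all bundles of a case share premises, so they must land in the same split.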
## File Structure

```
cross_query_benchmark/
├── benchmark_full_v2.json       # Complete benchmark (all cases + metadata + statistics)
├── benchmark_train_v2.json      # Training split (312 cases)
├── benchmark_dev_v2.json        # Development split (39 cases)
├── benchmark_test_v2.json       # Test split (39 cases)
├── evaluation_schema_v2.json    # Metric definitions
├── prompt_templates_v2.json     # Prompt templates for extraction & repair
└── README.md
```
## Data Format

### Case File Structure

Each case file in the JSON has the following schema:
```json
{
  "id": "rel_seating_0042",
  "domain": "relational",
  "subdomain": "seating_arrangement",
  "logic_type": "propositional",
  "premises": [
    "There are 5 people seated at a circular table: Alice, Bob, Charlie, Diana, and Eve.",
    "Each person occupies exactly one seat and all seats are distinct.",
    "Alice must sit directly next to Bob.",
    "Charlie must not sit directly next to Diana."
  ],
  "formal_representation": "; Seating: 5 people, circular\n; Variables: pos_X in [0, 4] ...",
  "entities": ["Alice", "Bob", "Charlie", "Diana", "Eve"],
  "bundles": [
    {
      "bundle_id": "bundle_0",
      "num_queries": 6,
      "label_distribution": {"ENTAILED": 2, "CONTRADICTED": 2, "UNKNOWN": 2},
      "queries": [
        {
          "query_id": "q0",
          "query": "Must Alice always be seated next to Bob?",
          "label": "ENTAILED",
          "reasoning": "The premises explicitly require Alice to sit next to Bob, so adjacency is guaranteed in every valid arrangement.",
          "query_type": "adj_forced",
          "depends_on": []
        },
        {
          "query_id": "q1",
          "query": "Can Charlie sit at position 3?",
          "label": "UNKNOWN",
          "reasoning": "Position 3 is available to Charlie in some arrangements but not others, depending on how other constraints resolve.",
          "query_type": "pos",
          "depends_on": [0]
        }
      ]
    }
  ]
}
```
### Field Descriptions

| Field | Description |
|---|---|
| `id` | Unique case identifier (`{domain}_{subdomain}_{index}`) |
| `domain` | One of: `relational`, `temporal`, `policy`, `underspecified` |
| `subdomain` | Specific scenario type (e.g., `seating_arrangement`, `event_scheduling`, `access_control`, `diagnostic`) |
| `logic_type` | Formal logic fragment: `propositional`, `linear_arithmetic`, `first_order_ground`, `partial_information` |
| `premises` | List of natural-language premise strings |
| `formal_representation` | SMT-LIB or CNF-style formal encoding of the case file |
| `entities` | Named entities in the case file |
| `bundles[].queries[].label` | Gold label: `ENTAILED`, `CONTRADICTED`, or `UNKNOWN` |
| `bundles[].queries[].reasoning` | Reasoning trace explaining the label (avg. 95 chars) |
| `bundles[].queries[].depends_on` | Indices of earlier queries in the bundle that this query logically interacts with |
| `bundles[].queries[].query_type` | Query category (e.g., `adj`, `pos`, `overlap`, `order`, `eligible`, `is_guilty`) |
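As a sketch of how the `depends_on` annotations can be consumed, the helper below (hypothetical, not shipped with the benchmark) turns a bundle into a list of dependency edges between query indices:

```python
def bundle_dependency_edges(bundle):
    """Return (src, dst) pairs where query `dst` depends on earlier query `src`.

    Indices in `depends_on` refer to positions within the same bundle's
    `queries` list, so edges always point from an earlier query forward.
    """
    edges = []
    for dst, query in enumerate(bundle["queries"]):
        for src in query["depends_on"]:
            edges.append((src, dst))
    return edges

# Using the sample bundle from the schema above:
bundle = {
    "queries": [
        {"query_id": "q0", "depends_on": []},
        {"query_id": "q1", "depends_on": [0]},
    ]
}
print(bundle_dependency_edges(bundle))  # [(0, 1)]
```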
## Evaluation Metrics

### Per-Query Metrics
| Metric | Description |
|---|---|
| Accuracy | Fraction of correct labels |
| Macro-F1 | F1 averaged across ENTAILED, CONTRADICTED, UNKNOWN |
| Unknown-F1 | F1 for the UNKNOWN class specifically |
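The per-query metrics need no external dependencies; a minimal sketch of Macro-F1 (the function name is ours, not part of `evaluation_schema_v2.json`):

```python
LABELS = ("ENTAILED", "CONTRADICTED", "UNKNOWN")

def macro_f1(gold, pred):
    """Unweighted mean of per-class F1 over the three label classes."""
    f1_scores = []
    for label in LABELS:
        tp = sum(g == label and p == label for g, p in zip(gold, pred))
        fp = sum(g != label and p == label for g, p in zip(gold, pred))
        fn = sum(g == label and p != label for g, p in zip(gold, pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        denom = precision + recall
        f1_scores.append(2 * precision * recall / denom if denom else 0.0)
    return sum(f1_scores) / len(LABELS)
```

Restricting `LABELS` to `("UNKNOWN",)` gives the Unknown-F1 variant.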
### Set-Level Metrics
| Metric | Description | Direction |
|---|---|---|
| SetConsRate | Fraction of bundles where the final belief state is satisfiable | ↑ higher is better |
| AUC-PrefixCons | Average prefix satisfiability in sequential settings | ↑ higher is better |
| RevisionCost | Average commitments revised to restore satisfiability | ↓ lower is better |
| ContradictionDensity | Fraction of steps where a new contradiction emerges | ↓ lower is better |
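The set-level metrics only require a satisfiability oracle over a bundle's accumulated commitments. A sketch with a pluggable `is_satisfiable` callback (e.g. a Z3 wrapper); both function names are ours, not from the evaluation schema:

```python
def set_cons_rate(bundle_commitments, is_satisfiable):
    """SetConsRate: fraction of bundles whose final commitment set is satisfiable."""
    sat = sum(1 for commitments in bundle_commitments if is_satisfiable(commitments))
    return sat / len(bundle_commitments)

def prefix_consistency(commitments, is_satisfiable):
    """Fraction of prefixes of one bundle that remain satisfiable; averaging
    this over bundles yields AUC-PrefixCons in the sequential setting."""
    flags = [is_satisfiable(commitments[: i + 1]) for i in range(len(commitments))]
    return sum(flags) / len(flags)

# Toy oracle: a commitment set becomes "unsatisfiable" once it contains "conflict".
toy_sat = lambda cs: "conflict" not in cs
print(set_cons_rate([["a", "b"], ["a", "conflict"]], toy_sat))   # 0.5
print(prefix_consistency(["a", "b", "conflict", "d"], toy_sat))  # 0.5
```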
### Compute Metrics
| Metric | Description |
|---|---|
| Overhead (OH) | Wall-clock time normalized to No-CoT baseline |
| Solver calls/bundle | Average SAT/SMT solver invocations per bundle |
## Quick Start

### Loading the Data
```python
import json

# Load full benchmark
with open("benchmark_full_v2.json") as f:
    benchmark = json.load(f)

print(f"Cases: {benchmark['statistics']['total_cases']}")
print(f"Queries: {benchmark['statistics']['total_queries']}")

# Iterate over cases
for case in benchmark["cases"]:
    print(f"{case['id']}: {case['domain']} / {case['subdomain']}")
    for bundle in case["bundles"]:
        for query in bundle["queries"]:
            print(f"  Q: {query['query']}")
            print(f"  A: {query['label']} | Deps: {query['depends_on']}")
```
### Loading Splits
```python
# Load train/dev/test
for split in ["train", "dev", "test"]:
    with open(f"benchmark_{split}_v2.json") as f:
        data = json.load(f)
    print(f"{split}: {data['num_cases']} cases, {data['num_bundles']} bundles")
```
### Evaluating a Model
```python
def evaluate_bundle(predictions, gold_labels):
    """
    predictions: list of predicted labels (ENTAILED/CONTRADICTED/UNKNOWN)
    gold_labels: list of gold labels
    """
    correct = sum(p == g for p, g in zip(predictions, gold_labels))
    accuracy = correct / len(gold_labels)
    return accuracy

# Per-query evaluation
for case in benchmark["cases"]:
    for bundle in case["bundles"]:
        gold = [q["label"] for q in bundle["queries"]]
        # your_predictions = model.predict(case["premises"], [q["query"] for q in bundle["queries"]])
        # acc = evaluate_bundle(your_predictions, gold)
```
### Using Formal Representations
```python
# Access the formal (SMT-LIB / CNF) encoding for solver-based evaluation
for case in benchmark["cases"]:
    if case["domain"] == "temporal":
        print(case["formal_representation"])
        # Parse and feed to Z3 or CaDiCaL
        break
```
## Prompt Templates

The repository includes prompt templates (`prompt_templates_v2.json`) for three tasks:
- Multi-query reasoning: Main evaluation prompt for LLMs to classify queries
- Commitment extraction: Extracts symbolic commitments (atoms/constraints) from model answers
- Repair prompting: Guides the LLM to propose minimal repairs for conflicting commitments
## Domain Details

### Relational (SAT) – 120 cases
Constraint satisfaction over discrete structures: circular/linear seating arrangements, office/team/color/shift assignments. Constraints include adjacency, non-adjacency, forced assignments, and bans. Verified via propositional SAT (CaDiCaL).
### Temporal (SMT) – 100 cases
Event scheduling with ordering, duration, gap, overlap, and fixed-time constraints over bounded integer timelines. Also includes pure ordering problems with precedence and adjacency constraints. Verified via Z3 with QF_LIA (linear integer arithmetic).
### Policy / Rules – 80 cases
Role-based access control (grant/deny/conditional rules over roles and resources) and eligibility checking (age/income/experience thresholds for programs). Verified via Z3 with Boolean + integer constraints.
### Underspecified / Abductive – 90 cases
Scenarios with deliberately incomplete information: medical diagnosis (partial symptom observations, exactly-one-condition), criminal investigation (partial evidence, exactly-one-guilty), and system fault diagnosis (partial sensor data, dependency cascades). Designed to produce high UNKNOWN rates by construction.
## Generation and Verification
All case files are procedurally generated with Z3 solver verification. The generation pipeline:
- Constraint generation: Random but structurally valid constraint systems per domain
- Z3 verification: Every query label is determined by checking entailment (true in ALL models), contradiction (false in ALL models), or unknown (true in some, false in others)
- Quality filtering: Cases with mono-label bundles or extreme label imbalance are filtered
- Dependency annotation: Cross-query entity sharing is detected and annotated
- Deduplication: Query uniqueness is enforced within and across bundles (82.4% unique)
Generator scripts are available in the repository.
## Paper Results (Summary)
Our solver-augmented belief-state tracking method achieves:
| Method | Acc | SetCons | RevCost | Overhead |
|---|---|---|---|---|
| No-CoT (baseline) | 0.80 | 0.56 | 1.9 | 1.00× |
| CoT | 0.84 | 0.60 | 1.7 | 1.16× |
| Self-Consistency (K=20) | 0.86 | 0.63 | 1.6 | 2.75× |
| Ours (Check+Repair) | 0.85 | 0.94 | 0.3 | 1.55× |
Results shown for DeepSeek-R1 on the deep evaluation subset. See paper for full results across 8 models.
## Citation

```bibtex
@inproceedings{salla2026crossquery,
  title={Quantifying Cross-Query Contradictions in Multi-Query {LLM} Reasoning},
  author={Salla, Rohit Kumar and Amancherla, Ramya Manasa and Saravanan, Manoj},
  booktitle={ICLR 2026 Workshop on Logical Reasoning of Large Language Models},
  year={2026}
}
```
## License
This benchmark is released under the MIT License.
## Contact

- Rohit Kumar Salla – rohits25@vt.edu
- Ramya Manasa Amancherla – ra3439@columbia.edu
- Manoj Saravanan – manoj663@vt.edu