---
license: cc-by-nc-4.0
language:
- en
pretty_name: RSFaith-Bench
size_categories:
- 10K<n<100K
task_categories:
- visual-question-answering
tags:
- remote-sensing
- vision-language
- benchmark
- scene-graph
- multiple-choice
- geospatial
configs:
- config_name: default
  data_files:
  - split: benchmark
    path: metadata.jsonl
---
# RSFaith-Bench
RSFaith-Bench is a remote-sensing vision-language benchmark designed to evaluate grounded visual reasoning beyond surface-level object recognition. The benchmark covers perception, relational reasoning, and temporal reasoning over remote-sensing imagery. Each example is formulated as a multiple-choice question and is paired with a compact scene graph, supporting evidence, and an executable reasoning program.
The release contains 13,511 question-answer records, 16,288 referenced images, and 12,876 compact scene graphs.
## Using the Dataset
The annotation files can be loaded directly as JSON. The root-level `metadata.jsonl` provides a flat index over all records:
```python
from datasets import load_dataset

dataset = load_dataset(
    "json",
    data_files={"benchmark": "metadata.jsonl"},
    split="benchmark",
)
print(dataset[0])
```
Images and scene graphs are stored in per-subcategory archives. To restore the file layout referenced by the JSON records, extract the archives in place:
```shell
huggingface-cli download <namespace>/RSFaith-Bench \
  --repo-type dataset \
  --local-dir RSFaith-Bench

find RSFaith-Bench -name assets.tar.zst -print0 |
while IFS= read -r -d '' archive; do
  (cd "$(dirname "$archive")" && tar -I zstd -xf assets.tar.zst)
done
```
After extraction, the `images` and `scene_graph` fields in each subcategory JSON file resolve relative to that subcategory directory. The `image_t1`, `image_t2`, and `scene_graph` fields in `metadata.jsonl` resolve relative to the repository root.
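As a sketch of how these path conventions compose, the helper below resolves the root-relative fields of a `metadata.jsonl` record into absolute paths. Both the `resolve_record_paths` function and the inline record values are hypothetical illustrations, not part of the release.

```python
import os

def resolve_record_paths(record: dict, repo_root: str) -> dict:
    """Resolve path-valued fields of a metadata.jsonl record against the repo root.

    image_t1, image_t2, and scene_graph are documented as relative to the
    repository root; fields that are absent or empty are skipped.
    """
    path_fields = ("image_t1", "image_t2", "scene_graph")
    return {
        field: os.path.join(repo_root, record[field])
        for field in path_fields
        if record.get(field)
    }

# Hypothetical record values, for illustration only.
record = {
    "question_id": "q_000000",
    "image_t1": "Temporal reasoning/net_change/images/scene_t1.png",
    "scene_graph": "Temporal reasoning/net_change/scene_graphs/scene.json",
}
paths = resolve_record_paths(record, "/data/RSFaith-Bench")
print(paths)
```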
## File Organization
The dataset is organized by reasoning level and subcategory. Each subcategory directory contains:
- `<subcategory>.json`: question-answer records for the subcategory.
- `assets.tar.zst`: compressed `images/` and `scene_graphs/` directories.
```
RSFaith-Bench
├── README.md
├── metadata.jsonl
├── dataset_manifest.json
├── croissant.json
├── Perception
│   ├── object_presence
│   │   ├── object_presence.json
│   │   └── assets.tar.zst
│   ├── object_counting
│   │   ├── object_counting.json
│   │   └── assets.tar.zst
│   ├── fine_grained_recognition
│   │   ├── fine_grained_recognition.json
│   │   └── assets.tar.zst
│   └── object_localization
│       ├── object_localization.json
│       └── assets.tar.zst
├── Relational reasoning
│   ├── directional
│   │   ├── directional.json
│   │   └── assets.tar.zst
│   ├── topological
│   │   ├── topological.json
│   │   └── assets.tar.zst
│   ├── proximity
│   │   ├── proximity.json
│   │   └── assets.tar.zst
│   ├── projective_ordering
│   │   ├── projective_ordering.json
│   │   └── assets.tar.zst
│   └── aggregate_distribution
│       ├── aggregate_distribution.json
│       └── assets.tar.zst
└── Temporal reasoning
    ├── category_turnover
    │   ├── category_turnover.json
    │   └── assets.tar.zst
    ├── net_change
    │   ├── net_change.json
    │   └── assets.tar.zst
    └── semantic_transition
        ├── semantic_transition.json
        └── assets.tar.zst
```
## Data Fields
Each question-answer record contains the following fields:
- `question_id`: anonymized question identifier.
- `scene_id`: anonymized scene identifier.
- `level`: high-level reasoning category.
- `subcategory`: fine-grained reasoning category.
- `question`: natural-language question.
- `answer`: correct answer.
- `answer_type`: answer representation.
- `choices`: multiple-choice options.
- `images`: relative image paths.
- `scene_graph`: relative scene graph path.
- `support`: grounded support evidence.
- `program`: executable reasoning specification.
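To illustrate how the `level` and `subcategory` fields can be used to slice the benchmark, here is a minimal sketch that tallies records by reasoning level. The inline records are hypothetical and carry only the fields needed for the example.

```python
from collections import Counter

# Hypothetical records with the documented level/subcategory fields.
records = [
    {"level": "Perception", "subcategory": "object_counting"},
    {"level": "Perception", "subcategory": "object_presence"},
    {"level": "Temporal reasoning", "subcategory": "net_change"},
]

# Count records per high-level reasoning category.
by_level = Counter(r["level"] for r in records)
print(by_level)
```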
## Dataset Statistics
| Level | Subcategory | Records |
|---|---|---|
| Perception | object_presence | 969 |
| Perception | object_counting | 1,085 |
| Perception | fine_grained_recognition | 1,228 |
| Perception | object_localization | 1,298 |
| Relational reasoning | directional | 1,166 |
| Relational reasoning | topological | 921 |
| Relational reasoning | proximity | 987 |
| Relational reasoning | projective_ordering | 944 |
| Relational reasoning | aggregate_distribution | 866 |
| Temporal reasoning | category_turnover | 1,515 |
| Temporal reasoning | net_change | 1,273 |
| Temporal reasoning | semantic_transition | 1,259 |
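The per-subcategory counts in the table can be cross-checked against the headline record total. The snippet below is a consistency check over the published numbers, not part of the release:

```python
# Record counts per subcategory, as listed in the table above.
counts = {
    "Perception": {
        "object_presence": 969,
        "object_counting": 1085,
        "fine_grained_recognition": 1228,
        "object_localization": 1298,
    },
    "Relational reasoning": {
        "directional": 1166,
        "topological": 921,
        "proximity": 987,
        "projective_ordering": 944,
        "aggregate_distribution": 866,
    },
    "Temporal reasoning": {
        "category_turnover": 1515,
        "net_change": 1273,
        "semantic_transition": 1259,
    },
}

# Sum per level, then overall; the grand total matches the 13,511 records.
per_level = {level: sum(subs.values()) for level, subs in counts.items()}
total = sum(per_level.values())
print(per_level, total)
```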
## Dataset Construction
RSFaith-Bench is constructed from remote-sensing scenes represented as grounded scene graphs. The scene graphs encode objects, spatial relations, temporal changes, and compact global inventories when applicable. Question-answer pairs are generated from programmatic templates and then curated to balance reasoning categories, answer distributions, and scene coverage. The released records retain the reasoning support and program specification so that each answer can be traced back to the corresponding scene graph.
## Licensing

The dataset is released under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license, which permits use, distribution, and reproduction in any medium for non-commercial purposes, provided the original work is properly credited.
## Citation Information
If you use RSFaith-Bench in your research, please cite the accompanying paper:
```bibtex
@misc{rsfaithbench2026,
  title  = {RSFaith-Bench: When Correct Answers Come with Unfaithful Evidence in Remote Sensing MLLMs},
  author = {Anonymous},
  year   = {2026}
}
```
## Acknowledgement
RSFaith-Bench is built from remote-sensing data sources including DIOR, DOTA, FAIR1M, SECOND, xBD, and ReCon1M. We thank the creators and maintainers of these datasets for making their resources available to the research community.