# GeoVistaBench

GeoVistaBench is the first benchmark for evaluating the general geolocalization ability of agentic models. It is a collection of real-world photos with rich metadata for evaluating geolocation models. Each sample corresponds to one picture, identified by its `uid`, and includes both the original high-resolution image and a lightweight preview for rapid inspection.
## Dataset Structure
- `raw_image_path`: relative path (within this repo) to the source picture under `raw_image/<uid>/`.
- `id`: unique identifier.
- `prompt`: textual user query.
- `preview`: compressed JPEG preview (<=1M pixels) under `preview_image/<uid>/`; used by the Hugging Face Dataset Viewer.
- `metadata`: downstream users can parse it to obtain lat/lng, city names, multi-level location tags, and related information.
- `data_type`: string describing the imagery type.
All samples are stored in a Hugging Face-compatible parquet file.
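As a minimal sketch of how the `metadata` field might be parsed, assuming it is a JSON string (the exact schema and the field names `lat`, `lng`, and `city` below are illustrative assumptions, not guaranteed by the card):

```python
import json

# Hypothetical sample layout: field names follow the dataset card,
# but the metadata schema shown here is an assumption.
sample = {
    "id": "abc123",
    "raw_image_path": "raw_image/abc123/photo.jpg",
    "metadata": json.dumps({"lat": 48.8584, "lng": 2.2945, "city": "Paris"}),
}

def parse_location(sample):
    """Parse the metadata JSON string into (lat, lng, city)."""
    meta = json.loads(sample["metadata"])
    return meta["lat"], meta["lng"], meta.get("city")

lat, lng, city = parse_location(sample)
```

Inspect one real sample's `metadata` value first to confirm the actual key names before relying on a parser like this.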
## Working with GeoVistaBench
- Clone/download this folder (or pull it via `huggingface_hub`).
- Load the parquet file using Python:

```python
from datasets import load_dataset

ds = load_dataset('path/to/this/folder', split='test')
sample = ds[0]
```

`sample["raw_image_path"]` points to the higher-quality image for inference.
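Since `raw_image_path` is relative to the repository root, it needs to be joined with the local checkout location before opening the file. A minimal sketch, assuming the repo has been downloaded locally (the uid and filename below are placeholders):

```python
from pathlib import Path

def resolve_image(sample, repo_root):
    """Join the repo-relative raw_image_path onto the local checkout root."""
    return Path(repo_root) / sample["raw_image_path"]

# Placeholder paths for illustration only.
p = resolve_image(
    {"raw_image_path": "raw_image/abc123/img.jpg"},
    "path/to/this/folder",
)
```

The resolved path can then be passed to any image library (e.g. `PIL.Image.open`) for inference.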
## Related Resources
- GeoVista Technical Report: https://huggingface.co/papers/2511.15705
- GeoVista-Bench (previewable variant): a companion dataset with resized JPEG previews, intended to make image preview easier in the Hugging Face Dataset Viewer: https://huggingface.co/datasets/LibraTree/GeoVistaBench (same underlying benchmark; different packaging / image formats).
## Citation
```bibtex
@misc{wang2025geovistawebaugmentedagenticvisual,
  title={GeoVista: Web-Augmented Agentic Visual Reasoning for Geolocalization},
  author={Yikun Wang and Zuyan Liu and Ziyi Wang and Han Hu and Pengfei Liu and Yongming Rao},
  year={2025},
  eprint={2511.15705},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2511.15705},
}
```