
GeoVistaBench

GeoVistaBench is the first benchmark designed to evaluate the general geolocalization ability of agentic models.

GeoVistaBench is a collection of real-world photos with rich metadata for evaluating geolocation models. Each sample corresponds to one picture identified by its uid and includes both the original high-resolution imagery and a lightweight preview for rapid inspection.

Dataset Structure

  • raw_image_path: relative path (within this repo) to the source picture under raw_image/<uid>/.
  • id: unique identifier for the sample.
  • prompt: textual user query.
  • preview: compressed JPEG preview (<=1M pixels) under preview_image/<uid>/, used by the Hugging Face Dataset Viewer.
  • metadata: structured information that downstream users can parse to obtain lat/lng, city names, multi-level location tags, and related details.
  • data_type: string describing the imagery type.

All samples are stored in a Hugging Face-compatible Parquet file.
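The metadata field's exact encoding is not specified above; a minimal sketch of parsing it, assuming it is stored as a JSON string (the field names and values below are hypothetical, illustrating the lat/lng, city, and multi-level location tags the card mentions):

```python
import json

# Hypothetical metadata value -- the real schema may differ; the card
# only states that lat/lng, city names, and multi-level location tags
# are recoverable from this field.
raw_metadata = json.dumps({
    "lat": 48.8584,
    "lng": 2.2945,
    "city": "Paris",
    "location_tags": ["Europe", "France", "Ile-de-France", "Paris"],
})

# Parse the JSON string back into a dict and pull out the coordinates.
meta = json.loads(raw_metadata)
lat, lng = meta["lat"], meta["lng"]
print(lat, lng, meta["city"])
```

If the field is instead stored as a nested struct in the Parquet file, the `json.loads` step can be skipped and the dict accessed directly.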

Working with GeoVistaBench

  1. Clone/download this folder (or pull it via huggingface_hub).
  2. Load the parquet file using Python:
    from datasets import load_dataset
    
    ds = load_dataset('path/to/this/folder', split='test')
    sample = ds[0]
    

`sample["raw_image_path"]` points to the higher-quality image for inference.
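Since raw_image_path is relative to the repository root, it needs to be joined with wherever the dataset was cloned. A minimal sketch, where the dataset root and the uid/filename in the sample dict are placeholder assumptions:

```python
import os

# Illustrative values: "path/to/this/folder" is the local clone of the
# dataset, and the uid/filename are hypothetical.
dataset_root = "path/to/this/folder"
sample = {"raw_image_path": "raw_image/0001/0001.jpg"}

# Resolve the repo-relative path against the local dataset root.
full_path = os.path.join(dataset_root, sample["raw_image_path"])
print(full_path)

# An image library such as Pillow could then open the file, e.g.:
# from PIL import Image
# img = Image.open(full_path)
```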


Citation

@misc{wang2025geovistawebaugmentedagenticvisual,
      title={GeoVista: Web-Augmented Agentic Visual Reasoning for Geolocalization}, 
      author={Yikun Wang and Zuyan Liu and Ziyi Wang and Han Hu and Pengfei Liu and Yongming Rao},
      year={2025},
      eprint={2511.15705},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2511.15705}, 
}