coord_eval / README.md
Rodrigo Ferreira Rodrigues
---
title: Coord_eval
datasets:
  - GeoBenchmark
tags:
  - evaluate
  - metric
description: >-
  Coordinates accuracy for geographic coordinate prediction: checks whether
  generated (lat, long) pairs fall within a radius of the gold coordinates.
sdk: gradio
sdk_version: 6.5.1
app_file: app.py
pinned: false
---

# Metric Card for Coord_eval


## Metric Description

Coordinates Accuracy evaluates model performance on coordinate prediction tasks, where the model must predict the coordinates of a geographic entity as a (lat, long) pair. A generated coordinate counts as correct if it falls inside a circle of radius d centered at the gold coordinates.
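The card does not show how the radius check is implemented. A minimal sketch of one plausible implementation, assuming the radius `d_range` is interpreted as a great-circle (haversine) distance in kilometers and that generations are free-form "(lat, long)" strings (the helper names `haversine_km` and `coord_accuracy` are illustrative, not the module's actual internals):

```python
import math
import re

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points, in kilometers.
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def coord_accuracy(generations, golds, d_range=20):
    # Parse the first "(lat, long)" pair from each generated string and
    # count it as a hit if it lies within d_range km of the gold point.
    hits = 0
    for gen, gold in zip(generations, golds):
        m = re.search(r"(-?\d+(?:\.\d+)?)\s*,\s*(-?\d+(?:\.\d+)?)", gen)
        if m:
            lat, lon = float(m.group(1)), float(m.group(2))
            if haversine_km(lat, lon, gold[0], gold[1]) <= d_range:
                hits += 1
    return {"coord_accuracy": hits / len(generations)}
```

On the card's own example, `coord_accuracy(["(12.7, 67.8)", "(16.7, 89.6)"], [[12.7, 67.8], [10.9, 80.6]], 20)` yields `{'coord_accuracy': 0.5}`: the first prediction matches its gold point exactly, while the second is several hundred kilometers away.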

## How to Use

This metric takes two mandatory arguments: `generations` (a list of strings of generated coordinates) and `golds` (a list of lists of floats of gold coordinates).

```python
import evaluate

coord_acc = evaluate.load("rfr2003/coord_eval")
results = coord_acc.compute(
    generations=["(12.7, 67.8)", "(16.7, 89.6)"],
    golds=[[12.7, 67.8], [10.9, 80.6]],
    d_range=20,
)
print(results)
# {'coord_accuracy': 0.5}
```

This metric also accepts an optional argument:

- `d_range` (int): Radius of the circle. The default value is 20.

## Output Values

This metric outputs a dictionary with the following values:

- `coord_accuracy` (float): The coordinate accuracy between `generations` and `golds`, ranging from 0.0 to 1.0.

## Values from Popular Papers

## Examples

```python
import evaluate

coord_acc = evaluate.load("rfr2003/coord_eval")
results = coord_acc.compute(
    generations=["(12.7, 67.8)", "(16.7, 89.6)"],
    golds=[[12.7, 67.8], [10.9, 80.6]],
    d_range=20,
)
print(results)
# {'coord_accuracy': 0.5}
```

## Limitations and Bias

## Citation

## Further References