---
license: apache-2.0
task_categories:
- multiple-choice
- visual-question-answering
language:
- en
size_categories:
- n<1K
configs:
- config_name: benchmark
data_files:
- split: test
path: dataset.json
paperswithcode_id: mapeval-visual
tags:
- geospatial
---
# MapEval-Visual
This dataset was introduced in [MapEval: A Map-Based Evaluation of Geo-Spatial Reasoning in Foundation Models](https://arxiv.org/abs/2501.00316).
## Example

### Query
I am presently visiting Mount Royal Park. Could you please inform me about the nearby historical landmark?
### Options
1. Circle Stone
2. Secret pool
3. Maison William Caldwell Cottingham
4. Poste de cavalerie du Service de police de la Ville de Montreal
### Correct Option
1. Circle Stone
## Prerequisite
Download [Vdata.zip](https://huggingface.co/datasets/MapEval/MapEval-Visual/resolve/main/Vdata.zip?download=true) and extract it in the working directory. The extracted directory contains all the images referenced by the dataset.
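If you prefer to fetch the archive programmatically, the minimal sketch below downloads `Vdata.zip` from the dataset repository with `huggingface_hub` and unpacks it into the working directory. It assumes `huggingface_hub` is installed; the manual download above works just as well.

```python
import zipfile
from huggingface_hub import hf_hub_download

# Download Vdata.zip from the dataset repository (cached locally by huggingface_hub)
zip_path = hf_hub_download(
    repo_id="MapEval/MapEval-Visual",
    filename="Vdata.zip",
    repo_type="dataset",
)

# Extract the images into the working directory, creating Vdata/
with zipfile.ZipFile(zip_path) as archive:
    archive.extractall(".")
```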
## Usage
```python
from datasets import load_dataset
import PIL.Image
# Load dataset
ds = load_dataset("MapEval/MapEval-Visual", name="benchmark")
for item in ds["test"]:
# Start with a clear task description
prompt = (
"You are a highly intelligent assistant. "
"Based on the given image, answer the multiple-choice question by selecting the correct option.\n\n"
"Question:\n" + item["question"] + "\n\n"
"Options:\n"
)
# List the options more clearly
for i, option in enumerate(item["options"], start=1):
prompt += f"{i}. {option}\n"
# Add a concluding sentence to encourage selection of the answer
prompt += "\nSelect the best option by choosing its number."
# Load image from Vdata/ directory
img = PIL.Image.open(item["context"])
# Use the prompt as needed
print([prompt, img]) # Replace with your processing logic
```
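To score a model against the benchmark, a loop like the following can be adapted. This is only a sketch: it assumes the 1-based index of the correct option is exposed as `item["answer"]` (check the dataset schema and adjust the field name if it differs), and `predict` is a placeholder for your own vision-language model call.

```python
from datasets import load_dataset

ds = load_dataset("MapEval/MapEval-Visual", name="benchmark")

def predict(question, options, image_path):
    # Placeholder: call your VLM here and return the 1-based index of the chosen option.
    return 1

correct = 0
for item in ds["test"]:
    prediction = predict(item["question"], item["options"], item["context"])
    # Assumption: item["answer"] holds the 1-based index of the correct option.
    if prediction == item["answer"]:
        correct += 1

print(f"Accuracy: {correct / len(ds['test']):.2%}")
```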
## Citation
If you use this dataset, please cite the original paper:
```
@article{dihan2024mapeval,
  title={MapEval: A Map-Based Evaluation of Geo-Spatial Reasoning in Foundation Models},
  author={Dihan, Mahir Labib and Hassan, Md Tanvir and Parvez, Md Tanvir and Hasan, Md Hasebul and Alam, Md Almash and Cheema, Muhammad Aamir and Ali, Mohammed Eunus and Parvez, Md Rizwan},
  journal={arXiv preprint arXiv:2501.00316},
  year={2024}
}
```