---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: question_id
    dtype: string
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: source
    dtype: string
  - name: eval_type
    dtype: string
  - name: relation_type
    dtype: string
configs:
- config_name: default
  data_files:
  - split: train
    path: data-*.parquet
license: cc-by-4.0
task_categories:
- visual-question-answering
language:
- en
tags:
- visual-relation
- spatial-relation
- action-relation
- comparative-relation
- dall-e
size_categories:
- 1K<n<10K
---
# MMRel
A multimodal visual-relation benchmark of 3,613 DALL-E-generated image-question pairs testing action, spatial, and comparative relations. Each image is synthesized in multiple artistic styles (photo-realistic, watercolor, abstract, oil painting).
Note: The full MMRel benchmark also includes Visual Genome and SPEC (SDXL) images, which require separate download from their respective sources.
## Fields
| Field | Description |
|-------|-------------|
| image | DALL-E-synthesized image |
| question_id | Unique question identifier |
| question | Relation question (yes/no or open-ended) |
| answer | Ground truth answer |
| source | Image source (dall-e) |
| eval_type | discriminative / generative |
| relation_type | dall-e_action / dall-e_spatial |
## Evaluation
```
Discriminative: "Does {relation} exist? Please answer with one word."
metrics: Accuracy, Precision, Recall, F1
parser: yes/no binary
Generative: "What is the {relation_type} between {obj1} and {obj2}?"
metrics: Relation extraction accuracy
parser: free-text matching
```
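The discriminative protocol above can be sketched as a short yes/no parser plus the four listed metrics. This is a minimal illustration, not MMRel's official scorer; the normalization rules (lowercasing, stripping trailing punctuation, prefix matching) are assumptions.

```python
def parse_yes_no(text):
    """Map a model's free-form reply to True/False; None if unparseable.
    Normalization rules are illustrative, not the benchmark's official parser."""
    word = text.strip().lower().rstrip(".,!")
    if word.startswith("yes"):
        return True
    if word.startswith("no"):
        return False
    return None

def binary_metrics(preds, golds):
    """Accuracy, precision, recall, F1 over boolean predictions,
    treating "yes" (True) as the positive class."""
    tp = sum(p and g for p, g in zip(preds, golds))
    fp = sum(p and not g for p, g in zip(preds, golds))
    fn = sum((not p) and g for p, g in zip(preds, golds))
    tn = sum((not p) and (not g) for p, g in zip(preds, golds))
    acc = (tp + tn) / len(golds)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return {"accuracy": acc, "precision": prec, "recall": rec, "f1": f1}

# Toy run: three replies against ground-truth answers.
preds = [parse_yes_no(r) for r in ["Yes.", "no", "Yes"]]
golds = [True, False, False]
print(binary_metrics(preds, golds))
```

Unparseable replies (`None`) would need a policy choice, e.g. counting them as wrong, before being passed to `binary_metrics`.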
## Source
Original data from [MMRel](https://huggingface.co/datasets/Jingkang50/MMRel) (arXiv 2024).