---
dataset_info:
  features:
    - name: image
      dtype: image
    - name: question_id
      dtype: string
    - name: question
      dtype: string
    - name: answer
      dtype: string
    - name: source
      dtype: string
    - name: eval_type
      dtype: string
    - name: relation_type
      dtype: string
  configs:
    - config_name: default
      data_files:
        - split: train
          path: data-*.parquet
license: cc-by-4.0
task_categories:
  - visual-question-answering
language:
  - en
tags:
  - visual-relation
  - spatial-relation
  - action-relation
  - comparative-relation
  - dall-e
size_categories:
  - 1K<n<10K
---

# MMRel

A multimodal visual-relation benchmark of 3,613 DALL-E-generated image-question pairs testing action, spatial, and comparative relations. Each image is synthesized in multiple artistic styles (photo-realistic, watercolor, abstract, oil painting).

**Note:** The full MMRel benchmark also includes Visual Genome and SPEC (SDXL) images, which must be downloaded separately from their respective sources.
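The dataset can be loaded with the 🤗 `datasets` library. A minimal sketch, assuming the repo id `chenhaoguan/MMRel` (substitute the actual Hub path if it differs):

```python
from datasets import load_dataset

# Repo id is an assumption; replace with the actual Hub path if needed.
ds = load_dataset("chenhaoguan/MMRel", split="train")

example = ds[0]
example["image"].show()                              # decoded as a PIL image
print(example["question"], "->", example["answer"])
```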

## Fields

| Field | Description |
| --- | --- |
| `image` | DALL-E-synthesized image |
| `question_id` | Unique question identifier |
| `question` | Relation question (yes/no or open-ended) |
| `answer` | Ground-truth answer |
| `source` | Image source (`dall-e`) |
| `eval_type` | `discriminative` or `generative` |
| `relation_type` | Relation category, e.g. `dall-e_action`, `dall-e_spatial` |
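Since `eval_type` and `relation_type` are plain string columns, the benchmark can be sliced with `Dataset.filter`. A sketch, assuming the same repo id as above:

```python
from datasets import load_dataset

ds = load_dataset("chenhaoguan/MMRel", split="train")  # repo id assumed as above

# Separate the two evaluation protocols.
disc = ds.filter(lambda ex: ex["eval_type"] == "discriminative")
gen = ds.filter(lambda ex: ex["eval_type"] == "generative")

# Or slice by relation category.
spatial = ds.filter(lambda ex: ex["relation_type"] == "dall-e_spatial")
print(len(disc), len(gen), len(spatial))
```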

## Evaluation

- **Discriminative**: "Does {relation} exist? Please answer with one word."
  - Metrics: Accuracy, Precision, Recall, F1
  - Parser: binary yes/no
- **Generative**: "What is the {relation_type} between {obj1} and {obj2}?"
  - Metrics: relation-extraction accuracy
  - Parser: free-text matching
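For the discriminative protocol, below is a minimal scoring sketch using scikit-learn. The `parse_yes_no` helper and the placeholder responses are hypothetical illustrations, not the official evaluation code:

```python
import re

from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

def parse_yes_no(response: str) -> str:
    """Hypothetical parser: map a free-form response to 'yes' or 'no'."""
    match = re.search(r"\b(yes|no)\b", response.lower())
    return match.group(1) if match else "no"  # fall back to 'no' if unparseable

# Placeholder model outputs and ground-truth answers, for illustration only.
responses = ["Yes, the dog is chasing the ball.", "No, it is not.", "yes"]
golds = ["yes", "no", "no"]

preds = [parse_yes_no(r) for r in responses]
print("Accuracy :", accuracy_score(golds, preds))
print("Precision:", precision_score(golds, preds, pos_label="yes"))
print("Recall   :", recall_score(golds, preds, pos_label="yes"))
print("F1       :", f1_score(golds, preds, pos_label="yes"))
```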

## Source

Original data from MMRel (arXiv 2024).