---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: question_id
    dtype: string
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: source
    dtype: string
  - name: eval_type
    dtype: string
  - name: relation_type
    dtype: string
configs:
- config_name: default
  data_files:
  - split: train
    path: data-*.parquet
license: cc-by-4.0
task_categories:
- visual-question-answering
language:
- en
tags:
- visual-relation
- spatial-relation
- action-relation
- comparative-relation
- dall-e
size_categories:
- 1K<n<10K
---

# MMRel

A multimodal visual relation benchmark of 3,613 DALL·E-generated image-question pairs testing action, spatial, and comparative relations. Each image is synthesized in multiple artistic styles (photo-realistic, watercolor, abstract, oil painting).

Note: the full MMRel benchmark also includes Visual Genome and SPEC (SDXL) images, which must be downloaded separately from their respective sources.

## Fields

| Field | Description |
|-------|-------------|
| image | DALL·E-synthesized image |
| question_id | Unique question identifier |
| question | Relation question (yes/no or open-ended) |
| answer | Ground-truth answer |
| source | Image source (`dall-e`) |
| eval_type | `discriminative` / `generative` |
| relation_type | `dall-e_action` / `dall-e_spatial` |
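
The schema above can be exercised with a small sketch. The rows here are hypothetical stand-ins for what `datasets.load_dataset` on this repository would yield, and `count_by` is an illustrative helper, not part of any official tooling:

```python
from collections import Counter

# Hypothetical rows mirroring the field schema above; in practice they would
# come from datasets.load_dataset on this repository (image column omitted).
rows = [
    {"question_id": "q1", "eval_type": "discriminative", "relation_type": "dall-e_action"},
    {"question_id": "q2", "eval_type": "discriminative", "relation_type": "dall-e_spatial"},
    {"question_id": "q3", "eval_type": "generative", "relation_type": "dall-e_action"},
]

def count_by(rows, key):
    """Tally examples by one schema field, e.g. eval_type or relation_type."""
    return Counter(r[key] for r in rows)

print(count_by(rows, "eval_type"))  # Counter({'discriminative': 2, 'generative': 1})
```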

## Evaluation

```
Discriminative: "Does {relation} exist? Please answer with one word."
  metrics: Accuracy, Precision, Recall, F1
  parser:  yes/no binary

Generative: "What is the {relation_type} between {obj1} and {obj2}?"
  metrics: Relation extraction accuracy
  parser:  free-text matching
```
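
The discriminative scoring path above can be sketched as follows. This assumes the yes/no parser takes the first word of the model response, with "yes" as the positive class; it is an illustration of the described protocol, not the official MMRel evaluation code:

```python
def parse_yes_no(text):
    """Binary parser for the discriminative prompt: first word, lowercased,
    with trailing punctuation stripped. Anything other than 'yes' counts as no."""
    stripped = text.strip().lower()
    word = stripped.split()[0].strip(".,!") if stripped else ""
    return word == "yes"

def binary_metrics(preds, golds):
    """Accuracy, precision, recall, F1 over boolean predictions/labels."""
    pairs = list(zip(preds, golds))
    tp = sum(p and g for p, g in pairs)
    fp = sum(p and not g for p, g in pairs)
    fn = sum(not p and g for p, g in pairs)
    tn = sum(not p and not g for p, g in pairs)
    acc = (tp + tn) / len(pairs)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, prec, rec, f1

preds = [parse_yes_no(s) for s in ["Yes, it does.", "no", "Yes"]]
print(binary_metrics(preds, [True, True, True]))
```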

## Source

Original data from [MMRel](https://huggingface.co/datasets/Jingkang50/MMRel) (arXiv 2024).