Ling200424 roopalgarg committed on
Commit 46c1277 · 0 Parent(s)

Duplicate from google/imageinwords

Co-authored-by: Roopal Garg <roopalgarg@users.noreply.huggingface.co>

Files changed (3):
  1. .gitattributes +55 -0
  2. README.md +174 -0
  3. imageinwords.py +175 -0
.gitattributes ADDED
@@ -0,0 +1,55 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.lz4 filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
# Audio files - uncompressed
*.pcm filter=lfs diff=lfs merge=lfs -text
*.sam filter=lfs diff=lfs merge=lfs -text
*.raw filter=lfs diff=lfs merge=lfs -text
# Audio files - compressed
*.aac filter=lfs diff=lfs merge=lfs -text
*.flac filter=lfs diff=lfs merge=lfs -text
*.mp3 filter=lfs diff=lfs merge=lfs -text
*.ogg filter=lfs diff=lfs merge=lfs -text
*.wav filter=lfs diff=lfs merge=lfs -text
# Image files - uncompressed
*.bmp filter=lfs diff=lfs merge=lfs -text
*.gif filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
*.tiff filter=lfs diff=lfs merge=lfs -text
# Image files - compressed
*.jpg filter=lfs diff=lfs merge=lfs -text
*.jpeg filter=lfs diff=lfs merge=lfs -text
*.webp filter=lfs diff=lfs merge=lfs -text
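The LFS rules above use gitignore-style glob patterns against file paths. As a rough sketch of which filenames they capture (note: Python's `fnmatch` only approximates git's wildmatch semantics, and the pattern list here is a hand-picked subset of the file above):

```python
from fnmatch import fnmatch

# A subset of the LFS patterns from the .gitattributes above.
lfs_patterns = ["*.7z", "*.parquet", "*.safetensors", "*tfevents*", "*.jpg"]

def tracked_by_lfs(filename: str) -> bool:
    """Return True if the filename matches any LFS pattern (approximate)."""
    return any(fnmatch(filename, pat) for pat in lfs_patterns)

print(tracked_by_lfs("model.safetensors"))   # True
print(tracked_by_lfs("run1.tfevents.12345")) # True (matches *tfevents*)
print(tracked_by_lfs("README.md"))           # False
```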
README.md ADDED
@@ -0,0 +1,174 @@
---
annotations_creators:
- expert-generated
- crowdsourced
license: cc-by-4.0
task_categories:
- image-to-text
- text-to-image
- object-detection
language:
- en
size_categories:
- 1K<n<10K
tags:
- iiw
- imageinwords
- image-descriptions
- image-captions
- detailed-descriptions
- hyper-detailed-descriptions
- object-descriptions
- object-detection
- object-labels
- image-text
- t2i
- i2t
- dataset
pretty_name: ImageInWords
multilinguality:
- monolingual
---

<h2>ImageInWords: Unlocking Hyper-Detailed Image Descriptions</h2>

Please visit the [webpage](https://google.github.io/imageinwords) for all the information about the IIW project, data downloads, visualizations, and much more.

<img src="https://github.com/google/imageinwords/blob/main/static/images/Abstract/1_white_background.png?raw=true">
<img src="https://github.com/google/imageinwords/blob/main/static/images/Abstract/2_white_background.png?raw=true">

Please reach out to iiw-dataset@google.com for thoughts/feedback/questions/collaborations.

<h3>&#129303;Hugging Face&#129303;</h3>

<li><a href="https://huggingface.co/datasets/google/imageinwords">IIW-Benchmark Eval Dataset</a></li>

```python
from datasets import load_dataset

# `name` can be one of: IIW-400, DCI_Test, DOCCI_Test, CM_3600, LocNar_Eval
# refer: https://github.com/google/imageinwords/tree/main/datasets
dataset = load_dataset('google/imageinwords', token=None, name="IIW-400", trust_remote_code=True)
```
<li><a href="https://huggingface.co/spaces/google/imageinwords-explorer">Dataset-Explorer</a></li>

## Dataset Description

- **Paper:** [arXiv](https://arxiv.org/abs/2405.02793)
- **Homepage:** https://google.github.io/imageinwords/
- **Point of Contact:** iiw-dataset@google.com
- **Dataset Explorer:** [ImageInWords-Explorer](https://huggingface.co/spaces/google/imageinwords-explorer)

### Dataset Summary

ImageInWords (IIW) is a carefully designed human-in-the-loop annotation framework for curating hyper-detailed image descriptions, together with a new dataset resulting from this process.
We validate the framework through evaluations focused on the quality of the dataset and its utility for fine-tuning, with considerations for readability, comprehensiveness, specificity, hallucinations, and human-likeness.

This Data Card describes **IIW-Benchmark: Eval Datasets**, a mixture of human-annotated and machine-generated data intended to help create and capture rich, hyper-detailed image descriptions.

The IIW dataset has two parts: human annotations and model outputs. The main purposes of this dataset are:
1) to provide samples of state-of-the-art human-authored outputs, to promote discussion on annotation guidelines and further improve their quality;
2) to provide human SxS results and model outputs, to promote the development of automatic metrics that mimic human SxS judgements.

### Supported Tasks

Text-to-Image, Image-to-Text, Object Detection

### Languages

English

## Dataset Structure

### Data Instances

### Data Fields

For details on the datasets and output keys, please refer to the individual folders on our [GitHub data](https://github.com/google/imageinwords/tree/main/datasets) page.

IIW-400:
- `image/key`
- `image/url`
- `IIW`: Human-generated image description
- `IIW-P5B`: Machine-generated image description
- `iiw-human-sxs-gpt4v` and `iiw-human-sxs-iiw-p5b`: human SxS metrics
  - `metrics/Comprehensiveness`
  - `metrics/Specificity`
  - `metrics/Hallucination`
  - `metrics/First few line(s) as tldr`
  - `metrics/Human Like`

DCI_Test:
- `image`
- `image/url`
- `ex_id`
- `IIW`: Human-authored image description
- `metrics/Comprehensiveness`
- `metrics/Specificity`
- `metrics/Hallucination`
- `metrics/First few line(s) as tldr`
- `metrics/Human Like`

DOCCI_Test:
- `image`
- `image/thumbnail_url`
- `IIW`: Human-generated image description
- `DOCCI`: Image description from DOCCI
- `metrics/Comprehensiveness`
- `metrics/Specificity`
- `metrics/Hallucination`
- `metrics/First few line(s) as tldr`
- `metrics/Human Like`

LocNar_Eval:
- `image/key`
- `image/url`
- `IIW-P5B`: Machine-generated image description

CM_3600:
- `image/key`
- `image/url`
- `IIW-P5B`: Machine-generated image description

Please note that all fields are strings.

### Data Splits

Dataset | Size
---| ---:
IIW-400 | 400
DCI_Test | 112
DOCCI_Test | 100
LocNar_Eval | 1000
CM_3600 | 1000

### Annotations

#### Annotation process

Some text descriptions were written by human annotators and some were generated by machine models.
The metrics are all from human SxS.

### Personal and Sensitive Information

The images used for the descriptions and the machine-generated text descriptions were checked (by algorithmic methods and manual inspection) for S/PII, pornographic content, and violence, and any found to contain such information have been filtered out.
We asked human annotators to use objective and respectful language in the image descriptions.

### Licensing Information

CC BY 4.0

### Citation Information

```
@misc{garg2024imageinwords,
  title={ImageInWords: Unlocking Hyper-Detailed Image Descriptions},
  author={Roopal Garg and Andrea Burns and Burcu Karagol Ayan and Yonatan Bitton and Ceslee Montgomery and Yasumasa Onoe and Andrew Bunner and Ranjay Krishna and Jason Baldridge and Radu Soricut},
  year={2024},
  eprint={2405.02793},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```
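To make the IIW-400 field layout concrete, here is a dummy record (placeholder values, not real annotations) showing that the SxS metric fields sit one level deep under the `iiw-human-sxs-*` keys, and that all leaf values are strings:

```python
# Dummy IIW-400-style record illustrating the documented field layout.
# All values below are made-up placeholders.
record = {
    "image/key": "example_key",
    "image/url": "https://example.com/example_key.jpg",
    "IIW": "A human-written hyper-detailed description ...",
    "IIW-P5B": "A machine-generated description ...",
    "iiw-human-sxs-gpt4v": {
        "metrics/Comprehensiveness": "placeholder",
        "metrics/Specificity": "placeholder",
        "metrics/Hallucination": "placeholder",
        "metrics/First few line(s) as tldr": "placeholder",
        "metrics/Human Like": "placeholder",
    },
}

# Nested SxS metrics are reached with two dict lookups:
value = record["iiw-human-sxs-gpt4v"]["metrics/Comprehensiveness"]
print(value)  # placeholder
```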
imageinwords.py ADDED
@@ -0,0 +1,175 @@
import datasets
import json
import string

_DESCRIPTION = """
ImageInWords (IIW), a carefully designed human-in-the-loop annotation framework for curating hyper-detailed image descriptions and a new dataset resulting from this process.
We validate the framework through evaluations focused on the quality of the dataset and its utility for fine-tuning with considerations for readability, comprehensiveness, specificity, hallucinations, and human-likeness.
"""

_HOMEPAGE = "https://google.github.io/imageinwords/"

_LICENSE = "CC BY 4.0"

_DATASET_GITHUB_PREFIX = "https://github.com/google/imageinwords/raw/main/datasets"

_DATASET_GITHUB_URLS = {
    "IIW-400": f"{_DATASET_GITHUB_PREFIX}/IIW-400/data.jsonl",
    "DCI_Test": f"{_DATASET_GITHUB_PREFIX}/DCI_Test/data.jsonl",
    "DOCCI_Test": f"{_DATASET_GITHUB_PREFIX}/DOCCI_Test/data.jsonl",
    "CM_3600": f"{_DATASET_GITHUB_PREFIX}/CM_3600/data.jsonl",
    "LocNar_Eval": f"{_DATASET_GITHUB_PREFIX}/LocNar_Eval/data.jsonl",
}

_DATASET_FEATURES = {
    "IIW-400": datasets.Features({
        "image/key": datasets.Value('string'),
        "image/url": datasets.Value('string'),
        "IIW": datasets.Value('string'),
        "IIW-P5B": datasets.Value('string'),
        "iiw-human-sxs-gpt4v": {
            "metrics/Comprehensiveness": datasets.Value('string'),
            "metrics/Specificity": datasets.Value('string'),
            "metrics/Hallucination": datasets.Value('string'),
            "metrics/First few line(s) as tldr": datasets.Value('string'),
            "metrics/Human Like": datasets.Value('string'),
        },
        "iiw-human-sxs-iiw-p5b": {
            "metrics/Comprehensiveness": datasets.Value('string'),
            "metrics/Specificity": datasets.Value('string'),
            "metrics/Hallucination": datasets.Value('string'),
            "metrics/First few line(s) as tldr": datasets.Value('string'),
            "metrics/Human Like": datasets.Value('string'),
        },
    }),
    "DCI_Test": datasets.Features({
        "image": datasets.Value('string'),
        "ex_id": datasets.Value('string'),
        "IIW": datasets.Value('string'),
        "metrics/Comprehensiveness": datasets.Value('string'),
        "metrics/Specificity": datasets.Value('string'),
        "metrics/Hallucination": datasets.Value('string'),
        "metrics/First few line(s) as tldr": datasets.Value('string'),
        "metrics/Human Like": datasets.Value('string'),
    }),
    "DOCCI_Test": datasets.Features({
        "image": datasets.Value('string'),
        "image/thumbnail_url": datasets.Value('string'),
        "IIW": datasets.Value('string'),
        "DOCCI": datasets.Value('string'),
        "metrics/Comprehensiveness": datasets.Value('string'),
        "metrics/Specificity": datasets.Value('string'),
        "metrics/Hallucination": datasets.Value('string'),
        "metrics/First few line(s) as tldr": datasets.Value('string'),
        "metrics/Human Like": datasets.Value('string'),
    }),
    "CM_3600": datasets.Features({
        "image/key": datasets.Value('string'),
        "image/url": datasets.Value('string'),
        "IIW-P5B": datasets.Value('string'),
    }),
    "LocNar_Eval": datasets.Features({
        "image/key": datasets.Value('string'),
        "image/url": datasets.Value('string'),
        "IIW-P5B": datasets.Value('string'),
    }),
}


_CM_3600_URL_PATTERN = string.Template("https://open-images-dataset.s3.amazonaws.com/crossmodal-3600/$IMAGE_KEY.jpg")
_DOCCI_AAR_URL_PATTERN = string.Template("https://storage.googleapis.com/docci/data/images_aar/$IMAGE_KEY.jpg")
_DOCCI_THUMBNAIL_URL_PATTERN = string.Template("https://storage.googleapis.com/docci/thumbnails/$IMAGE_KEY.jpg")
_LOCNAR_VALIDATION_URL_PATTERN = string.Template("https://open-images-dataset.s3.amazonaws.com/validation/$IMAGE_KEY.jpg")


class ImageInWords(datasets.GeneratorBasedBuilder):
    """ImageInWords dataset"""

    VERSION = datasets.Version("1.0.0")

    BUILDER_CONFIGS = [
        datasets.BuilderConfig(name="IIW-400", version=VERSION, description="IIW-400"),
        datasets.BuilderConfig(name="DCI_Test", version=VERSION, description="DCI_Test"),
        datasets.BuilderConfig(name="DOCCI_Test", version=VERSION, description="DOCCI_Test"),
        datasets.BuilderConfig(name="CM_3600", version=VERSION, description="CM_3600"),
        datasets.BuilderConfig(name="LocNar_Eval", version=VERSION, description="LocNar_Eval"),
    ]

    DEFAULT_CONFIG_NAME = "IIW-400"

    def _info(self):
        return datasets.DatasetInfo(
            features=_DATASET_FEATURES[self.config.name],
            homepage=_HOMEPAGE,
            description=_DESCRIPTION,
            license=_LICENSE,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        hf_auth_token = dl_manager.download_config.use_auth_token
        if hf_auth_token is None:
            raise ConnectionError(
                "Please set use_auth_token=True or use_auth_token='<TOKEN>' to download this dataset"
            )

        downloaded_file = dl_manager.download_and_extract(_DATASET_GITHUB_URLS[self.config.name])
        if self.config.name == "LocNar_Eval":
            split_type = datasets.Split.VALIDATION
        else:
            split_type = datasets.Split.TEST
        return [
            datasets.SplitGenerator(name=split_type, gen_kwargs={"filepath": downloaded_file}),
        ]

    def _generate_examples(self, filepath):
        match self.config.name:
            case "IIW-400":
                return self._generate_examples_iiw_400(filepath)
            case "DCI_Test":
                return self._generate_examples_dci_test(filepath)
            case "DOCCI_Test":
                return self._generate_examples_docci_test(filepath)
            case "CM_3600":
                return self._generate_examples_cm_3600(filepath)
            case "LocNar_Eval":
                return self._generate_examples_locnar_eval(filepath)

    def _generate_examples_iiw_400(self, filepath):
        with open(filepath) as fp:
            for json_line in fp:
                json_obj = json.loads(json_line.strip())
                json_obj["image/url"] = _DOCCI_AAR_URL_PATTERN.substitute(IMAGE_KEY=json_obj["image/key"])
                yield json_obj["image/key"], json_obj

    def _generate_examples_dci_test(self, filepath):
        with open(filepath) as fp:
            for json_line in fp:
                json_obj = json.loads(json_line.strip())
                yield json_obj["image"], json_obj

    def _generate_examples_docci_test(self, filepath):
        with open(filepath) as fp:
            for json_line in fp:
                json_obj = json.loads(json_line.strip())
                json_obj["image/thumbnail_url"] = _DOCCI_THUMBNAIL_URL_PATTERN.substitute(IMAGE_KEY=json_obj["image"])
                yield json_obj["image"], json_obj

    def _generate_examples_cm_3600(self, filepath):
        with open(filepath) as fp:
            for json_line in fp:
                json_obj = json.loads(json_line.strip())
                json_obj["image/url"] = _CM_3600_URL_PATTERN.substitute(IMAGE_KEY=json_obj["image/key"])
                del json_obj["image/source"]
                yield json_obj["image/key"], json_obj

    def _generate_examples_locnar_eval(self, filepath):
        with open(filepath) as fp:
            for json_line in fp:
                json_obj = json.loads(json_line.strip())
                json_obj["image/url"] = _LOCNAR_VALIDATION_URL_PATTERN.substitute(IMAGE_KEY=json_obj["image/key"])
                del json_obj["image/source"]
                yield json_obj["image/key"], json_obj
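The loader derives image URLs from image keys via `string.Template` substitution. A minimal standalone sketch of that step, using the DOCCI pattern from the script and a made-up key:

```python
import string

# Same pattern the loader uses for DOCCI-hosted IIW-400 images;
# "example_key" is a made-up key for illustration.
docci_aar_url_pattern = string.Template(
    "https://storage.googleapis.com/docci/data/images_aar/$IMAGE_KEY.jpg"
)

url = docci_aar_url_pattern.substitute(IMAGE_KEY="example_key")
print(url)  # https://storage.googleapis.com/docci/data/images_aar/example_key.jpg
```

`substitute` raises `KeyError` if a placeholder is missing, which makes a malformed record fail loudly during generation.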