Boxue committed · Commit 63d83d8 · verified · 1 Parent(s): 3b98e26

Upload folder using huggingface_hub
.gitattributes CHANGED
@@ -57,3 +57,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 # Video files - compressed
 *.mp4 filter=lfs diff=lfs merge=lfs -text
 *.webm filter=lfs diff=lfs merge=lfs -text
+assets/data_pipeline.pdf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,125 @@
---
license: mit
task_categories:
- visual-question-answering
language:
- en
tags:
- multimodal
pretty_name: EMVista
size_categories:
- 1K<n<10K
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test.parquet
---

# EMVista Dataset

<p align="center">
  <img src="./assets/pipeline.png" alt="EMVista" style="display: block; margin: auto; max-width: 70%;">
</p>

<p align="center">
  | <a href="https://emvista-benchmark.github.io"><b>Website</b></a> |
  <a href="https://arxiv.org/abs/XXXX.XXXXX"><b>Paper</b></a> |
  <a href="https://huggingface.co/datasets/EMVista/EMVista"><b>HuggingFace</b></a> |
  <a href="https://github.com/EMVista/EMVista"><b>Code</b></a> |
</p>

---

## 🔥 Latest News

- **[2026/01]** EMVista v1.0 is officially released.

<!-- <details>
<summary>Unfold to see more details.</summary>
<br>

- EMVista supports **English** prompts.

</details> -->

<!-- ---

## Motivation: TODO

<details>
<summary>Unfold to see more details.</summary>
<br>

Recent advances in Multimodal Large Language Models (MLLMs) have demonstrated impressive performance on generic vision-language benchmarks. However, most existing benchmarks primarily assess **coarse-grained perception** or **commonsense visual understanding**, falling short in evaluating models’ abilities to reason over **complex, expert-level visual information**.

In realistic applications—such as scientific analysis, technical inspection, diagram interpretation, and abstract visual reasoning—models must go beyond recognizing objects or captions. They need to **extract structured visual cues**, **understand implicit visual attributes**, and **perform multi-step reasoning across multiple visual sources**.

To address this gap, we introduce **EMVista**, a benchmark designed to systematically evaluate multimodal models’ **visual understanding and reasoning capabilities** through carefully curated expert-level visual tasks.

</details>

--- -->
## Overview

**EMVista** is a benchmark for evaluating **instance-level microstructural understanding** in electron microscopy (EM) images across **three core capability dimensions**:

1. **Microstructural Perception**
   Evaluates the ability to detect, delineate, and separate individual microstructural instances in complex EM scenes.
2. **Microstructural Attribute Understanding**
   Measures the capacity to interpret key microstructural attributes, including morphology, density, spatial distribution, layering, and scale variation.
3. **Robustness in Dense Scenes**
   Assesses model stability and accuracy under extreme instance crowding, overlap, and multi-scale complexity.

EMVista contains **expert-annotated EM images** with instance-level labels and structured attribute descriptions, designed to reflect **realistic challenges** in materials microstructure analysis.

---

## Dataset Characteristics

- **Task Format**: Visual Question Answering (VQA)
- **Modalities**: Image + Text
- **Languages**: English
- **Annotation**: Expert-verified

---

### Download EMVista Dataset

You can download the EMVista dataset using the HuggingFace `datasets` library (make sure you have installed [HuggingFace Datasets](https://huggingface.co/docs/datasets/quickstart)):

```python
from datasets import load_dataset

dataset = load_dataset("InnovatorLab/EMVista")
```

## Evaluations

We use [lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval) for evaluation; see [evaluation/README.md](./evaluation/README.md) for details.

## License

EMVista is released under the MIT License. See [LICENSE](./LICENSE) for more details.

<!-- ## Reference

If you find EMVista useful in your research, please consider citing the following paper:

```bibtex
@misc{EMVista,
  title={xxx},
  author={xxx},
  year={2026},
  eprint={2506.10521},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/250xxxxxx},
}
``` -->
assets/pipeline.png ADDED

Git LFS Details

  • SHA256: cfcba54a472b7cd24b893389eb975906004ca40c736636c76f54b1d79ce1392a
  • Pointer size: 132 Bytes
  • Size of remote file: 1.88 MB
data/test.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:233fe06c519ea2c152bef81c5b8bf3a2fab178b2ff8fc707c782bbbe8d204610
size 494669871
evaluation/README.md ADDED
@@ -0,0 +1,3 @@
# Evaluations of EMVista

We evaluate the EMVista dataset using lmms-eval. The evaluation code is listed in this folder.
evaluation/tasks/EMVista/EMVista.yaml ADDED
@@ -0,0 +1,30 @@
dataset_path: "InnovatorLab/EMVista"
task: "EMVista"
test_split: "test"
output_type: "generate_until"

doc_to_visual: !function utils.doc_to_visual
doc_to_text: !function utils.doc_to_text
doc_to_target: !function utils.doc_to_target

generation_kwargs:
  max_new_tokens: 256
  temperature: 0.0
  top_p: 1.0
  num_beams: 1
  do_sample: false

process_results: !function utils.process_results

metric_list:
  - metric: exact_match_accuracy
    aggregation: !function utils.aggregation
    higher_is_better: true

lmms_eval_specific_kwargs:
  default:
    pre_prompt: ""
    post_prompt: "\nAnswer with the option's letter from the given choices directly."

metadata:
  - version: 1.0
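
Given this config, each prompt is built by wrapping the question between `pre_prompt` and `post_prompt` (the question field is named `problem`, per `utils.py`). A minimal sketch of that assembly, with a hypothetical question for illustration:

```python
# Sketch of the prompt assembly implied by lmms_eval_specific_kwargs;
# mirrors doc_to_text in evaluation/tasks/EMVista/utils.py.
POST_PROMPT = "\nAnswer with the option's letter from the given choices directly."

def build_prompt(problem: str, pre_prompt: str = "", post_prompt: str = POST_PROMPT) -> str:
    # Concatenate: pre_prompt + question text + post_prompt.
    return f"{pre_prompt}{problem}{post_prompt}"

# Hypothetical question, for illustration only.
print(build_prompt("Which grain is largest? (A) top-left (B) center (C) bottom (D) none"))
```

Because `temperature: 0.0` and `do_sample: false` make decoding greedy, the generated answer (and hence the extracted letter) is deterministic for a given checkpoint.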
evaluation/tasks/EMVista/utils.py ADDED
@@ -0,0 +1,47 @@
import re
from PIL import Image


def doc_to_visual(doc):
    # Return the sample's image as a single-element list, as lmms-eval expects.
    image = doc.get("image")
    if isinstance(image, Image.Image):
        return [image.convert("RGB")]
    return []


def doc_to_text(doc, lmms_eval_specific_kwargs=None):
    # Wrap the question with the pre/post prompts configured in EMVista.yaml.
    pre_prompt = lmms_eval_specific_kwargs.get("pre_prompt", "") if lmms_eval_specific_kwargs else ""
    post_prompt = lmms_eval_specific_kwargs.get("post_prompt", "") if lmms_eval_specific_kwargs else ""
    content = doc.get("problem", "")
    return f"{pre_prompt}{content}{post_prompt}"


def doc_to_target(doc, lmms_eval_specific_kwargs=None):
    # Extract the gold option letter (A-D) from the stored answer.
    full_answer = doc.get("answer", "")
    match = re.search(r"([A-D])", str(full_answer))
    return match.group(1) if match else full_answer


def extract_characters_regex(s):
    # Pull an option letter out of free-form model output: first try a
    # standalone A-D or a parenthesized letter like "(B)", then fall back
    # to simple position heuristics.
    if not isinstance(s, str):
        return ""
    s = s.strip()
    matches = re.search(r"\b([A-D])\b|(?<=\()([A-D])(?=\))", s.upper())
    if matches:
        return matches.group(1) if matches.group(1) else matches.group(2)
    for char in ["A", "B", "C", "D"]:
        if f" {char} " in f" {s.upper()} " or s.upper().startswith(char):
            return char
    return ""


def process_results(doc, results):
    # Score one sample: compare the extracted prediction letter to the target.
    prediction = results[0] if isinstance(results, list) else results
    pred_ans = extract_characters_regex(prediction)
    target_ans = doc_to_target(doc)
    question = doc_to_text(doc)
    is_correct = (pred_ans == str(target_ans)) if target_ans is not None else False
    return {
        "exact_match_accuracy": float(is_correct),
        "question": question,
        "raw_output": prediction,
        "ground_truth": target_ans,
    }


def aggregation(results):
    # Mean accuracy over all scored samples.
    return sum(results) / len(results) if results else 0.0
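
The letter-extraction and mean-accuracy path above can be exercised standalone. A minimal sketch with hypothetical model outputs (the fallback heuristics of `extract_characters_regex` are omitted for brevity; only the core regex is reproduced):

```python
import re

def extract_letter(s):
    # Same core pattern as extract_characters_regex: a standalone A-D,
    # or a parenthesized letter such as "(B)".
    s = str(s).strip().upper()
    m = re.search(r"\b([A-D])\b|(?<=\()([A-D])(?=\))", s)
    return (m.group(1) or m.group(2)) if m else ""

# Hypothetical model outputs paired with gold letters, for illustration.
samples = [
    ("The answer is (B).", "B"),
    ("C. Dense precipitates near the grain boundary.", "C"),
    ("I am not sure.", "A"),
]
scores = [float(extract_letter(pred) == gold) for pred, gold in samples]
print(sum(scores) / len(scores))  # mean accuracy, as in aggregation()
```

Note that an output with no recognizable letter scores as incorrect rather than being skipped, so refusals count against accuracy.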