---
dataset_info:
  features:
  - name: question_id
    dtype: string
  - name: image
    dtype: image
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: category
    dtype: string
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*.parquet
---
# MME
MME (MultiModal Evaluation) is a comprehensive benchmark for evaluating multimodal large language models across diverse perception tasks (e.g., existence, count, OCR) and cognition tasks (e.g., commonsense reasoning). Each example pairs an image with a yes/no question, and results are typically reported as per-category accuracy.
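As a minimal sketch of how the schema above can be consumed, the snippet below scores model predictions per category, assuming MME's usual yes/no answer format. The records and predictions here are hypothetical stand-ins for rows loaded from the `data/test-*.parquet` files (the `image` column is omitted since scoring only needs `question_id`, `answer`, and `category`).

```python
from collections import defaultdict

def score_by_category(records, predictions):
    """Per-category accuracy for yes/no answers, case-insensitive.

    records: iterable of dicts with 'question_id', 'answer', 'category'
    predictions: dict mapping question_id -> predicted answer string
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for rec in records:
        pred = predictions.get(rec["question_id"], "").strip().lower()
        gold = rec["answer"].strip().lower()
        total[rec["category"]] += 1
        if pred == gold:
            correct[rec["category"]] += 1
    return {cat: correct[cat] / total[cat] for cat in total}

# Hypothetical rows matching the dataset schema (image column dropped):
records = [
    {"question_id": "q1", "question": "Is there a cat in the image?",
     "answer": "Yes", "category": "existence"},
    {"question_id": "q2", "question": "Is there a dog in the image?",
     "answer": "No", "category": "existence"},
]
predictions = {"q1": "Yes", "q2": "Yes"}
print(score_by_category(records, predictions))  # {'existence': 0.5}
```

To load the real split, `datasets.load_dataset` with this repository's id and `split="test"` would yield rows with the same fields plus the decoded `image`.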