---
pretty_name: OmniClean
language:
- en
- zh
multilinguality: multilingual
license: other
task_categories:
- question-answering
size_categories:
- 1K<n<10K
configs:
- config_name: slim
  data_files:
  - split: test
    path: omniclean.test.jsonl
---

# OmniClean

OmniClean is a leakage-aware omni-modal evaluation set built from retained examples across 9 source benchmarks. It is designed to reduce visual-shortcut effects in omni evaluation: visual-only probing is applied wherever query-level filtering is defined, and selected full subsets are kept for protocol-exception benchmarks where a filtered subset is either undefined or intentionally not reported.

This release contains **8,551** evaluation examples in a minimal `slim` JSONL format.

## What this release is

Raw omni benchmark scores can be inflated by visually answerable examples. OmniClean is intended to provide a cleaner evaluation target for audio-visual-language QA and related omni understanding tasks.

This release is for evaluation. It is not intended as a training corpus.

## Composition

Total examples: **8,551**

| Source benchmark (`dataset_source`) | Examples | Notes |
|---|---:|---|
| `AV_Odyssey_Bench` | 4,555 | Full selected subset retained as a protocol exception |
| `VideoHolmes` | 885 | Query-level cleaned subset |
| `WorldSense` | 875 | Query-level cleaned subset |
| `IntentBench` | 660 | Query-level cleaned subset |
| `OmniBench` | 417 | Query-level cleaned subset |
| `CG-AV-Counting` | 376 | Full selected subset retained as a protocol exception |
| `OmniVideoBench` | 318 | Query-level cleaned subset |
| `Daily-Omni` | 237 | Query-level cleaned subset |
| `UNO-Bench` | 228 | Query-level cleaned subset |

## Data format

Each record contains the following fields:

- `dataset_source`: source benchmark name
- `source_id`: source sample identifier
- `question`: question text
- `options`: candidate answers; may be empty for some benchmarks
- `answer`: benchmark-native gold answer
- `media_paths`: relative media references with `image`, `audio`, and `video` lists
- `question_type`: benchmark-native question category; may be `null`

Example:

```json
{
  "dataset_source": "OmniVideoBench",
  "source_id": "omnivideobench:0",
  "question": "Before picking up the kitten, the blogger explains a sign. Which concepts can it be associated with?",
  "options": [
    "A.Ancient Chinese stories and Japanese anime",
    "B.Ancient Chinese Imperial Palace Architecture and Japanese Bar Names",
    "C.A certain type of Chinese cuisine and a certain type of Southeast Asian opera",
    "D.Chinese garden art and Western palace architecture"
  ],
  "answer": "Ancient Chinese stories and Japanese anime",
  "media_paths": {
    "image": [],
    "audio": [],
    "video": ["videos/video_1.mp4"]
  },
  "question_type": "reference reasoning"
}
```
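
For multiple-choice sources, a record can be turned into a text query by joining the question with its options. The helper below is a minimal sketch for illustration; `build_prompt` is a hypothetical name and the exact prompt wording is an assumption, not part of the release:

```python
def build_prompt(record: dict) -> str:
    """Join the question and any options into a plain-text prompt (sketch)."""
    lines = [record["question"]]
    # `options` may be empty for some source benchmarks.
    if record["options"]:
        lines.append("Options:")
        lines.extend(record["options"])
        lines.append("Answer with the single best option.")
    return "\n".join(lines)
```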

## Important notes

### Benchmark-native answers

`answer` is not normalized into a single format across all sources. Depending on the benchmark, it may be:

- a single option letter such as `A`
- multiple option letters such as `D,E,F`
- a numeric answer such as `18`
- the full answer text
- a short free-form label such as `Yes`

Evaluation should therefore use benchmark-aware answer normalization.
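
As an illustration only, a normalizer might dispatch on the shape of the gold answer and map option letters, numbers, and free-form text into a comparable form. The sketch below is an assumption about what such normalization could look like, not the official scoring code:

```python
import re


def normalize_pair(record: dict, prediction: str) -> tuple[str, str]:
    """Return (gold, pred) in a comparable form for one record.

    Sketch only: real evaluation should follow each source benchmark's
    own scoring protocol.
    """
    gold = record["answer"].strip()
    pred = prediction.strip()

    # Option-letter answers such as "A" or "D,E,F": compare sorted letter sets.
    if re.fullmatch(r"[A-Z](,[A-Z])*", gold):
        pred_letters = sorted(set(re.findall(r"\b[A-F]\b", pred.upper())))
        return ",".join(sorted(gold.split(","))), ",".join(pred_letters)

    # Numeric answers such as "18": keep the first number found in the prediction.
    if re.fullmatch(r"-?\d+(\.\d+)?", gold):
        match = re.search(r"-?\d+(\.\d+)?", pred)
        return gold, match.group(0) if match else pred

    # Full answer text or short labels such as "Yes": compare case-insensitively.
    return gold.lower(), pred.lower()
```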

### Optional fields by source benchmark

- `options` can be empty for some examples.
- `question_type` can be `null` for some examples.
- `media_paths` always contains the keys `image`, `audio`, and `video`, but some of the lists are empty; the modalities an example actually uses can be read off the non-empty lists, as in the sketch below.
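
A minimal sketch of that check (the helper name is ours, not part of the release):

```python
def present_modalities(record: dict) -> set[str]:
    """Return the modalities whose media lists are non-empty."""
    return {modality for modality, paths in record["media_paths"].items() if paths}


# The example record above yields {"video"}; an image+audio example
# would yield {"image", "audio"}.
```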

### Protocol exceptions

Two source benchmarks are intentionally retained as selected full subsets in this release:

- `AV_Odyssey_Bench`: a visual-only filtered subset is not defined because some answer options contain audio-bearing content.
- `CG-AV-Counting`: visual-only probing is used diagnostically, but a filtered score is not reported because further exclusion would overly shrink an already difficult subset.

## Loading with `datasets`

```python
from datasets import load_dataset

ds = load_dataset("che111/OmniClean", "slim", split="test")
print(ds[0])
```
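
As a quick sanity check, the per-source counts in the composition table can be recomputed from the loaded split (sketch, continuing from the snippet above):

```python
from collections import Counter

# Count examples per source benchmark; the totals should match the table above.
counts = Counter(ds["dataset_source"])
for name, n in counts.most_common():
    print(f"{name}: {n}")
print("total:", sum(counts.values()))  # expected: 8551
```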

## Limitations

- This release keeps benchmark-native answer formats instead of forcing a single unified answer schema.
- Source benchmarks differ in modality structure: some examples are video-only, some are image+audio, and some are audio+video.
- Relative paths in `media_paths` should be interpreted with respect to the released data layout; see the sketch below for one way to resolve them against a local media root.
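
A minimal sketch of that resolution, assuming the media assets have been downloaded under a local root directory (the `/data/omniclean` path is a placeholder, not part of the release):

```python
from pathlib import Path

MEDIA_ROOT = Path("/data/omniclean")  # placeholder; point this at your local copy


def resolve_media(record: dict) -> dict[str, list[Path]]:
    """Map each modality to absolute paths under the local media root."""
    return {
        modality: [MEDIA_ROOT / rel for rel in paths]
        for modality, paths in record["media_paths"].items()
    }
```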

## Citation

If you use OmniClean, please cite the accompanying paper:

```bibtex
@misc{liu2026boostingomnimodallanguagemodels,
  title={Boosting Omni-Modal Language Models: Staged Post-Training with Visually Debiased Evaluation},
  author={Che Liu and Lichao Ma and Xiangyu Tony Zhang and Yuxin Zhang and Haoyang Zhang and Xuerui Yang and Fei Tian},
  year={2026},
  eprint={2605.12034},
  archivePrefix={arXiv},
  primaryClass={cs.MM},
  url={https://arxiv.org/abs/2605.12034},
}
```

## License

The license is currently listed as `other`. Final license terms are pending confirmation that redistribution is compatible with all included source benchmarks and media assets.