---
pretty_name: OmniClean
language:
- en
- zh
multilinguality: multilingual
license: other
task_categories:
- question-answering
size_categories:
- 1K<n<10K
configs:
- config_name: slim
data_files:
- split: test
path: omniclean.test.jsonl
---
# OmniClean
OmniClean is a leakage-aware omni-modal evaluation set built from retained examples across 9 source benchmarks. It is designed to reduce visual-shortcut effects in omni evaluation: visual-only probing is applied wherever query-level filtering is defined, while selected full subsets are kept for protocol-exception benchmarks where a filtered subset is undefined or intentionally not reported.
This release contains **8,551** evaluation examples in a minimal `slim` JSONL format.
## What this release is
Raw omni benchmark scores can be inflated by visually answerable examples. OmniClean is intended to provide a cleaner evaluation target for audio-visual-language QA and related omni understanding tasks.
This release is for evaluation. It is not intended as a training corpus.
## Composition
Total examples: **8,551**
| Source benchmark (`dataset_source`) | Examples | Notes |
|---|---:|---|
| `AV_Odyssey_Bench` | 4,555 | Full selected subset retained as a protocol exception |
| `VideoHolmes` | 885 | Query-level cleaned subset |
| `WorldSense` | 875 | Query-level cleaned subset |
| `IntentBench` | 660 | Query-level cleaned subset |
| `OmniBench` | 417 | Query-level cleaned subset |
| `CG-AV-Counting` | 376 | Full selected subset retained as a protocol exception |
| `OmniVideoBench` | 318 | Query-level cleaned subset |
| `Daily-Omni` | 237 | Query-level cleaned subset |
| `UNO-Bench` | 228 | Query-level cleaned subset |
## Data format
Each record contains the following fields:
- `dataset_source`: source benchmark name
- `source_id`: source sample identifier
- `question`: question text
- `options`: candidate answers; may be empty for some benchmarks
- `answer`: benchmark-native gold answer
- `media_paths`: relative media references with `image`, `audio`, and `video` lists
- `question_type`: benchmark-native question category; may be `null`
Example:
```json
{
  "dataset_source": "OmniVideoBench",
  "source_id": "omnivideobench:0",
  "question": "Before picking up the kitten, the blogger explains a sign. Which concepts can it be associated with?",
  "options": [
    "A.Ancient Chinese stories and Japanese anime",
    "B.Ancient Chinese Imperial Palace Architecture and Japanese Bar Names",
    "C.A certain type of Chinese cuisine and a certain type of Southeast Asian opera",
    "D.Chinese garden art and Western palace architecture"
  ],
  "answer": "Ancient Chinese stories and Japanese anime",
  "media_paths": {
    "image": [],
    "audio": [],
    "video": ["videos/video_1.mp4"]
  },
  "question_type": "reference reasoning"
}
```
## Important notes
### Benchmark-native answers
`answer` is not normalized into a single format across all sources. Depending on the benchmark, it may be:
- a single option letter such as `A`
- multiple option letters such as `D,E,F`
- a numeric answer such as `18`
- the full answer text
- a short free-form label such as `Yes`
Evaluation should therefore use benchmark-aware answer normalization.
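As a minimal sketch of what such normalization could look like (the rules below are illustrative assumptions, not an official OmniClean scorer):

```python
import re

def normalize_answer(raw_answer: str) -> str:
    """Normalize a benchmark-native gold answer into a comparable form.

    Illustrative sketch only: the letter, letter-set, and numeric rules
    below are assumptions, not part of the release.
    """
    raw = str(raw_answer).strip()
    # Multiple option letters such as "D,E,F" -> canonical sorted letter list.
    if re.fullmatch(r"[A-Z](\s*,\s*[A-Z])+", raw):
        return ",".join(sorted(part.strip() for part in raw.split(",")))
    # A single option letter such as "A".
    if re.fullmatch(r"[A-Z]", raw):
        return raw
    # A numeric answer such as "18".
    if re.fullmatch(r"-?\d+(\.\d+)?", raw):
        return raw
    # Full answer text or short free-form labels such as "Yes":
    # lowercase and collapse whitespace for loose string matching.
    return " ".join(raw.lower().split())
```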
### Optional fields by source benchmark
- `options` can be empty for some examples.
- `question_type` can be `null` for some examples.
- `media_paths` always contains the keys `image`, `audio`, and `video`, but some of the lists are empty; the helper sketched below recovers which modalities a record actually uses.
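Because only the non-empty lists carry media, a tiny helper (illustrative, not shipped with this release) can derive the modality combination of an example:

```python
def modalities(record: dict) -> tuple[str, ...]:
    """Return the modalities actually present in a record, e.g. ("video",)."""
    return tuple(m for m in ("image", "audio", "video") if record["media_paths"][m])
```

Applied to the example record above, this returns `("video",)`.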
### Protocol exceptions
Two source benchmarks are intentionally retained as selected full subsets in this release:
- `AV_Odyssey_Bench`: a visual-only filtered subset is not defined because some answer options contain audio-bearing content.
- `CG-AV-Counting`: visual-only probing is used diagnostically, but a filtered-score benchmark is not reported because further exclusion would overly shrink an already difficult subset.
## Loading with `datasets`
```python
from datasets import load_dataset
ds = load_dataset("che111/OmniClean", "slim", split="test")
print(ds[0])
```
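For per-source scoring, the split can be filtered on `dataset_source`; the resulting counts should match the composition table above:

```python
# Keep only the WorldSense subset (875 examples per the composition table).
worldsense = ds.filter(lambda r: r["dataset_source"] == "WorldSense")
print(len(worldsense))
```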
## Limitations
- This release keeps benchmark-native answer formats instead of forcing a single unified answer schema.
- Source benchmarks differ in modality structure: some examples are video-only, some are image+audio, and some are audio+video.
- Relative paths in `media_paths` should be resolved against the released data layout; see the sketch below.
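As a minimal sketch, assuming the media assets are unpacked under a local directory (the `data_root` path below is hypothetical):

```python
from pathlib import Path

data_root = Path("/path/to/omniclean-media")  # hypothetical local media root

def resolve_media(record: dict) -> dict:
    """Resolve the relative media references against the local data root."""
    return {
        modality: [data_root / rel for rel in paths]
        for modality, paths in record["media_paths"].items()
    }
```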
## Citation
If you use OmniClean, please cite the accompanying paper:
```bibtex
@misc{liu2026boostingomnimodallanguagemodels,
title={Boosting Omni-Modal Language Models: Staged Post-Training with Visually Debiased Evaluation},
author={Che Liu and Lichao Ma and Xiangyu Tony Zhang and Yuxin Zhang and Haoyang Zhang and Xuerui Yang and Fei Tian},
year={2026},
eprint={2605.12034},
archivePrefix={arXiv},
primaryClass={cs.MM},
url={https://arxiv.org/abs/2605.12034},
}
```
## License
The final license for this release is pending. Before redistribution, confirm that the terms are compatible with all included source benchmarks and media assets.