---
pretty_name: OmniClean
language:
- en
- zh
multilinguality: multilingual
license: other
task_categories:
- question-answering
size_categories:
- 1K<n<10K
configs:
- config_name: slim
  data_files:
  - split: test
    path: omniclean.test.jsonl
---

# OmniClean

OmniClean is a leakage-aware omni-modal evaluation set built from retained examples across 9 source benchmarks. It is designed to reduce visual-shortcut effects in omni evaluation: visual-only probing is applied wherever query-level filtering is defined, while selected full subsets are kept for protocol-exception benchmarks whose filtered subsets are undefined or intentionally not reported.

This release contains **8,551** evaluation examples in a minimal `slim` JSONL format.

## What this release is

Raw omni benchmark scores can be inflated by examples that are answerable from visual input alone. OmniClean is intended to provide a cleaner evaluation target for audio-visual-language QA and related omni understanding tasks.

This release is intended for evaluation only, not as a training corpus.

## Composition

Total examples: **8,551**

| Source benchmark (`dataset_source`) | Examples | Notes |
|---|---:|---|
| `AV_Odyssey_Bench` | 4,555 | Full selected subset retained as a protocol exception |
| `VideoHolmes` | 885 | Query-level cleaned subset |
| `WorldSense` | 875 | Query-level cleaned subset |
| `IntentBench` | 660 | Query-level cleaned subset |
| `OmniBench` | 417 | Query-level cleaned subset |
| `CG-AV-Counting` | 376 | Full selected subset retained as a protocol exception |
| `OmniVideoBench` | 318 | Query-level cleaned subset |
| `Daily-Omni` | 237 | Query-level cleaned subset |
| `UNO-Bench` | 228 | Query-level cleaned subset |

## Data format

Each record contains the following fields:

- `dataset_source`: source benchmark name
- `source_id`: source sample identifier
- `question`: question text
- `options`: candidate answers; may be empty for some benchmarks
- `answer`: benchmark-native gold answer
- `media_paths`: relative media references with `image`, `audio`, and `video` lists
- `question_type`: benchmark-native question category; may be `null`

Example:

```json
{
  "dataset_source": "OmniVideoBench",
  "source_id": "omnivideobench:0",
  "question": "Before picking up the kitten, the blogger explains a sign. Which concepts can it be associated with?",
  "options": [
    "A.Ancient Chinese stories and Japanese anime",
    "B.Ancient Chinese Imperial Palace Architecture and Japanese Bar Names",
    "C.A certain type of Chinese cuisine and a certain type of Southeast Asian opera",
    "D.Chinese garden art and Western palace architecture"
  ],
  "answer": "Ancient Chinese stories and Japanese anime",
  "media_paths": {
    "image": [],
    "audio": [],
    "video": ["videos/video_1.mp4"]
  },
  "question_type": "reference reasoning"
}
```
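
If you prefer not to depend on `datasets`, the slim file can be read with the standard library alone. A minimal sketch, assuming `omniclean.test.jsonl` has been downloaded into the working directory:

```python
import json
from collections import Counter

# Read the slim JSONL line by line; each line is one record.
with open("omniclean.test.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f]

# Recompute the per-source counts from the Composition table above.
print(Counter(r["dataset_source"] for r in records))
```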

## Important notes

### Benchmark-native answers

`answer` is not normalized into a single format across all sources. Depending on the benchmark, it may be:

- a single option letter such as `A`
- multiple option letters such as `D,E,F`
- a numeric answer such as `18`
- the full answer text
- a short free-form label such as `Yes`

Evaluation should therefore use benchmark-aware answer normalization; a sketch follows below.
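
A minimal sketch of one possible normalizer; the rules below are illustrative assumptions, not the official scoring protocol of any source benchmark:

```python
import re

def normalize_answer(raw: str, options: list[str]) -> str:
    """Map a benchmark-native answer onto a comparable canonical form.

    Illustrative only: real scoring should follow each source
    benchmark's own protocol.
    """
    ans = raw.strip()

    # Multiple option letters, e.g. "D,E,F" -> sorted canonical string.
    if re.fullmatch(r"[A-Z](,[A-Z])+", ans):
        return ",".join(sorted(ans.split(",")))

    # Single option letter, e.g. "A".
    if re.fullmatch(r"[A-Z]", ans):
        return ans

    # Full answer text: map back to its option letter when options are
    # formatted like "A.Ancient Chinese stories and Japanese anime".
    for opt in options:
        letter, _, text = opt.partition(".")
        if text.strip().lower() == ans.lower():
            return letter

    # Numeric or free-form answers pass through lowercased.
    return ans.lower()
```

On the example record above, this maps the full answer text `"Ancient Chinese stories and Japanese anime"` back to its option letter `A`.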

### Optional fields by source benchmark

- `options` can be empty for some examples.
- `question_type` can be `null` for some examples.
- `media_paths` always contains the keys `image`, `audio`, and `video`, but some lists are empty; see the modality sketch after this list.
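
Because modality structure varies by source benchmark, it can be useful to tag each example with the modalities it actually uses. A small sketch over the record format defined above:

```python
def modality_signature(record: dict) -> str:
    """Return e.g. 'video', 'image+audio', or 'audio+video' depending on
    which media_paths lists are non-empty."""
    present = [m for m in ("image", "audio", "video") if record["media_paths"][m]]
    return "+".join(present) if present else "text-only"
```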

### Protocol exceptions

Two source benchmarks are intentionally retained as selected full subsets in this release:

- `AV_Odyssey_Bench`: a visual-only filtered subset is not defined because some answer options contain audio-bearing content.
- `CG-AV-Counting`: visual-only probing is used diagnostically, but a filtered score is not reported because further exclusion would overly shrink an already difficult subset.

## Loading with `datasets`

```python
from datasets import load_dataset

ds = load_dataset("che111/OmniClean", "slim", split="test")
print(ds[0])
```
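
The `slim` config carries only metadata, so media files must be resolved against a local copy of the released layout. A hedged sketch using `huggingface_hub.snapshot_download`, assuming the media directories ship in the same dataset repo (verify this against the actual release layout):

```python
from pathlib import Path

from datasets import load_dataset
from huggingface_hub import snapshot_download

# Assumption: media assets live alongside the JSONL in this dataset repo.
root = Path(snapshot_download("che111/OmniClean", repo_type="dataset"))

ds = load_dataset("che111/OmniClean", "slim", split="test")
example = ds[0]

# Resolve the relative media references against the downloaded snapshot.
video_files = [root / p for p in example["media_paths"]["video"]]
print(video_files)
```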

## Limitations

- This release keeps benchmark-native answer formats instead of forcing a single unified answer schema.
- Source benchmarks differ in modality structure: some examples are video-only, some are image+audio, and some are audio+video.
- Relative paths in `media_paths` should be interpreted with respect to the released data layout.

## Citation

If you use OmniClean, please cite the accompanying paper:

```bibtex
@misc{omniclean2026,
  title={Probing and Boosting Omni Understanding: Leakage-Aware Evaluation and a Staged Post-Training Study},
  author={StepFun-Audio Team},
  year={2026}
}
```

## License

Please replace this section with the final license and confirm that redistribution terms are compatible with all included source benchmarks and media assets.