---
license: apache-2.0
task_categories:
- image-classification
---

# OmniBenchmark-1K

OmniBenchmark-1K is a challenging benchmark for class-incremental continual learning, designed to evaluate performance on very long task sequences of 100 to over 300 non-overlapping tasks.

The dataset was introduced in the paper [Scaling Continual Learning to 300+ Tasks with Bi-Level Routing Mixture-of-Experts](https://huggingface.co/papers/2602.03473).

- **GitHub:** [https://github.com/LMMMEng/CaRE](https://github.com/LMMMEng/CaRE)
- **Paper:** [Hugging Face](https://huggingface.co/papers/2602.03473) | [arXiv](https://arxiv.org/abs/2602.03473)

## Description

OmniBenchmark-1K provides a large-scale evaluation protocol for comprehensively assessing continual learners. While standard benchmarks typically cover 5–20 tasks, this dataset enables evaluation on far longer sequences, testing both the stability and the plasticity of models over time.
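The long-sequence protocol amounts to partitioning the label space into disjoint class groups, one group per task. The sketch below illustrates that idea; the function name and the class/task counts are illustrative assumptions, not the official splits (those are defined in the CaRE repository linked above).

```python
# Illustrative sketch of a class-incremental split: class IDs are
# partitioned evenly into non-overlapping tasks. The counts below are
# hypothetical examples, not the official OmniBenchmark-1K splits.

def make_task_splits(num_classes: int, num_tasks: int) -> list[list[int]]:
    """Partition class IDs 0..num_classes-1 into non-overlapping tasks."""
    if num_classes % num_tasks != 0:
        raise ValueError("num_classes must be divisible by num_tasks")
    per_task = num_classes // num_tasks
    return [
        list(range(t * per_task, (t + 1) * per_task))
        for t in range(num_tasks)
    ]

# Example: 1,000 classes split into 100 tasks -> 10 classes per task.
splits = make_task_splits(num_classes=1000, num_tasks=100)
```

At each step of the sequence, the learner only sees training data for the current task's classes, while evaluation covers all classes seen so far.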

## Citations

If you find this dataset useful for your research, please cite:

```bibtex
@inproceedings{lou2026scaling,
  title={Scaling Continual Learning to 300+ Tasks with Bi-Level Routing Mixture-of-Experts},
  author={Lou, Meng and Fu, Yunxiang and Yu, Yizhou},
  booktitle={International Conference on Machine Learning},
  year={2026},
}

@inproceedings{zhang2022benchmarking,
  title={Benchmarking omni-vision representation through the lens of visual realms},
  author={Zhang, Yuanhan and Yin, Zhenfei and Shao, Jing and Liu, Ziwei},
  booktitle={European Conference on Computer Vision},
  year={2022},
}
```