---
license: apache-2.0
task_categories:
- image-classification
---
# OmniBenchmark-1K
OmniBenchmark-1K is a challenging benchmark for Class-Incremental Continual Learning designed to evaluate performance on very long task sequences, ranging from 100 to over 300 non-overlapping tasks.
The dataset was introduced in the paper [Scaling Continual Learning to 300+ Tasks with Bi-Level Routing Mixture-of-Experts](https://huggingface.co/papers/2602.03473).
- **GitHub:** [https://github.com/LMMMEng/CaRE](https://github.com/LMMMEng/CaRE)
- **Paper:** [Hugging Face](https://huggingface.co/papers/2602.03473) | [arXiv](https://arxiv.org/abs/2602.03473)
## Description
OmniBenchmark-1K provides a large-scale evaluation protocol for comprehensively assessing continual learners. While standard benchmarks typically cover 5-20 tasks, this benchmark enables evaluation on sequences of hundreds of tasks, stress-testing the stability and plasticity of models over long horizons.
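As an illustration of the class-incremental setup described above, the sketch below partitions a label space into consecutive, non-overlapping tasks. This is a minimal, generic example, not the official split protocol; the class counts are illustrative assumptions.

```python
def make_task_splits(num_classes: int, classes_per_task: int) -> list[list[int]]:
    """Split class indices [0, num_classes) into consecutive, non-overlapping tasks."""
    if num_classes % classes_per_task != 0:
        raise ValueError("num_classes must be divisible by classes_per_task")
    return [
        list(range(start, start + classes_per_task))
        for start in range(0, num_classes, classes_per_task)
    ]

# Example: a 300-task sequence over a hypothetical 1,200-class label space.
tasks = make_task_splits(num_classes=1200, classes_per_task=4)
print(len(tasks))   # 300
print(tasks[0])     # [0, 1, 2, 3]
```

In class-incremental learning, each task introduces a disjoint set of classes, and the model is evaluated on all classes seen so far without access to task identity at test time.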
## Citations
If you find this dataset useful for your research, please cite:
```bibtex
@inproceedings{lou2026scaling,
  title     = {Scaling Continual Learning to 300+ Tasks with Bi-Level Routing Mixture-of-Experts},
  author    = {Lou, Meng and Fu, Yunxiang and Yu, Yizhou},
  booktitle = {International Conference on Machine Learning},
  year      = {2026}
}
@inproceedings{zhang2022benchmarking,
  title     = {Benchmarking omni-vision representation through the lens of visual realms},
  author    = {Zhang, Yuanhan and Yin, Zhenfei and Shao, Jing and Liu, Ziwei},
  booktitle = {European Conference on Computer Vision},
  year      = {2022}
}
```