---
license: cc-by-nc-4.0
task_categories:
- video-classification
language:
- en
tags:
- medical
- fall-detection
- activity-recognition
- synthetic
pretty_name: 'OmniFall: A Unified Benchmark for Staged-to-Wild Fall Detection'
size_categories:
- 10K<n<100K
---
# OmniFall: A Unified Benchmark for Staged-to-Wild Fall Detection

OmniFall is a comprehensive fall detection benchmark with dense temporal segment annotations across three components: **OF-Staged** (8 public lab datasets), **OF-In-the-Wild** (genuine accidents from OOPS), and **OF-Synthetic** (12,000 diffusion-generated videos with demographic diversity). All components share a sixteen-class activity taxonomy.

[[Paper]](https://arxiv.org/abs/2505.19889) [[Project Page]](https://simplexsigil.github.io/omnifall/)

## Quickstart

### Labels only (no video files needed)

```python
from datasets import load_dataset

# 8 staged datasets, cross-subject split
ds = load_dataset("simplexsigil2/omnifall", "of-sta-cs")
print(ds["train"][0])
# {'path': ..., 'label': 1, 'start': 0.0, 'end': 2.5, ...}

# Cross-domain: train on staged, test on all (staged + itw + syn)
ds = load_dataset("simplexsigil2/omnifall", "of-sta-to-all-cs")

# Synthetic data with demographic metadata (19 columns)
ds = load_dataset("simplexsigil2/omnifall", "of-syn")
```

### With video loading (`pip install omnifall`)

```python
import omnifall

# OF-Syn videos auto-download from HF Hub (~9.1GB, cached)
ds = omnifall.load("of-syn", video=True)

# OF-ItW requires one-time OOPS video preparation
omnifall.prepare_oops()  # streams ~45GB, extracts ~2.6GB
ds = omnifall.load("of-itw", video=True)

# The video column contains absolute file paths (strings)
print(ds["train"][0]["video"])
```

Video paths can also be added to an already-loaded dataset via `omnifall.add_video(ds, config="of-syn")`. OOPS preparation is also available via CLI: `omnifall prepare-oops`.

## Overview

| Component | Videos | Segments (SV) | Duration (SV) |
|---|---|---|---|
| **OF-Staged** (8 datasets) | 2,164 | 9,590 | 13.81h |
| **OF-ItW** (OOPS) | 818 | 4,022 | 2.65h |
| **OF-Syn** | 12,000 | 19,228 | 16.88h |
| **Total** | **14,982** | **32,840** | **33.34h** |

| Dataset | Type | Videos | Segments (SV) | Duration (SV) | Avg Seg (s) |
|---|---|---|---|---|---|
| [CMDFall](https://www.mica.edu.vn/perso/Tran-Thi-Thanh-Hai/CMDFALL.html) | multi (7 views) | 384 | 6,026 | 7.12h | 4.25 |
| [UP-Fall](https://sites.google.com/up.edu.mx/har-up/) | multi (2 views) | 1,118 | 1,213 | 4.59h | 13.63 |
| [Le2i](https://search-data.ubfc.fr/imvia/FR-13002091000019-2024-04-09_Fall-Detection-Dataset.html) | single | 190 | 967 | 0.79h | 2.95 |
| [GMDCSA24](https://github.com/ekramalam/GMDCSA24-A-Dataset-for-Human-Fall-Detection-in-Videos) | single | 160 | 458 | 0.36h | 2.80 |
| [CAUCAFall](https://data.mendeley.com/datasets/7w7fccy7ky/4) | single | 100 | 258 | 0.28h | 3.85 |
| [EDF](https://doi.org/10.5281/zenodo.15494102) | multi (2 views) | 10 | 254 | 0.22h | 3.14 |
| [OCCU](https://doi.org/10.5281/zenodo.15494102) | multi (2 views) | 10 | 245 | 0.25h | 3.54 |
| [MCFD](https://www.iro.umontreal.ca/~labimage/Dataset/) | multi (8 views) | 192 | 169 | 0.20h | 4.26 |
| [OOPS-Fall](https://oops.cs.columbia.edu/data/) | single | 818 | 4,022 | 2.65h | 2.38 |
| OF-Syn | single | 12,000 | 19,228 | 16.88h | 3.16 |

SV = single-view (one count per unique camera perspective). Multi-view datasets have additional synchronized views; see [statistics.md](statistics.md) for full multi-view counts and class distributions.
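Because every segment carries `start`/`end` timestamps, the durations above can be cross-checked directly from the raw annotations. A minimal sketch, assuming the `labels` config exposes the `label`, `start`, and `end` columns shown in the Quickstart record (exact split names may vary):

```python
from collections import Counter

from datasets import load_dataset

# Tally segment counts and total duration per class index.
# Assumes the `labels` config carries the integer `label` and the
# `start`/`end` columns (in seconds) shown in the Quickstart record.
anns = load_dataset("simplexsigil2/omnifall", "labels")

counts, seconds = Counter(), Counter()
for split in anns.values():
    for seg in split:
        counts[seg["label"]] += 1
        seconds[seg["label"]] += seg["end"] - seg["start"]

for label in sorted(counts):
    print(f"class {label}: {counts[label]} segments, {seconds[label] / 3600:.2f} h")
```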
## Configs

Over 70 configurations are available. Each returns train/validation/test splits.

| Category | Examples | Description |
|---|---|---|
| **Same-domain** | `of-sta-cs`, `of-sta-cv`, `of-itw`, `of-syn` | Train and test from the same source |
| **Cross-domain (to-all)** | `of-sta-to-all-cs`, `of-syn-to-all-cs` | Train on one source, test on all (staged + ItW + Syn) |
| **Individual to-all** | `cmdfall-to-all-cs`, `edf-to-all-cv` | Train on a single dataset, test on all |
| **OF-Syn demographic** | `of-syn-cross-age`, `of-syn-cross-bmi` | Cross-demographic generalization splits |
| **Aggregate** | `cs`, `cv` | All staged + OOPS combined |
| **Individual** | `cmdfall-cs`, `le2i-cv` | Single staged dataset |
| **Labels/metadata** | `labels`, `labels-syn`, `framewise-syn` | Raw annotations without splits |

See [CONFIGS.md](CONFIGS.md) for the complete configuration reference, including deprecated names.
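The cross-domain configs keep the training data from one source while widening the test split to cover all sources, which is visible from the split sizes alone. A quick sketch using config names from the table above (these configs return labels only, so nothing large is downloaded):

```python
from datasets import load_dataset

# Compare split sizes: the to-all variants share their training source
# with the same-domain config but test on staged + ItW + Syn.
for config in ("of-sta-cs", "of-sta-to-all-cs", "cmdfall-to-all-cs"):
    ds = load_dataset("simplexsigil2/omnifall", config)
    print(config, {name: len(split) for name, split in ds.items()})
```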
## Citation

If you use OmniFall in your research, please cite our paper as well as the sub-dataset papers:

```bibtex
@misc{omnifall,
  title={OmniFall: From Staged Through Synthetic to Wild, A Unified Multi-Domain Dataset for Robust Fall Detection},
  author={David Schneider and Zdravko Marinov and Rafael Baur and Zeyun Zhong and Rodi Düger and Rainer Stiefelhagen},
  year={2025},
  eprint={2505.19889},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2505.19889},
}

@inproceedings{omnifall_cmdfall,
  title={A multi-modal multi-view dataset for human fall analysis and preliminary investigation on modality},
  author={Tran, Thanh-Hai and Le, Thi-Lan and Pham, Dinh-Tan and Hoang, Van-Nam and Khong, Van-Minh and Tran, Quoc-Toan and Nguyen, Thai-Son and Pham, Cuong},
  booktitle={2018 24th International Conference on Pattern Recognition (ICPR)},
  pages={1947--1952},
  year={2018},
  organization={IEEE}
}

@article{omnifall_up-fall,
  title={UP-fall detection dataset: A multimodal approach},
  author={Mart{\'\i}nez-Villase{\~n}or, Lourdes and Ponce, Hiram and Brieva, Jorge and Moya-Albor, Ernesto and N{\'u}{\~n}ez-Mart{\'\i}nez, Jos{\'e} and Pe{\~n}afort-Asturiano, Carlos},
  journal={Sensors},
  volume={19},
  number={9},
  pages={1988},
  year={2019},
  publisher={MDPI}
}

@article{omnifall_le2i,
  title={Optimized spatio-temporal descriptors for real-time fall detection: comparison of support vector machine and Adaboost-based classification},
  author={Charfi, Imen and Miteran, Johel and Dubois, Julien and Atri, Mohamed and Tourki, Rached},
  journal={Journal of Electronic Imaging},
  volume={22},
  number={4},
  pages={041106--041106},
  year={2013},
  publisher={Society of Photo-Optical Instrumentation Engineers}
}

@article{omnifall_gmdcsa,
  title={GMDCSA-24: A dataset for human fall detection in videos},
  author={Alam, Ekram and Sufian, Abu and Dutta, Paramartha and Leo, Marco and Hameed, Ibrahim A},
  journal={Data in Brief},
  volume={57},
  pages={110892},
  year={2024},
  publisher={Elsevier}
}

@article{omnifall_cauca,
  title={Dataset CAUCAFall},
  author={Eraso, Jose Camilo and Mu{\~n}oz, Elena and Mu{\~n}oz, Mariela and Pinto, Jesus},
  journal={Mendeley Data},
  volume={4},
  year={2022}
}

@inproceedings{omnifall_edf_occu,
  title={Evaluating depth-based computer vision methods for fall detection under occlusions},
  author={Zhang, Zhong and Conly, Christopher and Athitsos, Vassilis},
  booktitle={International symposium on visual computing},
  pages={196--207},
  year={2014},
  organization={Springer}
}

@article{omnifall_mcfd,
  title={Multiple cameras fall dataset},
  author={Auvinet, Edouard and Rougier, Caroline and Meunier, Jean and St-Arnaud, Alain and Rousseau, Jacqueline},
  journal={DIRO-Universit{\'e} de Montr{\'e}al, Tech. Rep.},
  volume={1350},
  pages={24},
  year={2010}
}

@inproceedings{omnifall_oops,
  title={Oops! predicting unintentional action in video},
  author={Epstein, Dave and Chen, Boyuan and Vondrick, Carl},
  booktitle={Proceedings of the IEEE/CVF conference on computer vision and pattern recognition},
  pages={919--929},
  year={2020}
}
```

## License

The annotations and split definitions are released under [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/). The original video data belongs to the respective dataset owners and should be obtained from the original sources.

## Contact

For questions about the dataset, please contact [david.schneider@kit.edu](mailto:david.schneider@kit.edu).

## Documentation

- [statistics.md](statistics.md) - Full dataset statistics and class distributions
- [CONFIGS.md](CONFIGS.md) - Complete configuration reference and deprecated names
- [STRUCTURE.md](STRUCTURE.md) - Repository structure and data formats
- [LABELS.md](LABELS.md) - Label definitions and annotation guidelines
- [omnifall_dataset_examples.ipynb](omnifall_dataset_examples.ipynb) - Interactive examples with video loading and visualization