---
language:
- en
license: cc-by-4.0
task_categories:
- video-classification
- image-segmentation
- object-detection
- image-classification
- feature-extraction
- image-feature-extraction
- keypoint-detection
- zero-shot-object-detection
- zero-shot-classification
- zero-shot-image-classification
- unconditional-image-generation
- mask-generation
- image-to-video
- image-to-image
- visual-question-answering
- visual-document-retrieval
- tabular-classification
- tabular-regression
- graph-ml
- video-to-video
- any-to-any
- robotics
tags:
- medical
- surgery
- ophthalmology
- cataract
- phacoemulsification
- surgical-data-science
- computer-vision
- video-analysis
- phase-recognition
- workflow-analysis
- instance-segmentation
- object-tracking
- skill-assessment
- kinematics
- action-recognition
- surgical-ai
- computer-assisted-surgery
- deep-learning
- domain-adaptation
- artificial-intelligence
- dataset
- medtech
- mohammad-javad-ahmadi
- aras
- kntu
- tums
- arxiv:2510.16371
pretty_name: "Cataract-LMM"
size_categories:
- 1K<n<10K
annotations_creators:
- expert-generated
source_datasets:
- original
---

<div align="center">
<a href="https://mjahmadee.github.io/Cataract-LMM/">
<img src="https://img.shields.io/badge/🌐%20Project%20Website-Explore_Now-0052CC?style=for-the-badge" alt="Project Website" />
</a>
<a href="https://github.com/MJAHMADEE/Cataract-LMM">
<img src="https://img.shields.io/badge/GitHub-Code_%26_Docs-181717?style=for-the-badge&logo=github&logoColor=white" alt="GitHub Repository" />
</a>
<a href="https://huggingface.co/datasets/mjahmadi/Cataract-LMM">
<img src="https://img.shields.io/badge/🤗%20Hugging%20Face-Download_Dataset-FFD21E?style=for-the-badge&logoColor=black" alt="Hugging Face Dataset" />
</a>
</div>
<br>

# 👁️ Cataract-LMM: Large-Scale Multi-Source Multi-Task Benchmark for Surgical AI

Welcome to the official repository for the **Cataract-LMM** dataset. Hosted on [Hugging Face](https://huggingface.co/datasets/mjahmadi/Cataract-LMM), this dataset is a comprehensive, clinically representative benchmark designed to accelerate deep learning research in surgical video analysis. By bridging the gap between isolated, single-task datasets and the complex reality of surgical environments, Cataract-LMM provides the robust data needed to train generalizable, multi-task Computer-Assisted Surgery (CAS) systems.

---

## 📋 Table of Contents

- [📄 Publication Details](#publication)
- [📊 Dataset Overview](#overview)
- [🗂️ The Five Data Subsets](#subsets)
- [📁 Global Directory Structure](#structure)
- [🔖 Naming Nomenclature & Traceability](#naming)
- [🔄 Versioning & Updates](#versioning)
- [📖 Citation & Academic Request](#citation)
- [💬 Contact & Connect](#contact)

---

<a id="publication"></a>
## 📄 Publication Details

This dataset is the foundation of the following research paper. If you find this repository useful, please consider reading and citing our work:

> ### **[Cataract-LMM: Large-Scale Multi-Source Multi-Task Benchmark for Deep Learning in Surgical Video Analysis](https://arxiv.org/abs/2510.16371)**
>
> **Authors:**
> Mohammad Javad Ahmadi¹, Iman Gandomi¹, Parisa Abdi², Seyed-Farzad Mohammadi², Amirhossein Taslimi¹, Mehdi Khodaparast², Hassan Hashemi³, Mahdi Tavakoli⁴, Hamid D. Taghirad¹
>
> **Affiliations:**
> ¹ *Applied Robotics and AI Solutions (ARAS), Faculties of Electrical and Computer Engineering, K.N. Toosi University of Technology, Tehran, Iran*
> ² *Translational Ophthalmology Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran*
> ³ *Noor Ophthalmology Research Center, Noor Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran*
> ⁴ *Departments of Electrical and Computer Engineering & Biomedical Engineering, University of Alberta, Edmonton, AB, Canada*

---

<a id="overview"></a>
## 📊 Dataset Overview

Cataract-LMM provides unprecedented scale and depth of annotation for phacoemulsification cataract surgery, enabling researchers to tackle real-world clinical variation.

* **Massive Scale:** 3,000 complete surgical procedures encompassing 1,134.2 hours of continuous footage.
* **Multi-Source Heterogeneity (Domain Shift):** Prospectively collected from two distinct clinical centers, ensuring genuine hardware and procedural diversity:
  * **Center S1 (Farabi Eye Hospital):** 2,930 procedures acquired with a Haag-Streit HS Hi-R NEO 900 microscope (720×480 resolution @ 30 fps).
  * **Center S2 (Noor Eye Hospital):** 70 procedures acquired with a ZEISS ARTEVO 800 digital microscope (1920×1080 resolution @ 60 fps).
* **Procedural Diversity:** Captures stochastically varied workflows, unscripted intra-operative events, and a broad spectrum of surgical proficiency ranging from novice residents to expert attendings.

---

<a id="subsets"></a>
## 🗂️ The Five Data Subsets

To facilitate modular access and targeted research, the dataset is stratified into five primary sub-repositories. Each directory is enriched with distinct, complementary layers of annotation.

> 💡 **Usage Tip:** Each subdirectory contains its own dedicated `README.md` file detailing exact data formats and extraction guidelines.

### [1️⃣ Phase Recognition](https://huggingface.co/datasets/mjahmadi/Cataract-LMM/tree/main/1_Phase_Recognition)
* **Scope:** 150 full procedures.
* **Annotations:** Frame-wise temporal boundaries for 13 distinct surgical phases (e.g., *Incision*, *Phacoemulsification*, *Idle*).
* **Use Cases:** Automated surgical workflow analysis, real-time causal inference, and procedural summarization.
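
As a minimal illustration of how frame-wise phase boundaries can be consumed, the sketch below expands interval annotations into per-frame labels. The `(start_frame, end_frame, phase)` tuple format here is hypothetical; consult this subset's own `README.md` for the actual annotation schema.

```python
# Sketch: expand phase intervals into one label per frame.
# The interval format is a stand-in, not the official schema.

def intervals_to_frame_labels(intervals, num_frames, background="Idle"):
    """Assign each frame index a phase name from [start, end) intervals."""
    labels = [background] * num_frames
    for start, end, phase in intervals:
        for i in range(max(0, start), min(end, num_frames)):
            labels[i] = phase
    return labels

annotations = [(0, 3, "Incision"), (3, 7, "Capsulorhexis")]
print(intervals_to_frame_labels(annotations, 8))
# ['Incision', 'Incision', 'Incision', 'Capsulorhexis', 'Capsulorhexis',
#  'Capsulorhexis', 'Capsulorhexis', 'Idle']
```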

### [2️⃣ Instance Segmentation](https://huggingface.co/datasets/mjahmadi/Cataract-LMM/tree/main/2_Instance_Segmentation)
* **Scope:** 6,094 precisely annotated frames sampled across all phases.
* **Annotations:** Pixel-level polygon masks for 12 classes (2 anatomical structures, 10 specialized surgical instruments), provided in both COCO and YOLO formats.
* **Use Cases:** Detailed scene parsing, multi-class instrument recognition, and cross-center domain-adaptation benchmarking.
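
Since the masks ship in standard COCO format, a per-class instance count can be derived with the standard library alone. Only the universal COCO keys (`categories`, `annotations`) are assumed below; the inline mini dictionary stands in for a real annotation file loaded via `json.load`.

```python
from collections import Counter

# Sketch: count annotated instances per class in a COCO-format file.
# Only the standard COCO keys are assumed; see this subset's README.md
# for the actual file layout and class list.

def instances_per_class(coco: dict) -> dict:
    """Map each category name to its number of annotated instances."""
    id_to_name = {c["id"]: c["name"] for c in coco["categories"]}
    return dict(Counter(id_to_name[a["category_id"]] for a in coco["annotations"]))

# Tiny inline stand-in for a real COCO file (class names are illustrative):
mini_coco = {
    "categories": [{"id": 1, "name": "cornea"}, {"id": 2, "name": "forceps"}],
    "annotations": [{"category_id": 1}, {"category_id": 2}, {"category_id": 2}],
}
print(instances_per_class(mini_coco))  # {'cornea': 1, 'forceps': 2}
```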

### [3️⃣ Object Tracking](https://huggingface.co/datasets/mjahmadi/Cataract-LMM/tree/main/3_Object_Tracking)
* **Scope:** 170 continuous video clips of the *Capsulorhexis* phase (469,118 densely annotated frames).
* **Annotations:** Spatiotemporal tracking featuring instance masks, persistent tracking IDs, bounding boxes, and functional keypoints (instrument tips, centroids).
* **Use Cases:** Surgical instrument tracking (SOT/MOTS), kinematic analysis, and derivation of objective motion-economy metrics.
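
The functional keypoints make simple kinematic summaries straightforward to derive. The sketch below computes two illustrative metrics from an instrument-tip trajectory; these are generic textbook definitions, not the paper's exact formulation.

```python
import math

# Sketch: kinematic metrics from a tracked instrument-tip trajectory,
# given as a list of (x, y) pixel coordinates, one per frame.
# Generic definitions for illustration only.

def path_length(points):
    """Total distance travelled by the tip across consecutive frames."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

def motion_economy(points):
    """Straight-line displacement divided by actual path length (<= 1)."""
    total = path_length(points)
    return math.dist(points[0], points[-1]) / total if total else 1.0

tip = [(0.0, 0.0), (3.0, 4.0), (6.0, 8.0)]
print(path_length(tip), motion_economy(tip))  # 10.0 1.0
```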

### [4️⃣ Skill Assessment](https://huggingface.co/datasets/mjahmadi/Cataract-LMM/tree/main/4_Skill_Assessment)
* **Scope:** The same 170 capsulorhexis video clips used in the Object Tracking subset.
* **Annotations:** Objective surgical skill scores evaluated on a 5-point continuous scale across 6 performance indicators (adapted from GRASIS/ICO-OSCAR), adjudicated by expert surgeons.
* **Use Cases:** Automated surgical evaluation, continuous skill regression, and linking geometric motion tracking to competency ratings.
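
To show how the per-indicator scores might be aggregated into a single target for regression, here is a minimal sketch. The indicator names are placeholders, and averaging is only one possible summary, not the paper's scoring protocol.

```python
# Sketch: aggregate six 1-5 indicator scores into one summary rating.
# Indicator names and the mean aggregation are illustrative assumptions.

def overall_score(indicator_scores: dict) -> float:
    """Mean of the indicator scores, validated against the 5-point scale."""
    if not all(1.0 <= s <= 5.0 for s in indicator_scores.values()):
        raise ValueError("scores must lie on the 5-point scale")
    return sum(indicator_scores.values()) / len(indicator_scores)

scores = {f"indicator_{i}": s for i, s in enumerate([4, 5, 3, 4, 4, 4], 1)}
print(overall_score(scores))  # 4.0
```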

### [5️⃣ Raw Videos](https://huggingface.co/datasets/mjahmadi/Cataract-LMM/tree/main/5_Raw_Videos)
* **Scope:** The complete corpus of 3,000 entirely de-identified, unannotated surgical recordings.
* **Use Cases:** Self-Supervised Learning (SSL), Vision-Language Pre-training (VLP) via retrieval-augmented frameworks, and training controllable Generative AI models.

---

<a id="structure"></a>
## 📁 Global Directory Structure

The repository is organized to maximize download stability and logical separation of tasks. Below is the high-level architecture:

> 📦 **[Cataract-LMM (Root)](https://huggingface.co/datasets/mjahmadi/Cataract-LMM)**<br>
> ├── 📄 [`README.md`](https://huggingface.co/datasets/mjahmadi/Cataract-LMM/blob/main/README.md) *– This global documentation file*<br>
> ├── 📁 [`1_Phase_Recognition/`](https://huggingface.co/datasets/mjahmadi/Cataract-LMM/tree/main/1_Phase_Recognition) *– Workflow and temporal phase annotations & clips*<br>
> ├── 📁 [`2_Instance_Segmentation/`](https://huggingface.co/datasets/mjahmadi/Cataract-LMM/tree/main/2_Instance_Segmentation) *– COCO/YOLO masks and extracted static frames*<br>
> ├── 📁 [`3_Object_Tracking/`](https://huggingface.co/datasets/mjahmadi/Cataract-LMM/tree/main/3_Object_Tracking) *– Continuous multi-layered tracking geometries*<br>
> ├── 📁 [`4_Skill_Assessment/`](https://huggingface.co/datasets/mjahmadi/Cataract-LMM/tree/main/4_Skill_Assessment) *– Expert-adjudicated clinical proficiency rubrics*<br>
> └── 📁 [`5_Raw_Videos/`](https://huggingface.co/datasets/mjahmadi/Cataract-LMM/tree/main/5_Raw_Videos) *– The massive 3,000-procedure unannotated corpus*<br>
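
Because each subset is isolated at the top level, a single subset can be fetched without downloading the whole corpus. The sketch below uses the `huggingface_hub` library's standard `snapshot_download` function with an `allow_patterns` filter; it assumes that package is installed, and the folder name should be adjusted to the subset you need.

```python
# Sketch: download one Cataract-LMM subset instead of the full corpus.
# Assumes `pip install huggingface_hub`; `allow_patterns` is that
# library's standard file-filtering mechanism.

def subset_patterns(subset_dir: str) -> list:
    """Glob patterns restricting a snapshot to one top-level subset folder."""
    return [f"{subset_dir}/**"]

def download_subset(subset_dir: str) -> str:
    """Download a single subset and return its local directory."""
    from huggingface_hub import snapshot_download  # deferred: needs network
    return snapshot_download(
        repo_id="mjahmadi/Cataract-LMM",
        repo_type="dataset",
        allow_patterns=subset_patterns(subset_dir),
    )

# Example (requires network access):
# local_dir = download_subset("1_Phase_Recognition")
```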

---

<a id="naming"></a>
## 🔖 Naming Nomenclature & Traceability

To ensure traceability across the multi-task subsets and the raw data pool, all files adhere to a strict, standardized naming convention (e.g., `TR_0001_S1_P03` or `RV_2253_S1`):

* **Task Prefix:** Indicates the subset (`PH` = Phase, `SE` = Segmentation, `TR` = Tracking, `SK` = Skill, `RV` = Raw Video).
* **Global ID / Subset Index:** A unique identifier mapping the annotated subset directly back to the original raw video in the 3,000-procedure corpus.
* **Clinical Source (`S1` / `S2`):** Indicates the acquisition site, providing crucial metadata for domain adaptation and generalization benchmarking.
* **Procedural Phase (`P03`):** Where applicable (e.g., tracking/skill clips), denotes the specific isolated surgical phase (*Capsulorhexis*).
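
Under this convention, file stems can be parsed mechanically. The sketch below is one possible parser built from the components listed above; the dictionary keys are my own labels, not an official schema.

```python
import re

# Sketch: parse standardized Cataract-LMM file stems such as
# "TR_0001_S1_P03" or "RV_2253_S1". Key names are illustrative labels.

NAME_RE = re.compile(
    r"^(?P<task>PH|SE|TR|SK|RV)_(?P<video_id>\d+)_(?P<site>S[12])"
    r"(?:_(?P<phase>P\d{2}))?$"
)

def parse_name(stem: str) -> dict:
    """Split a file stem into task prefix, global ID, site, and phase."""
    m = NAME_RE.match(stem)
    if m is None:
        raise ValueError(f"unrecognized Cataract-LMM name: {stem!r}")
    return m.groupdict()

print(parse_name("TR_0001_S1_P03"))
# {'task': 'TR', 'video_id': '0001', 'site': 'S1', 'phase': 'P03'}
print(parse_name("RV_2253_S1"))
# {'task': 'RV', 'video_id': '2253', 'site': 'S1', 'phase': None}
```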

---

<a id="versioning"></a>
## 🔄 Versioning & Updates

The Cataract-LMM dataset is a dynamically maintained benchmark. As new annotations, baseline models, or structural improvements are integrated, we will release updated versions.

To ensure you are working with the most current data:
1. **Check the Version History:** Navigate to the [History / Commits tab](https://huggingface.co/datasets/mjahmadi/Cataract-LMM/commits/main) of this repository to view the latest updates and version tags.
2. **Stay Connected:** Feel free to reach out to the author (contact info below) to inquire about upcoming updates, report potential dataset anomalies, or discuss integrating new annotation layers.

---

<a id="citation"></a>
## 📖 Citation & Academic Request

The Cataract-LMM dataset is open-access and released under the **CC-BY 4.0** license.

Our manuscript detailing the comprehensive methodology, algorithmic baselines, and technical validation of this dataset has been submitted to **Nature Scientific Data**. While the preprint is available for immediate reference on arXiv ([arXiv:2510.16371](https://arxiv.org/abs/2510.16371)), we kindly request that any publications, derivative works, or systems utilizing this dataset **cite the final peer-reviewed journal version** once it is officially published.

---

<a id="contact"></a>
## 💬 Contact & Connect

**Mohammad Javad Ahmadi**

I welcome collaborations, technical inquiries regarding the dataset, and discussions on advancing AI in medical applications. Feel free to connect with me through any of the channels below:

* 📧 **Academic Email:** mjahmadi@email.kntu.ac.ir
* 📧 **Personal Email:** mjahmadee@gmail.com

<br>

<p align="left">
<a href="https://www.linkedin.com/in/mjahmadi/">
<img src="https://img.shields.io/badge/LinkedIn-0A66C2?style=for-the-badge&logo=linkedin&logoColor=white" alt="LinkedIn" />
</a>
<a href="https://scholar.google.com/citations?user=wTnN9IEAAAAJ&hl=en">
<img src="https://img.shields.io/badge/Google_Scholar-4285F4?style=for-the-badge&logo=google-scholar&logoColor=white" alt="Google Scholar" />
</a>
<a href="https://github.com/MJAHMADEE">
<img src="https://img.shields.io/badge/GitHub-181717?style=for-the-badge&logo=github&logoColor=white" alt="GitHub" />
</a>
<a href="https://huggingface.co/mjahmadi">
<img src="https://img.shields.io/badge/Hugging_Face-FFD21E?style=for-the-badge&logo=huggingface&logoColor=black" alt="Hugging Face" />
</a>
</p>