---
language:
- en
license: cc-by-4.0
task_categories:
- video-classification
- image-segmentation
- object-detection
- image-classification
- feature-extraction
- image-feature-extraction
- keypoint-detection
- zero-shot-object-detection
- zero-shot-classification
- zero-shot-image-classification
- unconditional-image-generation
- mask-generation
- image-to-video
- image-to-image
- visual-question-answering
- visual-document-retrieval
- tabular-classification
- tabular-regression
- graph-ml
- video-to-video
- any-to-any
- robotics
tags:
- medical
- surgery
- ophthalmology
- cataract
- phacoemulsification
- surgical-data-science
- computer-vision
- video-analysis
- phase-recognition
- workflow-analysis
- instance-segmentation
- object-tracking
- skill-assessment
- kinematics
- action-recognition
- surgical-ai
- computer-assisted-surgery
- deep-learning
- domain-adaptation
- artificial-intelligence
- dataset
- medtech
- mohammad-javad-ahmadi
- aras
- kntu
- tums
- arxiv:2510.16371
pretty_name: "Cataract-LMM"
size_categories:
- 1K<n<10K
annotations_creators:
- expert-generated
source_datasets:
- original
---
<div align="center">
<a href="https://mjahmadee.github.io/Cataract-LMM/">
<img src="https://img.shields.io/badge/🌐%20Project%20Website-Explore_Now-0052CC?style=for-the-badge" alt="Project Website" />
</a>
<a href="https://github.com/MJAHMADEE/Cataract-LMM">
<img src="https://img.shields.io/badge/GitHub-Code_%26_Docs-181717?style=for-the-badge&logo=github&logoColor=white" alt="GitHub Repository" />
</a>
<a href="https://huggingface.co/datasets/mjahmadi/Cataract-LMM">
<img src="https://img.shields.io/badge/🤗%20Hugging%20Face-Download_Dataset-FFD21E?style=for-the-badge&logoColor=black" alt="Hugging Face Dataset" />
</a>
</div>
<br>
# πŸ‘οΈ Cataract-LMM: Large-Scale Multi-Source Multi-Task Benchmark for Surgical AI
Welcome to the official repository for the **Cataract-LMM** dataset. Hosted on [Hugging Face](https://huggingface.co/datasets/mjahmadi/Cataract-LMM), this dataset represents a comprehensive, clinically representative benchmark designed to accelerate deep learning research in surgical video analysis. By bridging the gap between isolated, single-task datasets and the complex reality of surgical environments, Cataract-LMM provides the robust data necessary to train generalizable, multi-task Computer-Assisted Surgery (CAS) systems.
---
## 📑 Table of Contents
- [📜 Publication Details](#publication)
- [📊 Dataset Overview](#overview)
- [🗂️ The Five Data Subsets](#subsets)
- [📁 Global Directory Structure](#structure)
- [🔗 Naming Nomenclature & Traceability](#naming)
- [🔄 Versioning & Updates](#versioning)
- [📝 Citation & Academic Request](#citation)
- [📬 Contact & Connect](#contact)
---
<a id="publication"></a>
## 📜 Publication Details
This dataset is the foundation of the following research paper. If you find this repository useful, please consider reading and citing our work:
> ### **[Cataract-LMM: Large-Scale Multi-Source Multi-Task Benchmark for Deep Learning in Surgical Video Analysis](https://arxiv.org/abs/2510.16371)**
>
> **Authors:**
> Mohammad Javad Ahmadi¹, Iman Gandomi¹, Parisa Abdi², Seyed-Farzad Mohammadi², Amirhossein Taslimi¹, Mehdi Khodaparast², Hassan Hashemi³, Mahdi Tavakoli⁴, Hamid D. Taghirad¹
>
> **Affiliations:**
> ¹ *Applied Robotics and AI Solutions (ARAS), Faculties of Electrical and Computer Engineering, K.N. Toosi University of Technology, Tehran, Iran*
> ² *Translational Ophthalmology Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran*
> ³ *Noor Ophthalmology Research Center, Noor Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran*
> ⁴ *Departments of Electrical and Computer Engineering & Biomedical Engineering, University of Alberta, Edmonton, AB, Canada*
---
<a id="overview"></a>
## 📊 Dataset Overview
Cataract-LMM provides an unprecedented scale and depth of annotation for phacoemulsification cataract surgery, enabling researchers to tackle real-world clinical variations.
* **Massive Scale:** 3,000 complete surgical procedures encompassing 1,134.2 hours of continuous footage.
* **Multi-Source Heterogeneity (Domain Shift):** Prospectively collected from two distinct clinical centers, ensuring rigorous hardware and procedural diversity:
* **Center S1 (Farabi Eye Hospital):** 2,930 procedures acquired via Haag-Streit HS Hi-R NEO 900 (720×480 resolution @ 30 fps).
* **Center S2 (Noor Eye Hospital):** 70 procedures acquired via ZEISS ARTEVO 800 digital microscope (1920×1080 resolution @ 60 fps).
* **Procedural Diversity:** Captures stochastically varied workflows, unscripted intra-operative events, and a broad spectrum of surgical proficiency ranging from novice residents to expert attendings.
---
<a id="subsets"></a>
## πŸ—‚οΈ The Five Data Subsets
To facilitate modular access and targeted research, the dataset is organized into five task-specific subdirectories, each enriched with a distinct, complementary layer of annotation.
> 💡 **Usage Tip:** Each subdirectory contains its own dedicated `README.md` file detailing exact data formats and extraction guidelines.
### [1️⃣ Phase Recognition](https://huggingface.co/datasets/mjahmadi/Cataract-LMM/tree/main/1_Phase_Recognition)
* **Scope:** 150 full procedures.
* **Annotations:** Frame-wise temporal boundaries for 13 distinct surgical phases (e.g., *Incision*, *Phacoemulsification*, *Idle*).
* **Use Cases:** Automated surgical workflow analysis, real-time causal inference, and procedural summarization.
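For frame-wise modeling, phase boundaries can be expanded into a per-frame label sequence along these lines (a minimal sketch: the `(start_frame, end_frame, phase)` triples and the `Idle` fallback are illustrative assumptions; the subset's own `README.md` defines the actual annotation schema):

```python
# Illustrative sketch -- converts phase boundary annotations into per-frame
# labels for frame-wise phase recognition. The (start, end, phase) triples
# are an assumed format; see 1_Phase_Recognition/README.md for the real one.
def frame_labels(boundaries, num_frames, background="Idle"):
    """boundaries: iterable of (start_frame, end_frame, phase), end exclusive."""
    labels = [background] * num_frames
    for start, end, phase in boundaries:
        for frame in range(max(start, 0), min(end, num_frames)):
            labels[frame] = phase
    return labels
```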
### [2️⃣ Instance Segmentation](https://huggingface.co/datasets/mjahmadi/Cataract-LMM/tree/main/2_Instance_Segmentation)
* **Scope:** 6,094 precisely annotated frames sampled across all phases.
* **Annotations:** Pixel-level polygon masks for 12 classes (2 anatomical structures, 10 specialized surgical instruments) provided in both COCO and YOLO formats.
* **Use Cases:** Detailed scene parsing, multi-class instrument recognition, and cross-center domain adaptation benchmarking.
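Because the masks ship in both COCO and YOLO formats, converting between their bounding-box conventions is a common need when mixing the two exports. A minimal sketch of the standard conversion (COCO uses absolute `[x_min, y_min, width, height]`; YOLO uses normalized `[x_center, y_center, width, height]`):

```python
# Standard COCO -> YOLO bounding-box conversion; not specific to this
# dataset's files, just the two formats' documented conventions.
def coco_to_yolo_bbox(bbox, img_w, img_h):
    """Absolute [x_min, y_min, w, h] -> normalized [x_center, y_center, w, h]."""
    x, y, w, h = bbox
    return [(x + w / 2) / img_w, (y + h / 2) / img_h, w / img_w, h / img_h]
```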
### [3️⃣ Object Tracking](https://huggingface.co/datasets/mjahmadi/Cataract-LMM/tree/main/3_Object_Tracking)
* **Scope:** 170 continuous video clips of the *Capsulorhexis* phase (469,118 densely annotated frames).
* **Annotations:** Spatiotemporal tracking featuring instance masks, persistent tracking IDs, bounding boxes, and functional keypoints (instrument tips, centroids).
* **Use Cases:** Surgical instrument tracking (SOT/MOTS), kinematic analysis, and derivation of objective motion economy metrics.
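As one example of a motion-economy metric derivable from the keypoint annotations, total tip path length can be computed from consecutive tip positions (a sketch assuming per-frame `(x, y)` pixel coordinates; the subset's `README.md` documents the actual keypoint export format):

```python
import math

# Sketch of a motion-economy metric: total instrument-tip path length,
# i.e. the summed Euclidean distance between consecutive (x, y) positions.
def path_length(tip_track):
    return sum(
        math.hypot(x1 - x0, y1 - y0)
        for (x0, y0), (x1, y1) in zip(tip_track, tip_track[1:])
    )
```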
### [4️⃣ Skill Assessment](https://huggingface.co/datasets/mjahmadi/Cataract-LMM/tree/main/4_Skill_Assessment)
* **Scope:** The same 170 capsulorhexis video clips used in the Object Tracking subset.
* **Annotations:** Objective surgical skill scores evaluated on a 5-point continuous scale across 6 performance indicators (adapted from GRASIS/ICO-OSCAR), adjudicated by expert surgeons.
* **Use Cases:** Automated surgical evaluation, continuous skill regression, and linking geometric motion tracking to competency ratings.
### [5️⃣ Raw Videos](https://huggingface.co/datasets/mjahmadi/Cataract-LMM/tree/main/5_Raw_Videos)
* **Scope:** The complete corpus of 3,000 entirely de-identified, unannotated surgical recordings.
* **Use Cases:** Self-Supervised Learning (SSL), Vision-Language Pre-training (VLP) via retrieval-augmented frameworks, and training controllable Generative AI models.
---
<a id="structure"></a>
## πŸ“ Global Directory Structure
The repository is organized to maximize download stability and logical separation of tasks. Below is the high-level architecture:
> 📦 **[Cataract-LMM (Root)](https://huggingface.co/datasets/mjahmadi/Cataract-LMM)**<br>
> ├── 📄 [`README.md`](https://huggingface.co/datasets/mjahmadi/Cataract-LMM/blob/main/README.md) *— This global documentation file*<br>
> ├── 📂 [`1_Phase_Recognition/`](https://huggingface.co/datasets/mjahmadi/Cataract-LMM/tree/main/1_Phase_Recognition) *— Workflow and temporal phase annotations & clips*<br>
> ├── 📂 [`2_Instance_Segmentation/`](https://huggingface.co/datasets/mjahmadi/Cataract-LMM/tree/main/2_Instance_Segmentation) *— COCO/YOLO masks and extracted static frames*<br>
> ├── 📂 [`3_Object_Tracking/`](https://huggingface.co/datasets/mjahmadi/Cataract-LMM/tree/main/3_Object_Tracking) *— Continuous multi-layered tracking geometries*<br>
> ├── 📂 [`4_Skill_Assessment/`](https://huggingface.co/datasets/mjahmadi/Cataract-LMM/tree/main/4_Skill_Assessment) *— Expert-adjudicated clinical proficiency rubrics*<br>
> └── 📂 [`5_Raw_Videos/`](https://huggingface.co/datasets/mjahmadi/Cataract-LMM/tree/main/5_Raw_Videos) *— The massive 3,000-procedure unannotated corpus*<br>
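If you only need one task, a single subdirectory can be fetched instead of the full corpus. A sketch using `huggingface_hub.snapshot_download` with its `allow_patterns` filter (directory names as in the tree above):

```python
# Subset directory names exactly as they appear in the repository tree.
SUBSETS = {
    1: "1_Phase_Recognition",
    2: "2_Instance_Segmentation",
    3: "3_Object_Tracking",
    4: "4_Skill_Assessment",
    5: "5_Raw_Videos",
}

def subset_patterns(n):
    """Glob pattern restricting a snapshot download to one subset directory."""
    return [f"{SUBSETS[n]}/*"]

if __name__ == "__main__":
    # Requires `pip install huggingface_hub`; downloads only Phase Recognition.
    from huggingface_hub import snapshot_download
    snapshot_download(
        repo_id="mjahmadi/Cataract-LMM",
        repo_type="dataset",
        allow_patterns=subset_patterns(1),
    )
```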
---
<a id="naming"></a>
## 🔗 Naming Nomenclature & Traceability
To ensure flawless traceability across the multi-task subsets and the raw data pool, all files adhere to a strict, standardized naming convention (e.g., `TR_0001_S1_P03` or `RV_2253_S1`):
* **Task Prefix:** Indicates the subset (`PH` = Phase, `SE` = Segmentation, `TR` = Tracking, `SK` = Skill, `RV` = Raw Video).
* **Global ID / Subset Index:** A unique identifier mapping the annotated subset directly back to the original raw video in the 3,000-procedure corpus.
* **Clinical Source (`S1` / `S2`):** Indicates the origin of the acquisition, providing crucial metadata for domain adaptation and generalization benchmarking.
* **Procedural Phase (`P03`):** Where applicable (e.g., tracking/skill clips), denotes the specific isolated surgical phase (*Capsulorhexis*).
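The convention can be checked or parsed programmatically. A minimal sketch covering the general pattern above (per-subset variations are documented in each subset's `README.md`):

```python
import re

# Matches identifiers such as "TR_0001_S1_P03" or "RV_2253_S1":
# task prefix, 4-digit index, clinical source, optional phase code.
NAME_RE = re.compile(
    r"^(?P<task>PH|SE|TR|SK|RV)_(?P<index>\d{4})_(?P<site>S[12])"
    r"(?:_(?P<phase>P\d{2}))?$"
)

def parse_name(stem):
    """Split a Cataract-LMM file stem into its named components."""
    match = NAME_RE.match(stem)
    if match is None:
        raise ValueError(f"not a Cataract-LMM identifier: {stem!r}")
    return {k: v for k, v in match.groupdict().items() if v is not None}
```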
---
<a id="versioning"></a>
## 🔄 Versioning & Updates
The Cataract-LMM dataset is a dynamically maintained benchmark. As new annotations, baseline models, or structural improvements are integrated, we will release updated versions.
To ensure you are working with the most current and robust data:
1. **Check the Version History:** Navigate to the [History / Commits Tab](https://huggingface.co/datasets/mjahmadi/Cataract-LMM/commits/main) of this repository to view the latest updates and version tags.
2. **Stay Connected:** Feel free to reach out to the author (contact info below) to inquire about upcoming updates, report potential dataset anomalies, or discuss integrating new annotation layers.
---
<a id="citation"></a>
## πŸ“ Citation & Academic Request
The Cataract-LMM dataset is open-access and released under the **CC-BY 4.0** license.
Our manuscript detailing the comprehensive methodology, algorithmic baselines, and technical validations of this dataset has been submitted to **Nature Scientific Data**. While the preprint is available for immediate reference on arXiv ([arXiv:2510.16371](https://arxiv.org/abs/2510.16371)), we kindly request that any publications, derivative works, or systems utilizing this dataset **cite the final peer-reviewed journal version** once it is officially published.
---
<a id="contact"></a>
## 📬 Contact & Connect
**Mohammad Javad Ahmadi**
I welcome collaborations, technical inquiries regarding the dataset, and discussions on advancing AI in medical applications. Feel free to connect with me through any of the channels below:
* 📧 **Academic Email:** mjahmadi@email.kntu.ac.ir
* 📧 **Personal Email:** mjahmadee@gmail.com
<br>
<p align="left">
<a href="https://www.linkedin.com/in/mjahmadi/">
<img src="https://img.shields.io/badge/LinkedIn-0A66C2?style=for-the-badge&logo=linkedin&logoColor=white" alt="LinkedIn" />
</a>
<a href="https://scholar.google.com/citations?user=wTnN9IEAAAAJ&hl=en">
<img src="https://img.shields.io/badge/Google_Scholar-4285F4?style=for-the-badge&logo=google-scholar&logoColor=white" alt="Google Scholar" />
</a>
<a href="https://github.com/MJAHMADEE">
<img src="https://img.shields.io/badge/GitHub-181717?style=for-the-badge&logo=github&logoColor=white" alt="GitHub" />
</a>
<a href="https://huggingface.co/mjahmadi">
<img src="https://img.shields.io/badge/Hugging_Face-FFD21E?style=for-the-badge&logo=huggingface&logoColor=black" alt="Hugging Face" />
</a>
</p>