---
license: cc-by-4.0
task_categories:
- text-generation
- image-text-to-text
language:
- en
tags:
- machine-unlearning
- multimodal
- benchmark
- evaluation
- privacy
- llm
- vlm
- neurips
pretty_name: Multimodal Unlearning Evaluation Benchmark
size_categories:
- n<1K
configs:
- config_name: multimodal
  data_files:
  - split: train
    path: outputs/multimodal_results.json
- config_name: unimodal
  data_files:
  - split: train
    path: outputs/unimodal_results.json
- config_name: uqs_weights
  data_files:
  - split: train
    path: outputs/uqs_weights.json
- config_name: ranking
  data_files:
  - split: train
    path: outputs/ranking_table.json
- config_name: analysis
  data_files:
  - split: train
    path: outputs/analysis_results.json
- config_name: kr_pilot
  data_files:
  - split: train
    path: outputs/kr_pilot_results.json
- config_name: blip2
  data_files:
  - split: train
    path: outputs/blip2_minimal_summary.json
---

# 🧠 Multimodal Unlearning Evaluation Benchmark

## 📌 Overview

This dataset provides evaluation outputs for studying **metric inconsistency in multimodal machine unlearning**.
It supports reproducibility of the results in:

> *Metric Unreliability in Multimodal Machine Unlearning (NeurIPS 2026)*

---

## 📊 Contents

| File | Description |
|------|-------------|
| 📄 `multimodal_results.json` | Results on VQA benchmarks (MLLMU-Bench, UnLOK-VQA, MMUBench) |
| 📄 `unimodal_results.json` | CIFAR-10 baseline results |
| ⚖️ `uqs_weights.json` | Learned weights for the Unified Quality Score (UQS) |
| 🏆 `ranking_table.json` | Method rankings across metrics |
| 📈 `analysis_results.json` | Correlation and disagreement analysis |
| 🔍 `kr_pilot_results.json` | Knowledge Recoverability (KR) pilot results |
| 🤖 `blip2_minimal_summary.json` | Cross-architecture validation (BLIP-2) |

---

## 🎯 Purpose

This benchmark evaluates five standard unlearning metrics:

- Forget Accuracy (FA)
- Retain Accuracy (RA)
- Membership Inference Attack (MIA)
- Activation Distance (AD)
- JS Divergence (JS)

⚠️ Key finding:

> These metrics produce **conflicting rankings** and do not measure **knowledge recoverability (KR)**.

---

## ⚙️ Usage

All results in the paper can be reproduced directly from these files. Example:

```python
import json

with open("multimodal_results.json") as f:
    data = json.load(f)
```

---

## 📚 Source Datasets (Not Included)

This benchmark builds on:

- MLLMU-Bench
- UnLOK-VQA
- MMUBench
- CIFAR-10

These datasets are not redistributed here; please refer to their original sources.

---

## ⚖️ License

This dataset is released under the CC-BY-4.0 license.

---

## ⚠️ Notes

- This dataset contains evaluation outputs, not raw training data
- Designed for benchmarking and reproducibility
- Prepared to support anonymous peer review

---

## 🔗 Citation

Anonymous. *Metric Unreliability in Multimodal Machine Unlearning.* NeurIPS 2026.

---
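## 🧪 Example: Quantifying Ranking Disagreement

The key finding above is that the five metrics produce conflicting method rankings. One standard way to quantify such disagreement between two rankings is Kendall's tau. The sketch below is purely illustrative: the `kendall_tau` helper and the rank values are hypothetical and are not taken from `ranking_table.json` or the paper's analysis code.

```python
from itertools import combinations

def kendall_tau(rank_a, rank_b):
    """Kendall rank correlation between two rankings of the same methods.

    rank_a[i] and rank_b[i] are the ranks method i receives under two
    different metrics. Returns +1 for identical rankings, -1 for fully
    reversed ones, and values near 0 when the rankings disagree often.
    """
    assert len(rank_a) == len(rank_b) and len(rank_a) >= 2
    concordant = discordant = 0
    for i, j in combinations(range(len(rank_a)), 2):
        # A pair of methods is concordant if both metrics order it the same way.
        sign = (rank_a[i] - rank_a[j]) * (rank_b[i] - rank_b[j])
        if sign > 0:
            concordant += 1
        elif sign < 0:
            discordant += 1
    n_pairs = len(rank_a) * (len(rank_a) - 1) // 2
    return (concordant - discordant) / n_pairs

# Hypothetical ranks for five unlearning methods under two metrics:
fa_ranks = [1, 2, 3, 4, 5]    # ranking under Forget Accuracy (illustrative)
mia_ranks = [2, 1, 5, 3, 4]   # ranking under the MIA metric (illustrative)
print(kendall_tau(fa_ranks, mia_ranks))  # 0.4 — far from perfect agreement
```

A tau well below 1 between any two metrics is the kind of signal the disagreement analysis in `analysis_results.json` reports.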