---
license: cc-by-4.0
task_categories:
- text-generation
- image-text-to-text
language:
- en
tags:
- machine-unlearning
- multimodal
- benchmark
- evaluation
- privacy
- llm
- vlm
- neurips
pretty_name: Multimodal Unlearning Evaluation Benchmark
size_categories:
- n<1K
configs:
- config_name: multimodal
  data_files:
  - split: train
    path: outputs/multimodal_results.json
- config_name: unimodal
  data_files:
  - split: train
    path: outputs/unimodal_results.json
- config_name: uqs_weights
  data_files:
  - split: train
    path: outputs/uqs_weights.json
- config_name: ranking
  data_files:
  - split: train
    path: outputs/ranking_table.json
- config_name: analysis
  data_files:
  - split: train
    path: outputs/analysis_results.json
- config_name: kr_pilot
  data_files:
  - split: train
    path: outputs/kr_pilot_results.json
- config_name: blip2
  data_files:
  - split: train
    path: outputs/blip2_minimal_summary.json
---
| |
# Multimodal Unlearning Evaluation Benchmark
|
|
## Overview

This dataset provides evaluation outputs for studying **metric inconsistency in multimodal machine unlearning**.
|
|
It supports reproduction of the results reported in:
> *Metric Unreliability in Multimodal Machine Unlearning* (NeurIPS 2026)
|
|
---
|
|
## Contents

| File | Description |
|------|-------------|
| `multimodal_results.json` | Results on VQA benchmarks (MLLMU-Bench, UnLOK-VQA, MMUBench) |
| `unimodal_results.json` | CIFAR-10 baseline results |
| `uqs_weights.json` | Learned weights for the Unified Quality Score (UQS) |
| `ranking_table.json` | Method rankings across metrics |
| `analysis_results.json` | Correlation and disagreement analysis |
| `kr_pilot_results.json` | Knowledge Recoverability (KR) pilot results |
| `blip2_minimal_summary.json` | Cross-architecture validation (BLIP-2) |
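As an illustration, here is a minimal sketch of how learned weights might be combined with per-metric scores to form a UQS. The JSON schema, metric names, and the weighted-sum aggregation are assumptions for illustration only; consult the paper for the actual definition.

```python
import json

# Assumed schema: {"FA": 0.3, "RA": 0.25, ...} -- illustrative only.
with open("outputs/uqs_weights.json") as f:
    weights = json.load(f)

# Hypothetical per-metric scores for one unlearning method.
scores = {"FA": 0.91, "RA": 0.88, "MIA": 0.54, "AD": 0.12, "JS": 0.07}

# Weighted aggregate over the metrics present in both dictionaries.
uqs = sum(weights[m] * scores[m] for m in weights if m in scores)
print(f"UQS = {uqs:.3f}")
```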
|
|
---
|
|
## Purpose

This benchmark evaluates five standard unlearning metrics:

- Forget Accuracy (FA)
- Retain Accuracy (RA)
- Membership Inference Attack (MIA)
- Activation Distance (AD)
- JS Divergence (JS)

⚠️ Key finding:
> These metrics produce **conflicting rankings** and do not measure **knowledge recoverability (KR)**.
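For concreteness, here is a minimal NumPy sketch of the last of these metrics, JS divergence between the answer distributions of the original and unlearned models. This is an illustrative implementation, not the exact evaluation protocol used in the paper.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions p and q."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)                       # mixture distribution
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))  # KL(a || b)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Example: answer distribution of the original vs. unlearned model.
print(js_divergence([0.7, 0.2, 0.1], [0.3, 0.4, 0.3]))
```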
|
|
---
|
|
## Usage

All results in the paper can be reproduced directly from these files.

Example:

```python
import json

# Load the multimodal VQA results (path as declared in the configs above).
with open("outputs/multimodal_results.json") as f:
    data = json.load(f)
```
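The configs declared in the YAML header can also be loaded with the Hugging Face `datasets` library. The repository id below is a placeholder; substitute the actual id of this dataset.

```python
from datasets import load_dataset

# "<org>/<dataset>" is a placeholder repository id.
ds = load_dataset("<org>/<dataset>", "multimodal", split="train")
print(ds)
```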
---

## Source Datasets (Not Included)

This benchmark builds on:

- MLLMU-Bench
- UnLOK-VQA
- MMUBench
- CIFAR-10

These datasets are not redistributed here. Please refer to their original sources.

---

## License

This dataset is released under the CC-BY-4.0 license.

---

## Notes

- This dataset contains evaluation outputs, not raw training data.
- Designed for benchmarking and reproducibility.
- Prepared to support anonymous peer review.

---

## Citation

> Anonymous. *Metric Unreliability in Multimodal Machine Unlearning.* NeurIPS 2026.