---
license: cc-by-4.0
task_categories:
- text-generation
- image-text-to-text
language:
- en
tags:
- machine-unlearning
- multimodal
- benchmark
- evaluation
- privacy
- llm
- vlm
- neurips
pretty_name: Multimodal Unlearning Evaluation Benchmark
size_categories:
- n<1K
configs:
- config_name: multimodal
data_files:
- split: train
path: outputs/multimodal_results.json
- config_name: unimodal
data_files:
- split: train
path: outputs/unimodal_results.json
- config_name: uqs_weights
data_files:
- split: train
path: outputs/uqs_weights.json
- config_name: ranking
data_files:
- split: train
path: outputs/ranking_table.json
- config_name: analysis
data_files:
- split: train
path: outputs/analysis_results.json
- config_name: kr_pilot
data_files:
- split: train
path: outputs/kr_pilot_results.json
- config_name: blip2
data_files:
- split: train
path: outputs/blip2_minimal_summary.json
---
# Multimodal Unlearning Evaluation Benchmark
## Overview
This dataset provides evaluation outputs for studying **metric inconsistency in multimodal machine unlearning**.
It supports reproducibility of results in:
> *Metric Unreliability in Multimodal Machine Unlearning (NeurIPS 2026)*
---
## Contents
| File | Description |
|------|-------------|
| `multimodal_results.json` | Results on VQA benchmarks (MLLMU-Bench, UnLOK-VQA, MMUBench) |
| `unimodal_results.json` | CIFAR-10 baseline results |
| `uqs_weights.json` | Learned weights for the Unified Quality Score (UQS) |
| `ranking_table.json` | Method rankings across metrics |
| `analysis_results.json` | Correlation and disagreement analysis |
| `kr_pilot_results.json` | Knowledge Recoverability (KR) pilot results |
| `blip2_minimal_summary.json` | Cross-architecture validation (BLIP-2) |
---
## Purpose
This benchmark evaluates five standard unlearning metrics:
- Forget Accuracy (FA)
- Retain Accuracy (RA)
- Membership Inference Attack (MIA)
- Activation Distance (AD)
- JS Divergence (JS)
**Key finding:**
> These metrics produce **conflicting rankings** and do not measure **knowledge recoverability (KR)**.
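As an illustration of what ranking disagreement means here (using hypothetical method names and ranks, not values from the released files), a pairwise rank-correlation check between two metrics can be sketched with Kendall's tau:

```python
from itertools import combinations

def kendall_tau(rank_a, rank_b):
    """Kendall's tau between two rankings given as {method: rank} dicts."""
    methods = list(rank_a)
    concordant = discordant = 0
    for m1, m2 in combinations(methods, 2):
        # Pairs ordered the same way by both metrics are concordant.
        s = (rank_a[m1] - rank_a[m2]) * (rank_b[m1] - rank_b[m2])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(methods) * (len(methods) - 1) // 2
    return (concordant - discordant) / n_pairs

# Hypothetical rankings of four unlearning methods under two metrics.
fa_rank = {"GA": 1, "KL": 2, "PO": 3, "Retrain": 4}   # by Forget Accuracy
mia_rank = {"GA": 3, "KL": 1, "PO": 4, "Retrain": 2}  # by MIA resistance

print(kendall_tau(fa_rank, mia_rank))  # → 0.0, i.e. the metrics disagree
```

A tau near 1 means the two metrics agree on how to order methods; a tau near 0 (as in this toy case) means they are effectively uncorrelated, which is the kind of disagreement quantified in `analysis_results.json`.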
---
## Usage
All results in the paper can be reproduced directly from these files.
Example:
```python
import json

# Load the multimodal evaluation results.
with open("multimodal_results.json") as f:
    data = json.load(f)
```
---
## Source Datasets (Not Included)
This benchmark builds on:
- MLLMU-Bench
- UnLOK-VQA
- MMUBench
- CIFAR-10

These datasets are not redistributed here; please refer to their original sources.
---
## License
This dataset is released under the CC-BY-4.0 license.
---
## Notes
- This dataset contains evaluation outputs, not raw training data.
- It is designed for benchmarking and reproducibility.
- It was prepared to support anonymous peer review.
---
## Citation
Anonymous. *Metric Unreliability in Multimodal Machine Unlearning.* NeurIPS 2026.
---