[Submission] mindbomber/aana-presidio-piimb-policy-v1
Adds PIIMB result files for mindbomber/aana-presidio-piimb-policy-v1.
Model page: https://huggingface.co/mindbomber/aana-presidio-piimb-policy-v1
Benchmark: PIIMB sentences test split, dataset revision df8299e90ff053fa6fd1d3678f6693a454f4ecc0.
Base detector: microsoft/presidio-analyzer.
Presidio-only average masking F2: 0.4492985573.
Presidio+AANA average masking F2: 0.5629171363.
Absolute F2 gain: 0.1136185790.
This is a zero-parameter AANA verifier/correction policy architecture paired with a specialist detector, not a trained transformer checkpoint.
Peer review request for the AANA + Presidio submission
This PR is intended as a benchmark-reviewable submission for mindbomber/aana-presidio-piimb-policy-v1.
What is being evaluated:
- Base detector: microsoft/presidio-analyzer predefined recognizers.
- AANA-enhanced system: Presidio outputs plus explicit verifier/correction layers for offset validity, PII-pattern evidence, overlap correction, and high-risk identifier coverage.
- This is a zero-parameter policy architecture paired with a specialist detector, not a trained transformer checkpoint.
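As a rough illustration of the correction-layer idea only (not the actual AANA implementation; function name and logic are assumptions), a zero-parameter post-processing pass over detector spans might enforce offset validity and merge overlaps before masking:

```python
# Illustrative sketch of a zero-parameter span-correction pass in the
# spirit of the layers described above (offset validity, overlap
# correction). NOT the actual AANA policy; names and logic are assumed.

def correct_spans(text, spans):
    """Clamp spans to valid character offsets and merge overlapping ones.

    spans: list of (start, end) character offsets from a base detector.
    """
    n = len(text)
    # Offset-validity check: clamp to text bounds, drop empty/inverted spans.
    valid = []
    for start, end in spans:
        start, end = max(0, start), min(n, end)
        if start < end:
            valid.append((start, end))
    # Overlap correction: sort, then merge spans that overlap or touch.
    valid.sort()
    merged = []
    for start, end in valid:
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

print(correct_spans("Call 555-0100 now", [(5, 8), (7, 13), (20, 25)]))
# -> [(5, 13)]
```

The out-of-bounds span is dropped and the two overlapping spans collapse into one, which is the kind of masking-coverage behavior the submission optimizes for rather than exact entity typing.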
Full PIIMB run:
- Dataset: piimb/pii-masking-benchmark
- Dataset revision: df8299e90ff053fa6fd1d3678f6693a454f4ecc0
- Subset: sentences
- Metric/schema: PIIMB 0.2.0
Ablation result:
- Presidio-only average masking F2: 0.4492985573
- Presidio + AANA average masking F2: 0.5629171363
- Absolute F2 gain: +0.1136185790
- Presidio-only average recall: 0.4008557794
- Presidio + AANA average recall: 0.5159532273
- Absolute recall gain: +0.1150974479
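For readers checking the numbers: F2 is the F-beta score with beta = 2, which weights recall four times as heavily as precision. A minimal sketch of the metric definition and the reported absolute gain (PIIMB's exact per-example aggregation may differ):

```python
# Sketch of the F-beta metric (beta = 2 favors recall over precision)
# and an arithmetic check of the reported absolute F2 gain. This only
# illustrates the definitions; PIIMB's aggregation may differ.

def f_beta(precision, recall, beta=2.0):
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

presidio_f2 = 0.4492985573
aana_f2 = 0.5629171363
gain = aana_f2 - presidio_f2
print(round(gain, 10))  # -> 0.113618579
```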
Per-source AANA F2:
- OpenPII: 0.4879480402
- Gretel: 0.6281397502
- Nemotron-PII: 0.6161414756
- Privy: 0.5194392792
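The overall Presidio + AANA figure reported above is consistent with an unweighted mean over these four sources, which can be checked directly (assuming equal per-source weighting):

```python
# Consistency check: the reported overall Presidio + AANA masking F2
# equals the unweighted mean of the four per-source F2 scores.
per_source_f2 = {
    "OpenPII": 0.4879480402,
    "Gretel": 0.6281397502,
    "Nemotron-PII": 0.6161414756,
    "Privy": 0.5194392792,
}
macro_f2 = sum(per_source_f2.values()) / len(per_source_f2)
print(round(macro_f2, 10))  # -> 0.5629171363
```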
Repro/model card:
https://huggingface.co/mindbomber/aana-presidio-piimb-policy-v1
Scope and limitations:
- The claim is only that the AANA verifier/correction layer improved masking F2 and recall over the Presidio-only baseline on this PIIMB run.
- I am not claiming state-of-the-art performance, guaranteed PII removal, or production readiness for regulated workflows.
- Lower strict/type NER scores are expected because this architecture optimizes broad masking coverage and offset correctness rather than exact source-dataset entity taxonomy.
The submission was also checked through an AANA model-evaluation release gate before publication: gate_decision=pass, recommended_action=accept, candidate_gate=pass, no hard blockers. Feedback on the methodology, the result-file shape, and whether this policy-architecture submission fits the leaderboard criteria would be appreciated.
Public Try AANA endpoint
The interactive AANA demo Space is now live and should be used as the public
"try AANA" endpoint for this submission and future benchmark references:
https://huggingface.co/spaces/mindbomber/aana-demo
It accepts:
- candidate answer/action
- evidence
- constraints
It returns:
- AANA route: accept, revise, ask, defer, or refuse
- AIx score
- hard blockers
- suggested revision/route
- JSON audit summary
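To make the interface concrete, here is a hypothetical request/response shape based only on the inputs and outputs listed above. All field names here are assumptions, not the Space's actual API; consult the demo Space itself for the real interface:

```python
# Hypothetical payload shapes for the demo Space, derived only from the
# accepts/returns lists above. Field names are assumptions, NOT the
# Space's actual API.
import json

request = {
    "candidate": "Mask the email address in this record.",   # candidate answer/action
    "evidence": "Record contains an email in a free-text field.",
    "constraints": "No PII may remain in the output.",
}

# Example of the kind of JSON audit summary described above (invented values).
response = json.loads(
    '{"route": "accept", "aix_score": 0.9,'
    ' "hard_blockers": [], "suggested_revision": null}'
)
assert response["route"] in {"accept", "revise", "ask", "defer", "refuse"}
print(response["route"])  # -> accept
```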
Canonical AANA model card:
https://huggingface.co/mindbomber/aana
Benchmark context remains bounded to this PIIMB ablation: Presidio-only average
masking F2 0.4492985573; Presidio + AANA average masking F2 0.5629171363;
absolute F2 gain +0.1136185790. This is not a claim of state-of-the-art
performance, guaranteed PII removal, guaranteed hallucination removal, trained
weights, or production readiness.
Same comment as https://huggingface.co/datasets/piimb/pii-masking-benchmark-results/discussions/2: I'm not looking to add regex baselines to the benchmark at this point.