
# Design Notes

## Purpose

Most memory benchmarks measure semantic recall in benign settings.

This benchmark instead targets retrieval failure modes that matter in agent memory systems:

- retrieving the wrong person
- retrieving the wrong environment
- retrieving an outdated fact instead of the current one
- retrieving something semantically close but operationally wrong
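To make the failure modes concrete, here is a minimal sketch of an outdated-fact case: two entries are semantically near-identical with respect to the query, but only the more recent one is operationally correct. The structure shown is an assumption for exposition, not the dataset's actual format.

```python
# Illustrative "outdated fact" retrieval case.
# The entry structure here is an assumption, not the benchmark's schema.
query = "Which region is the service deployed in?"

entries = [
    {"id": "m1", "timestamp": "2023-01-10",
     "text": "We deployed the service in us-east-1."},
    {"id": "m2", "timestamp": "2024-06-02",
     "text": "We migrated the service to eu-west-1."},
]

# Both entries are semantically close to the query; only the most
# recent one is operationally correct.
current = max(entries, key=lambda e: e["timestamp"])
print(current["id"])  # m2
```

A retriever scoring on semantic similarity alone has no reason to prefer `m2` over `m1`, which is exactly the failure this benchmark probes.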

## Benchmark principles

- retrieval-focused, not generation-focused
- one query, many plausible distractors
- the exact relevant entry ids are known in advance
- metadata such as tags, depth, speaker, and timestamp may be present but is optional
- cases should remain small enough to inspect by hand
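A minimal case record consistent with these principles might look like the following. Every field name here is an assumption for illustration (the actual schema may differ); the point is that relevant ids are explicit and metadata is optional per entry.

```python
# Hypothetical minimal case record (all field names are assumptions).
case = {
    "query": "What timezone does Priya work in?",
    "entries": [
        {"id": "e1", "text": "Priya said she works from Berlin.",
         "speaker": "Priya", "tags": ["location"]},   # optional metadata
        {"id": "e2", "text": "Petra said she works from Boston."},  # plausible distractor
    ],
    "relevant_ids": ["e1"],          # exact relevant ids known in advance
    "adversary_type": "wrong_person",
}

# Small enough to inspect by hand; ids must resolve to actual entries.
assert set(case["relevant_ids"]) <= {e["id"] for e in case["entries"]}
```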

## Scoring

Suggested metrics:

- Recall@1
- Recall@5
- MRR
- error-bucket counts by `adversary_type`
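The suggested metrics can be sketched as follows. The function names and the prediction format (a ranked list of entry ids per case) are assumptions, not part of the benchmark.

```python
from collections import Counter

def recall_at_k(ranked_ids, relevant_ids, k):
    """1.0 if any relevant id appears in the top k, else 0.0."""
    return float(any(rid in relevant_ids for rid in ranked_ids[:k]))

def mrr(ranked_ids, relevant_ids):
    """Reciprocal rank of the first relevant id retrieved, else 0.0."""
    for rank, rid in enumerate(ranked_ids, start=1):
        if rid in relevant_ids:
            return 1.0 / rank
    return 0.0

# Example run over two hypothetical cases.
cases = [
    {"ranked": ["m2", "m1"], "relevant": {"m1"}, "adversary_type": "wrong_person"},
    {"ranked": ["m3", "m4"], "relevant": {"m3"}, "adversary_type": "outdated_fact"},
]

# Error buckets: count cases missed at rank 1, keyed by adversary_type.
errors = Counter(
    c["adversary_type"] for c in cases
    if recall_at_k(c["ranked"], c["relevant"], 1) == 0.0
)
print(errors)  # Counter({'wrong_person': 1})
```

Bucketing errors by `adversary_type` is what separates this from a generic retrieval eval: it tells you *which* failure mode a memory system is most susceptible to, not just its aggregate recall.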

## Expansion ideas

- more software-specific adversaries
- benchmark splits by domain
- fact-update and contradiction-specific suites
- Hugging Face dataset packaging