---
pretty_name: FalseMemBench
license: mit
task_categories:
- text-retrieval
language:
- en
tags:
- retrieval
- memory
- llm-agents
- adversarial
size_categories:
- n<1K
---

# FalseMemBench

`FalseMemBench` is a benchmark for evaluating memory retrieval systems under adversarial distractors. It measures whether a system can retrieve the right memory when many nearby but wrong memories are present.

## Focus

The benchmark targets memory systems used by LLM agents. It emphasizes:

- entity confusion
- environment confusion
- time/version confusion
- stale facts vs. current facts
- speaker confusion
- near-duplicate paraphrases

## Layout

- `schema/case.schema.json`: benchmark case schema
- `data/cases.jsonl`: current benchmark cases
- `docs/`: benchmark design notes
- `scripts/validate.py`: schema validator for the JSONL dataset
- `scripts/run_benchmark.py`: simple keyword baseline
- `scripts/run_tagmem_benchmark.py`: run the benchmark against a real `tagmem` binary

## Case format

Each case contains:

- a `query`
- a set of `entries`
- one or more `relevant_ids`
- a single `adversary_type`
- optional metadata for analysis

Minimal loading and baseline sketches appear under "Example usage" at the end of this card.

## Example

```json
{
  "id": "env-001",
  "query": "What database does staging use?",
  "adversary_type": "environment_swap",
  "entries": [
    {
      "id": "e1",
      "text": "The staging environment uses db-staging.internal.",
      "tags": ["staging", "database", "infra"],
      "depth": 1
    },
    {
      "id": "e2",
      "text": "The production environment uses db-prod.internal.",
      "tags": ["production", "database", "infra"],
      "depth": 1
    }
  ],
  "relevant_ids": ["e1"]
}
```

## Current adversary types

- `entity_swap`
- `environment_swap`
- `time_swap`
- `state_update`
- `speaker_swap`
- `near_duplicate_paraphrase`

Current dataset size: `573` cases.

## Intended use

The benchmark is intended to be:

- model-agnostic
- storage-agnostic
- metadata-friendly
- easy to publish to GitHub and Hugging Face
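
## Example usage

The snippets below are illustrative sketches, not the repository's own scripts. This first one loads `data/cases.jsonl` and checks each case against `schema/case.schema.json`; it assumes the third-party `jsonschema` package, and the actual `scripts/validate.py` may differ in detail.

```python
import json
from pathlib import Path

from jsonschema import validate  # third-party: pip install jsonschema

# Paths follow the repository layout described above.
SCHEMA_PATH = Path("schema/case.schema.json")
CASES_PATH = Path("data/cases.jsonl")

schema = json.loads(SCHEMA_PATH.read_text())

# cases.jsonl holds one JSON case per line.
cases = [
    json.loads(line)
    for line in CASES_PATH.read_text().splitlines()
    if line.strip()
]

for case in cases:
    # Raises jsonschema.ValidationError if a case does not match the schema.
    validate(instance=case, schema=schema)

print(f"{len(cases)} cases validated")
```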
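
A keyword baseline in the spirit of `scripts/run_benchmark.py` can be sketched the same way: score each entry by word overlap with the query, and count a case as solved when the top-scoring entry appears in `relevant_ids`. The crude tokenizer and the top-1 accuracy metric here are assumptions for illustration, not the repository's scoring rules.

```python
import json
import re
from pathlib import Path


def tokens(text: str) -> set[str]:
    # Lowercased alphanumeric word tokens; deliberately crude.
    return set(re.findall(r"[a-z0-9]+", text.lower()))


def top_entry_id(case: dict) -> str:
    # Id of the entry whose text shares the most tokens with the query.
    query = tokens(case["query"])
    return max(case["entries"], key=lambda e: len(query & tokens(e["text"])))["id"]


cases = [
    json.loads(line)
    for line in Path("data/cases.jsonl").read_text().splitlines()
    if line.strip()
]

# Top-1 accuracy: fraction of cases whose best-scoring entry is relevant.
solved = sum(top_entry_id(case) in case["relevant_ids"] for case in cases)
print(f"top-1 accuracy: {solved / len(cases):.3f}")
```

On the example case above, this baseline prefers `e1`, since "staging" appears in both the query and the entry text; the adversarial cases are designed so that such surface overlap is often not enough.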