---
pretty_name: FalseMemBench
license: mit
task_categories:
  - text-retrieval
language:
  - en
tags:
  - retrieval
  - memory
  - llm-agents
  - adversarial
size_categories:
  - n<1K
---

# FalseMemBench

`FalseMemBench` is a benchmark project for evaluating memory retrieval systems under adversarial distractors.

The goal is to measure whether a system can retrieve the right memory when it is surrounded by distractor memories that are superficially similar but wrong.

## Focus

The benchmark is designed for memory systems used by LLM agents.

It emphasizes:

- entity confusion
- environment confusion
- time/version confusion
- stale facts vs current facts
- speaker confusion
- near-duplicate paraphrases

## Layout

- `schema/case.schema.json`: benchmark case schema
- `data/cases.jsonl`: current benchmark cases
- `docs/`: benchmark design notes
- `scripts/validate.py`: schema validator for the JSONL dataset
- `scripts/run_benchmark.py`: simple keyword baseline
- `scripts/run_tagmem_benchmark.py`: run the benchmark against a real `tagmem` binary
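
The exact interface of `scripts/validate.py` is not shown here, but as a sketch, a minimal structural check of `data/cases.jsonl` using only the standard library (field names taken from the case format and example below) might look like:

```python
import json

# Field names assumed from the case format described in this README.
REQUIRED_FIELDS = {"id", "query", "adversary_type", "entries", "relevant_ids"}

def check_case(case: dict) -> list[str]:
    """Return a list of problems found in a single benchmark case."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS - case.keys()]
    entry_ids = {e.get("id") for e in case.get("entries", [])}
    for rid in case.get("relevant_ids", []):
        if rid not in entry_ids:
            problems.append(f"relevant_ids references unknown entry: {rid}")
    return problems

def check_jsonl(path: str) -> dict[str, list[str]]:
    """Check every case in a JSONL file; return {case_id: problems}."""
    report = {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            case = json.loads(line)
            problems = check_case(case)
            if problems:
                report[case.get("id", "?")] = problems
    return report
```

This is not the repository's validator; the real script validates against `schema/case.schema.json`.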

## Case format

Each case contains:

- a `query`
- a set of `entries`
- one or more `relevant_ids`
- a single `adversary_type`
- optional metadata for analysis

## Example

```json
{
  "id": "env-001",
  "query": "What database does staging use?",
  "adversary_type": "environment_swap",
  "entries": [
    {
      "id": "e1",
      "text": "The staging environment uses db-staging.internal.",
      "tags": ["staging", "database", "infra"],
      "depth": 1
    },
    {
      "id": "e2",
      "text": "The production environment uses db-prod.internal.",
      "tags": ["production", "database", "infra"],
      "depth": 1
    }
  ],
  "relevant_ids": ["e1"]
}
```
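
The repository ships a simple keyword baseline in `scripts/run_benchmark.py`; its internals are not reproduced here, but a comparable minimal sketch, scoring each entry by word overlap with the query, could look like:

```python
import re

def keyword_score(query: str, text: str) -> int:
    """Count distinct query words that also appear in the entry text."""
    q_words = set(re.findall(r"\w+", query.lower()))
    t_words = set(re.findall(r"\w+", text.lower()))
    return len(q_words & t_words)

def retrieve(case: dict) -> str:
    """Return the id of the entry with the highest keyword overlap."""
    best = max(case["entries"], key=lambda e: keyword_score(case["query"], e["text"]))
    return best["id"]

def is_correct(case: dict) -> bool:
    """A case counts as solved if the top-1 entry is among relevant_ids."""
    return retrieve(case) in case["relevant_ids"]
```

On the `env-001` example above, only `e1` shares the word "staging" with the query, so the baseline retrieves it; adversarial cases are built so that such surface overlap is often *not* enough.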

## Current adversary types

- `entity_swap`
- `environment_swap`
- `time_swap`
- `state_update`
- `speaker_swap`
- `near_duplicate_paraphrase`

Current dataset size:

- `573` cases
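
Since each case carries a single `adversary_type`, the per-type distribution of the dataset is easy to tally; a small sketch (assuming the field names from the example above):

```python
import json
from collections import Counter

def adversary_counts(lines) -> Counter:
    """Tally cases per adversary_type, given an iterable of JSONL lines."""
    return Counter(json.loads(line)["adversary_type"] for line in lines)

def adversary_counts_from_file(path: str) -> Counter:
    """Same tally, read from a JSONL file such as data/cases.jsonl."""
    with open(path, encoding="utf-8") as fh:
        return adversary_counts(fh)
```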

## Intended use

The benchmark is intended to be:

- model-agnostic
- storage-agnostic
- metadata-friendly
- easy to publish to GitHub and Hugging Face