
Temporal Twins: A Matched-Control Benchmark for Temporal Fraud Detection

Temporal Twins is a synthetic UPI-style temporal transaction benchmark where fraud and benign trajectories are statically matched but differ in delayed event-order structure. The benchmark is designed to test whether models can exploit temporal ordering under matched-prefix controls rather than relying on static transaction summaries or prefix-length shortcuts.
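
For intuition, here is a minimal hand-made illustration of the matched-control idea (hypothetical event names and values, not drawn from the actual generator in src/):

# Both trajectories share the same static summary (same events, same counts),
# so a static aggregator sees identical feature vectors; only the order differs.
fraud_twin  = [("login", 0), ("add_payee", 1), ("txn_900", 2), ("txn_900", 3)]
benign_twin = [("txn_900", 0), ("login", 1), ("txn_900", 2), ("add_payee", 3)]

assert sorted(e for e, _ in fraud_twin) == sorted(e for e, _ in benign_twin)
# Only a model that reads the ordered sequence can separate the pair.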

Installation

Recommended Python version: 3.11+ (3.13 also works in the tested environment).

pip

pip install -r requirements.txt

conda

If environment.yml is present:

conda env create -f environment.yml
conda activate temporal-twins

Repository Structure

  • src/: synthetic user, transaction, risk, fraud, graph, and core config code
  • models/: learned baselines and probe/oracle wrappers, including SeqGRU and temporal GNNs
  • experiments/: benchmark runner and matched-prefix evaluation code
  • config/: checked-in YAML configs used as base configs for experiments
  • results/: frozen experiment artifacts, including the final deterministic paper suite
  • metadata/: Croissant metadata and release-side validation notes
  • release/: manual-hosting bundle prepared for later upload

Quick Smoke Test

The public CLI supports a fast audit-mode smoke test:

PYTHONPATH=. python3 experiments/run_all.py \
  --fast \
  --seed 0 \
  --benchmark-mode temporal_twins_oracle_calib \
  --experiments audit \
  --device cpu

Exact Paper-Style Group Runner

The checked-in CLI does not expose --difficulty, --num-users, or --simulation-days flags. The exact grouped reproductions below therefore use the existing helper functions in experiments/run_all.py through an inline Python wrapper.

Define this shell helper once in your session:

run_group() {
  local group="$1"
  local seed="$2"
  local out_json="$3"

  PYTHONPATH=. python3 - "$group" "$seed" "$out_json" <<'PY'
import json
import math
import sys
import time
from pathlib import Path

from src.core.config_loader import load_config
from experiments.run_all import (
    build_gate_pool_from_frames,
    gate_volume_is_sufficient,
    generate_single_difficulty,
    offset_gate_namespace,
    prepare_gate_subset,
    run_motif_validity_check,
    set_global_determinism,
)


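# JSON-safe conversion: recurse through containers, unwrap numpy scalars,
# and map non-finite floats to None.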
def normalize(value):
    if isinstance(value, dict):
        return {k: normalize(v) for k, v in value.items()}
    if isinstance(value, (list, tuple)):
        return [normalize(v) for v in value]
    if hasattr(value, "item"):
        try:
            value = value.item()
        except Exception:
            pass
    if isinstance(value, float) and not math.isfinite(value):
        return None
    return value


group = sys.argv[1]
seed = int(sys.argv[2])
out_json = Path(sys.argv[3])

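# Map the requested group onto a benchmark mode, difficulty, and gate strictness.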
if group == "oracle_calib":
    benchmark_mode = "temporal_twins_oracle_calib"
    difficulty = "easy"
    hard_abort = True
    force_temporal_models = True
else:
    benchmark_mode = "temporal_twins"
    difficulty = group
    hard_abort = False
    force_temporal_models = True

cfg = load_config("config/default.yaml")
cfg = cfg.model_copy(
    update={
        "num_users": 350,
        "simulation_days": 45,
        "benchmark_mode": benchmark_mode,
        "random_seed": seed,
    }
)

set_global_determinism(seed)
pool = generate_single_difficulty(
    cfg,
    difficulty=difficulty,
    seed=seed,
    benchmark_mode=benchmark_mode,
)
gate = prepare_gate_subset(pool, seed=seed, fast_mode=False)
pack_count = 1

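# Top up the pool with extra packs (distinct seeds, offset twin namespaces)
# until the gate volume check passes or six extra packs have been tried.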
while (not gate_volume_is_sufficient(gate["volume"], False)) and pack_count <= 6:
    extra_seed = seed + pack_count * 10007
    extra_pack = generate_single_difficulty(
        cfg,
        difficulty=difficulty,
        seed=extra_seed,
        benchmark_mode=benchmark_mode,
    )
    extra_pack = offset_gate_namespace(extra_pack, pack_count)
    pool = build_gate_pool_from_frames([pool, extra_pack])
    gate = prepare_gate_subset(pool, seed=seed, fast_mode=False)
    pack_count += 1

gate["source_pool_events"] = int(len(pool))
if "twin_pair_id" in pool.columns:
    gate["source_pool_pairs"] = int(
        pool.loc[pool["twin_pair_id"] >= 0, "twin_pair_id"].nunique()
    )
else:
    gate["source_pool_pairs"] = 0
gate["source_pool_packs"] = int(pack_count)

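# Run the full non-fast motif validity check with paper-scale settings on CPU,
# and time it for the runtime report.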
start = time.time()
gate_pass, report = run_motif_validity_check(
    df=pool,
    config=cfg,
    seed=seed,
    device="cpu",
    num_epochs=3,
    node_epochs=150,
    n_checkpoints=8,
    hard_abort=hard_abort,
    benchmark_mode=benchmark_mode,
    fast_mode=False,
    force_temporal_models=force_temporal_models,
    prebuilt_gate=gate,
)
elapsed = time.time() - start

result = {
    "benchmark_group": group,
    "benchmark_mode": benchmark_mode,
    "seed": seed,
    "primary_metric_label": report["audit_metric_label"],
    "secondary_metric_label": report["raw_metric_label"],
    "gate_pass": bool(gate_pass),
    "run_wall_time_sec": float(elapsed),
    **report,
}

out_json.parent.mkdir(parents=True, exist_ok=True)
out_json.write_text(json.dumps(normalize(result), indent=2) + "\n")
print(f"Wrote {out_json}")
PY
}

Reproduce Oracle Calibration

This runs the non-fast, reliable-volume temporal_twins_oracle_calib group for seed 0 with num_users=350 and simulation_days=45:

run_group oracle_calib 0 results/paper_suite_repro/jobs/oracle_calib_0.json

Reproduce Easy / Medium / Hard

Each command below reproduces the matched-prefix grouped benchmark for seed 0 with the paper-scale non-fast settings (num_users=350, simulation_days=45, n_checkpoints=8, deterministic CPU runtime):

run_group easy   0 results/paper_suite_repro/jobs/easy_0.json
run_group medium 0 results/paper_suite_repro/jobs/medium_0.json
run_group hard   0 results/paper_suite_repro/jobs/hard_0.json

Reproduce Full Paper Suite

There is no single checked-in paper_suite driver script. The exact grouped reproduction can be run as a shell loop over benchmark groups and seeds, followed by a small aggregation step that writes the artifact files:

1. Generate per-run JSON files

mkdir -p results/paper_suite_repro/jobs

for group in oracle_calib easy medium hard; do
  for seed in 0 1 2 3 4; do
    run_group "$group" "$seed" "results/paper_suite_repro/jobs/${group}_${seed}.json"
  done
done

2. Aggregate into paper-suite CSV and Markdown files

PYTHONPATH=. python3 - <<'PY'
import json
from pathlib import Path

import pandas as pd


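# Per-group mean/std over every numeric column in the per-run frame.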
def summarize_mean_std(df, group_col):
    numeric_cols = [c for c in df.columns if c != group_col and pd.api.types.is_numeric_dtype(df[c])]
    grouped = df.groupby(group_col, dropna=False)[numeric_cols].agg(["mean", "std"]).reset_index()
    grouped.columns = [
        group_col if col == group_col else f"{col}_{stat}"
        for col, stat in grouped.columns
    ]
    return grouped


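# Reliable-volume thresholds: flag runs whose matched-eval subset is too
# small or whose positive rate drifts outside [0.35, 0.65].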
def volume_failures(row):
    fails = []
    if row["matched_eval_pairs"] < 2000:
        fails.append(f"matched_eval_pairs={row['matched_eval_pairs']} (<2000)")
    if row["positives"] < 500:
        fails.append(f"positives={row['positives']} (<500)")
    if row["negatives"] < 500:
        fails.append(f"negatives={row['negatives']} (<500)")
    if row["unique_fraud_users"] < 50:
        fails.append(f"unique_fraud_users={row['unique_fraud_users']} (<50)")
    if row["unique_benign_users"] < 50:
        fails.append(f"unique_benign_users={row['unique_benign_users']} (<50)")
    if not (0.35 <= row["positive_rate"] <= 0.65):
        fails.append(f"positive_rate={row['positive_rate']:.4f} (outside [0.35,0.65])")
    return " | ".join(fails)


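# Hard gate: audit/raw oracle metrics must be near-perfect, static baselines
# near chance, and SeqGRU must both succeed and collapse under shuffling.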
def hard_gate_failures(row):
    checks = [
        (row["primary_metric_label"], row["audit_roc_auc"], ">=", 0.99),
        (f"{row['primary_metric_label']} pair-sep", row["audit_pair_sep"], ">=", 0.99),
        (row["secondary_metric_label"], row["raw_roc_auc"], ">=", 0.95),
        (f"{row['secondary_metric_label']} pair-sep", row["raw_pair_sep"], ">=", 0.90),
        ("static_agg_auc", row["static_agg_auc"], "<=", 0.60),
        ("XGBoost ROC-AUC", row["xgb_roc_auc"], "<=", 0.65),
        ("StaticGNN ROC-AUC", row["static_gnn_roc"], "<=", 0.70),
        ("SeqGRU ROC-AUC", row["seqgru_roc_auc"], ">=", 0.80),
        ("SeqGRU shuffle delta", row["seqgru_shuffle_delta"], "<=", -0.10),
    ]
    fails = []
    for label, value, op, threshold in checks:
        ok = value >= threshold if op == ">=" else value <= threshold
        if not ok:
            fails.append(f"{label}: {value:.4f} ({op}{threshold})")
    return " | ".join(fails)


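# Advisory checks for the temporal GNN baselines (TGN, TGAT, DyRep, JODIE).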
def advisory_failures(row):
    checks = [
        ("TGN ROC-AUC", row["tgn_roc_auc"], ">=", 0.75),
        ("TGN shuffle delta", row["tgn_shuffle_delta"], "<=", -0.10),
        ("TGAT ROC-AUC", row["tgat_roc_auc"], ">=", 0.75),
        ("TGAT shuffle delta", row["tgat_shuffle_delta"], "<=", -0.10),
        ("DyRep ROC-AUC", row["dyrep_roc_auc"], ">=", 0.75),
        ("DyRep shuffle delta", row["dyrep_shuffle_delta"], "<=", -0.10),
        ("JODIE ROC-AUC", row["jodie_roc_auc"], ">=", 0.75),
        ("JODIE shuffle delta", row["jodie_shuffle_delta"], "<=", -0.10),
    ]
    fails = []
    for label, value, op, threshold in checks:
        ok = value >= threshold if op == ">=" else value <= threshold
        if not ok:
            fails.append(f"{label}: {value:.4f} ({op}{threshold})")
    return " | ".join(fails)


jobs_dir = Path("results/paper_suite_repro/jobs")
out_dir = jobs_dir.parent
rows = [json.loads(path.read_text()) for path in sorted(jobs_dir.glob("*.json"))]
df = pd.DataFrame(rows).sort_values(["benchmark_group", "seed"]).reset_index(drop=True)

runs_path = out_dir / "paper_suite_runs.csv"
summary_path = out_dir / "paper_suite_summary.csv"
runtime_path = out_dir / "paper_suite_runtime.csv"
failed_path = out_dir / "paper_suite_failed_checks.csv"
summary_md_path = out_dir / "paper_suite_summary.md"
meta_path = out_dir / "paper_suite_meta.json"

df.to_csv(runs_path, index=False)

summary = summarize_mean_std(df, "benchmark_group")
summary.to_csv(summary_path, index=False)

runtime_cols = [
    "benchmark_group",
    "seed",
    "run_wall_time_sec",
    "static_gnn_eval_time_sec",
    "static_gnn_unique_prefix_cutoffs",
    "static_gnn_graph_builds",
    "static_gnn_cache_hit_rate",
]
df[runtime_cols].to_csv(runtime_path, index=False)

failed = df[["benchmark_group", "seed", "gate_pass"]].copy()
failed["volume_failures"] = df.apply(volume_failures, axis=1)
failed["hard_gate_failures"] = df.apply(hard_gate_failures, axis=1)
failed["advisory_failures"] = df.apply(advisory_failures, axis=1)
failed.to_csv(failed_path, index=False)

meta = {
    "device": "cpu",
    "num_users": 350,
    "simulation_days": 45,
    "num_epochs": 3,
    "node_epochs": 150,
    "n_checkpoints": 8,
    "fast_mode": False,
    "seeds": [0, 1, 2, 3, 4],
}
meta_path.write_text(json.dumps(meta, indent=2) + "\n")

headline = summary[
    [
        "benchmark_group",
        "xgb_roc_auc_mean",
        "static_gnn_roc_mean",
        "seqgru_roc_auc_mean",
        "seqgru_shuffle_delta_mean",
    ]
].copy()

lines = [
    "# Paper Suite Summary",
    "",
    "| benchmark_group | xgb_roc_auc_mean | static_gnn_roc_mean | seqgru_roc_auc_mean | seqgru_shuffle_delta_mean |",
    "|---|---:|---:|---:|---:|",
]
for row in headline.itertuples(index=False):
    lines.append(
        f"| {row.benchmark_group} | {row.xgb_roc_auc_mean:.4f} | {row.static_gnn_roc_mean:.4f} | {row.seqgru_roc_auc_mean:.4f} | {row.seqgru_shuffle_delta_mean:.4f} |"
    )
summary_md_path.write_text("\n".join(lines) + "\n")

print(f"Wrote {runs_path}")
print(f"Wrote {summary_path}")
print(f"Wrote {runtime_path}")
print(f"Wrote {failed_path}")
print(f"Wrote {summary_md_path}")
print(f"Wrote {meta_path}")
PY

This aggregation step writes:

  • results/paper_suite_repro/paper_suite_runs.csv
  • results/paper_suite_repro/paper_suite_summary.csv
  • results/paper_suite_repro/paper_suite_runtime.csv
  • results/paper_suite_repro/paper_suite_failed_checks.csv
  • results/paper_suite_repro/paper_suite_summary.md
  • results/paper_suite_repro/paper_suite_meta.json

The frozen reference artifacts checked into this repository live in results/paper_suite_20260503_202810.
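
As a quick sanity check after a reproduction run, the per-run CSV can be inspected directly and compared against the expected headline results below (column names as written by the aggregation script above):

import pandas as pd

# Fraction of seeds passing the hard gate, per benchmark group.
runs = pd.read_csv("results/paper_suite_repro/paper_suite_runs.csv")
print(runs.groupby("benchmark_group")["gate_pass"].mean())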

Expected Headline Results

benchmark_group   XGBoost ROC-AUC   StaticGNN ROC-AUC   SeqGRU ROC-AUC   SeqGRU shuffle delta
oracle_calib      0.5000            0.5222              1.0000           -0.5032
easy              0.5000            0.4946              1.0000           -0.5003
medium            0.5000            0.4922              0.8391           -0.3337
hard              0.5000            0.5026              0.6876           -0.1883

Determinism

  • Deterministic CPU runtime is enabled in experiments/run_all.py.
  • The same seed should produce identical matched-prefix data and identical metrics under the same deterministic environment.
  • Deterministic settings intentionally trade speed for repeatability and will slow larger runs.

For more detail, see docs/DETERMINISM.md.
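
For reference, here is a minimal sketch of the kind of setup such a helper performs (hypothetical; the checked-in set_global_determinism in experiments/run_all.py and docs/DETERMINISM.md are authoritative):

import random

import numpy as np
import torch


def set_global_determinism_sketch(seed: int) -> None:
    random.seed(seed)        # Python stdlib RNG
    np.random.seed(seed)     # NumPy global RNG
    torch.manual_seed(seed)  # PyTorch RNG (CPU and, if present, CUDA)
    # Fail loudly on any nondeterministic kernel; trades speed for repeatability.
    torch.use_deterministic_algorithms(True)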

Runtime Note

Mean wall-clock runtime per benchmark group in the final deterministic paper suite:

  • oracle_calib: 1136.6s
  • easy: 1345.9s
  • medium: 2181.9s
  • hard: 2613.7s
  • cumulative runtime across all 20 runs: about 10.11 hours

Data and Metadata

Hosted URLs: to be added once the manual-hosting bundle in release/ is uploaded.

License

  • Code: Apache License 2.0 (Apache-2.0)
  • Dataset and generated benchmark artifacts: Creative Commons Attribution 4.0 International (CC-BY-4.0)
  • Code SPDX-License-Identifier: Apache-2.0
  • Dataset SPDX-License-Identifier: CC-BY-4.0
  • No real UPI data or personal financial records are included.

Citation

This is an anonymous NeurIPS 2026 submission. The paper and preprint are withheld during double-blind review; a citation will be added after publication.

Warning

  • Synthetic data only
  • No real UPI transactions
  • Not for production fraud deployment