
# Generating Release Data Files

The repository currently includes the results of the final paper suite, but not the pre-exported per-seed release files under `release/data/`. This document explains how to generate them using the existing Temporal Twins benchmark code, without changing generator logic, labels, matched-prefix construction, or model logic.

## Expected Outputs Per Seed

Each directory `release/data/<mode>/seed_<seed>/` is expected to contain:

- `transactions.parquet`
- `matched_pairs.parquet`
- `audit_summary.csv`
- `schema.json`
- `config.yaml`

Where:

- `<mode>` is one of `oracle_calib`, `easy`, `medium`, `hard`
- `<seed>` is one of `0`, `1`, `2`, `3`, `4`
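
Once the files have been generated (see the export command below), a minimal sketch like the following can verify the layout. The modes, seeds, and file names come from this README; everything else is illustrative.

```python
from pathlib import Path

# Expected per-seed release files, as listed above.
EXPECTED_FILES = [
    "transactions.parquet",
    "matched_pairs.parquet",
    "audit_summary.csv",
    "schema.json",
    "config.yaml",
]

release_root = Path("release/data")
for mode in ["oracle_calib", "easy", "medium", "hard"]:
    for seed in range(5):
        seed_dir = release_root / mode / f"seed_{seed}"
        missing = [name for name in EXPECTED_FILES if not (seed_dir / name).exists()]
        status = f"missing {missing}" if missing else "OK"
        print(f"{seed_dir}: {status}")
```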

## Benchmark Mapping

- `oracle_calib` uses `benchmark_mode = "temporal_twins_oracle_calib"` and `difficulty = "easy"`
- `easy` uses `benchmark_mode = "temporal_twins"` and `difficulty = "easy"`
- `medium` uses `benchmark_mode = "temporal_twins"` and `difficulty = "medium"`
- `hard` uses `benchmark_mode = "temporal_twins"` and `difficulty = "hard"`

## Exact Export Command

Run this command from the repository root:

```bash
PYTHONPATH=. python3 - <<'PY'
from pathlib import Path
import json
import pandas as pd
import yaml

from src.core.config_loader import load_config
from experiments.run_all import (
    build_matched_control_tables,
    generate_single_difficulty,
    report_matched_control_audits,
    set_global_determinism,
)

release_root = Path("release/data")
seeds = [0, 1, 2, 3, 4]
mode_specs = [
    ("oracle_calib", "temporal_twins_oracle_calib", "easy"),
    ("easy", "temporal_twins", "easy"),
    ("medium", "temporal_twins", "medium"),
    ("hard", "temporal_twins", "hard"),
]

base_cfg = load_config("config/default.yaml")
base_cfg.num_users = 350
base_cfg.simulation_days = 45

for release_mode, benchmark_mode, difficulty in mode_specs:
    for seed in seeds:
        # Deep-copy the base config so per-seed overrides never leak across runs.
        cfg = base_cfg.model_copy(deep=True)
        cfg.benchmark_mode = benchmark_mode
        cfg.random_seed = seed
        set_global_determinism(seed)

        # Generate transactions, then build and audit the matched-control tables.
        df = generate_single_difficulty(
            cfg,
            difficulty=difficulty,
            seed=seed,
            benchmark_mode=benchmark_mode,
        )
        matched_examples, pair_rows, pair_counts = build_matched_control_tables(df)
        audit = report_matched_control_audits(matched_examples, pair_rows, pair_counts)

        out_dir = release_root / release_mode / f"seed_{seed}"
        out_dir.mkdir(parents=True, exist_ok=True)

        matched_export = matched_examples.rename(
            columns={"eval_local_event_idx": "matched_local_event_idx"}
        ).copy()
        matched_export["benchmark_mode"] = benchmark_mode
        # Note: the exported "difficulty" column records the release mode label.
        matched_export["difficulty"] = release_mode
        matched_export["seed"] = seed

        df.to_parquet(out_dir / "transactions.parquet", index=False)
        matched_export.to_parquet(out_dir / "matched_pairs.parquet", index=False)
        pd.DataFrame([audit]).to_csv(out_dir / "audit_summary.csv", index=False)

        schema = {
            "transactions_columns": {k: str(v) for k, v in df.dtypes.items()},
            "matched_pairs_columns": {k: str(v) for k, v in matched_export.dtypes.items()},
            "files": [
                "transactions.parquet",
                "matched_pairs.parquet",
                "audit_summary.csv",
                "schema.json",
                "config.yaml",
            ],
        }
        (out_dir / "schema.json").write_text(json.dumps(schema, indent=2) + "\n")
        (out_dir / "config.yaml").write_text(
            yaml.safe_dump(
                {
                    **cfg.model_dump(),
                    "benchmark_mode": benchmark_mode,
                    "difficulty": difficulty,
                    "release_mode": release_mode,
                    "seed": seed,
                    "fast_mode": False,
                    "n_checkpoints": 8,
                },
                sort_keys=False,
            )
        )
PY
```
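
After the script finishes, a quick spot check on one seed directory can confirm the exports are readable. The `schema.json` keys referenced below (`files`, `transactions_columns`) match those written by the script above; the choice of `medium/seed_0` is arbitrary.

```python
import json
import pandas as pd

seed_dir = "release/data/medium/seed_0"

# The audit summary is written as a single-row CSV; transpose for readability.
audit = pd.read_csv(f"{seed_dir}/audit_summary.csv")
print(audit.T)

with open(f"{seed_dir}/schema.json") as f:
    schema = json.load(f)
print(schema["files"])
print(list(schema["transactions_columns"])[:10])  # first few column names
```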

## Paper Result Reproduction

After generating the release data files, the final paper-suite metrics can be reproduced from the benchmark runner using the frozen deterministic settings, with the same `num_users`, `simulation_days`, seeds, and `n_checkpoints` recorded in `release/results/paper_suite_meta.json`; a sketch for inspecting that file follows.
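
The internal key layout of `paper_suite_meta.json` is not documented here, so the sketch below simply loads and pretty-prints the whole file for comparison with the values used in the export script (350 users, 45 days, seeds 0-4, 8 checkpoints), rather than assuming a schema.

```python
import json

# Dump the frozen paper-suite settings for side-by-side comparison
# with the export script's configuration.
with open("release/results/paper_suite_meta.json") as f:
    meta = json.load(f)

print(json.dumps(meta, indent=2))
```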

## Loading the Hosted Data Archive

Download `temporal_twins_data.zip` from:

https://huggingface.co/datasets/temporal-twins-benchmark/temporal-twins/resolve/main/temporal_twins_data.zip

It contains 20 `transactions.parquet` files and 20 `matched_pairs.parquet` files (4 modes × 5 seeds). You can read files directly from the zip archive with pandas/pyarrow, as shown below, or unzip the archive first.

```python
import zipfile
import pandas as pd

zip_path = "temporal_twins_data.zip"

# Read one mode/seed's parquet files straight out of the archive.
with zipfile.ZipFile(zip_path) as zf:
    with zf.open("data/medium/seed_0/transactions.parquet") as f:
        transactions = pd.read_parquet(f)
    with zf.open("data/medium/seed_0/matched_pairs.parquet") as f:
        matched_pairs = pd.read_parquet(f)

print(transactions.columns.tolist())
print(matched_pairs.columns.tolist())
print(transactions.head())
print(matched_pairs.head())
```
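
The same pattern extends to a loop over every mode and seed. The sketch below assumes all 20 entries follow the `data/<mode>/seed_<seed>/` layout seen in the single path above, which is an inference about the archive's internal structure rather than something this README specifies.

```python
import zipfile
import pandas as pd

# Load all 20 transactions tables into a dict keyed by (mode, seed).
# Assumes every entry follows the data/<mode>/seed_<seed>/ layout above.
tables = {}
with zipfile.ZipFile("temporal_twins_data.zip") as zf:
    for mode in ["oracle_calib", "easy", "medium", "hard"]:
        for seed in range(5):
            with zf.open(f"data/{mode}/seed_{seed}/transactions.parquet") as f:
                tables[(mode, seed)] = pd.read_parquet(f)

for (mode, seed), df in tables.items():
    print(mode, seed, df.shape)
```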