---
license: mit
task_categories:
  - tabular-classification
tags:
  - anomaly-detection
  - time-series
  - time-series-classification
  - server-monitoring
  - cybersecurity
  - benchmark
  - waveguard
  - zero-training
pretty_name: WaveGuard Anomaly Detection Benchmarks
size_categories:
  - 1K<n<10K
---

# WaveGuard Anomaly Detection Benchmarks

Curated benchmark datasets and comparison results for evaluating anomaly detection models. Includes labeled training (normal) and test (mixed normal + anomalous) splits, plus head-to-head comparisons between WaveGuard and traditional methods.

## Benchmark Comparisons (`benchmark_results/`)

WaveGuard vs. IsolationForest, LOF, and OneClassSVM across 12 datasets.

**Summary:** WaveGuard ranked first (or tied for first) on all 12 datasets by F1 score.

| Dataset | WaveGuard | IsolationForest | LOF | OneClassSVM | Winner |
|---|---|---|---|---|---|
| Credit Card Fraud* | 0.653 | 0.607 | 0.601 | 0.472 | WaveGuard |
| Network Intrusion* | 0.598 | 0.252 | 0.232 | 0.546 | WaveGuard |
| Crypto Fraud | 1.000 | 0.933 | 0.946 | 0.897 | WaveGuard |
| Prompt Injection | 0.976 | 0.952 | 0.976 | 0.889 | WaveGuard |
| Phish Guard | 0.976 | 0.905 | 0.952 | 0.816 | WaveGuard |
| Content Guard | 0.975 | 0.842 | 0.879 | 0.784 | WaveGuard |
| Fraud Lens | 0.949 | 0.896 | 0.882 | 0.800 | WaveGuard |
| Ad Click Fraud | 0.988 | 0.952 | 0.930 | 0.889 | WaveGuard |
| Insurance Claims | 0.972 | 0.921 | 0.959 | 0.833 | WaveGuard |
| Network Security | 0.990 | 0.962 | 0.980 | 0.952 | WaveGuard |
| API Monitoring | 0.959 | 0.909 | 0.933 | 0.814 | WaveGuard |
| Log Anomalies | 0.946 | 0.875 | 0.875 | 0.805 | WaveGuard |

*Real-world datasets. Others use domain-specific test suites with realistic feature schemas.

See `benchmark_results/comparison.json` for full details, including sample sizes, feature counts, and anomaly rates.

## Datasets

### 1. Server Metrics (`server_metrics/`)

Simulated server health metrics with injected failure events.

- **Features:** `cpu`, `memory`, `disk_io`, `network`, `errors` (5 numeric)
- **Training:** 500 normal samples
- **Test:** 100 samples (15 anomalous)
- **Anomaly types:** CPU spike, memory leak, disk saturation, network flood
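A minimal sketch of how such data could be simulated, using the feature names above. The distribution parameters and injection logic here are illustrative assumptions, not the dataset's actual generation code:

```python
# Hypothetical sketch: simulate normal server metrics and inject a
# "CPU spike" failure event. All loc/scale values are assumptions.
import numpy as np

rng = np.random.default_rng(0)
features = ["cpu", "memory", "disk_io", "network", "errors"]

# Normal behaviour: moderate utilisation with small Gaussian noise.
normal = rng.normal(loc=[40, 55, 30, 20, 0.5],
                    scale=[5, 5, 8, 6, 0.3],
                    size=(500, 5)).clip(min=0)

# CPU spike anomaly: cpu pegged near 100%, error count climbing.
anomaly = normal[:15].copy()
anomaly[:, 0] = rng.uniform(95, 100, size=15)   # cpu
anomaly[:, 4] += rng.uniform(5, 10, size=15)    # errors

print(normal.shape, anomaly.shape)
```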

### 2. Synthetic Time Series (`synthetic_timeseries/`)

Controlled synthetic signals with known anomaly injection points.

- **Patterns:** sinusoidal, trend, seasonal, random walk
- **Anomaly types:** point (spike), contextual (subtle shift), collective (regime change)
- **Training:** 200 clean windows per pattern
- **Test:** 50 windows per pattern (10 anomalous each)
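To make the three anomaly types concrete, here is a sketch of each on a sinusoidal window. Amplitudes, window length, and injection indices are illustrative choices, not the dataset's actual parameters:

```python
# Illustrative injection of point, contextual, and collective anomalies
# into a noisy sinusoid (all magnitudes and indices are assumptions).
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 4 * np.pi, 200)
clean = np.sin(t) + rng.normal(0, 0.05, size=t.size)

point = clean.copy()
point[100] += 4.0                       # point: one isolated spike

contextual = clean.copy()
contextual[120:140] += 0.4              # contextual: subtle local shift

collective = clean.copy()
collective[150:] = rng.normal(0, 0.05, size=50)  # collective: regime change
```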

## Format

Each dataset is provided as Parquet files:

```
dataset_name/
  train.parquet     # Normal samples only
  test.parquet      # Mixed normal + anomalous
  metadata.json     # Feature descriptions, anomaly counts, creation params
```

## Usage

```python
from datasets import load_dataset

ds = load_dataset("emergentphysicslab/waveguard-benchmarks", "server_metrics")
train = ds["train"].to_pandas()
test = ds["test"].to_pandas()
```

## Evaluation Protocol

1. Train/fit your detector on `train.parquet` only
2. Score each row in `test.parquet`
3. Report: Precision, Recall, F1, AUC-ROC, Average Latency
4. Compare against the WaveGuard baseline in the model card
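The protocol above can be sketched end to end with scikit-learn, using IsolationForest as a stand-in detector. Synthetic arrays replace the real `train.parquet`/`test.parquet` splits here, and the label convention (1 = anomalous) is an assumption about the test split's label column:

```python
# Hedged reference implementation of the 4-step protocol.
# The detector, data, and label convention are all stand-ins.
import time

import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.metrics import f1_score, precision_score, recall_score, roc_auc_score

rng = np.random.default_rng(42)
X_train = rng.normal(0, 1, size=(500, 5))              # normal samples only
X_test = np.vstack([rng.normal(0, 1, size=(85, 5)),    # mixed test split
                    rng.normal(5, 1, size=(15, 5))])
y_test = np.array([0] * 85 + [1] * 15)                 # 1 = anomalous (assumed)

det = IsolationForest(random_state=0).fit(X_train)     # step 1: fit on train only

start = time.perf_counter()
scores = -det.score_samples(X_test)                    # step 2: higher = more anomalous
latency = (time.perf_counter() - start) / len(X_test)

pred = (det.predict(X_test) == -1).astype(int)         # -1 marks outliers

print(f"P={precision_score(y_test, pred):.3f} "        # step 3: report metrics
      f"R={recall_score(y_test, pred):.3f} "
      f"F1={f1_score(y_test, pred):.3f} "
      f"AUC={roc_auc_score(y_test, scores):.3f} "
      f"latency={latency * 1e6:.1f}us/row")
```

Step 4 is then a manual comparison of these numbers against the WaveGuard baseline reported in the model card.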

## Citation

```bibtex
@dataset{waveguard_benchmarks2026,
  title={WaveGuard Anomaly Detection Benchmarks},
  author={Partin, Greg},
  year={2026},
  url={https://huggingface.co/datasets/emergentphysicslab/waveguard-benchmarks}
}
```