
HWAI AI-detection ensemble — reproducibility benchmark

Thesis. Closed commercial benchmarks (Originality.ai, Winston AI, GPTZero Premium) publish AUROC numbers on proprietary corpora you cannot audit. HWAI publishes its exact smoke corpus, exact evaluation script, and exact calibration files so anyone can reproduce our numbers in ≤10 min.

What you need to reproduce

  1. A running instance of ml-services-hwai (Hetzner CX43 ~$25/mo, or local Docker).
  2. Our calibration file (/opt/ml-services/calibration.json, shipped in v1.10).
  3. The eval script: services/ml-services-hwai/scripts/eval_ensemble_corpus.py.
  4. The hand-curated corpus: embedded in the eval script as CORPUS (44 texts, EN + RU, human + AI).

All four of the above are published in this repo.

Reference numbers (v1.10, 2026-04-24, 44-text hand-curated OOD)

| Lang | Ensemble AUROC | Brier | Detectors |
|------|----------------|-------|-----------|
| EN | 0.770 (target: ≥0.80 post-weight-tuning) | ~0.18 | ai_detect + radar + binoculars + desklib |
| RU | 0.837 | ~0.12 | ai_detect + radar + binoculars |

Per-detector numbers: see notes/contentos_ensemble_weights_tuning_2026-04-24.md.
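
For context on the two metrics: AUROC measures ranking quality (the probability that a random AI text scores above a random human text; 0.5 is chance), and Brier is the mean squared error of the calibrated probabilities. A minimal sketch of computing both per language with scikit-learn (the variable names and toy data are illustrative, not the eval script's actual internals):

# Sketch: per-language AUROC and Brier from per-sample calibrated
# ensemble probabilities. Labels: 1 = AI-generated, 0 = human.
# Names and values are illustrative, not eval_ensemble_corpus.py internals.
from sklearn.metrics import roc_auc_score, brier_score_loss

samples = [
    # (lang, true_label, calibrated ensemble P(AI))
    ("en", 1, 0.91), ("en", 0, 0.22),
    ("ru", 1, 0.84), ("ru", 0, 0.10),
]

for lang in ("en", "ru"):
    y_true = [y for l, y, _ in samples if l == lang]
    y_prob = [p for l, _, p in samples if l == lang]
    print(lang,
          "AUROC", roc_auc_score(y_true, y_prob),     # 0.5 = chance
          "Brier", brier_score_loss(y_true, y_prob))  # lower is better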

How to reproduce

# On any host with Python 3.11+ and access to ml-services:
export ML_SERVICES_URL="http://<your-host>:3300"
export ML_SERVICES_API_KEY="<your-key>"

python3 services/ml-services-hwai/scripts/eval_ensemble_corpus.py
# Output: /tmp/eval_ensemble_<timestamp>.json + .md

The output includes:

  • Per-detector raw + calibrated scores, per sample.
  • Per-lang AUROC + Brier (ensemble + individual detectors).
  • Confusion-matrix style pass/fail per sample.
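
A minimal sketch of post-processing that JSON, assuming a top-level list of per-sample records (all field names here are hypothetical; inspect the actual output file for the real schema):

# Sketch: list failed samples from the newest eval output.
# WARNING: field names ("samples", "passed", "lang", "id",
# "ensemble_score") are assumptions for illustration,
# not the documented schema.
import glob
import json

path = sorted(glob.glob("/tmp/eval_ensemble_*.json"))[-1]  # newest run
with open(path) as f:
    report = json.load(f)

for s in report.get("samples", []):
    if not s.get("passed", True):
        print(s.get("lang"), s.get("id"), s.get("ensemble_score"))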

Why 44 texts?

The smoke battery is sized for fast iteration (<2 min wall time) and hand-picked to cover the failure modes we've seen in production (a sketch of its shape follows the list):

  • EN human formal (press release, court filing, product manual) — the biggest false-positive risk for models that learned "AI = formal".
  • RU human news/journalism — known FP mode for RADAR-Vicuna.
  • EN + RU AI 2026-era (Claude-4, Gemini 2.5, GPT-4o style) — the distribution shift that breaks 2022-era-trained detectors.
  • Edge cases (interview transcripts, casual parent notes, product reviews) where detectors commonly overreach.
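
The authoritative corpus definition is the CORPUS constant in eval_ensemble_corpus.py; the snippet below is only a hand-written sketch of what such an embedded corpus plausibly looks like (the key names are assumptions):

# Sketch of a plausible CORPUS shape; key names are assumptions.
# The real definition lives in scripts/eval_ensemble_corpus.py.
CORPUS = [
    {"id": "en_human_press_01", "lang": "en", "label": "human",
     "text": "FOR IMMEDIATE RELEASE: ..."},
    {"id": "en_ai_gpt4o_01", "lang": "en", "label": "ai",
     "text": "..."},
    # ... 44 entries total across EN/RU x human/AI
]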

Full statistical-grade numbers come from the n=750 OOD calibration split at services/ml-services-hwai/corpus/cal_test.jsonl. Run:

python3 services/ml-services-hwai/scripts/eval_ensemble_corpus.py \
  --from-cal-test services/ml-services-hwai/corpus/cal_test.jsonl --parallel 4
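
Before a long run it can be worth sanity-checking the split's balance. A minimal sketch, assuming one JSON object per line with lang and label fields (confirm the real field names against the file itself):

# Sketch: count (lang, label) balance of the n=750 split.
# Field names "lang" and "label" are assumptions; check the file.
import json
from collections import Counter

counts = Counter()
with open("services/ml-services-hwai/corpus/cal_test.jsonl") as f:
    for line in f:
        rec = json.loads(line)
        counts[(rec.get("lang"), rec.get("label"))] += 1

for key, n in sorted(counts.items()):
    print(key, n)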

Compare with external detectors

Free-tier cross-check (Sapling AI, 50 req/day):

export SAPLING_API_KEY="..."
python3 services/ml-services-hwai/scripts/bench_competitors.py \
  --out /tmp/bench_competitors.json

Produces side-by-side AUROC numbers on the identical corpus.
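
For a single ad-hoc text, Sapling can also be queried directly; a minimal sketch following Sapling's public aidetect endpoint as documented at the time of writing (verify against their current docs before relying on it):

# Sketch: one Sapling AI-detect call, for spot-checking a text.
# Endpoint/payload per Sapling's public docs at time of writing;
# verify before depending on it.
import os
import requests

resp = requests.post(
    "https://api.sapling.ai/api/v1/aidetect",
    json={"key": os.environ["SAPLING_API_KEY"],
          "text": "Sample text to score..."},
    timeout=30,
)
resp.raise_for_status()
print(resp.json().get("score"))  # P(AI-generated), 0..1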

What this benchmark does NOT claim

  • It is not a universal ranking of AI detectors. Different corpora + different objectives (academic integrity vs SEO vs marketing QA) produce different rankings.
  • It is not statistically powered to detect AUROC differences smaller than 0.02 at p<0.05. For that, use cal_test.jsonl (n=750).
  • It is not stable under adversarial paraphrase attack (no detector is).

What it DOES claim

  • Full reproducibility: bit-identical inputs → bit-identical scores.
  • Calibration stability: a pinned baseline in test_calibration_regression.py runs automatically on every calibration swap; an AUROC drop >0.05 triggers rollback (see the sketch after this list).
  • Honest distribution reporting: the corpus is public so you can audit whether it matches your use-case.
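
The calibration-stability claim is mechanically checkable. A minimal sketch of what such a pinned-baseline test can look like (pytest style; the constants and the run_smoke_eval() stub are illustrative, not the actual contents of test_calibration_regression.py):

# Sketch: pinned-baseline regression test (pytest style).
# PINNED_AUROC, MAX_DROP and run_smoke_eval() are illustrative.
PINNED_AUROC = {"en": 0.770, "ru": 0.837}  # v1.10 reference numbers
MAX_DROP = 0.05  # a larger regression should trigger rollback

def run_smoke_eval():
    # Stand-in: the real test would invoke eval_ensemble_corpus.py
    # against the live service and parse its JSON output.
    return {"en": 0.770, "ru": 0.837}

def test_auroc_did_not_regress():
    results = run_smoke_eval()
    for lang, baseline in PINNED_AUROC.items():
        assert results[lang] >= baseline - MAX_DROP, (
            f"{lang} AUROC {results[lang]:.3f} dropped more than "
            f"{MAX_DROP} below pinned {baseline:.3f}; roll back the calibration"
        )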

Cost to run full reproducibility

| Step | Cost |
|------|------|
| Spin up ml-services (Hetzner CX43, one month) | $25 |
| Run smoke eval | $0 |
| Run cal_test.jsonl n=750 statistical eval | $0 (self-hosted) |
| Sapling cross-check (free tier) | $0 |
| Total | $25 (one month of hosting) |

Commercial equivalents:

  • Originality.ai: $15 trial + per-call costs, no AUROC audit, closed corpus.
  • GPTZero Premium: $15/mo, closed corpus.
  • Winston AI: $29/mo, closed corpus.

Cross-references

  • Corpus balance analysis: notes/contentos_corpus_balance_analysis_2026-04-24.md
  • Ensemble weights tuning: notes/contentos_ensemble_weights_tuning_2026-04-24.md
  • Fork #2 v1 → v2 progression: notes/contentos_fork2_en_selfgen_spec.md
  • Regression test pins: services/ml-services-hwai/tests/test_calibration_regression.py
  • 7-blocks gap assessment: notes/contentos_7blocks_gap_assessment_2026-04-24.md