Commit bab86f1 (verified) · parent: 19172cc · committed by gshevchenko

upload REPRODUCIBILITY.md

Files changed (1): REPRODUCIBILITY.md (+113 lines, new file)
# HWAI AI-detection ensemble — reproducibility benchmark

> **Thesis.** Closed commercial benchmarks (Originality.ai, Winston AI, GPTZero
> Premium) publish AUROC numbers on proprietary corpora you cannot audit.
> HWAI publishes its **exact smoke corpus**, **exact evaluation script**, and
> **exact calibration files** so anyone can reproduce our numbers in ≤10 min.

## What you need to reproduce

1. A running instance of `ml-services-hwai` (Hetzner CX43, ~$25/mo, or local Docker).
2. Our calibration file (`/opt/ml-services/calibration.json`, shipped in v1.10).
3. The eval script: `services/ml-services-hwai/scripts/eval_ensemble_corpus.py`.
4. The hand-curated corpus, **embedded in the eval script as `CORPUS`** (44 texts, EN + RU, human + AI; see the sketch after this list).

All four of the above are open in this repo.

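For orientation, here is a minimal sketch of what one embedded corpus entry can look like. The field names (`text`, `lang`, `label`) are assumptions for illustration; the authoritative shape is the `CORPUS` constant inside `eval_ensemble_corpus.py`.

```python
# Illustrative shape for embedded corpus entries. The field names are
# assumptions for this sketch; the real structure lives in the CORPUS
# constant in eval_ensemble_corpus.py.
CORPUS = [
    {
        "text": "Press release: the city council today approved...",
        "lang": "en",
        "label": "human",  # ground truth: human-written formal prose
    },
    {
        "text": "Certainly! Here are five tips for better sleep...",
        "lang": "en",
        "label": "ai",     # ground truth: LLM-generated
    },
    # ...44 entries total, EN + RU, human + AI
]
```
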
## Reference numbers (v1.10, 2026-04-24, 44-text hand-curated OOD)

| Lang | Ensemble AUROC | Brier | Detectors |
|---|---|---|---|
| EN | **0.770** (target: ≥0.80 after weight tuning) | ~0.18 | ai_detect + radar + binoculars + desklib |
| RU | **0.837** | ~0.12 | ai_detect + radar + binoculars |

Per-detector numbers: see `notes/contentos_ensemble_weights_tuning_2026-04-24.md`.

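Both metrics are standard and cheap to recompute from any per-sample score dump. A minimal sketch with scikit-learn, using made-up scores rather than our data:

```python
# Recompute AUROC and Brier from (label, score) pairs.
# Labels: 1 = AI-generated, 0 = human. Scores: calibrated P(AI) in [0, 1].
from sklearn.metrics import brier_score_loss, roc_auc_score

labels = [1, 1, 0, 0, 1, 0]                     # ground truth (illustrative)
scores = [0.91, 0.62, 0.08, 0.33, 0.77, 0.12]   # ensemble P(AI) (illustrative)

auroc = roc_auc_score(labels, scores)           # threshold-free rank quality
brier = brier_score_loss(labels, scores)        # mean squared error of P(AI)
print(f"AUROC={auroc:.3f}  Brier={brier:.3f}")
```

AUROC only measures ranking (does every AI text outscore every human text?), while Brier also punishes miscalibrated probabilities, so the two can disagree.
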
## How to reproduce

```bash
# On any host with Python 3.11+ and access to ml-services:
export ML_SERVICES_URL="http://<your-host>:3300"
export ML_SERVICES_API_KEY="<your-key>"

python3 services/ml-services-hwai/scripts/eval_ensemble_corpus.py
# Output: /tmp/eval_ensemble_<timestamp>.json + .md
```

The output includes:

- Per-detector raw + calibrated scores, per sample.
- Per-lang AUROC + Brier (ensemble + individual detectors).
- Confusion-matrix-style pass/fail per sample (a reader sketch follows this list).

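A minimal sketch of reading the dump back; the `per_lang` / `auroc` / `brier` keys are assumptions for illustration, so check them against the script's actual output schema:

```python
# Hypothetical reader for the eval dump. The JSON keys used here are
# assumptions for illustration; verify against the script's real schema.
import glob
import json

path = sorted(glob.glob("/tmp/eval_ensemble_*.json"))[-1]  # newest run
with open(path) as f:
    report = json.load(f)

for lang, metrics in report["per_lang"].items():
    print(f"{lang}: AUROC={metrics['auroc']:.3f}  Brier={metrics['brier']:.3f}")
```
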
## Why 44 texts?

The smoke battery is sized for fast iteration (<2 min wall-time) and **hand-picked
to cover the failure modes** we've seen in production:

- **EN human formal** (press release, court filing, product manual) — the biggest
  false-positive risk for models that learned "AI = formal".
- **RU human news/journalism** — a known FP mode for RADAR-Vicuna.
- **EN + RU AI 2026-era** (Claude-4, Gemini 2.5, GPT-4o style) — the
  distribution shift that breaks 2022-era-trained detectors.
- **Edge cases** (interview transcripts, casual parent notes, product reviews)
  where detectors commonly overreach.

Full statistical-grade numbers come from the `n=750` OOD calibration split,
`services/ml-services-hwai/corpus/cal_test.jsonl`. Run:

```bash
python3 services/ml-services-hwai/scripts/eval_ensemble_corpus.py \
  --from-cal-test services/ml-services-hwai/corpus/cal_test.jsonl --parallel 4
```

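Before the full run, it can be worth sanity-checking the split's balance. A minimal sketch, assuming each JSONL record carries `lang` and `label` fields (adjust to the file's actual keys):

```python
# Count (lang, label) cells of the n=750 split. The field names are
# assumptions for this sketch; adjust to the actual cal_test.jsonl keys.
import json
from collections import Counter

counts = Counter()
with open("services/ml-services-hwai/corpus/cal_test.jsonl") as f:
    for line in f:
        rec = json.loads(line)
        counts[(rec["lang"], rec["label"])] += 1

for (lang, label), n in sorted(counts.items()):
    print(f"{lang}/{label}: {n}")
```
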
## Compare with external detectors

Free-tier cross-check (Sapling AI, 50 req/day):

```bash
export SAPLING_API_KEY="..."
python3 services/ml-services-hwai/scripts/bench_competitors.py \
  --out /tmp/bench_competitors.json
```

This produces side-by-side AUROC numbers on the identical corpus.

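For one-off spot checks outside `bench_competitors.py`, you can call Sapling's detection endpoint directly. A minimal sketch; the endpoint and response fields follow Sapling's public API docs at the time of writing, so verify them before relying on this:

```python
# Score a single text with Sapling's AI detector (free tier: 50 req/day).
# Endpoint and fields per Sapling's public docs; verify before relying on it.
import os

import requests

resp = requests.post(
    "https://api.sapling.ai/api/v1/aidetect",
    json={"key": os.environ["SAPLING_API_KEY"], "text": "Text to score."},
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["score"])  # overall P(AI-generated) in [0, 1]
```
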
## What this benchmark does NOT claim

- It is **not** a universal ranking of AI detectors. Different corpora and
  different objectives (academic integrity vs SEO vs marketing QA) produce
  different rankings.
- It is **not** statistically powered for `p<0.05` differences of `<0.02` AUROC;
  for that, use `cal_test.jsonl` (n=750). The bootstrap sketch after this list
  shows why.
- It is **not** stable under adversarial paraphrase attack (no detector is).

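To see why n=44 cannot resolve 0.02-AUROC differences, bootstrap the AUROC confidence interval. A minimal sketch on fully synthetic scores (none of this is our data):

```python
# Bootstrap a 95% CI on AUROC at n=44 to illustrate the power limitation.
# The scores are synthetic stand-ins for any per-sample dump.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
labels = np.array([0] * 22 + [1] * 22)                    # 44-text corpus
scores = np.clip(labels * 0.4 + rng.normal(0.3, 0.25, 44), 0, 1)

boot = []
for _ in range(2000):
    idx = rng.integers(0, 44, 44)                         # resample w/ replacement
    if len(set(labels[idx])) == 2:                        # need both classes
        boot.append(roc_auc_score(labels[idx], scores[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% CI width at n=44: {hi - lo:.2f}")             # far wider than 0.02
```
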
## What it DOES claim

- Full reproducibility: bit-identical inputs → bit-identical scores.
- Calibration stability: a pinned baseline via `test_calibration_regression.py`
  auto-runs on every calibration swap, with rollback on any AUROC drop >0.05
  (a sketch of this guard follows the list).
- Honest distribution reporting: the corpus is public, so you can audit
  whether it matches your use case.

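A minimal sketch of what such a pinned-baseline guard can look like. The pin values are the v1.10 reference numbers above; the function itself is a hypothetical stand-in for the real logic in `test_calibration_regression.py`:

```python
# Illustrative pinned-baseline guard. The check function is a hypothetical
# stand-in; the authoritative pins live in test_calibration_regression.py.
PINNED_AUROC = {"en": 0.770, "ru": 0.837}  # v1.10 reference numbers
MAX_DROP = 0.05                            # rollback threshold from this doc


def check_calibration_swap(new_auroc: dict[str, float]) -> bool:
    """Return True if the new calibration may ship, False -> roll back."""
    return all(
        new_auroc[lang] >= pinned - MAX_DROP
        for lang, pinned in PINNED_AUROC.items()
    )


# A swap dropping EN by 0.06 trips the guard; a 0.03 drop passes.
assert not check_calibration_swap({"en": 0.710, "ru": 0.840})
assert check_calibration_swap({"en": 0.740, "ru": 0.830})
```
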
## Cost to run full reproducibility

| Step | Cost |
|---|---|
| Spin up ml-services (Hetzner CX43, monthly) | $25 |
| Run smoke eval | $0 |
| Run `cal_test.jsonl` n=750 statistical eval | $0 (self-hosted) |
| Sapling cross-check (free tier) | $0 |
| **Total** | **$25/mo hosting** |

Commercial equivalents:

- Originality.ai: $15 trial plus per-call costs, no AUROC audit, closed corpus.
- GPTZero Premium: $15/mo, closed corpus.
- Winston AI: $29/mo, closed corpus.

## Cross-references

- Corpus balance analysis: `notes/contentos_corpus_balance_analysis_2026-04-24.md`
- Ensemble weights tuning: `notes/contentos_ensemble_weights_tuning_2026-04-24.md`
- Fork #2 v1 → v2 progression: `notes/contentos_fork2_en_selfgen_spec.md`
- Regression test pins: `services/ml-services-hwai/tests/test_calibration_regression.py`
- 7-blocks gap assessment: `notes/contentos_7blocks_gap_assessment_2026-04-24.md`