# HWAI AI-detection ensemble — reproducibility benchmark
> **Thesis.** Closed commercial benchmarks (Originality.ai, Winston AI, GPTZero
> Premium) publish AUROC numbers on proprietary corpora you cannot audit.
> HWAI publishes its **exact smoke corpus**, **exact evaluation script**, and
> **exact calibration files** so anyone can reproduce our numbers in ≤10 min.
## What you need to reproduce
1. A running instance of `ml-services-hwai` (Hetzner CX43 ~$25/mo, or local Docker).
2. Our calibration file (`/opt/ml-services/calibration.json`, shipped in v1.10).
3. The eval script: `services/ml-services-hwai/scripts/eval_ensemble_corpus.py`.
4. The hand-curated corpus: **embedded in the eval script as `CORPUS`** (44 texts, EN + RU, human + AI).
All four of the above are open in this repo; a sketch of the embedded corpus format follows.
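For orientation, here is a minimal sketch of how the embedded smoke corpus might be shaped. The field names (`text`, `lang`, `label`, `tag`) are illustrative assumptions, not the exact schema used inside `eval_ensemble_corpus.py`:

```python
# Illustrative sketch of the embedded smoke corpus layout (a plain Python list).
# Field names (text, lang, label, tag) are assumptions for this example, not
# necessarily the exact schema used in eval_ensemble_corpus.py.
CORPUS = [
    {
        "text": "FOR IMMEDIATE RELEASE: Acme Corp today announced ...",
        "lang": "en",
        "label": "human",          # ground truth: human-written
        "tag": "en_human_formal",  # failure-mode bucket
    },
    {
        "text": "Квантовые вычисления открывают новые горизонты ...",  # "Quantum computing opens new horizons ..."
        "lang": "ru",
        "label": "ai",             # ground truth: AI-generated
        "tag": "ru_ai_2026",
    },
    # ... 44 entries total: EN + RU, human + AI
]
```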
## Reference numbers (v1.10, 2026-04-24, 44-text hand-curated OOD)
| Lang | Ensemble AUROC | Brier | Detectors |
|---|---|---|---|
| EN | **0.770** (target: ≥0.80 post-weight-tuning) | ~0.18 | ai_detect + radar + binoculars + desklib |
| RU | **0.837** | ~0.12 | ai_detect + radar + binoculars |
Per-detector numbers: see `notes/contentos_ensemble_weights_tuning_2026-04-24.md`.
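If you want to recompute these metrics yourself from raw (label, score) pairs, a generic scikit-learn sketch is enough; this is not lifted from the eval script's internals, and the sample dict keys are assumptions:

```python
# Recompute per-language AUROC and Brier from (label, calibrated score) pairs.
# Generic scikit-learn sketch; the dict keys below are assumptions, not the
# actual internals of eval_ensemble_corpus.py.
from sklearn.metrics import roc_auc_score, brier_score_loss

def lang_metrics(samples, lang):
    """samples: iterable of dicts with 'lang', 'label' ('ai'/'human'), 'score' in [0, 1]."""
    subset = [s for s in samples if s["lang"] == lang]
    y_true = [1 if s["label"] == "ai" else 0 for s in subset]
    y_score = [s["score"] for s in subset]
    return {
        "auroc": roc_auc_score(y_true, y_score),
        "brier": brier_score_loss(y_true, y_score),
    }

# Example: lang_metrics(results, "en") -> roughly {"auroc": 0.77, "brier": 0.18}
```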
## How to reproduce
```bash
# On any host with Python 3.11+ and access to ml-services:
export ML_SERVICES_URL="http://<your-host>:3300"
export ML_SERVICES_API_KEY="<your-key>"
python3 services/ml-services-hwai/scripts/eval_ensemble_corpus.py
# Output: /tmp/eval_ensemble_<timestamp>.json + .md
```
The output includes:
- Per-detector raw + calibrated scores, per sample.
- Per-lang AUROC + Brier (ensemble + individual detectors).
- Confusion-matrix-style pass/fail per sample (a loading sketch follows this list).
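A quick way to inspect the latest run programmatically. The key names (`per_lang`, `auroc`, `brier`, `samples`, `passed`) are assumptions about the output layout, not a documented schema:

```python
# Load the most recent eval output and print a per-language summary.
# Key names ("per_lang", "auroc", "brier", "samples", "passed") are assumptions
# about the output layout -- adjust them to match the actual JSON.
import glob
import json

latest = sorted(glob.glob("/tmp/eval_ensemble_*.json"))[-1]
with open(latest) as fh:
    report = json.load(fh)

for lang, metrics in report.get("per_lang", {}).items():
    print(f"{lang}: AUROC={metrics['auroc']:.3f}  Brier={metrics['brier']:.3f}")

failed = [s for s in report.get("samples", []) if not s.get("passed", True)]
print(f"{len(failed)} samples failed")
```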
## Why 44 texts?
The smoke battery is sized for fast iteration (< 2 min wall time), with texts **hand-picked to
cover the failure modes** we've seen in production:
- **EN human formal** (press release, court filing, product manual) — biggest
  false-positive risk for models that learned "AI = formal".
- **RU human news/journalism** — known FP mode for RADAR-Vicuna.
- **EN + RU AI 2026-era** (Claude-4, Gemini 2.5, GPT-4o style) — the
distribution shift that breaks 2022-era-trained detectors.
- **Edge cases** (interview transcripts, casual parent notes, product reviews)
where detectors commonly overreach.
Full statistical-grade numbers come from the `n=750` OOD calibration split,
`services/ml-services-hwai/corpus/cal_test.jsonl`. Run:
```bash
python3 services/ml-services-hwai/scripts/eval_ensemble_corpus.py \
--from-cal-test services/ml-services-hwai/corpus/cal_test.jsonl --parallel 4
```
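If you want to check the split's balance before committing to a full run, JSONL is trivial to slice. The field names (`lang`, `label`) are assumptions about the file layout:

```python
# Quick balance check on the n=750 calibration split before a full eval run.
# Field names ("lang", "label") are assumptions about the JSONL layout.
import json
from collections import Counter

with open("services/ml-services-hwai/corpus/cal_test.jsonl") as fh:
    rows = [json.loads(line) for line in fh if line.strip()]

print(len(rows), "rows")
print(Counter((r.get("lang"), r.get("label")) for r in rows))
```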
## Compare with external detectors
Free-tier cross-check (Sapling AI, 50 req/day):
```bash
export SAPLING_API_KEY="..."
python3 services/ml-services-hwai/scripts/bench_competitors.py \
--out /tmp/bench_competitors.json
```
Produces side-by-side AUROC on the same corpus.
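`bench_competitors.py` wraps the external calls; for a one-off spot check you can also query Sapling's detector directly. A minimal sketch assuming Sapling's public `/api/v1/aidetect` endpoint and its `score` response field (probability the text is AI-generated); check Sapling's docs if this has drifted:

```python
# One-off Sapling cross-check for a single text.
# Assumes Sapling's public aidetect endpoint and its "score" response field
# (probability the text is AI-generated); verify against Sapling's current docs.
import os
import requests

def sapling_ai_score(text: str) -> float:
    resp = requests.post(
        "https://api.sapling.ai/api/v1/aidetect",
        json={"key": os.environ["SAPLING_API_KEY"], "text": text},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["score"]

print(sapling_ai_score("Paste a sample from the smoke corpus here."))
```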
## What this benchmark does NOT claim
- It is **not** a universal ranking of AI detectors. Different corpora +
different objectives (academic integrity vs SEO vs marketing QA) produce
different rankings.
- It is **not** statistically powered to resolve AUROC differences smaller than `0.02` at `p<0.05`.
  For that, use `cal_test.jsonl` (`n=750`).
- It is **not** stable under adversarial paraphrase attack (no detector is).
## What it DOES claim
- Full reproducibility: bit-identical inputs → bit-identical scores.
- Calibration stability: `test_calibration_regression.py` pins a baseline and runs automatically
  on every calibration swap; the swap is rolled back if AUROC drops by more than 0.05 (a minimal
  sketch follows this list).
- Honest distribution reporting: the corpus is public so you can audit
whether it matches your use-case.
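The regression pin reduces to one assertion. A minimal sketch of the idea; the baseline and threshold constants here are illustrative, the real pins live in `test_calibration_regression.py`:

```python
# Minimal sketch of the calibration regression pin: fail (and roll back) a
# calibration swap if AUROC drops more than the allowed margin below the
# pinned baseline. Constants are illustrative; the real pins live in
# tests/test_calibration_regression.py.
PINNED_AUROC = {"en": 0.770, "ru": 0.837}  # baselines from the reference table
MAX_DROP = 0.05                            # rollback threshold from this README

def check_auroc_regression(current_auroc: dict) -> None:
    """Raise AssertionError if any language regressed past the margin."""
    for lang, baseline in PINNED_AUROC.items():
        drop = baseline - current_auroc[lang]
        assert drop <= MAX_DROP, (
            f"{lang}: AUROC dropped {drop:.3f} (> {MAX_DROP}) below pinned "
            f"{baseline:.3f}; roll back the calibration swap"
        )

# Usage: check_auroc_regression({"en": 0.78, "ru": 0.84})  # passes
```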
## Cost to run full reproducibility
| Step | Cost |
|---|---|
| Spin up ml-services (Hetzner CX43 monthly) | $25 |
| Run smoke eval | $0 |
| Run `cal_test.jsonl` n=750 statistical eval | $0 (self-hosted) |
| Sapling cross-check (free tier) | $0 |
| **Total** | **$25 (one month of hosting)** |
Commercial equivalents:
- Originality.ai: $15 trial + per-call costs, no AUROC audit, closed corpus.
- GPTZero Premium: $15/mo, closed corpus.
- Winston AI: $29/mo, closed corpus.
## Cross-references
- Corpus balance analysis: `notes/contentos_corpus_balance_analysis_2026-04-24.md`
- Ensemble weights tuning: `notes/contentos_ensemble_weights_tuning_2026-04-24.md`
- Fork #2 v1 → v2 progression: `notes/contentos_fork2_en_selfgen_spec.md`
- Regression test pins: `services/ml-services-hwai/tests/test_calibration_regression.py`
- 7-blocks gap assessment: `notes/contentos_7blocks_gap_assessment_2026-04-24.md`