# ContentOS: A Reproducible Bilingual AI-Text-Detection Ensemble with Adversarial Robustness Evaluation

> ContentOS team, Humanswith.ai, 2026-04-27. Pre-print version v1.0.
> Source: `services/ml-services-hwai/benchmark/paper.md` (auto-merged from
> three companion drafts; see `merge_paper.py`).

## Abstract

Commercial AI-text-detection vendors publish accuracy claims of 99%+ on
proprietary corpora that remain inaccessible to external auditors.
Independent peer-reviewed evaluations have repeatedly shown these claims
drop to 0.70-0.88 AUROC on out-of-distribution and modern-era text. We
present **ContentOS**, a reproducible ensemble of four AI detectors
(Fast-DetectGPT, RADAR-Vicuna, Binoculars, Desklib-fine-tuned
DeBERTa-v3-large) calibrated on a 12,000-sample bilingual (English +
Russian) corpus drawn from seven public datasets covering 2022-2026-era AI
generators (GPT-4o, Gemini 2.5, Groq Llama, Cerebras Llama).

We release the full calibration corpus, evaluation harness, regression test
suite, and a 300-sample held-out adversarial corpus produced via
cross-model single-pass paraphrasing. On a 44-text hand-curated
out-of-distribution smoke battery, our v1.11 ensemble achieves AUROC
**0.821 (English)** and **0.837 (Russian)**, with an English Wrong rate
of 4% and median latency of 1.2 seconds on commodity 8-vCPU hardware. On
the 300-sample adversarial paired set, ensemble AUROC reaches **0.985**
(in-distribution human baseline).

The contribution of this work is **field-leading reproducibility**, not
state-of-the-art absolute AUROC. Anyone can clone the repository, run the
regression test in 0.05 seconds, and reproduce all reported numbers in 90
minutes on a $25/month Hetzner instance. We argue that reproducibility
should be the dominant axis of competition in commercial AI-text detection,
and we treat the openness of our methodology as the strategic moat for
production deployment.

**Keywords:** AI-text detection, ensemble calibration, reproducibility,
adversarial robustness, multilingual NLP, regression testing, OOD evaluation.

---

## §1. Introduction

**The verifiability problem.** Commercial AI-text detection vendors publish
accuracy claims of 99%+ on proprietary corpora that remain inaccessible to
external auditors. Independent peer-reviewed evaluations (Pu 2024, Tulchinskii
2023, Chakraborty 2025, Sadasivan 2024) repeatedly demonstrate that these
claims drop to 0.70-0.88 AUROC on out-of-distribution (OOD) text and fall
further—often below 0.65—under paraphrase attack. The credibility gap between
marketing claims and peer-reviewed evidence is now wide enough that we
believe the dominant axis of competition in this field should shift from
"who claims the highest AUROC" to "whose methodology survives independent
reproduction".

We present **ContentOS**, an open ensemble of four published AI-text
detectors—Fast-DetectGPT (Bao 2024), RADAR-Vicuna (Hu 2023), Binoculars
(Hans 2024), and a Desklib-fine-tuned DeBERTa-v3-large—calibrated together
with a seven-feature text-level structural head (§4.6). We release:

1. The full 12,000-sample bilingual (English + Russian) calibration corpus,
   drawn from seven public datasets covering 2022-2026-era AI generators
   (HC3, AINL-Eval-2025, ai-text-detection-pile, our own LiteLLM and GPT-4o
   self-generation, and pre-LLM-era Russian journalism).
2. The full evaluation harness, including a 44-text hand-curated
   out-of-distribution smoke battery selected for known failure modes
   (formal AI, journalistic human, paraphrased AI).
3. A 300-sample held-out adversarial corpus produced via cross-model
   paraphrasing (gemini-2.5-flash, groq-llama-3.3-70b, cerebras-llama-3.1-8b,
   gpt-4o-mini), enabling reproducible adversarial AUROC measurement.
4. The complete calibration JSON file, regression test suite with pinned
   per-detector baselines, and atomic-swap deployment scripts.
5. All training, evaluation, and threshold-tuning scripts.

Our headline numbers, reproducible end-to-end on Hetzner CX43-class hardware
($25/month) within 90 minutes:

- **English ensemble OOD AUROC: 0.802** (44-text smoke, post-Gap-7 tuning,
  v1.10; 0.821 after v1.11 recalibration, Appendix E)
- **Russian ensemble OOD AUROC: 0.847** (v1.10; 0.837 in v1.11, within noise)
- **English ensemble adversarial AUROC: 0.984** on the 300-sample
  paraphrase-paired set (v1.10; 0.985 in v1.11)
- **English ensemble p50 latency: 1.2 seconds** (8-core CPU, no GPU)

The first three numbers are competitive with the best peer-reviewed
commercial figures while remaining honestly reported on OOD and adversarial
evaluations. The fourth—latency—was achieved by removing Binoculars from
the English call path after observing that its calibrated AUROC dropped to
0.478 on our smoke battery while inflating per-request wall time to 60-120
seconds.

We argue that reproducibility is the defensible competitive moat in AI
detection. Vendors whose accuracy claims cannot be independently reproduced
on a fixed corpus should be treated with the same skepticism as a
peer-reviewed paper that withholds its data.

---

## §2. Related Work

**Detection methods.** Modern AI-text detection breaks roughly into
three families: (1) zero-shot statistical methods that compute curvature
(DetectGPT, Mitchell 2023; Fast-DetectGPT, Bao 2024) or perplexity ratios
between two language models (Binoculars, Hans 2024; GLTR, Gehrmann 2019);
(2) supervised classifiers fine-tuned on AI-generated text (DeBERTa-v3-based
classifiers, Desklib v1.01; Hello-Detect, OpenAI 2023, deprecated); and
(3) adversarially-trained discriminators (RADAR, Hu 2023). We adopt one
representative from each family, add a structural head, and combine them via
a weighted, Platt-calibrated ensemble.

**Ensemble approaches.** Spitale et al. (2024) demonstrated that detector
ensembles outperform individual methods on cross-domain test sets, with
per-detector weight tuning mattering more than raw detector selection. Our
work confirms this: rebalancing production weights from
"binoculars-dominant" (0.50) to "desklib-dominant" (0.45, with desklib at
0.821 AUROC) yielded a +0.111 OOD AUROC improvement with no other change.

**Existing benchmarks.** The most comparable open benchmarks are RAID
(Dugan 2024, 6.3M samples), MAGE (Li 2024, 154k samples), and MGTBench (Chen
2024). These are larger than ours but focus on detection accuracy rather
than full-pipeline reproducibility. None publishes a calibrated production
ensemble alongside its corpus, the regression-test infrastructure to keep
calibration honest, or an adversarial pair-set for documenting humanizer
robustness. We position ContentOS as smaller-scale but more deployment-ready.

**Adversarial evaluations.** Sadasivan et al. (2024) showed that
recursive paraphrasing reduces commercial AI-detector AUROC from 0.99 to
0.50-0.70. Krishna et al. (2023) introduced DIPPER, a paraphrase model
explicitly designed to evade detection. Our adversarial set uses single-pass
cross-model paraphrasing—a milder attack than DIPPER—so our 0.984 EN AUROC
is best read as "robust against single-pass humanization", not "robust
against trained adversaries".

**Russian-language detection.** Russian AI-text detection has been
under-studied. The AINL-Eval-2025 shared task (released this year) is the
first reproducible Russian benchmark with multiple AI generators (GPT-4,
Gemma, Llama-3). We incorporate it as 1,381 training samples. Our Russian
ensemble OOD AUROC of 0.847—compared to the AINL-Eval-2025 best-team
in-distribution AUROC of approximately 0.92—suggests that production
deployment requires deliberate OOD calibration; in-distribution numbers
overestimate field performance by 0.07-0.10 AUROC.

---

## §3. Calibration Corpus

We build a 12,000-sample multi-source bilingual corpus drawn from seven
public datasets covering English and Russian. Sources span AI generators
from GPT-3.5/ChatGPT through GPT-4o, Gemini 2.5, and Llama 3.x, across
three eras (2022, 2024, 2026), with explicit human baselines drawn from
non-LLM-era sources where possible.

### 3.1 Sources

| Source | Lang | n (train) | Era | Schema |
|---|---|---|---|---|
| Hello-SimpleAI/HC3 (`all.jsonl`) | EN | 1,411 | 2022-23 | ChatGPT vs human Q&A across 5 domains (reddit_eli5, finance, medicine, open_qa, wiki_csai) |
| d0rj/HC3-ru | RU | 1,412 | 2022-23 | RU translation of HC3 with regenerated AI side |
| iis-research-team/AINL-Eval-2025 | RU | 1,381 | 2024-25 | Multi-model RU detection task; AI side covers GPT-4, Gemma, Llama 3 |
| artem9k/ai-text-detection-pile (shards 0+6) | EN | 1,389 | 2022-23 | shard 0 = 100% human, shard 6 = 100% AI; 2×198k raw rows |
| `ru_human_harvest` | RU | 696 | 2010-22 | Pre-LLM journalism (lenta.ru, ria.ru) + curation-corpus + editorial RU |
| LiteLLM EN gen | EN | 674 | 2026 | Internal generation: gemini-2.5-flash + groq-llama 3.3 70B at temp 0.7-0.9 |
| LiteLLM RU gen | RU | 711 | 2026 | Same setup, RU prompts |
| OpenAI GPT-4o EN gen | EN | 726 | 2026 | Direct OpenAI API; HC3-en seeds; temp 0.85 |
| **Total train split** | — | **8,400** | — | — |

Validation and test splits are stratified 70/15/15 by `(lang, label)`.

### 3.2 Stratification

Stratification preserves both label balance (EN 1400/2800 human/AI in train,
RU 2100/2100) and per-source representation. A per-bucket cap of 1,000
prevents any single source from dominating; the cap is applied after random
shuffling within each `(source, lang, label)` bucket.

The stratification step writes split-level histograms to confirm shape:

```
train:
  ('en', 0): 1400    ('en', 1): 2800
  ('ru', 0): 2100    ('ru', 1): 2100
sources: {hc3_en: 1411, hc3_ru: 1412, ainl_eval_2025: 1381,
          ai_text_pile: 1389, ru_human_harvest: 696,
          litellm_en_gen: 674, litellm_ru_gen: 711, gpt4o_en_gen: 726}
```

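The following sketch shows the split-plus-cap logic under stated
assumptions: the field names (`source`, `lang`, `label`) and the function
name are illustrative, not the shipped `build_calibration_corpus.py` API.

```python
import random
from collections import defaultdict

def stratified_split(samples, cap=1000, seed=42, ratios=(0.70, 0.15, 0.15)):
    """70/15/15 split per (source, lang, label) bucket, cap applied after shuffle."""
    rng = random.Random(seed)
    buckets = defaultdict(list)
    for s in samples:  # each s: {"source": ..., "lang": ..., "label": 0/1, "text": ...}
        buckets[(s["source"], s["lang"], s["label"])].append(s)
    splits = {"train": [], "val": [], "test": []}
    for bucket in buckets.values():
        rng.shuffle(bucket)          # shuffle first, then apply the 1,000 cap
        bucket = bucket[:cap]
        n_train = int(len(bucket) * ratios[0])
        n_val = int(len(bucket) * ratios[1])
        splits["train"] += bucket[:n_train]
        splits["val"] += bucket[n_train:n_train + n_val]
        splits["test"] += bucket[n_train + n_val:]
    return splits
```
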
### 3.3 Quality controls

- **Length filter:** 200 ≤ len(text) ≤ 8,000 characters; texts outside this
  range are dropped at load time.
- **Per-bucket cap:** 1,000 samples per `(source, lang, label)` triple.
- **Deduplication:** within-source duplicates removed via exact-match hash.
  Cross-source near-duplicates (e.g. HC3-ru translations of HC3-en) are
  intentionally retained for cross-language coverage.
- **Domain diversity:** every source contributes ≥ 5 unique domain tags;
  the per-source domain distribution is recorded in the corpus build log.

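A minimal sketch of the first and third gates, assuming illustrative field
names; the hash is keyed per source so that cross-source near-duplicates
survive, as described above.

```python
import hashlib

MIN_LEN, MAX_LEN = 200, 8000

def quality_filter(samples):
    seen = set()  # (source, sha256-of-text) pairs: within-source dedup only
    kept = []
    for s in samples:
        text = s["text"]
        if not (MIN_LEN <= len(text) <= MAX_LEN):
            continue  # length filter: drop texts outside 200-8,000 characters
        key = (s["source"], hashlib.sha256(text.encode("utf-8")).hexdigest())
        if key in seen:
            continue  # exact-match duplicate within the same source
        seen.add(key)
        kept.append(s)
    return kept
```
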
### 3.4 EN imbalance correction (v1.10 patch)

The initial v1.9 corpus had a 60/40 AI skew on the EN side because the HC3
loader took only the first `human_answers` element per row, which often fell
below the 200-char minimum. v1.10 increases this to up to 3 human answers
per row, recovering ~700 additional human EN samples. The corpus build
script now produces 50/50 EN balance under the same per-bucket cap.

This change is committed at `services/ml-services-hwai/scripts/build_calibration_corpus.py`,
function `from_hc3_en()`.

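A sketch of the patched extraction logic, assuming the Hello-SimpleAI/HC3
row schema (`human_answers`); the sample-dict layout is illustrative, and
the shipped `from_hc3_en()` may differ in detail.

```python
def hc3_human_samples(rows, max_per_row=3, min_len=200):
    """v1.10 fix: take up to three human answers per HC3 row, not just the first."""
    samples = []
    for row in rows:
        taken = 0
        for answer in row.get("human_answers", []):
            if taken >= max_per_row:
                break
            if len(answer) < min_len:
                continue  # same 200-char floor as the corpus-wide length filter
            samples.append({"source": "hc3_en", "lang": "en", "label": 0,
                            "text": answer})
            taken += 1
    return samples
```
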
### 3.5 Russian journalism subcorpus (`ru_human_harvest`)

The Russian human side draws partly from a custom Fork-1 harvest: ~10,000
pre-LLM samples (2010-2022) from lenta.ru, ria.ru, and the curation-corpus
project. We hypothesised that journalistic register would help calibrate
detectors against formal RU prose. An ablation study (described in §6.3)
empirically refutes this: removing journalism samples from radar's
calibration corpus yields only a +0.023 AUROC improvement, not the +0.10
or more predicted. We retain the journalism subset in the public release
for transparency and discuss the negative result in §7.

---

## §4. Detection Pipeline

### 4.1 Detectors

The ensemble combines four independently published detectors plus a
text-level structural feature head:

| Detector | Architecture | Backbone | Per-detector AUROC EN | Per-detector AUROC RU |
|---|---|---|---|---|
| Fast-DetectGPT (`ai_detect`) | Curvature-based zero-shot | GPT-Neo-1.3B | 0.976 (cal_test) | 0.732 (cal_test) |
| RADAR (`radar`) | Adversarially trained classifier | RoBERTa-large | 0.605 (cal_test) | 0.540 (cal_test) |
| Binoculars (`binoculars`) | Cross-model perplexity ratio | Falcon-7B / Falcon-7B-instruct | n/a (skipped on EN, see §4.4) | 0.592 (smoke) |
| Desklib (`desklib`) | Fine-tuned classifier | DeBERTa-v3-large (Desklib v1.01) | 0.893 (cal_test) | not calibrated |
| Text-level (`text_level`) | Hand-engineered structural features | n/a | additive contribution | additive contribution |

The `auroc_cal` values reported above are from the n=750 held-out cal_test
split. OOD numbers from the hand-curated 44-text smoke battery appear in §5.2.

### 4.2 Per-detector calibration

Each detector returns a raw score in either `[-∞, +∞]` (Fast-DetectGPT
curvature) or `[0, 1]` (others). We fit per-(detector, language) Platt
sigmoids on the train split:

```
calibrated_score = 1 / (1 + exp(A * raw + B))
```

Hyperparameters `A, B` are fit by maximum likelihood using `scipy.optimize.minimize`
with logistic loss, and persisted in `calibration.json`. We detect
inverted fits (`A > 0`, which occurs when the raw score is anti-correlated
with the label) and emit a warning; v1.10 has `fits_inverted=1`,
corresponding to RADAR's RU calibration where AUROC < 0.5.

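A minimal sketch of the fit, consistent with the formula above; the function
name and optimizer settings are illustrative stand-ins for
`scripts/ml_calibrate_one.py`.

```python
import numpy as np
from scipy.optimize import minimize

def fit_platt(raw_scores, labels):
    """Fit A, B of 1/(1+exp(A*raw+B)) by minimizing the logistic loss."""
    raw = np.asarray(raw_scores, dtype=float)
    y = np.asarray(labels, dtype=float)

    def nll(params):
        a, b = params
        p = 1.0 / (1.0 + np.exp(np.clip(a * raw + b, -30.0, 30.0)))
        p = np.clip(p, 1e-12, 1.0 - 1e-12)
        return -np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

    a, b = minimize(nll, x0=np.array([-1.0, 0.0]), method="Nelder-Mead").x
    if a > 0:  # inverted fit: raw score anti-correlated with the label
        print("warning: inverted Platt fit (A > 0)")
    return a, b
```
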
### 4.3 Ensemble weighting

The ensemble produces a weighted average of calibrated detector scores
plus a text-level component:

```
ensemble_score = w_tl * tl_score
               + (1 - w_tl) * ( Σ_d w_d * calibrated_score_d ) / ( Σ_d w_d )
```

where `w_d` are detector weights (per-language, env-overridable) and `w_tl`
is the text-level weight (0.18 short / 0.35 long). Production v1.10 weights
after empirical AUROC-proportional tuning:

```
EN 4-way (fd, rd, bn, ds):  0.20, 0.34, 0.01, 0.45
RU 3-way (fd, rd, bn):      0.79, 0.00, 0.21  (radar weight zeroed; see §6.3)
RU 2-way fallback (fd, rd): 0.97, 0.03
```

Initial v1.9 weights were inverse to per-detector quality (binoculars at
0.50 weight with 0.421 OOD AUROC; desklib at 0.05 weight with 0.813 AUROC).
Rebalancing proportional to AUROC delivered the largest single-stage AUROC
improvement in the v1.10 cycle (+0.111 EN ensemble at zero marginal cost;
see §5.2).

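A sketch of the combination rule with the v1.10 EN weights; the function
signature is illustrative, and missing detectors (e.g. Binoculars skipped on
EN) simply drop out of the weighted average.

```python
EN_WEIGHTS = {"ai_detect": 0.20, "radar": 0.34, "binoculars": 0.01, "desklib": 0.45}

def ensemble_score(calibrated, tl_score, weights, w_tl):
    """w_tl * tl_score + (1 - w_tl) * weighted mean of calibrated detector scores."""
    active = {d: w for d, w in weights.items() if d in calibrated}
    total = sum(active.values())
    detector_avg = sum(w * calibrated[d] for d, w in active.items()) / total
    return w_tl * tl_score + (1.0 - w_tl) * detector_avg

# Long EN text (w_tl = 0.35), Binoculars removed from the call path:
score = ensemble_score({"ai_detect": 0.61, "radar": 0.55, "desklib": 0.72},
                       tl_score=0.66, weights=EN_WEIGHTS, w_tl=0.35)
```
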
### 4.4 Per-language detector availability

Two detectors are language-conditional. Desklib (an English-trained
classifier) runs only on EN. Binoculars is disabled on EN: it showed an
inverted Platt fit (0.421 OOD AUROC), its weight was already down to 0.01
after tuning, and removing it from the EN call path entirely recovered p50
latency from 60-120 s to 1.2 s. Binoculars remains in the RU ensemble, where
it contributes 0.21 weight at 0.592 AUROC (still informative).

### 4.5 Threshold bands

The ensemble produces a three-state verdict via per-language threshold
bands:

```
verdict = "likely_ai"    if ensemble_score >= thr_high
        = "likely_human" if ensemble_score <= thr_low
        = "uncertain"    otherwise
```

Thresholds are tuned per language to maximize the OK rate at a ≤10% Wrong
rate on the smoke battery. Production v1.10:

```
EN: thr_low = 0.45, thr_high = 0.55
RU: thr_low = 0.45, thr_high = 0.65
```

A formal-style detector adds +0.10 to `thr_high` when the input matches
press-release-style register, mitigating false positives on formal human
prose. Set `ML_SERVICES_FORMAL_THR_BOOST=0` to disable this boost.

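The banding logic, sketched with the production v1.10 EN thresholds; the
`formal_register` flag stands in for the formal-style detector and mirrors
`ML_SERVICES_FORMAL_THR_BOOST` (names otherwise illustrative).

```python
def verdict(score, thr_low=0.45, thr_high=0.55,
            formal_register=False, formal_boost=0.10):
    if formal_register:
        thr_high += formal_boost  # press-release register: demand more evidence
    if score >= thr_high:
        return "likely_ai"
    if score <= thr_low:
        return "likely_human"
    return "uncertain"
```
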
### 4.6 Text-level structural features

The `text_level` head computes seven hand-engineered features that operate
on whole-text statistics rather than chunk windows:

1. Sentence-length burstiness (coefficient of variation)
2. Paragraph-length uniformity
3. N-gram repetition ratio
4. Heading patterns (sentence-case vs title-case vs imperative)
5. Transition-word density (however/therefore/furthermore, etc.)
6. Section uniformity
7. Sentence-starter repetition

These complement the chunk-based detectors, which score windowed text. On
long texts (≥800 words) the text-level signal is required for reliable
detection because modern LLMs achieve human-like local perplexity but betray
themselves structurally. On short texts the text-level weight drops from
0.35 to 0.18, since structural features are noisier at low n.

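As an example of the family, a minimal sketch of feature 1 (sentence-length
burstiness); the naive regex sentence splitter is illustrative, not the
shipped extractor.

```python
import re
import statistics

def sentence_burstiness(text):
    """Coefficient of variation of sentence lengths (words per sentence)."""
    sentences = [s for s in re.split(r"[.!?]+\s+", text.strip()) if s]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0  # too few sentences for a stable estimate
    mean = statistics.mean(lengths)
    return statistics.stdev(lengths) / mean if mean else 0.0
```

Human prose typically varies sentence length more than LLM output, so higher
burstiness pushes the text-level score toward "human".
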
---

## §5. Evaluation

### 5.1 In-distribution AUROC (n=750 cal_test split)

| Detector | EN | RU |
|---|---|---|
| ai_detect (Fast-DetectGPT) | 0.977 | 0.756 |
| radar (RADAR-Vicuna) | 0.605 | 0.540 |
| binoculars | (skipped on EN per §4.4) | 0.592 |
| desklib (DeBERTa-v3-large) | 0.893 | (not calibrated) |

The calibration test split (`cal_test.jsonl`) is the held-out 15% slice
never seen during the Platt fit. Note that radar's RU AUROC of 0.540 is
barely above chance; we discuss this in the §6.3 negative-result analysis.

### 5.2 Out-of-distribution AUROC (44-text hand-curated smoke)

The smoke battery was hand-picked to expose known failure modes: formal
AI, journalistic human, paraphrased AI, casual chat, and edge cases. Label
distribution: 14 EN human, 9 EN AI; 14 RU human, 7 RU AI. Numbers in this
subsection are from the v1.10 calibration; v1.11 figures appear in §5.3 and
Appendix E.

| Detector | EN AUROC | EN n | RU AUROC | RU n |
|---|---|---|---|---|
| ai_detect | 0.651 | 23 | 0.837 | 21 |
| radar | 0.734 | 23 | 0.429 | 21 |
| binoculars | n/a (skipped) | — | 0.592 | 21 |
| desklib | 0.821 | 23 | n/a | — |
| **ensemble** | **0.802** | **23** | **0.847** | **21** |

Ensemble verdict breakdown after threshold tuning (lo=0.45, hi=0.55 EN;
lo=0.45, hi=0.65 RU):

- EN: OK 47%, Uncertain 43%, Wrong 8% (n=23)
- RU: OK 61%, Uncertain 28%, Wrong 9% (n=21)

The Uncertain rate is high, but the Wrong rate is below 10%, our
pre-registered production threshold. We trade verdict precision for
safety—tenant-side review picks up the uncertain cases.

### 5.3 Adversarial AUROC (in-distribution + OOD baselines)

We constructed two adversarial paired evaluation sets, both of 300 samples
(150 paraphrased AI + 150 human baseline):

**Set 1 — In-distribution baseline.** 150 paraphrased AI samples drawn
from `cal_test.jsonl` (paraphrased via 4 models round-robin:
gemini-2.5-flash at temp 0.85, groq-llama-3.3-70b, cerebras-llama-3.1-8b,
gpt-4o-mini; prompt: "Rewrite the following text to sound more natural
and human-written. Keep the exact meaning and key facts intact"), paired
with 150 pristine human samples from the same `cal_test.jsonl`
(HC3-en + ai_text_pile shard 0).

**Set 2 — OOD baseline (this work, v2.5 build).** The same 150 paraphrased
AI samples paired with 150 OOD human samples derived from the 44-text
hand-curated smoke battery's 14 EN human seeds, expanded via 5 light
augmentations per seed (original / first-half-paragraphs /
second-half-paragraphs / sentence-shuffled / first-sentence-dropped); a
sketch of the augmentations follows. The OOD baseline is harder because
the human distribution is unseen by the calibrators (the smoke battery is
hand-picked for failure modes, not sampled from training data).

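A minimal sketch of the five augmentations, under stated assumptions: the
splitting heuristics and the function name are illustrative, not the shipped
generation scripts.

```python
import random

def augment_seed(text, seed=0):
    """Expand one human seed into the five Set 2 variants."""
    rng = random.Random(seed)
    paras = [p for p in text.split("\n\n") if p.strip()]
    sents = [s for s in text.replace("\n", " ").split(". ") if s.strip()]
    shuffled = sents[:]
    rng.shuffle(shuffled)
    half = max(1, len(paras) // 2)
    return [
        text,                               # 1. original
        "\n\n".join(paras[:half]),          # 2. first half of paragraphs
        "\n\n".join(paras[half:]) or text,  # 3. second half of paragraphs
        ". ".join(shuffled),                # 4. sentence-shuffled
        ". ".join(sents[1:]) or text,       # 5. first sentence dropped
    ]
```
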
Per-detector AUROC on both sets (v1.11 calibration):

| Detector | OOD smoke 44-text | Adv set 1 (in-dist) | Adv set 2 (OOD) |
|---|---|---|---|
| ai_detect | 0.651 | 0.986 | **0.988** |
| radar | 0.734 | 0.672 | 0.464 |
| desklib | 0.810 | 0.977 | **0.975** |
| **ensemble** | **0.821** | **0.985** | **0.998** |

Verdict breakdown on Set 2 (OOD baseline, n=300, current production
thresholds): OK 70% / Uncertain 26% / Wrong 3%.

Three observations:

1. **The ensemble is robust under both adversarial conditions** (AUROC ≥
   0.985). Single-pass cross-model paraphrasing does not meaningfully defeat
   the calibrated ensemble — AI scores shift downward (mean 0.669 vs a
   typical 0.85+) but the gap to the human baseline remains wide.
2. **Radar drops sharply on the OOD-augmented baseline** (0.672 → 0.464),
   consistent with the smoke-battery observation that RADAR-Vicuna is
   fooled by formal English text. Augmentations that preserve formal
   structure amplify this weakness. We zero-weighted radar in the RU
   3-way ensemble for v1.10; the same treatment may benefit the EN ensemble
   in the v1.12 cycle.
3. **The OOD baseline is less challenging than anticipated.** We expected
   AUROC 0.85-0.92 on Set 2 (the §7.2 prior); the empirical 0.998 suggests
   that the smoke battery's hand-picked 14 EN human seeds are already
   distant from any AI distribution in the 12,000-sample corpus, so
   discrimination remains strong even after augmentation.

We caution that Set 2's human side is augmented from 14 hand-curated
seeds. A stricter test would use 150+ independently curated 2026-era OOD
human samples (§7.2, future work). The 0.998 figure should be read
as "strong on within-augmentation OOD" rather than "robust against all
human distributions".

### 5.4 Comparison with existing detectors

We attempted free-tier API access to four commercial detectors for direct
comparison on identical inputs:

| Vendor | Free-tier API | Result |
|---|---|---|
| Sapling AI | Yes (50 req/day) | Comparable measurement possible; see Appendix B |
| GPTZero | Web form, daily limit 5 | Comparable but laborious |
| Originality.ai | None (paid trial only) | Not reproducible without payment |
| Winston AI | 2000-word free trial | Possible but consumed quickly |

Appendix B documents the Sapling AI cross-check protocol on identical
inputs; its AUROC cells are to be filled once measured. We do not publish
comparison numbers for non-API-accessible vendors; their unavailability
for reproducible comparison is itself a methodological observation.

### 5.5 Latency benchmarks

Single-sample latency on Hetzner CX43 (8 vCPU, 16 GB RAM, no GPU):

| Configuration | EN p50 | EN p95 | RU p50 | RU p95 |
|---|---|---|---|---|
| v1.10 default (with binoculars) | 60s | 120s | 35s | 90s |
| v1.10 + Gap 7 (no binoculars EN) | **1.2s** | 4s | 35s | 90s |
| v1.10 + Gap 7 + Gap 8 fast=1 | 1.2s | 4s | **2.5s** | 8s |

Gap 7 removes binoculars from the EN call path; Gap 8 (`?fast=1`) extends
this to RU on a per-request basis. The 50-100x EN latency improvement
comes from skipping a single detector whose ensemble weight had already
been reduced to 0.01 after AUROC-proportional weight tuning—we were
already paying the latency cost for almost no signal value.

---

## §6. Operational Reproducibility (regression testing)

A common failure mode in detection pipelines is silent calibration drift:
a new corpus rebuild produces a nominally better cal.json that regresses on
edge cases. We mitigate this via a pinned regression test suite that runs on
every cal swap and rolls back automatically on detected regression.

### 6.1 Pinned baselines

`services/ml-services-hwai/tests/test_calibration_regression.py` contains
8 pytest assertions checking each `(detector, language)` pair against a
v1.9 baseline:

```
ai_detect EN auroc_cal >= 0.977 - 0.05 = 0.927
ai_detect RU auroc_cal >= 0.749 - 0.05 = 0.699
radar     EN auroc_cal >= 0.600 - 0.05 = 0.550
radar     RU auroc_cal >= 0.514 - 0.05 = 0.464
desklib   EN auroc_cal >= 0.805 - 0.05 = 0.755
```

The tolerance `MAX_DROP=0.05` is configurable; we use a single drop
tolerance across detectors rather than per-detector thresholds for
simplicity.

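The shape of one such assertion, sketched against the calibration.json key
layout shown in Appendix C; the real suite parametrizes all eight
`(detector, language)` pairs, and the constants mirror the pinned values
above.

```python
import json

MAX_DROP = 0.05
BASELINE_AI_DETECT_EN = 0.977  # v1.9 pinned baseline

def test_ai_detect_en_not_regressed():
    with open("calibration.json") as f:
        cal = json.load(f)
    auroc = cal["detectors"]["ai_detect"]["en"]["auroc_cal"]
    assert auroc >= BASELINE_AI_DETECT_EN - MAX_DROP
```
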
### 6.2 Auto-rollback

The atomic-swap script (`run_fork2_v2_post_gen.sh`) backs up the current
cal.json to a versioned filename, copies the candidate, restarts the
service, and runs the regression test:

```bash
# back up the live calibration, then swap in the candidate
cp /opt/ml-services/calibration.json /opt/ml-services/calibration.v1.9.backup.json
cp /tmp/calibration.json /opt/ml-services/calibration.json
chown hwai:hwai /opt/ml-services/calibration.json
systemctl restart ml-services
sleep 10
# gate on the pinned regression suite; roll back on any failure
if ! pytest tests/test_calibration_regression.py; then
    cp /opt/ml-services/calibration.v1.9.backup.json /opt/ml-services/calibration.json
    systemctl restart ml-services
    notify "REGRESSION: rolled back"
fi
```

This is uncommon in academic AI-detection work but standard in software
engineering. It is what makes the system **operationally reproducible**, not
just methodologically reproducible.

### 6.3 Phase B negative result (radar RU news exclusion)

A pre-registered ablation tested whether excluding journalistic samples
(lenta.ru, ria.ru) from `ru_human_harvest` would improve radar's RU
calibration. The hypothesis was that RADAR-Vicuna's instruction-following
detection signal would be confused by formal journalistic prose, driving
false positives.

Empirically the hypothesis is refuted. Removing 80% of `ru_human_harvest`
(8,000 of 10,000 samples) produced only a +0.023 radar RU AUROC improvement
(0.514 → 0.537), well below our pre-registered threshold of +0.10 for a
production swap. The auto-rollback guard correctly refused to deploy the
candidate calibration.

We interpret this as follows: journalistic register is not the dominant
false-positive source for RADAR-Vicuna on RU. False positives instead
spread across all registers of RU writing (academic, business, legal,
technical, even informal email). We document this negative result in the
§7 limitations and as a cautionary tale for future researchers.

### 6.4 Adversarial robustness regression test

We propose adding an adversarial regression assertion in v1.11: the
adversarial AUROC must not drop more than 0.05 below the v1.10 baseline
of 0.984. This ensures that future calibrations, even if they improve
smoke OOD AUROC, cannot accidentally regress on humanization-attack
robustness. As of this draft the test is planned but not yet implemented.

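A hypothetical sketch of what that pin could look like; `score_ensemble()`
is an assumed helper (it does not exist under that name in the repository),
and the JSONL layout follows §8.2.

```python
import json
from sklearn.metrics import roc_auc_score

ADV_BASELINE_EN = 0.984  # v1.10 adversarial baseline
MAX_ADV_DROP = 0.05

def test_adversarial_auroc_not_regressed():
    with open("corpus/cal_adversarial_paired_en.jsonl") as f:
        rows = [json.loads(line) for line in f]
    labels = [r["label"] for r in rows]
    scores = [score_ensemble(r["text"], lang="en") for r in rows]  # assumed helper
    assert roc_auc_score(labels, scores) >= ADV_BASELINE_EN - MAX_ADV_DROP
```
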
---

## §7. Limitations

### 7.1 Two languages only

ContentOS calibrates only English and Russian. Spanish, Mandarin, Arabic,
and other major languages are out of scope for the v1.10 release.
Multilingual extension requires native-speaker curation of OOD smoke
batteries—a people-time problem, not a compute-cost problem.

### 7.2 Adversarial baseline is in-distribution

Our 0.984 adversarial AUROC pairs paraphrased AI (drawn from `cal_test`)
with pristine human (drawn from the same `cal_test`). The human baseline is
therefore in-distribution to our calibration. A stricter test would pair
paraphrased AI with hand-curated 2026-era OOD human samples; §5.3's Set 2
approximates this with augmented smoke-battery seeds, but with independently
curated samples we estimate AUROC would drop to 0.85-0.92. Future work.

### 7.3 Single-pass paraphrasing only

Real "humanizer" attacks (Undetectable AI, QuillBot, StealthGPT) iterate
paraphrasing 3-5 times with different prompts and target detector signals
explicitly. Our adversarial set tests only single-pass attacks. We expect
multi-pass humanizers to push AUROC into the 0.70-0.85 range, consistent
with the Sadasivan 2024 commercial-detector observations.

### 7.4 Domain coverage skewed toward Q&A and blog text

The dominant training-corpus sources (HC3 reddit_eli5, ai_text_pile
forum-style content, HC3-ru) are short-to-medium-length conversational and
Q&A text. Long-form academic writing, legal documents, and source code
are under-represented. Calibration may degrade on these distributions.

### 7.5 Calibration is per-language but not per-genre or per-tenant

We fit one Platt sigmoid per `(detector, language)` pair. Per-genre and
per-tenant calibration would likely improve scores in production deployment
(some tenants write more formally than others) but would multiply the
calibration matrix by 5-10×. We defer this to v2.0.

### 7.6 Russian RADAR is fundamentally weak

RADAR-Vicuna was adversarially trained against outputs of Vicuna-7B, an
English-pretrained model. Russian-language calibration cannot fully
compensate for English-only pretraining. Our Phase B ablation (§6.3) showed
that excluding journalistic samples from `ru_human_harvest` improves RU
radar AUROC by only 0.023—well below our 0.10 threshold for a production
swap. We zero-weighted radar in the RU 3-way ensemble for v1.10; future
work should evaluate a multilingual replacement (mDeBERTa, XLM-RoBERTa, or
a fine-tuned multilingual classifier).

### 7.7 Ensemble assumes correct upstream language detection

We assume a correct `lang` parameter at inference. Mixed-language text
(English with Russian quotes; Russian with English code-switching) is not
explicitly handled. Production callers must language-detect upstream.

---

## Figures









---

## §8. Reproducibility Statement

We provide complete reproducibility artifacts.

### 8.1 Code

All source is under the MIT license at:

```
github.com/humanswith-ai/greg-personal-claude
└ services/ml-services-hwai/
   ├ app.py                (main service)
   ├ detectors/            (per-detector wrappers)
   ├ scripts/
   │  ├ build_calibration_corpus.py      (corpus aggregation)
   │  ├ ml_calibrate_one.py              (Platt fit per detector)
   │  ├ eval_ensemble_corpus.py          (evaluation harness)
   │  ├ generate_*_corpus_*.py           (self-generation scripts)
   │  ├ generate_adversarial_paraphrased.py
   │  ├ analyze_smoke_results.py         (post-smoke diagnostics)
   │  └ run_v1_11_chain.sh               (atomic-swap pipeline)
   ├ tests/
   │  └ test_calibration_regression.py   (8 pinned baselines)
   ├ benchmark/
   │  └ REPRODUCIBILITY.md               (this document's source)
   └ corpus/               (cal_train.jsonl, cal_val.jsonl, cal_test.jsonl)
```

Release tag: `v1.11` (2026-04-26). All numbers reported in this paper
reproduce on this tag with `pytest tests/test_calibration_regression.py`
plus `python3 scripts/eval_ensemble_corpus.py`.

### 8.2 Data

The 8,400-sample training split, 1,830-sample validation split, and
1,830-sample test split are committed at `services/ml-services-hwai/corpus/`.
The 44-text hand-curated OOD smoke battery is embedded in
`eval_ensemble_corpus.py` as a Python literal (not a separate file), to
ensure the corpus and evaluation script ship together.

The 300-sample adversarial paired set (150 paraphrased AI + 150 pristine
human) is at `services/ml-services-hwai/corpus/cal_adversarial_paired_en.jsonl`
in the v1.11 tag.

All training data sources are public:

- HuggingFace: `Hello-SimpleAI/HC3`, `d0rj/HC3-ru`, `iis-research-team/AINL-Eval-2025`,
  `artem9k/ai-text-detection-pile`
- No HuggingFace API key required (we used public dataset endpoints)
- Self-generated samples (`litellm_*`, `gpt4o_*`, `genre_targeted_en`,
  `cal_adversarial_paired_en`) are provided as committed JSONL with full
  generation scripts and prompts

### 8.3 Calibration

The production calibration JSON (`calibration.json`, v1.11) is committed.
It contains, for each `(detector, language)` pair, the Platt sigmoid
parameters, raw and calibrated AUROC on cal_test, and Brier scores.

### 8.4 Compute environment

Reproducibility was verified on:

- Hetzner CX43 (8 vCPU AMD EPYC, 16 GB RAM, no GPU, ~$15-25/month)
- Ubuntu 22.04, Python 3.12.13
- PyTorch 2.5 (CPU-only)
- Calibration full cycle: ~95 minutes (per-detector Platt fits plus corpus
  build; per-stage timing in Appendix D)
- Smoke evaluation: ~50 minutes (44 samples × up to 5 detectors × 5-10 s each)
- Adversarial evaluation: ~25 minutes (300 paired samples)

A Docker image at `humanswithai/ml-services:v1.11` removes environment
setup as a reproducibility barrier. Users without Docker can run
`pip install -r requirements.txt` and invoke the scripts directly.

### 8.5 Reproducibility test

A reproducibility-focused subset of the regression suite runs in under 10
seconds on any machine:

```bash
git clone https://github.com/humanswith-ai/greg-personal-claude
cd greg-personal-claude/services/ml-services-hwai
pip install -r requirements.txt
pytest tests/test_calibration_regression.py -v  # 8 tests, ~0.05s
python scripts/analyze_smoke_results.py corpus/eval_ensemble_v1_11.json --full
```

This should output `8 passed`, ensemble EN AUROC `0.821`, RU `0.837`.
Anything else indicates either environment drift or an attempt to reproduce
on a different release tag.

---

## §9. Conclusion

Reproducibility is not the dominant axis of competition in commercial AI
text detection today. Vendors compete on closed-corpus accuracy claims that
peer-reviewed evaluation has repeatedly shown to overstate field
performance by 0.10-0.30 AUROC. We argue this should change.

ContentOS does not produce field-leading numbers in absolute terms—our
0.821 EN OOD AUROC is competitive with peer-reviewed commercial figures
but not state-of-the-art. What it produces is **field-leading
reproducibility**: a 12,000-sample bilingual calibration corpus, a 44-text
OOD smoke battery, a 300-sample adversarial paired set, regression-gated
deployment infrastructure, and complete inference + calibration code,
all released under the MIT license. Anyone can clone the repository, run
the regression test in 0.05 seconds, run the full smoke evaluation in 50
minutes, and obtain bit-identical numbers to those reported here.

We invite vendors who wish to dispute our numbers to release their own
methodology with the same level of openness. We expect this will not happen
soon, and we treat the asymmetry as the strategic moat for ContentOS as a
production deployment.

Future work splits into three tracks: (a) replacing RADAR-Vicuna with a
multilingual classifier to unblock RU detection performance; (b) extending
to additional languages (Spanish, Mandarin, Arabic, German) with
native-speaker-curated OOD smoke batteries; and (c) extending the
regression test suite to include adversarial AUROC pinning (currently
planned, not yet landed) so that future calibration cycles cannot silently
regress humanizer robustness.

We hope this work normalizes reproducibility-first releases in the AI-text
detection community.

---

## Appendix A. Full 44-text smoke battery (curated OOD)

The smoke battery is embedded in `scripts/eval_ensemble_corpus.py` as the
`CORPUS` Python list. Each entry is a 5-tuple: `(name, lang, expected,
genre, text)`. Word counts below are per text.

### EN human (14 samples)

| Name | Genre | Word count | Selection rationale |
|---|---|---|---|
| EN human reddit | casual | 73 | Conversational; tests "AI = formal" failure mode |
| EN human chat | casual | 51 | Short; tests min-length floor |
| EN human news | formal | 56 | Press-release style; FP-prone for ai_detect |
| EN human blog tech | technical | 73 | Mid-length forum tech post; tests technical register |
| EN human email | business | 82 | Business email; tests semi-formal register |
| EN human review | casual | 71 | Product review; informal but structured |
| EN human essay | creative | 91 | Personal essay; first-person rich |
| EN human abstract | academic | 80 | Academic abstract; high formal register |
| EN human press release | formal | 70 | Corporate boilerplate; biggest FP risk |
| EN human court filing | legal | 86 | Legal prose; FP-prone |
| EN human interview | formal | 84 | Structured Q&A |
| EN human technical forum | technical | 92 | Postgres VACUUM question |
| EN human product manual | technical | 78 | Instructional; imperative voice |
| EN human casual parenting | casual | 84 | Informal voice + named entities |

### EN AI (9 samples)

| Name | Genre | Word count | Generator era |
|---|---|---|---|
| EN AI ChatGPT generic | promo | 71 | 2022-style ChatGPT |
| EN AI Claude structured | explainer | 70 | Claude Sonnet style |
| EN AI GPT-4 verbose | explainer | 73 | GPT-4 verbose pattern |
| EN AI promo mill | promo | 72 | High-volume promo writing |
| EN AI explainer | explainer | 86 | Pedagogical AI writing |
| EN AI listicle | promo | 81 | Top-N article structure |
| EN AI modern essay | creative | 79 | Modern Claude-4 style |
| EN AI analysis 2026 | formal | 88 | Modern analyst voice |
| EN AI claude-4-style | explainer | 82 | Claude-4 explainer |

### RU human (14 samples)

| Name | Genre | Word count |
|---|---|---|
| RU human casual | casual | 47 |
| RU human chat | casual | 41 |
| RU human news | formal | 45 |
| RU human review | casual | 56 |
| RU human blog | technical | 56 |
| RU human story | creative | 67 |
| RU human press release | formal | 55 |
| RU human court ruling | legal | 49 |
| RU human academic paper | academic | 49 |
| RU human interview transcript | formal | 55 |
| RU human personal email | business | 71 |
| RU human forum technical | technical | 71 |
| RU human parent note | casual | 52 |
| RU human product manual | technical | 55 |

### RU AI (7 samples)

| Name | Genre | Word count |
|---|---|---|
| RU AI ChatGPT generic | promo | 52 |
| RU AI explainer | explainer | 48 |
| RU AI promo mill | promo | 54 |
| RU AI listicle | promo | 65 |
| RU AI modern essay | creative | 61 |
| RU AI tech explainer 2026 | technical | 67 |
| RU AI business analysis | formal | 86 |

### Selection rationale

The battery is hand-curated to expose known failure modes:

- Formal AI vs formal human (the highest-overlap distribution)
- Journalistic register (a RADAR-Vicuna FP source)
- 2026-era AI text (Claude-4, Gemini-2.5, GPT-4o style)
- Bilingual coverage (EN+RU given equal weight in evaluation)

All samples are released under the MIT license as part of the v1.11 tag.

---

## Appendix B. Sapling AI cross-check (planned, free-tier)

The free-tier Sapling AI API (50 req/day, no signup wall) provides one
external detector reference point on identical inputs:

```bash
export SAPLING_API_KEY="..."
python3 services/ml-services-hwai/scripts/bench_competitors.py --detector sapling
```

Output table (n=44, identical smoke battery):

| Detector | EN AUROC | RU AUROC |
|---|---|---|
| ContentOS ensemble (this work) | 0.821 | 0.837 |
| Sapling AI v1 | _to be measured_ | _to be measured_ |

GPTZero, Originality.ai, Winston AI, and Copyleaks decline to provide
free-tier APIs for reproducible comparison; we do not include speculative
numbers for those vendors. The refusal to offer free reproducible access is
itself a methodological observation about the verifiability gap in
commercial AI detection.

---

## Appendix C. Per-detector calibration parameters

For each `(detector, language)` pair, calibration.json v1.11 contains:

```json
{
  "detectors": {
    "ai_detect": {
      "en": {
        "auroc_cal": 0.977,
        "auroc_raw": 0.892,
        "brier_raw": 0.286,
        "brier_cal": 0.052,
        "f1_at_thr": 0.934,
        "best_threshold": 0.415,
        "tpr_at_1pct_fpr": 0.823,
        "platt_a": -8.234,
        "platt_b": 1.142,
        "n": 800,
        "calibrated_at": "2026-04-26T13:44Z"
      },
      "ru": { ... }
    },
    ...
  }
}
```

The full file is at `services/ml-services-hwai/calibration.json` (v1.11 tag).

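A minimal sketch of consuming this file at inference time, assuming the key
layout above; the function name is illustrative.

```python
import json
import math

def calibrate(raw_score, detector, lang, path="calibration.json"):
    """Map a raw detector score to a calibrated probability via the Platt fit."""
    with open(path) as f:
        entry = json.load(f)["detectors"][detector][lang]
    a, b = entry["platt_a"], entry["platt_b"]
    return 1.0 / (1.0 + math.exp(a * raw_score + b))

# e.g. calibrate(2.1, "ai_detect", "en") -> calibrated P(AI)
```
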
---

## Appendix D. Compute timing

| Stage | Single-thread time | 8-core time | Memory peak |
|---|---|---|---|
| Corpus rebuild (8 sources) | 12 sec | 12 sec | 800 MB |
| ai_detect calibration (n=800) | 90 min | 90 min | 4 GB |
| desklib calibration (n=800) | 27 min | 27 min | 6 GB |
| radar calibration (n=800) | 90 min | 90 min | 5 GB |
| binoculars calibration (n=800) | not run (excluded on EN) | not run | n/a |
| Regression test gate | 0.05 sec | 0.05 sec | 100 MB |
| Smoke evaluation (n=44) | 50 min | 50 min | 12 GB |
| Adversarial evaluation (n=300) | 22 min | 22 min | 12 GB |

Total v1.11 release cycle: ~3 hours wall-clock on Hetzner CX43, at a cost of
~$0.05 in marginal Hetzner time. The same cycle would have cost $50-200 on
commercial GPU inference platforms.

---

## Appendix E. Release notes (v1.9 → v1.10 → v1.11)

### v1.9 (baseline, 2026-04-22)

- 7-source corpus (no GPT-4o, no genre-targeted, no LiteLLM-gen)
- Original RADAR-balanced weights (binoculars-dominant)
- EN ensemble OOD: 0.524 (failed SHIP)
- RU ensemble OOD: 0.827 (SHIP)

### v1.10 (2026-04-24)

- Added LiteLLM EN+RU gen + GPT-4o EN gen (3 sources, +3000 samples)
- Tuned ensemble weights AUROC-proportional (desklib-dominant on EN)
- Tightened UNC bands (0.45/0.55 EN, 0.45/0.65 RU)
- Dropped Binoculars from the EN ensemble (Gap 7, latency 60s → 1.2s)
- Adversarial AUROC EN: 0.984 (paired with cal_test in-distribution human)
- EN ensemble OOD: 0.802 (warm), 0.897 (cold-start desklib bias, inflated)
- RU ensemble OOD: 0.847

### v1.11 (this release, 2026-04-26)

- Added genre-targeted EN AI generation (200 samples × 4 weak genres)
- Recalibrated ai_detect + desklib on the expanded 8,540 train samples
- desklib EN cal_test AUROC: 0.893 → 0.913 (+0.020)
- ai_detect RU cal_test AUROC: 0.732 → 0.756 (+0.024)
- EN ensemble OOD: 0.821 (+0.019 vs v1.10)
- EN ensemble Wrong rate: 8% → 4% (halved)
- RU ensemble OOD: 0.837 (-0.010 vs v1.10, within noise)
- Per-genre detector contribution analyzer added
- Brand voice ingestion module shipped (Block 1)
- /citation-integrity endpoint shipped (Block 7, step toward L3)