| <!DOCTYPE html> |
| <html xmlns="http://www.w3.org/1999/xhtml"> |
| <head> |
| <meta charset="utf-8" /> |
| <meta name="generator" content="pandoc" /> |
| <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" /> |
| <title>ContentOS Preprint v1.0.2</title> |
| <style> |
| |
| |
| |
| html { |
| color: #1a1a1a; |
| background-color: #fdfdfd; |
| } |
| body { |
| margin: 0 auto; |
| max-width: 36em; |
| padding-left: 50px; |
| padding-right: 50px; |
| padding-top: 50px; |
| padding-bottom: 50px; |
| hyphens: auto; |
| overflow-wrap: break-word; |
| text-rendering: optimizeLegibility; |
| font-kerning: normal; |
| } |
| @media (max-width: 600px) { |
| body { |
| font-size: 0.9em; |
| padding: 12px; |
| } |
| h1 { |
| font-size: 1.8em; |
| } |
| } |
| @media print { |
| html { |
| background-color: white; |
| } |
| body { |
| background-color: transparent; |
| color: black; |
| font-size: 12pt; |
| } |
| p, h2, h3 { |
| orphans: 3; |
| widows: 3; |
| } |
| h2, h3, h4 { |
| page-break-after: avoid; |
| } |
| } |
| p { |
| margin: 1em 0; |
| } |
| a { |
| color: #1a1a1a; |
| } |
| a:visited { |
| color: #1a1a1a; |
| } |
| img { |
| max-width: 100%; |
| } |
| svg { |
| height: auto; |
| max-width: 100%; |
| } |
| h1, h2, h3, h4, h5, h6 { |
| margin-top: 1.4em; |
| } |
| h5, h6 { |
| font-size: 1em; |
| font-style: italic; |
| } |
| h6 { |
| font-weight: normal; |
| } |
| ol, ul { |
| padding-left: 1.7em; |
| margin-top: 1em; |
| } |
| li > ol, li > ul { |
| margin-top: 0; |
| } |
| blockquote { |
| margin: 1em 0 1em 1.7em; |
| padding-left: 1em; |
| border-left: 2px solid #e6e6e6; |
| color: #606060; |
| } |
| code { |
| white-space: pre-wrap; |
| font-family: Menlo, Monaco, Consolas, 'Lucida Console', monospace; |
| font-size: 85%; |
| margin: 0; |
| hyphens: manual; |
| } |
| pre { |
| margin: 1em 0; |
| overflow: auto; |
| } |
| pre code { |
| padding: 0; |
| overflow: visible; |
| overflow-wrap: normal; |
| } |
| .sourceCode { |
| background-color: transparent; |
| overflow: visible; |
| } |
| hr { |
| border: none; |
| border-top: 1px solid #1a1a1a; |
| height: 1px; |
| margin: 1em 0; |
| } |
| table { |
| margin: 1em 0; |
| border-collapse: collapse; |
| width: 100%; |
| overflow-x: auto; |
| display: block; |
| font-variant-numeric: lining-nums tabular-nums; |
| } |
| table caption { |
| margin-bottom: 0.75em; |
| } |
| tbody { |
| margin-top: 0.5em; |
| border-top: 1px solid #1a1a1a; |
| border-bottom: 1px solid #1a1a1a; |
| } |
| th { |
| border-top: 1px solid #1a1a1a; |
| padding: 0.25em 0.5em 0.25em 0.5em; |
| } |
| td { |
| padding: 0.125em 0.5em 0.25em 0.5em; |
| } |
| header { |
| margin-bottom: 4em; |
| text-align: center; |
| } |
| #TOC li { |
| list-style: none; |
| } |
| #TOC ul { |
| padding-left: 1.3em; |
| } |
| #TOC > ul { |
| padding-left: 0; |
| } |
| #TOC a:not(:hover) { |
| text-decoration: none; |
| } |
| span.smallcaps{font-variant: small-caps;} |
| div.columns{display: flex; gap: min(4vw, 1.5em);} |
| div.column{flex: auto; overflow-x: auto;} |
| div.hanging-indent{margin-left: 1.5em; text-indent: -1.5em;} |
| |
| |
| ul.task-list[class]{list-style: none;} |
| ul.task-list li input[type="checkbox"] { |
| font-size: inherit; |
| width: 0.8em; |
| margin: 0 0.8em 0.2em -1.6em; |
| vertical-align: middle; |
| } |
| .display.math{display: block; text-align: center; margin: 0.5rem auto;} |
| |
| html { -webkit-text-size-adjust: 100%; } |
| pre > code.sourceCode { white-space: pre; position: relative; } |
| pre > code.sourceCode > span { display: inline-block; line-height: 1.25; } |
| pre > code.sourceCode > span:empty { height: 1.2em; } |
| .sourceCode { overflow: visible; } |
| code.sourceCode > span { color: inherit; text-decoration: inherit; } |
| div.sourceCode { margin: 1em 0; } |
| pre.sourceCode { margin: 0; } |
| @media screen { |
| div.sourceCode { overflow: auto; } |
| } |
| @media print { |
| pre > code.sourceCode { white-space: pre-wrap; } |
| pre > code.sourceCode > span { text-indent: -5em; padding-left: 5em; } |
| } |
| pre.numberSource code |
| { counter-reset: source-line 0; } |
| pre.numberSource code > span |
| { position: relative; left: -4em; counter-increment: source-line; } |
| pre.numberSource code > span > a:first-child::before |
| { content: counter(source-line); |
| position: relative; left: -1em; text-align: right; vertical-align: baseline; |
| border: none; display: inline-block; |
| -webkit-touch-callout: none; -webkit-user-select: none; |
| -khtml-user-select: none; -moz-user-select: none; |
| -ms-user-select: none; user-select: none; |
| padding: 0 4px; width: 4em; |
| color: #aaaaaa; |
| } |
| pre.numberSource { margin-left: 3em; border-left: 1px solid #aaaaaa; padding-left: 4px; } |
| div.sourceCode |
| { } |
| @media screen { |
| pre > code.sourceCode > span > a:first-child::before { text-decoration: underline; } |
| } |
| code span.al { color: #ff0000; font-weight: bold; } |
| code span.an { color: #60a0b0; font-weight: bold; font-style: italic; } |
| code span.at { color: #7d9029; } |
| code span.bn { color: #40a070; } |
| code span.bu { color: #008000; } |
| code span.cf { color: #007020; font-weight: bold; } |
| code span.ch { color: #4070a0; } |
| code span.cn { color: #880000; } |
| code span.co { color: #60a0b0; font-style: italic; } |
| code span.cv { color: #60a0b0; font-weight: bold; font-style: italic; } |
| code span.do { color: #ba2121; font-style: italic; } |
| code span.dt { color: #902000; } |
| code span.dv { color: #40a070; } |
| code span.er { color: #ff0000; font-weight: bold; } |
| code span.ex { } |
| code span.fl { color: #40a070; } |
| code span.fu { color: #06287e; } |
| code span.im { color: #008000; font-weight: bold; } |
| code span.in { color: #60a0b0; font-weight: bold; font-style: italic; } |
| code span.kw { color: #007020; font-weight: bold; } |
| code span.op { color: #666666; } |
| code span.ot { color: #007020; } |
| code span.pp { color: #bc7a00; } |
| code span.sc { color: #4070a0; } |
| code span.ss { color: #bb6688; } |
| code span.st { color: #4070a0; } |
| code span.va { color: #19177c; } |
| code span.vs { color: #4070a0; } |
| code span.wa { color: #60a0b0; font-weight: bold; font-style: italic; } |
| </style> |
| </head> |
| <body> |
| <header id="title-block-header"> |
| <h1 class="title">ContentOS Preprint v1.0.2</h1> |
| </header> |
| <h1 |
| id="contentos-a-reproducible-bilingual-ai-text-detection-ensemble-with-adversarial-robustness-evaluation">ContentOS: |
| A Reproducible Bilingual AI-Text-Detection Ensemble with Adversarial |
| Robustness Evaluation</h1> |
| <blockquote> |
<p>ContentOS team, Humanswith.ai, 2026-04-27. Pre-print version v1.0.2.
| Source: <code>services/ml-services-hwai/benchmark/paper.md</code> |
| (auto-merged from three companion drafts; see |
| <code>merge_paper.py</code>).</p> |
| </blockquote> |
| <h2 id="abstract">Abstract</h2> |
| <p>Commercial AI-text-detection vendors publish accuracy claims of 99%+ |
| on proprietary corpora that remain inaccessible to external auditors. |
| Independent peer-reviewed evaluations have repeatedly shown these claims |
| drop to 0.70-0.88 AUROC on out-of-distribution and modern-era text. We |
| present <strong>ContentOS</strong>, a reproducible ensemble of four AI |
| detectors (Fast-DetectGPT, RADAR-Vicuna, Binoculars, Desklib-fine-tuned |
| DeBERTa-v3-large) calibrated on a 12,000-sample bilingual (English + |
| Russian) corpus drawn from seven public datasets covering 2022-2026 era |
| AI generators (GPT-4o, Gemini 2.5, Groq Llama, Cerebras Llama).</p> |
| <p>We release the full calibration corpus, evaluation harness, |
| regression test suite, and a 300-sample held-out adversarial corpus |
| produced via cross-model single-pass paraphrasing.</p> |
| <p><strong>Headline numbers — v1.11 ensemble on 176-sample expanded |
| smoke battery (2026-04-29 measurement):</strong> AUROC <strong>0.864 |
| (English)</strong> and <strong>0.846 (Russian)</strong>, with English |
| Wrong-rate of 4% and median latency of 1.2 seconds on commodity 8-vCPU |
| hardware. Earlier 44-text hand-curated smoke (v1.0 paper measurement) |
| reported 0.821 EN / 0.837 RU; the 4× expanded battery with proper class |
| balance per (lang, genre) cell stabilized the numbers upward.</p> |
| <p>On the 300-sample adversarial paired set (cross-model paraphrasing |
| attack, OOD-augmented baseline), ensemble AUROC reaches |
| <strong>0.998</strong> (re-measured 2026-04-29 with current |
| calibration). Earlier v1.0 paper measurement was 0.985 — the slight |
| increase reflects the intervening calibration tuning between Gap-7 and |
| current state.</p> |
| <p>The contribution of this work is <strong>field-leading |
| reproducibility</strong>, not state-of-the-art absolute AUROC. Anyone |
| can clone the repository, run the regression test in 0.05 seconds, and |
| reproduce all reported numbers in 90 minutes on a $25/month Hetzner |
| instance. We argue that reproducibility should be the dominant axis of |
| competition in commercial AI-text detection, and treat the openness of |
| our methodology as the strategic moat for production deployment.</p> |
| <p><strong>Keywords:</strong> AI-text detection, ensemble calibration, |
| reproducibility, adversarial robustness, multilingual NLP, regression |
| testing, OOD evaluation.</p> |
| <hr /> |
| <h2 id="introduction">§1. Introduction</h2> |
| <p>The verifiability problem. Commercial AI-text detection vendors |
| publish accuracy claims of 99%+ on proprietary corpora that remain |
| inaccessible to external auditors. Independent peer-reviewed evaluations |
| (Pu 2024, Tulchinskii 2023, Chakraborty 2025, Sadasivan 2024) repeatedly |
| demonstrate that these claims drop to 0.70-0.88 AUROC on |
| out-of-distribution (OOD) text and fall further—often below 0.65—under |
| paraphrase attack. The credibility gap between marketing claims and |
| peer-reviewed evidence is now wide enough that we believe the dominant |
| axis of competition in this field should shift from “who claims the |
| highest AUROC” to “whose methodology survives independent |
| reproduction”.</p> |
| <p>We present <strong>ContentOS</strong>, an open ensemble of four |
| published AI-text detectors—Fast-DetectGPT (Bao 2024), RADAR-Vicuna (Hu |
| 2023), Binoculars (Hans 2024), and a Desklib-fine-tuned |
DeBERTa-v3-large—calibrated together with a seven-feature text-level
| structural head. We release:</p> |
| <ol type="1"> |
| <li>The full 12,000-sample bilingual (English + Russian) calibration |
| corpus, drawn from seven public datasets covering 2022-2026 era AI |
| generators (HC3, AINL-Eval-2025, ai-text-detection-pile, our own LiteLLM |
| and GPT-4o self-generation, and pre-LLM-era Russian journalism).</li> |
| <li>The full evaluation harness, including a 44-text hand-curated |
| out-of-distribution smoke battery selected for known failure modes |
| (formal AI, journalistic human, paraphrased AI).</li> |
| <li>A 300-sample held-out adversarial corpus produced via cross-model |
| paraphrasing (gemini-2.5-flash, groq-llama-3.3-70b, |
| cerebras-llama-3.1-8b, gpt-4o-mini), enabling reproducible adversarial |
| AUROC measurement.</li> |
| <li>The complete calibration JSON file, regression test suite with |
| pinned per-detector baselines, and atomic-swap deployment scripts.</li> |
| <li>All training, evaluation, and threshold-tuning scripts.</li> |
| </ol> |
| <p>Our headline numbers, reproducible end-to-end on Hetzner CX43-class |
| hardware ($25/month) within 90 minutes:</p> |
| <ul> |
| <li><strong>English ensemble OOD AUROC: 0.864</strong> (176-sample |
| expanded smoke, 2026-04-29)</li> |
| <li><strong>Russian ensemble OOD AUROC: 0.846</strong> (176-sample |
| expanded smoke, 2026-04-29)</li> |
| <li><strong>English ensemble adversarial AUROC: 0.998</strong> on |
| 300-sample paraphrase-paired OOD-augmented set (re-measured |
| 2026-04-29)</li> |
| <li><strong>English ensemble p50 latency: 1.2 seconds</strong> (8-core |
| CPU, no GPU)</li> |
| </ul> |
| <p>Earlier v1.0 paper reported 0.802/0.847 on the original 44-text |
| smoke; the expanded 176-sample battery with class balance per (lang, |
| genre) cell revealed that several “weak slots” at small n_h were |
| sample-size noise, and stabilized values upward.</p> |
| <p>The first three numbers are competitive with the best peer-reviewed |
| commercial figures while remaining honestly reported on OOD and |
| adversarial evaluations. The fourth—latency—was achieved by removing |
| Binoculars from the English call path after observing that its |
| calibrated AUROC dropped to 0.478 on our smoke battery while inflating |
| per-request wall time to 60-120 seconds.</p> |
| <p>We argue that reproducibility is the defensible competitive moat in |
| AI detection. Vendors whose accuracy claims cannot be independently |
| reproduced on a fixed corpus should be treated with the same skepticism |
| as a peer-reviewed paper that withholds its data.</p> |
| <hr /> |
| <h2 id="related-work">§2. Related Work</h2> |
| <p><strong>Detection methods.</strong> Modern AI-text detection breaks |
| roughly into three families: (1) zero-shot statistical methods that |
| compute curvature (DetectGPT, Mitchell 2023; Fast-DetectGPT, Bao 2024) |
| or perplexity ratios between two language models (Binoculars, Hans 2024; |
| GLTR, Gehrmann 2019); (2) supervised classifiers fine-tuned on |
| AI-generated text (DeBERTa-v3-based classifiers, Desklib v1.01; |
| Hello-Detect, OpenAI 2023, deprecated); and (3) adversarially-trained |
| discriminators (RADAR, Hu 2023). We adopt one representative from each |
| family plus a structural head and combine via weighted Platt-calibrated |
| ensemble.</p> |
<p><strong>Ensemble approaches.</strong> Spitale et al. (2024)
demonstrated that detector ensembles outperform individual methods on
cross-domain test sets, with per-detector weight tuning mattering more
than raw detector selection. Our work confirms this: rebalancing
production weights from “binoculars-dominant” (0.50) to
“desklib-dominant” (0.45, with desklib at 0.821 AUROC) yielded a +0.111
OOD AUROC improvement with no other change.</p>
| <p><strong>Existing benchmarks.</strong> The most comparable open |
| benchmarks are RAID (Dugan 2024, 6.3M samples), MAGE (Li 2024, 154k |
| samples) and MGTBench (Chen 2024). These are larger than ours but focus |
| on detection accuracy rather than full-pipeline reproducibility. None |
| publishes a calibrated production ensemble alongside its corpus, the |
| regression test infrastructure to keep calibration honest, or an |
| adversarial pair-set for documenting humanizer robustness. We position |
| ContentOS as smaller-scale but more deployment-ready.</p> |
| <p><strong>Adversarial evaluations.</strong> Sadasivan et al. (2024) |
| showed that recursive paraphrasing reduces commercial AI detector AUROC |
| from 0.99 to 0.50-0.70. Krishna et al. (2023) introduced DIPPER, a |
| paraphrase model explicitly designed to evade detection. Our adversarial |
| set uses single-pass cross-model paraphrasing—a milder attack than |
| DIPPER—so our 0.984 EN AUROC is best read as “robust against single-pass |
| humanization”, not “robust against trained adversaries”.</p> |
| <p><strong>Russian-language detection.</strong> Russian AI-text |
| detection has been under-studied. The AINL-Eval-2025 shared task |
(released in 2025) is the first reproducible Russian benchmark with
| multiple AI generators (GPT-4, Gemma, Llama-3). We incorporate it as |
| 1,381 training samples. Our Russian ensemble OOD AUROC of 0.847—compared |
| to the AINL-Eval-2025 best-team in-distribution AUROC of approximately |
| 0.92—suggests that production deployment requires deliberate OOD |
| calibration; in-distribution numbers overestimate field performance by |
| 0.07-0.10 AUROC.</p> |
| <hr /> |
| <h2 id="calibration-corpus">§3. Calibration Corpus</h2> |
<p>We build a 12,000-sample multi-source bilingual corpus drawn from
seven public datasets covering English and Russian. Sources span four AI
generator families (GPT-3.5/ChatGPT, GPT-4o, Gemini 2.5, Llama 3.x) and
three eras (2022, 2024, 2026), with explicit human baselines drawn from
non-LLM-era sources where possible.</p>
| <h3 id="sources">3.1 Sources</h3> |
| <table> |
| <colgroup> |
| <col style="width: 20%" /> |
| <col style="width: 20%" /> |
| <col style="width: 20%" /> |
| <col style="width: 20%" /> |
| <col style="width: 20%" /> |
| </colgroup> |
| <thead> |
| <tr> |
| <th>Source</th> |
| <th>Lang</th> |
| <th>n (train)</th> |
| <th>Era</th> |
| <th>Schema</th> |
| </tr> |
| </thead> |
| <tbody> |
| <tr> |
| <td>Hello-SimpleAI/HC3 (<code>all.jsonl</code>)</td> |
| <td>EN</td> |
| <td>1,411</td> |
| <td>2022-23</td> |
| <td>ChatGPT vs human Q&A across 5 domains (reddit_eli5, finance, |
| medicine, open_qa, wiki_csai)</td> |
| </tr> |
| <tr> |
| <td>d0rj/HC3-ru</td> |
| <td>RU</td> |
| <td>1,412</td> |
| <td>2022-23</td> |
| <td>RU translation of HC3 with regenerated AI side</td> |
| </tr> |
| <tr> |
| <td>iis-research-team/AINL-Eval-2025</td> |
| <td>RU</td> |
| <td>1,381</td> |
| <td>2024-25</td> |
| <td>Multi-model RU detection task; AI side covers GPT-4, Gemma, Llama |
| 3</td> |
| </tr> |
| <tr> |
| <td>artem9k/ai-text-detection-pile (shards 0+6)</td> |
| <td>EN</td> |
| <td>1,389</td> |
| <td>2022-23</td> |
| <td>shard 0 = 100% human, shard 6 = 100% AI; 2×198k raw rows</td> |
| </tr> |
| <tr> |
| <td><code>ru_human_harvest</code></td> |
| <td>RU</td> |
| <td>696</td> |
| <td>2010-22</td> |
| <td>Pre-LLM journalism (lenta.ru, ria.ru) + curation-corpus + editorial |
| RU</td> |
| </tr> |
| <tr> |
| <td>LiteLLM EN gen</td> |
| <td>EN</td> |
| <td>695</td> |
| <td>2026</td> |
| <td>Internal generation: gemini-2.5-flash + groq-llama 3.3 70B at temp |
| 0.7-0.9</td> |
| </tr> |
| <tr> |
| <td>LiteLLM RU gen</td> |
| <td>RU</td> |
| <td>711</td> |
| <td>2026</td> |
| <td>Same setup, RU prompts</td> |
| </tr> |
| <tr> |
| <td>OpenAI GPT-4o EN gen</td> |
| <td>EN</td> |
| <td>726</td> |
| <td>2026</td> |
| <td>Direct OpenAI API; HC3-en seeds; temp 0.85</td> |
| </tr> |
| <tr> |
| <td><strong>Total train split</strong></td> |
| <td>—</td> |
| <td><strong>8,400</strong></td> |
| <td>—</td> |
| <td>—</td> |
| </tr> |
| </tbody> |
| </table> |
| <p>Validation and test splits are stratified 70/15/15 by |
| <code>(lang, label)</code>.</p> |
| <h3 id="stratification">3.2 Stratification</h3> |
<p>Stratification preserves both label composition (EN 1400/2800
human/AI in train before the v1.10 correction described in §3.4, RU
2100/2100) and per-source representation. A per-bucket cap of 1,000
prevents any single source from dominating; the cap is applied after
random shuffling within each <code>(source, lang, label)</code>
bucket.</p>
| <p>The stratification step writes split-level histograms to confirm |
| shape:</p> |
| <pre><code>train: |
| ('en', 0): 1400 ('en', 1): 2800 |
| ('ru', 0): 2100 ('ru', 1): 2100 |
| sources: {hc3_en: 1411, hc3_ru: 1412, ainl_eval_2025: 1381, |
| ai_text_pile: 1389, ru_human_harvest: 696, |
| litellm_en_gen: 674, litellm_ru_gen: 711, gpt4o_en_gen: 726}</code></pre> |
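<p>A minimal sketch of the split-and-cap logic behind these histograms,
assuming each corpus row carries <code>source</code>, <code>lang</code>,
and <code>label</code> fields (illustrative, not the actual
<code>build_calibration_corpus.py</code> internals):</p>
<pre><code>import random

BUCKET_CAP = 1000  # max samples per (source, lang, label) bucket

def split_corpus(rows, seed=42):
    """Stratified 70/15/15 split with a per-bucket cap applied
    after shuffling within each (source, lang, label) bucket."""
    rng = random.Random(seed)
    buckets = {}
    for row in rows:
        key = (row["source"], row["lang"], row["label"])
        buckets.setdefault(key, []).append(row)

    train, val, test = [], [], []
    for bucket in buckets.values():
        rng.shuffle(bucket)          # shuffle before applying the cap
        bucket = bucket[:BUCKET_CAP]
        n_train = int(len(bucket) * 0.70)
        n_val = int(len(bucket) * 0.15)
        train += bucket[:n_train]
        val += bucket[n_train:n_train + n_val]
        test += bucket[n_train + n_val:]
    return train, val, test</code></pre>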
| <h3 id="quality-controls">3.3 Quality controls</h3> |
| <ul> |
<li><strong>Length filter:</strong> 200 ≤ len(text) ≤ 8,000 characters;
texts outside are dropped at load time (see the sketch after this
list).</li>
| <li><strong>Per-bucket cap:</strong> 1,000 samples per |
| <code>(source, lang, label)</code> triple.</li> |
| <li><strong>Deduplication:</strong> within-source duplicates removed via |
| exact-match hash. Cross-source near-duplicates (e.g. HC3 RU translations |
| of HC3 EN) intentionally retained for cross-language coverage.</li> |
| <li><strong>Domain diversity:</strong> every source contributes ≥ 5 |
| unique domain tags; per-source domain distribution recorded in corpus |
| build log.</li> |
| </ul> |
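<p>A minimal sketch of the length filter and deduplication controls,
reusing the illustrative row schema from §3.2:</p>
<pre><code>import hashlib

MIN_LEN, MAX_LEN = 200, 8000  # character bounds from the list above

def quality_filter(rows):
    """Drop out-of-length texts and within-source exact duplicates."""
    seen = set()
    kept = []
    for row in rows:
        text = row["text"]
        if not (MIN_LEN &lt;= len(text) &lt;= MAX_LEN):
            continue  # length filter, applied at load time
        key = (row["source"], hashlib.sha256(text.encode()).hexdigest())
        if key in seen:
            continue  # within-source exact-match duplicate
        seen.add(key)
        kept.append(row)
    return kept</code></pre>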
| <h3 id="en-imbalance-correction-v1.10-patch">3.4 EN imbalance correction |
| (v1.10 patch)</h3> |
| <p>Initial v1.9 corpus had a 60/40 AI-skew on EN side because the HC3 |
| loader took only the first <code>human_answers</code> element per row, |
| which often fell below the 200-char minimum. v1.10 increases this to up |
| to 3 human answers per row, recovering ~700 additional human EN samples. |
| The corpus build script now produces 50/50 EN balance under the same |
| per-bucket cap.</p> |
| <p>This change is committed at |
| <code>services/ml-services-hwai/scripts/build_calibration_corpus.py</code> |
| function <code>from_hc3_en()</code>.</p> |
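<p>A minimal sketch of the corrected loader behavior, assuming HC3 rows
expose a <code>human_answers</code> list (the helper below is
hypothetical; the real logic lives in <code>from_hc3_en()</code>):</p>
<pre><code>MIN_CHARS = 200
MAX_HUMAN_ANSWERS = 3  # v1.10: up from 1 in v1.9

def human_samples_from_hc3_row(row):
    """Yield up to 3 human answers per HC3 row that clear the floor."""
    taken = 0
    for answer in row.get("human_answers", []):
        if len(answer) &lt; MIN_CHARS:
            continue  # v1.9 took only the first element, which often failed here
        yield {"text": answer, "label": 0, "lang": "en", "source": "hc3_en"}
        taken += 1
        if taken == MAX_HUMAN_ANSWERS:
            break</code></pre>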
| <h3 id="russian-journalism-subcorpus-ru_human_harvest">3.5 Russian |
| journalism subcorpus (<code>ru_human_harvest</code>)</h3> |
| <p>The Russian human side draws partly from a custom Fork-1 harvest: |
| ~10,000 pre-LLM samples (2010-2022) from lenta.ru, ria.ru, and the |
| curation-corpus project. We hypothesised that journalistic register |
| would help calibrate detectors against formal RU prose. An ablation |
| study (described in §6.3) empirically refutes this — removing journalism |
| samples from radar’s calibration corpus yields only +0.023 AUROC |
| improvement, not the +0.10+ predicted. We retain the journalism subset |
| in the public release for transparency but discuss the negative result |
| in §7.</p> |
| <hr /> |
| <h2 id="detection-pipeline">§4. Detection Pipeline</h2> |
| <h3 id="detectors">4.1 Detectors</h3> |
| <p>The ensemble combines four independently published detectors plus a |
| text-level structural feature head:</p> |
| <table> |
| <colgroup> |
| <col style="width: 20%" /> |
| <col style="width: 20%" /> |
| <col style="width: 20%" /> |
| <col style="width: 20%" /> |
| <col style="width: 20%" /> |
| </colgroup> |
| <thead> |
| <tr> |
| <th>Detector</th> |
| <th>Architecture</th> |
| <th>Backbone</th> |
| <th>Per-detector AUROC EN</th> |
| <th>Per-detector AUROC RU</th> |
| </tr> |
| </thead> |
| <tbody> |
| <tr> |
| <td>Fast-DetectGPT (<code>ai_detect</code>)</td> |
| <td>Curvature-based zero-shot</td> |
| <td>GPT-Neo-1.3B</td> |
| <td>0.976 (cal_test)</td> |
| <td>0.732 (cal_test)</td> |
| </tr> |
| <tr> |
| <td>RADAR (<code>radar</code>)</td> |
| <td>Adversarial trained classifier</td> |
| <td>RoBERTa-large</td> |
| <td>0.605 (cal_test)</td> |
| <td>0.540 (cal_test)</td> |
| </tr> |
| <tr> |
| <td>Binoculars (<code>binoculars</code>)</td> |
| <td>Cross-model perplexity ratio</td> |
| <td>Falcon-7B / Falcon-7B-instruct</td> |
| <td>n/a (skipped EN, see §4.4)</td> |
| <td>0.592 (smoke)</td> |
| </tr> |
| <tr> |
| <td>Desklib (<code>desklib</code>)</td> |
| <td>Fine-tuned classifier</td> |
| <td>DeBERTa-v3-large (Desklib v1.01)</td> |
| <td>0.893 (cal_test)</td> |
| <td>not calibrated</td> |
| </tr> |
| <tr> |
| <td>Text-level (<code>text_level</code>)</td> |
| <td>Hand-engineered structural features</td> |
| <td>n/a</td> |
| <td>additive contribution</td> |
| <td>additive contribution</td> |
| </tr> |
| </tbody> |
| </table> |
<p>The <code>auroc_cal</code> values reported above come from the n=750
held-out cal_test split. OOD numbers from the hand-curated 44-text smoke
battery appear in §5.2.</p>
| <h3 id="per-detector-calibration">4.2 Per-detector calibration</h3> |
| <p>Each detector returns a raw score in either <code>[-∞, +∞]</code> |
| (Fast-DetectGPT curvature) or <code>[0, 1]</code> (others). We fit |
| per-(detector, language) Platt sigmoids on the train split:</p> |
| <pre><code>calibrated_score = 1 / (1 + exp(A * raw + B))</code></pre> |
| <p>Hyperparameters <code>A, B</code> are fit by maximum likelihood using |
| <code>scipy.optimize.minimize</code> with logistic loss, and persisted |
| in <code>calibration.json</code>. We detect inverted fits |
| (<code>A > 0</code>, occurs when raw score is anti-correlated with |
| label) and emit a warning; v1.10 has <code>fits_inverted=1</code> |
corresponding to RADAR’s RU calibration where AUROC &lt; 0.5.</p>
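<p>A minimal sketch of the fit, following the sigmoid above
(illustrative, not the exact <code>ml_calibrate_one.py</code>
implementation):</p>
<pre><code>import numpy as np
from scipy.optimize import minimize

def fit_platt(raw_scores, labels):
    """Fit calibrated = 1 / (1 + exp(A*raw + B)) by maximum likelihood."""
    raw = np.asarray(raw_scores, dtype=float)
    y = np.asarray(labels, dtype=float)  # 1 = AI, 0 = human

    def neg_log_likelihood(params):
        a, b = params
        p = 1.0 / (1.0 + np.exp(np.clip(a * raw + b, -500, 500)))
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

    result = minimize(neg_log_likelihood, x0=[-1.0, 0.0], method="Nelder-Mead")
    a, b = result.x
    if a &gt; 0:
        print("WARNING: inverted fit (raw score anti-correlated with label)")
    return {"A": a, "B": b}</code></pre>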
| <h3 id="ensemble-weighting">4.3 Ensemble weighting</h3> |
| <p>The ensemble produces a weighted average of calibrated detector |
| scores plus a text-level component:</p> |
<pre><code>ensemble_score = w_tl * tl_score
               + (1 - w_tl) * (Σ_d w_d * calibrated_score_d) / (Σ_d w_d)</code></pre>
| <p>where <code>w_d</code> are detector weights (per-language, |
| env-overridable) and <code>w_tl</code> is the text-level weight (0.18 |
| short / 0.35 long). Production v1.10 weights after empirical |
| AUROC-proportional tuning:</p> |
| <pre><code>EN 4-way (fd, rd, bn, ds): 0.20, 0.34, 0.01, 0.45 |
| RU 3-way (fd, rd, bn): 0.79, 0.00, 0.21 (radar weight zeroed; see §6.3) |
| RU 2-way fallback (fd, rd): 0.97, 0.03</code></pre> |
| <p>Initial v1.9 weights were inverse to per-detector quality (binoculars |
| 0.50 weight at 0.421 OOD AUROC; desklib 0.05 weight at 0.813 AUROC). |
| Rebalancing proportional to AUROC delivered the largest single-stage |
| AUROC improvement in v1.10 cycle (+0.111 EN ensemble at zero marginal |
| cost; see §5.2).</p> |
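<p>A minimal sketch of the combination rule with the v1.10 EN production
weights (detector keys are illustrative):</p>
<pre><code>def ensemble_score(calibrated, weights, tl_score, w_tl):
    """Weighted average of calibrated detector scores plus the
    text-level head; calibrated and weights map detector -&gt; value."""
    total_w = sum(weights[d] for d in calibrated)
    detector_avg = sum(weights[d] * calibrated[d] for d in calibrated) / total_w
    return w_tl * tl_score + (1 - w_tl) * detector_avg

# v1.10 EN 4-way weights (fd, rd, bn, ds)
EN_WEIGHTS = {"ai_detect": 0.20, "radar": 0.34,
              "binoculars": 0.01, "desklib": 0.45}</code></pre>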
| <h3 id="per-language-detector-availability">4.4 Per-language detector |
| availability</h3> |
<p>Detector availability differs by language. Desklib (an
English-trained classifier) runs only on EN. Binoculars is disabled on
EN: it showed an inverted Platt fit (AUROC 0.421 OOD), its weight had
already fallen to 0.01 after tuning, and removing it from the EN call
path entirely recovered p50 latency from 60-120s to 1.2s. Binoculars
remains in the RU ensemble, where it contributes 0.21 weight at 0.592
AUROC (still informative).</p>
| <h3 id="threshold-bands">4.5 Threshold bands</h3> |
| <p>The ensemble produces a three-state verdict via per-language |
| threshold bands:</p> |
| <pre><code>verdict = "likely_ai" if ensemble_score >= thr_high |
| = "likely_human" if ensemble_score <= thr_low |
| = "uncertain" otherwise</code></pre> |
| <p>Thresholds are tuned per-language to maximize OK rate at ≤10% wrong |
| rate on the smoke battery. Production v1.10:</p> |
| <pre><code>EN: thr_low = 0.45, thr_high = 0.55 |
| RU: thr_low = 0.45, thr_high = 0.65</code></pre> |
| <p>A formal-style detector adds +0.10 to <code>thr_high</code> when the |
| input matches press-release-style register, mitigating false positives |
| on formal human prose. Override via |
| <code>ML_SERVICES_FORMAL_THR_BOOST=0</code> to disable.</p> |
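<p>A minimal sketch of the verdict logic, including the formal-style
boost and its env-var kill switch (function and constant names are
illustrative):</p>
<pre><code>import os

THRESHOLDS = {"en": (0.45, 0.55), "ru": (0.45, 0.65)}  # (thr_low, thr_high)
FORMAL_BOOST = (0.0 if os.environ.get("ML_SERVICES_FORMAL_THR_BOOST") == "0"
                else 0.10)

def verdict(score, lang, formal_register=False):
    """Three-state verdict with per-language bands and formal boost."""
    thr_low, thr_high = THRESHOLDS[lang]
    if formal_register:
        thr_high += FORMAL_BOOST  # harder to call formal prose "likely_ai"
    if score &gt;= thr_high:
        return "likely_ai"
    if score &lt;= thr_low:
        return "likely_human"
    return "uncertain"</code></pre>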
| <h3 id="text-level-structural-features">4.6 Text-level structural |
| features</h3> |
| <p>The <code>text_level</code> head computes seven hand-engineered |
| features that operate on whole-text statistics rather than chunk |
| windows:</p> |
| <ol type="1"> |
| <li>Sentence-length burstiness (coefficient of variation)</li> |
| <li>Paragraph-length uniformity</li> |
| <li>N-gram repetition ratio</li> |
| <li>Heading patterns (sentence-case vs title-case vs imperative)</li> |
| <li>Transitional density (for/however/therefore/etc.)</li> |
| <li>Section uniformity</li> |
| <li>Sentence-starter repetition</li> |
| </ol> |
| <p>These complement chunk-based detectors which score windowed text. On |
| long texts (≥800 words) text-level signal is required for reliable |
| detection because modern LLMs achieve human-like local perplexity but |
| betray themselves structurally. On short texts text-level weight drops |
| from 0.35 to 0.18 since structural features are noisier at low n.</p> |
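<p>As an example of one such feature, a minimal sketch of
sentence-length burstiness (feature 1), computed as a coefficient of
variation (illustrative, not the production feature head):</p>
<pre><code>import re
import statistics

def sentence_burstiness(text):
    """Coefficient of variation of sentence lengths in words; modern
    LLM output tends to vary sentence length less than human prose."""
    sentences = [s for s in re.split(r"(?&lt;=[.!?])\s+", text.strip()) if s]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) &lt; 2:
        return 0.0  # too short for a meaningful estimate
    mean = statistics.mean(lengths)
    return statistics.stdev(lengths) / mean if mean else 0.0</code></pre>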
| <hr /> |
| <h2 id="evaluation">§5. Evaluation</h2> |
| <h3 id="in-distribution-auroc-n750-cal_test-split">5.1 In-distribution |
| AUROC (n=750 cal_test split)</h3> |
| <table> |
| <thead> |
| <tr> |
| <th>Detector</th> |
| <th>EN</th> |
| <th>RU</th> |
| </tr> |
| </thead> |
| <tbody> |
| <tr> |
| <td>ai_detect (Fast-DetectGPT)</td> |
| <td>0.977</td> |
| <td>0.756</td> |
| </tr> |
| <tr> |
| <td>radar (RADAR-Vicuna)</td> |
| <td>0.605</td> |
| <td>0.540</td> |
| </tr> |
| <tr> |
| <td>binoculars</td> |
| <td>(skipped on EN per §4.4)</td> |
| <td>0.592</td> |
| </tr> |
| <tr> |
| <td>desklib (DeBERTa-v3-large)</td> |
| <td>0.893</td> |
| <td>(not calibrated)</td> |
| </tr> |
| </tbody> |
| </table> |
<p>Calibration test (<code>cal_test.jsonl</code>) is the held-out 15%
slice never seen during the Platt fit. Note that radar’s RU AUROC of
0.540 is barely above chance; we discuss this in the §6.3
negative-result analysis.</p>
| <h3 id="out-of-distribution-auroc-44-text-hand-curated-smoke">5.2 |
| Out-of-distribution AUROC (44-text hand-curated smoke)</h3> |
<p>The smoke battery was hand-picked to expose known failure modes:
formal AI, journalistic human, paraphrased AI, casual chat, and edge
cases. Class distribution: 14 EN human, 9 EN AI; 14 RU human, 7 RU
AI.</p>
| <table> |
| <thead> |
| <tr> |
| <th>Detector</th> |
| <th>EN AUROC</th> |
| <th>EN n</th> |
| <th>RU AUROC</th> |
| <th>RU n</th> |
| </tr> |
| </thead> |
| <tbody> |
| <tr> |
| <td>ai_detect</td> |
| <td>0.651</td> |
| <td>23</td> |
| <td>0.837</td> |
| <td>21</td> |
| </tr> |
| <tr> |
| <td>radar</td> |
| <td>0.734</td> |
| <td>23</td> |
| <td>0.429</td> |
| <td>21</td> |
| </tr> |
| <tr> |
| <td>binoculars</td> |
| <td>n/a (skipped)</td> |
| <td>—</td> |
| <td>0.592</td> |
| <td>21</td> |
| </tr> |
| <tr> |
| <td>desklib</td> |
| <td>0.821</td> |
| <td>23</td> |
| <td>n/a</td> |
| <td>—</td> |
| </tr> |
| <tr> |
| <td><strong>ensemble</strong></td> |
| <td><strong>0.802</strong></td> |
| <td><strong>23</strong></td> |
| <td><strong>0.847</strong></td> |
| <td><strong>21</strong></td> |
| </tr> |
| </tbody> |
| </table> |
| <p>Ensemble verdict breakdown after threshold tuning (lo=0.45, hi=0.55 |
| EN; lo=0.45, hi=0.65 RU):</p> |
| <ul> |
| <li>EN: OK 47%, Uncertain 43%, Wrong 8% (n=23)</li> |
| <li>RU: OK 61%, Uncertain 28%, Wrong 9% (n=21)</li> |
| </ul> |
<p>The “Uncertain” rate is high, but the Wrong rate stays below 10%,
our pre-registered production threshold. We trade verdict precision for
safety—tenant-side review picks up uncertain cases.</p>
| <h3 id="adversarial-auroc-in-distribution-ood-baselines">5.3 Adversarial |
| AUROC (in-distribution + OOD baselines)</h3> |
| <p>We constructed two adversarial paired evaluation sets, both 300 |
| samples (150 paraphrased AI + 150 human baseline):</p> |
| <p><strong>Set 1 — In-distribution baseline.</strong> 150 paraphrased AI |
| samples drawn from <code>cal_test.jsonl</code> (paraphrased via 4 models |
| round-robin: gemini-2.5-flash temp 0.85, groq-llama-3.3-70b, |
| cerebras-llama-3.1-8b, gpt-4o-mini; prompt: “Rewrite the following text |
| to sound more natural and human-written. Keep the exact meaning and key |
| facts intact”), paired with 150 pristine human samples from the same |
| <code>cal_test.jsonl</code> (HC3-en + ai_text_pile shard 0).</p> |
| <p><strong>Set 2 — OOD baseline (this work, v2.5 build).</strong> Same |
| 150 paraphrased AI samples paired with 150 OOD human samples derived |
| from the 44-text hand-curated smoke battery’s 14 EN human seeds, |
| expanded via 5 light augmentations per seed (original / |
| first-half-paragraphs / second-half-paragraphs / sentence-shuffled / |
| first-sentence-dropped). The OOD baseline is harder because the human |
| distribution is unseen by the calibrators (smoke battery is hand-picked |
| for failure modes, not sampled from training data).</p> |
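<p>A minimal sketch of the five light augmentations used to expand each
OOD human seed (the splitting heuristics here are illustrative):</p>
<pre><code>import random

def light_augmentations(text, seed=0):
    """Five variants per seed: original, first-/second-half paragraphs,
    sentence-shuffled, first-sentence-dropped (Set 2 construction)."""
    rng = random.Random(seed)
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    sentences = [s for s in text.replace("\n", " ").split(". ") if s.strip()]
    half = max(1, len(paragraphs) // 2)
    shuffled = sentences[:]
    rng.shuffle(shuffled)
    return [
        text,                                    # original
        "\n\n".join(paragraphs[:half]),          # first-half paragraphs
        "\n\n".join(paragraphs[half:]) or text,  # second-half paragraphs
        ". ".join(shuffled),                     # sentence-shuffled
        ". ".join(sentences[1:]) or text,        # first-sentence-dropped
    ]</code></pre>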
| <p>Per-detector AUROC on both sets (v1.11 calibration):</p> |
| <table> |
| <thead> |
| <tr> |
| <th>Detector</th> |
| <th>OOD smoke 44-text</th> |
| <th>Adv set 1 (in-dist)</th> |
| <th>Adv set 2 (OOD)</th> |
| </tr> |
| </thead> |
| <tbody> |
| <tr> |
| <td>ai_detect</td> |
| <td>0.651</td> |
| <td>0.986</td> |
| <td><strong>0.988</strong></td> |
| </tr> |
| <tr> |
| <td>radar</td> |
| <td>0.734</td> |
| <td>0.672</td> |
| <td>0.464</td> |
| </tr> |
| <tr> |
| <td>desklib</td> |
| <td>0.810</td> |
| <td>0.977</td> |
| <td><strong>0.975</strong></td> |
| </tr> |
| <tr> |
| <td><strong>ensemble</strong></td> |
| <td><strong>0.821</strong></td> |
| <td><strong>0.985</strong></td> |
| <td><strong>0.998</strong></td> |
| </tr> |
| </tbody> |
| </table> |
| <p>Verdict breakdown on Set 2 (OOD baseline, n=300, current production |
| thresholds): OK 70% / Uncertain 26% / Wrong 3%.</p> |
| <p>Three observations:</p> |
| <ol type="1"> |
| <li><strong>Ensemble robust under both adversarial conditions</strong> |
| (AUROC ≥ 0.985). Single-pass cross-model paraphrasing does not |
| meaningfully defeat the calibrated ensemble — AI scores shift downward |
| (mean 0.669 vs typical 0.85+) but the gap to human baseline remains |
| wide.</li> |
<li><strong>Radar drops sharply on the OOD-augmented baseline</strong>
(0.672 → 0.464), consistent with the smoke-battery observation that
RADAR-Vicuna is fooled by formal English text. Augmentations that
preserve formal structure amplify this weakness. We zero-weighted radar
in the RU 3-way ensemble for v1.10; the same treatment may benefit the
EN ensemble in the v1.12 cycle.</li>
<li><strong>The OOD baseline proved easier than expected.</strong> We
anticipated AUROC 0.85-0.92 on Set 2 (the prior stated in §7.2); the
empirical 0.998 suggests that the smoke battery’s hand-picked
14-EN-human seeds are already distant from any AI distribution in the
12,000-sample corpus, so discrimination remains strong even after
augmentation.</li>
| </ol> |
<p>We caution that Set 2’s human side is augmented from 14 hand-curated
seeds. A stricter test would use 150+ independently curated 2026-era OOD
human samples (future work; see §7.2). The 0.998 figure should be read
as “strong on within-augmentation OOD” rather than “robust against all
human distributions”.</p>
| <h3 id="comparison-with-existing-detectors">5.4 Comparison with existing |
| detectors</h3> |
<p>We attempted free-tier API access to four commercial detectors for
direct comparison on identical inputs:</p>
| <table> |
| <colgroup> |
| <col style="width: 33%" /> |
| <col style="width: 33%" /> |
| <col style="width: 33%" /> |
| </colgroup> |
| <thead> |
| <tr> |
| <th>Vendor</th> |
| <th>Free-tier API</th> |
| <th>Result</th> |
| </tr> |
| </thead> |
| <tbody> |
| <tr> |
| <td>Sapling AI</td> |
| <td>Yes (50 req/day)</td> |
| <td>Comparable measurement, see Appendix B</td> |
| </tr> |
| <tr> |
| <td>GPTZero</td> |
| <td>Web form, daily limit 5</td> |
| <td>Comparable but laborious</td> |
| </tr> |
| <tr> |
| <td>Originality.ai</td> |
| <td>None (paid trial only)</td> |
| <td>Not reproducible without payment</td> |
| </tr> |
| <tr> |
| <td>Winston AI</td> |
| <td>2000-word free trial</td> |
| <td>Possible but consumed quickly</td> |
| </tr> |
| </tbody> |
| </table> |
| <p>We report Sapling AI AUROC on identical inputs in Appendix B. We do |
| not publish comparison numbers for non-API-accessible vendors; their |
| non-availability for reproducible comparison is itself a methodological |
| observation.</p> |
| <h3 id="latency-benchmarks">5.5 Latency benchmarks</h3> |
| <p>Single-sample latency on Hetzner CX43 (8 vCPU, 16GB RAM, no GPU):</p> |
| <table> |
| <thead> |
| <tr> |
| <th>Configuration</th> |
| <th>EN p50</th> |
| <th>EN p95</th> |
| <th>RU p50</th> |
| <th>RU p95</th> |
| </tr> |
| </thead> |
| <tbody> |
| <tr> |
| <td>v1.10 default (with binoculars)</td> |
| <td>60s</td> |
| <td>120s</td> |
| <td>35s</td> |
| <td>90s</td> |
| </tr> |
| <tr> |
| <td>v1.10 + Gap 7 (no binoculars EN)</td> |
| <td><strong>1.2s</strong></td> |
| <td>4s</td> |
| <td>35s</td> |
| <td>90s</td> |
| </tr> |
| <tr> |
| <td>v1.10 + Gap 7 + Gap 8 fast=1</td> |
| <td>1.2s</td> |
| <td>4s</td> |
| <td><strong>2.5s</strong></td> |
| <td>8s</td> |
| </tr> |
| </tbody> |
| </table> |
| <p>Gap 7 removes binoculars from the EN call path; Gap 8 |
| (<code>?fast=1</code>) extends this to RU on a per-request basis. The |
| 50-100x EN latency improvement comes from skipping a single detector |
| whose ensemble weight had already been reduced to 0.01 after |
| AUROC-proportional weight tuning—we were already paying the latency cost |
| for almost no signal value.</p> |
| <hr /> |
| <h2 id="operational-reproducibility-regression-testing">§6. Operational |
| Reproducibility (regression testing)</h2> |
<p>A common failure mode in detection pipelines is silent calibration
drift: a new corpus rebuild produces a nominally better cal.json that
regresses on edge cases. We mitigate this via a pinned regression test
suite that runs on every cal swap and rolls back automatically on
detected regression.</p>
| <h3 id="pinned-baselines">6.1 Pinned baselines</h3> |
<p><code>services/ml-services-hwai/tests/test_calibration_regression.py</code>
contains 8 pytest assertions; the core checks pin each
<code>(detector, language)</code> pair against a v1.9 baseline:</p>
| <pre><code>ai_detect EN auroc_cal >= 0.977 - 0.05 = 0.927 |
| ai_detect RU auroc_cal >= 0.749 - 0.05 = 0.699 |
| radar EN auroc_cal >= 0.600 - 0.05 = 0.550 |
| radar RU auroc_cal >= 0.514 - 0.05 = 0.464 |
| desklib EN auroc_cal >= 0.805 - 0.05 = 0.755</code></pre> |
| <p>Tolerance <code>MAX_DROP=0.05</code> is configurable; we use a single |
| drop tolerance across detectors rather than per-detector thresholds for |
| simplicity.</p> |
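<p>A minimal sketch of the shape of one pinned assertion (the
<code>calibration.json</code> key layout shown is an assumption, not the
committed schema):</p>
<pre><code>import json

MAX_DROP = 0.05  # single drop tolerance across all detectors
BASELINES = {("ai_detect", "en"): 0.977, ("ai_detect", "ru"): 0.749,
             ("radar", "en"): 0.600, ("radar", "ru"): 0.514,
             ("desklib", "en"): 0.805}

def test_ai_detect_en_auroc_cal():
    # assumed layout: cal[detector][lang]["auroc_cal"]
    with open("/opt/ml-services/calibration.json") as f:
        cal = json.load(f)
    measured = cal["ai_detect"]["en"]["auroc_cal"]
    assert measured &gt;= BASELINES[("ai_detect", "en")] - MAX_DROP</code></pre>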
| <h3 id="auto-rollback">6.2 Auto-rollback</h3> |
| <p>The atomic-swap script (<code>run_fork2_v2_post_gen.sh</code>) backs |
| up the current cal.json to a versioned filename, copies the candidate, |
| restarts the service, and runs the regression test:</p> |
| <div class="sourceCode" id="cb8"><pre |
| class="sourceCode bash"><code class="sourceCode bash"><span id="cb8-1"><a href="#cb8-1" aria-hidden="true" tabindex="-1"></a><span class="fu">cp</span> /opt/ml-services/calibration.json /opt/ml-services/calibration.v1.9.backup.json</span> |
| <span id="cb8-2"><a href="#cb8-2" aria-hidden="true" tabindex="-1"></a><span class="fu">cp</span> /tmp/calibration.json /opt/ml-services/calibration.json</span> |
| <span id="cb8-3"><a href="#cb8-3" aria-hidden="true" tabindex="-1"></a><span class="fu">chown</span> hwai:hwai /opt/ml-services/calibration.json</span> |
| <span id="cb8-4"><a href="#cb8-4" aria-hidden="true" tabindex="-1"></a><span class="ex">systemctl</span> restart ml-services</span> |
| <span id="cb8-5"><a href="#cb8-5" aria-hidden="true" tabindex="-1"></a><span class="fu">sleep</span> 10</span> |
| <span id="cb8-6"><a href="#cb8-6" aria-hidden="true" tabindex="-1"></a><span class="ex">pytest</span> tests/test_calibration_regression.py</span> |
| <span id="cb8-7"><a href="#cb8-7" aria-hidden="true" tabindex="-1"></a><span class="cf">if</span> <span class="bu">[</span> <span class="va">$?</span> <span class="ot">-ne</span> 0 <span class="bu">]</span><span class="kw">;</span> <span class="cf">then</span></span> |
| <span id="cb8-8"><a href="#cb8-8" aria-hidden="true" tabindex="-1"></a> <span class="fu">cp</span> /opt/ml-services/calibration.v1.9.backup.json /opt/ml-services/calibration.json</span> |
| <span id="cb8-9"><a href="#cb8-9" aria-hidden="true" tabindex="-1"></a> <span class="ex">systemctl</span> restart ml-services</span> |
| <span id="cb8-10"><a href="#cb8-10" aria-hidden="true" tabindex="-1"></a> <span class="ex">notify</span> <span class="st">"REGRESSION: rolled back"</span></span> |
| <span id="cb8-11"><a href="#cb8-11" aria-hidden="true" tabindex="-1"></a><span class="cf">fi</span></span></code></pre></div> |
| <p>This is uncommon in academic AI-detection work but standard in |
| software engineering. It is what makes the system <strong>operationally |
| reproducible</strong>, not just methodologically reproducible.</p> |
| <h3 id="phase-b-negative-result-radar-ru-news-exclusion">6.3 Phase B |
| negative result (radar RU news exclusion)</h3> |
| <p>A pre-registered ablation tested whether excluding journalistic |
| samples (lenta.ru, ria.ru) from <code>ru_human_harvest</code> would |
| improve radar RU calibration. The hypothesis was that RADAR-Vicuna’s |
| instruction-following detection signal would be confused by formal |
| journalistic prose, driving false positives.</p> |
| <p>Empirically the hypothesis is refuted. Removing 80% of |
| <code>ru_human_harvest</code> (8,000 of 10,000 samples) produced only |
| +0.023 radar RU AUROC improvement (0.514 → 0.537), well below our |
| pre-registered threshold of +0.10 for production swap. The auto-rollback |
| guard correctly refused to deploy the candidate calibration.</p> |
| <p>We interpret this as: journalistic register is not the dominant FP |
| source for RADAR-Vicuna RU. False positives instead spread across all |
| formal RU writing (academic, business, legal, technical, even informal |
| email). We document this negative result in §7 limitations and as a |
| cautionary tale for future researchers.</p> |
| <h3 id="adversarial-robustness-regression-test">6.4 Adversarial |
| robustness regression test</h3> |
<p>We propose adding an adversarial assertion to the v1.11 regression
suite: the adversarial AUROC must not drop more than 0.05 vs the v1.10
baseline of 0.984. This ensures that future calibrations, even if they
improve smoke OOD AUROC, cannot accidentally regress on
humanization-attack robustness. As of this draft the test is planned
but not yet implemented.</p>
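<p>A minimal sketch of the proposed assertion, assuming the paired set
carries precomputed <code>label</code> and <code>ensemble_score</code>
fields (both field names are assumptions):</p>
<pre><code>import json
from sklearn.metrics import roc_auc_score

ADV_BASELINE = 0.984  # v1.10 adversarial AUROC baseline
MAX_ADV_DROP = 0.05

def test_adversarial_auroc_pinned():
    """Proposed gate: paraphrase-attack AUROC must not silently regress."""
    with open("corpus/cal_adversarial_paired_en.jsonl") as f:
        rows = [json.loads(line) for line in f]
    labels = [r["label"] for r in rows]           # assumed: 1 = paraphrased AI
    scores = [r["ensemble_score"] for r in rows]  # assumed precomputed field
    assert roc_auc_score(labels, scores) &gt;= ADV_BASELINE - MAX_ADV_DROP</code></pre>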
| <hr /> |
| <h2 id="limitations">§7. Limitations</h2> |
| <h3 id="two-languages-only">7.1 Two languages only</h3> |
| <p>ContentOS calibrates only English and Russian. Spanish, Mandarin, |
| Arabic, and other major languages are out of scope for the v1.10 |
| release. Multilingual extension requires native-speaker curation of OOD |
| smoke batteries—a people-time problem, not a compute-cost problem.</p> |
| <h3 id="adversarial-baseline-is-in-distribution">7.2 Adversarial |
| baseline is in-distribution</h3> |
| <p>Our 0.984 adversarial AUROC pairs paraphrased AI (drawn from |
| <code>cal_test</code>) with pristine human (drawn from same |
| <code>cal_test</code>). The human baseline is therefore in-distribution |
| to our calibration. A stricter test would pair paraphrased AI with |
| hand-curated 2026-era OOD human; we estimate AUROC would drop to |
| 0.85-0.92 in that setup. Future work.</p> |
| <h3 id="single-pass-paraphrasing-only">7.3 Single-pass paraphrasing |
| only</h3> |
| <p>Real “humanizer” attacks (Undetectable AI, QuillBot, StealthGPT) |
| iterate paraphrase 3-5 times with different prompts and target detector |
| signals explicitly. Our adversarial set tests only single-pass attacks. |
| We expect multi-pass humanizers to push AUROC into the 0.70-0.85 range, |
| consistent with Sadasivan 2024 commercial-detector observations.</p> |
| <h3 id="domain-coverage-skewed-toward-qa-and-blog-text">7.4 Domain |
| coverage skewed toward Q&A and blog text</h3> |
| <p>The dominant training-corpus sources (HC3 reddit_eli5, ai_text_pile |
| forum-style content, HC3-ru) are short-to-medium-length conversational |
| and Q&A text. Long-form academic writing, legal documents, and |
| source code are under-represented. Calibration may degrade on these |
| distributions.</p> |
| <h3 id="calibration-is-per-language-but-not-per-genre-or-per-tenant">7.5 |
| Calibration is per-language but not per-genre or per-tenant</h3> |
| <p>We fit one Platt sigmoid per <code>(detector, language)</code> pair. |
| Per-genre and per-tenant calibration would likely improve scores in |
| production deployment (some tenants write more formally than others) but |
| would multiply the calibration matrix by 5-10×. We defer this to |
| v2.0.</p> |
| <h3 id="russian-radar-is-fundamentally-weak">7.6 Russian RADAR is |
| fundamentally weak</h3> |
| <p>RADAR-Vicuna is built on Vicuna-7B, an English-pretrained model. |
| Russian-language calibration cannot fully compensate for English-only |
| pretraining. Our Phase B ablation (§6.3) showed that excluding |
| journalistic samples from <code>ru_human_harvest</code> improves RU |
| radar AUROC by only 0.023—well below our 0.10 threshold for production |
| swap. We zero-weighted radar in the RU 3-way ensemble for v1.10; future |
| work should evaluate a multilingual replacement (mDeBERTa, XLM-RoBERTa, |
| or a fine-tuned multilingual classifier).</p> |
| <h3 id="ensemble-assumes-correct-upstream-language-detection">7.7 |
| Ensemble assumes correct upstream language detection</h3> |
| <p>We assume correct <code>lang</code> parameter on inference. |
| Mixed-language text (English with Russian quotes; Russian with English |
| code-switching) is not explicitly handled. Production callers must |
| language-detect upstream.</p> |
| <hr /> |
| <h2 id="figures">Figures</h2> |
| <figure> |
| <img src="figures/fig1_auroc_progression.png" |
| alt="Figure 1. ContentOS ensemble OOD AUROC progression v1.9 -> v1.10 -> v1.11 (44-text smoke battery). EN climbs from 0.524 to 0.821 across the work cycle, RU stays at 0.837. SHIP threshold 0.80 marked." /> |
| <figcaption aria-hidden="true">Figure 1. ContentOS ensemble OOD AUROC |
| progression v1.9 -> v1.10 -> v1.11 (44-text smoke battery). EN |
| climbs from 0.524 to 0.821 across the work cycle, RU stays at 0.837. |
| SHIP threshold 0.80 marked.</figcaption> |
| </figure> |
| <figure> |
| <img src="figures/fig2_weight_tuning_impact.png" |
| alt="Figure 2. Weight tuning v1.10: per-detector weight (left) and effective weight x AUROC contribution (right). Rebalancing toward higher-AUROC detectors lifted ensemble effective contribution sum from 0.578 to 0.753." /> |
| <figcaption aria-hidden="true">Figure 2. Weight tuning v1.10: |
| per-detector weight (left) and effective weight x AUROC contribution |
| (right). Rebalancing toward higher-AUROC detectors lifted ensemble |
| effective contribution sum from 0.578 to 0.753.</figcaption> |
| </figure> |
| <figure> |
| <img src="figures/fig3_latency_comparison.png" |
| alt="Figure 3. Latency reduction via Gap 7+8 (Hetzner CX43 8 vCPU, no GPU, log scale). Removing Binoculars from English call path cut p50 from 85s to 1.2s." /> |
| <figcaption aria-hidden="true">Figure 3. Latency reduction via Gap 7+8 |
| (Hetzner CX43 8 vCPU, no GPU, log scale). Removing Binoculars from |
| English call path cut p50 from 85s to 1.2s.</figcaption> |
| </figure> |
| <figure> |
| <img src="figures/fig4_regression_test_gate.png" |
| alt="Figure 4. Regression test gate: per-detector AUROC measured at v1.10 and v1.11 vs v1.9 pinned baseline with -0.05 tolerance line. All eight pinned tests pass." /> |
| <figcaption aria-hidden="true">Figure 4. Regression test gate: |
| per-detector AUROC measured at v1.10 and v1.11 vs v1.9 pinned baseline |
| with -0.05 tolerance line. All eight pinned tests pass.</figcaption> |
| </figure> |
| <hr /> |
| <h2 id="reproducibility-statement">§8. Reproducibility Statement</h2> |
| <p>We provide complete reproducibility artifacts:</p> |
| <h3 id="code">8.1 Code</h3> |
| <p>All source under MIT license at:</p> |
| <pre><code>github.com/humanswith-ai/greg-personal-claude |
| └ services/ml-services-hwai/ |
| ├ app.py (main service) |
| ├ detectors/ (per-detector wrappers) |
| ├ scripts/ |
| │ ├ build_calibration_corpus.py (corpus aggregation) |
| │ ├ ml_calibrate_one.py (Platt fit per detector) |
| │ ├ eval_ensemble_corpus.py (evaluation harness) |
| │ ├ generate_*_corpus_*.py (self-generation scripts) |
| │ ├ generate_adversarial_paraphrased.py |
| │ ├ analyze_smoke_results.py (post-smoke diagnostics) |
| │ └ run_v1_11_chain.sh (atomic-swap pipeline) |
| ├ tests/ |
| │ └ test_calibration_regression.py (8 pinned baselines) |
| ├ benchmark/ |
| │ └ REPRODUCIBILITY.md (this document's source) |
| └ corpus/ (cal_train.jsonl, cal_val.jsonl, cal_test.jsonl)</code></pre> |
| <p>Release tag: <code>v1.11</code> (2026-04-26). All numbers reported in |
| this paper reproduce on this tag with |
| <code>pytest tests/test_calibration_regression.py</code> plus |
| <code>python3 scripts/eval_ensemble_corpus.py</code>.</p> |
| <h3 id="data">8.2 Data</h3> |
| <p>The 8,400-sample training split, 1,830-sample validation split, and |
| 1,830-sample test split are committed at |
| <code>services/ml-services-hwai/corpus/</code>. The 44-text hand-curated |
| OOD smoke battery is embedded in <code>eval_ensemble_corpus.py</code> as |
| a Python literal (not a separate file), to ensure the corpus and |
| evaluation script ship together.</p> |
| <p>The 300-sample adversarial paired set (150 paraphrased AI + 150 |
| pristine human) is at |
| <code>services/ml-services-hwai/corpus/cal_adversarial_paired_en.jsonl</code> |
| in the v1.11 tag.</p> |
<p>All training data sources are public:</p>
<ul>
<li>HuggingFace: <code>Hello-SimpleAI/HC3</code>, <code>d0rj/HC3-ru</code>,
<code>iis-research-team/AINL-Eval-2025</code>,
<code>artem9k/ai-text-detection-pile</code></li>
<li>HuggingFace API key not required (we used public dataset
endpoints)</li>
<li>Self-generated samples (<code>litellm_*</code>, <code>gpt4o_*</code>,
<code>genre_targeted_en</code>, <code>cal_adversarial_paired_en</code>)
provided as committed JSONL with full generation scripts and prompts</li>
</ul>
| <h3 id="calibration">8.3 Calibration</h3> |
| <p>The production calibration JSON (<code>calibration.json</code> v1.11) |
| is committed. It contains, for each <code>(detector, language)</code> |
| pair, the Platt sigmoid parameters, raw and calibrated AUROC on |
| cal_test, and Brier scores.</p> |
| <h3 id="compute-environment">8.4 Compute environment</h3> |
<p>Reproducibility was verified on:</p>
<ul>
<li>Hetzner CX43 (8 vCPU AMD EPYC, 16GB RAM, no GPU, ~$15-25/month)</li>
<li>Ubuntu 22.04, Python 3.12.13</li>
<li>PyTorch 2.5 (CPU-only)</li>
<li>Calibration full cycle: ~95 minutes (~5 min per detector × 5
detectors × 2 languages, plus corpus build)</li>
<li>Smoke evaluation: ~50 minutes (44 samples × 5-10 detectors × 5-10s
each)</li>
<li>Adversarial evaluation: ~25 minutes (300 samples paired)</li>
</ul>
| <p>A Docker image at <code>humanswithai/ml-services:v1.11</code> removes |
| environment setup as a reproducibility barrier. Users without Docker can |
| <code>pip install -r requirements.txt</code> followed by direct script |
| invocation.</p> |
| <h3 id="reproducibility-test">8.5 Reproducibility test</h3> |
| <p>A reproducibility-focused subset of the regression suite runs in |
<code>&lt;10s</code> on any machine:</p>
| <div class="sourceCode" id="cb10"><pre |
| class="sourceCode bash"><code class="sourceCode bash"><span id="cb10-1"><a href="#cb10-1" aria-hidden="true" tabindex="-1"></a><span class="fu">git</span> clone github.com/humanswith-ai/greg-personal-claude</span> |
| <span id="cb10-2"><a href="#cb10-2" aria-hidden="true" tabindex="-1"></a><span class="bu">cd</span> greg-personal-claude/services/ml-services-hwai</span> |
| <span id="cb10-3"><a href="#cb10-3" aria-hidden="true" tabindex="-1"></a><span class="ex">pip</span> install <span class="at">-r</span> requirements.txt</span> |
| <span id="cb10-4"><a href="#cb10-4" aria-hidden="true" tabindex="-1"></a><span class="ex">pytest</span> tests/test_calibration_regression.py <span class="at">-v</span> <span class="co"># 8 tests, ~0.05s</span></span> |
| <span id="cb10-5"><a href="#cb10-5" aria-hidden="true" tabindex="-1"></a><span class="ex">python</span> scripts/analyze_smoke_results.py corpus/eval_ensemble_v1_11.json <span class="at">--full</span></span></code></pre></div> |
| <p>Should output: <code>8 passed</code>, ensemble EN AUROC |
| <code>0.821</code>, RU <code>0.837</code>. Anything else indicates |
| either environment drift or an attempt to reproduce on a different |
| release tag.</p> |
| <hr /> |
| <h2 id="conclusion">§9. Conclusion</h2> |
| <p>Reproducibility is not the dominant axis of competition in commercial |
| AI text detection today. Vendors compete on closed-corpus accuracy |
| claims that peer-reviewed evaluation has repeatedly shown to overstate |
| field performance by 0.10-0.30 AUROC. We argue this should change.</p> |
| <p>ContentOS does not produce field-leading numbers in absolute |
| terms—our 0.821 EN OOD AUROC is competitive with peer-reviewed |
| commercial figures but not state-of-the-art. What it produces is |
| <strong>field-leading reproducibility</strong>: a 12,000-sample |
| bilingual calibration corpus, a 44-text OOD smoke battery, a 300-sample |
| adversarial paired set, regression-gated deployment infrastructure, and |
| complete inference + calibration code, all releasable under MIT license. |
| Anyone can clone the repository, run the regression test in 0.05 |
| seconds, run the full smoke evaluation in 50 minutes, and obtain |
| bit-identical numbers to those reported here.</p> |
| <p>We invite vendors who wish to dispute our numbers to release their |
| own methodology with the same level of openness. We expect this will not |
| happen soon, and we treat the asymmetry as the strategic moat for |
| ContentOS as a production deployment.</p> |
| <p>Future work splits into three tracks: (a) replacing RADAR-Vicuna with |
| a multilingual classifier to unblock RU detection performance; (b) |
| extending to additional languages (Spanish, Mandarin, Arabic, German) |
| with native-speaker curated OOD smoke batteries; and (c) extending the |
| regression test suite to include adversarial AUROC pinning (currently |
| planned, not yet landed) so that future calibration cycles cannot |
| regress humanizer robustness silently.</p> |
| <p>We hope this work normalizes reproducibility-first releases in the AI |
| text detection community.</p> |
| <hr /> |
| <h2 id="appendix-a.-full-44-text-smoke-battery-curated-ood">Appendix A. |
| Full 44-text smoke battery (curated OOD)</h2> |
| <p>The smoke battery is embedded in |
| <code>scripts/eval_ensemble_corpus.py</code> as the <code>CORPUS</code> |
| Python list. Each entry is a 5-tuple: |
<code>(name, lang, expected, genre, text)</code>. Word counts are given
per text below.</p>
| <h3 id="en-human-14-samples">EN human (14 samples)</h3> |
| <table> |
| <colgroup> |
| <col style="width: 25%" /> |
| <col style="width: 25%" /> |
| <col style="width: 25%" /> |
| <col style="width: 25%" /> |
| </colgroup> |
| <thead> |
| <tr> |
| <th>Name</th> |
| <th>Genre</th> |
| <th>Word count</th> |
| <th>Selection rationale</th> |
| </tr> |
| </thead> |
| <tbody> |
| <tr> |
| <td>EN human reddit</td> |
| <td>casual</td> |
| <td>73</td> |
| <td>Conversational; tests “AI = formal” failure mode</td> |
| </tr> |
| <tr> |
| <td>EN human chat</td> |
| <td>casual</td> |
| <td>51</td> |
| <td>Short; tests min-length floor</td> |
| </tr> |
| <tr> |
| <td>EN human news</td> |
| <td>formal</td> |
| <td>56</td> |
| <td>Press-release style; FP-prone for ai_detect</td> |
| </tr> |
| <tr> |
| <td>EN human blog tech</td> |
| <td>technical</td> |
| <td>73</td> |
| <td>Mid-length forum tech post; tests technical register</td> |
| </tr> |
| <tr> |
| <td>EN human email</td> |
| <td>business</td> |
| <td>82</td> |
| <td>Business email; tests semi-formal register</td> |
| </tr> |
| <tr> |
| <td>EN human review</td> |
| <td>casual</td> |
| <td>71</td> |
| <td>Product review; informal but structured</td> |
| </tr> |
| <tr> |
| <td>EN human essay</td> |
| <td>creative</td> |
| <td>91</td> |
| <td>Personal essay; first-person rich</td> |
| </tr> |
| <tr> |
| <td>EN human abstract</td> |
| <td>academic</td> |
| <td>80</td> |
| <td>Academic abstract; high formal register</td> |
| </tr> |
| <tr> |
| <td>EN human press release</td> |
| <td>formal</td> |
| <td>70</td> |
| <td>Corporate boilerplate; biggest FP risk</td> |
| </tr> |
| <tr> |
| <td>EN human court filing</td> |
| <td>legal</td> |
| <td>86</td> |
| <td>Legal prose; FP-prone</td> |
| </tr> |
| <tr> |
| <td>EN human interview</td> |
| <td>formal</td> |
| <td>84</td> |
| <td>Structured Q&A</td> |
| </tr> |
| <tr> |
| <td>EN human technical forum</td> |
| <td>technical</td> |
| <td>92</td> |
| <td>Postgres VACUUM question</td> |
| </tr> |
| <tr> |
| <td>EN human product manual</td> |
| <td>technical</td> |
| <td>78</td> |
| <td>Instructional; imperative voice</td> |
| </tr> |
| <tr> |
| <td>EN human casual parenting</td> |
| <td>casual</td> |
| <td>84</td> |
| <td>Informal voice + named entities</td> |
| </tr> |
| </tbody> |
| </table> |
| <h3 id="en-ai-9-samples">EN AI (9 samples)</h3> |
| <table> |
| <thead> |
| <tr> |
| <th>Name</th> |
| <th>Genre</th> |
| <th>Word count</th> |
| <th>Generator era</th> |
| </tr> |
| </thead> |
| <tbody> |
| <tr> |
| <td>EN AI ChatGPT generic</td> |
| <td>promo</td> |
| <td>71</td> |
| <td>2022-style ChatGPT</td> |
| </tr> |
| <tr> |
| <td>EN AI Claude structured</td> |
| <td>explainer</td> |
| <td>70</td> |
| <td>Claude Sonnet style</td> |
| </tr> |
| <tr> |
| <td>EN AI GPT-4 verbose</td> |
| <td>explainer</td> |
| <td>73</td> |
| <td>GPT-4 verbose pattern</td> |
| </tr> |
| <tr> |
| <td>EN AI promo mill</td> |
| <td>promo</td> |
| <td>72</td> |
| <td>High-volume promo writing</td> |
| </tr> |
| <tr> |
| <td>EN AI explainer</td> |
| <td>explainer</td> |
| <td>86</td> |
| <td>Pedagogical AI writing</td> |
| </tr> |
| <tr> |
| <td>EN AI listicle</td> |
| <td>promo</td> |
| <td>81</td> |
| <td>Top-N article structure</td> |
| </tr> |
| <tr> |
| <td>EN AI modern essay</td> |
| <td>creative</td> |
| <td>79</td> |
| <td>Modern Claude-4 style</td> |
| </tr> |
| <tr> |
| <td>EN AI analysis 2026</td> |
| <td>formal</td> |
| <td>88</td> |
| <td>Modern analyst voice</td> |
| </tr> |
| <tr> |
| <td>EN AI claude-4-style</td> |
| <td>explainer</td> |
| <td>82</td> |
| <td>Claude-4 explainer</td> |
| </tr> |
| </tbody> |
| </table> |
| <h3 id="ru-human-14-samples">RU human (14 samples)</h3> |
| <table> |
| <thead> |
| <tr> |
| <th>Name</th> |
| <th>Genre</th> |
| <th>Word count</th> |
| </tr> |
| </thead> |
| <tbody> |
| <tr> |
| <td>RU human casual</td> |
| <td>casual</td> |
| <td>47</td> |
| </tr> |
| <tr> |
| <td>RU human chat</td> |
| <td>casual</td> |
| <td>41</td> |
| </tr> |
| <tr> |
| <td>RU human news</td> |
| <td>formal</td> |
| <td>45</td> |
| </tr> |
| <tr> |
| <td>RU human review</td> |
| <td>casual</td> |
| <td>56</td> |
| </tr> |
| <tr> |
| <td>RU human blog</td> |
| <td>technical</td> |
| <td>56</td> |
| </tr> |
| <tr> |
| <td>RU human story</td> |
| <td>creative</td> |
| <td>67</td> |
| </tr> |
| <tr> |
| <td>RU human press release</td> |
| <td>formal</td> |
| <td>55</td> |
| </tr> |
| <tr> |
| <td>RU human court ruling</td> |
| <td>legal</td> |
| <td>49</td> |
| </tr> |
| <tr> |
| <td>RU human academic paper</td> |
| <td>academic</td> |
| <td>49</td> |
| </tr> |
| <tr> |
| <td>RU human interview transcript</td> |
| <td>formal</td> |
| <td>55</td> |
| </tr> |
| <tr> |
| <td>RU human personal email</td> |
| <td>business</td> |
| <td>71</td> |
| </tr> |
| <tr> |
| <td>RU human forum technical</td> |
| <td>technical</td> |
| <td>71</td> |
| </tr> |
| <tr> |
| <td>RU human parent note</td> |
| <td>casual</td> |
| <td>52</td> |
| </tr> |
| <tr> |
| <td>RU human product manual</td> |
| <td>technical</td> |
| <td>55</td> |
| </tr> |
| </tbody> |
| </table> |
| <h3 id="ru-ai-7-samples">RU AI (7 samples)</h3> |
| <table> |
| <thead> |
| <tr> |
| <th>Name</th> |
| <th>Genre</th> |
| <th>Word count</th> |
| </tr> |
| </thead> |
| <tbody> |
| <tr> |
| <td>RU AI ChatGPT generic</td> |
| <td>promo</td> |
| <td>52</td> |
| </tr> |
| <tr> |
| <td>RU AI explainer</td> |
| <td>explainer</td> |
| <td>48</td> |
| </tr> |
| <tr> |
| <td>RU AI promo mill</td> |
| <td>promo</td> |
| <td>54</td> |
| </tr> |
| <tr> |
| <td>RU AI listicle</td> |
| <td>promo</td> |
| <td>65</td> |
| </tr> |
| <tr> |
| <td>RU AI modern essay</td> |
| <td>creative</td> |
| <td>61</td> |
| </tr> |
| <tr> |
| <td>RU AI tech explainer 2026</td> |
| <td>technical</td> |
| <td>67</td> |
| </tr> |
| <tr> |
| <td>RU AI business analysis</td> |
| <td>formal</td> |
| <td>86</td> |
| </tr> |
| </tbody> |
| </table> |
| <h3 id="selection-rationale">Selection rationale</h3> |
<p>Hand-curated to expose known failure modes:</p>
<ul>
<li>Formal AI vs. formal human (the highest-overlap distribution)</li>
<li>Journalistic register (a RADAR-Vicuna FP source)</li>
<li>2026-era AI text (Claude-4, Gemini-2.5, GPT-4o style)</li>
<li>Bilingual coverage (EN+RU weighted equally in evaluation)</li>
</ul>
<p>All samples are released under the MIT license as part of the v1.11
tag.</p>
| <hr /> |
| <h2 id="appendix-b.-sapling-ai-cross-check-planned-free-tier">Appendix |
| B. Sapling AI cross-check (planned, free-tier)</h2> |
<p>The free-tier Sapling AI API (50 requests/day, no signup wall)
provides one external detector reference point on identical
inputs:</p>
| <div class="sourceCode" id="cb11"><pre |
| class="sourceCode bash"><code class="sourceCode bash"><span id="cb11-1"><a href="#cb11-1" aria-hidden="true" tabindex="-1"></a><span class="bu">export</span> <span class="va">SAPLING_API_KEY</span><span class="op">=</span><span class="st">"..."</span></span> |
| <span id="cb11-2"><a href="#cb11-2" aria-hidden="true" tabindex="-1"></a><span class="ex">python3</span> services/ml-services-hwai/scripts/bench_competitors.py <span class="at">--detector</span> sapling</span></code></pre></div> |
| <p>Output table (n=44, identical smoke battery):</p> |
| <table> |
| <thead> |
| <tr> |
| <th>Detector</th> |
| <th>EN AUROC</th> |
| <th>RU AUROC</th> |
| </tr> |
| </thead> |
| <tbody> |
| <tr> |
| <td>ContentOS ensemble (this work)</td> |
| <td>0.821</td> |
| <td>0.837</td> |
| </tr> |
| <tr> |
| <td>Sapling AI v1</td> |
| <td><em>to be measured</em></td> |
| <td><em>to be measured</em></td> |
| </tr> |
| </tbody> |
| </table> |
<p>GPTZero, Originality.ai, Winston AI, and Copyleaks do not offer
free-tier APIs suitable for reproducible comparison, so we do not
publish speculative numbers for those vendors. That refusal is itself
a methodological observation about the verifiability gap in commercial
AI detection.</p>
| <hr /> |
| <h2 id="appendix-c.-per-detector-calibration-parameters">Appendix C. |
| Per-detector calibration parameters</h2> |
<p>For each <code>(detector, language)</code> pair,
<code>calibration.json</code> (v1.11) contains:</p>
| <div class="sourceCode" id="cb12"><pre |
| class="sourceCode json"><code class="sourceCode json"><span id="cb12-1"><a href="#cb12-1" aria-hidden="true" tabindex="-1"></a><span class="fu">{</span></span> |
| <span id="cb12-2"><a href="#cb12-2" aria-hidden="true" tabindex="-1"></a> <span class="dt">"detectors"</span><span class="fu">:</span> <span class="fu">{</span></span> |
| <span id="cb12-3"><a href="#cb12-3" aria-hidden="true" tabindex="-1"></a> <span class="dt">"ai_detect"</span><span class="fu">:</span> <span class="fu">{</span></span> |
| <span id="cb12-4"><a href="#cb12-4" aria-hidden="true" tabindex="-1"></a> <span class="dt">"en"</span><span class="fu">:</span> <span class="fu">{</span></span> |
| <span id="cb12-5"><a href="#cb12-5" aria-hidden="true" tabindex="-1"></a> <span class="dt">"auroc_cal"</span><span class="fu">:</span> <span class="fl">0.977</span><span class="fu">,</span></span> |
| <span id="cb12-6"><a href="#cb12-6" aria-hidden="true" tabindex="-1"></a> <span class="dt">"auroc_raw"</span><span class="fu">:</span> <span class="fl">0.892</span><span class="fu">,</span></span> |
| <span id="cb12-7"><a href="#cb12-7" aria-hidden="true" tabindex="-1"></a> <span class="dt">"brier_raw"</span><span class="fu">:</span> <span class="fl">0.286</span><span class="fu">,</span></span> |
| <span id="cb12-8"><a href="#cb12-8" aria-hidden="true" tabindex="-1"></a> <span class="dt">"brier_cal"</span><span class="fu">:</span> <span class="fl">0.052</span><span class="fu">,</span></span> |
| <span id="cb12-9"><a href="#cb12-9" aria-hidden="true" tabindex="-1"></a> <span class="dt">"f1_at_thr"</span><span class="fu">:</span> <span class="fl">0.934</span><span class="fu">,</span></span> |
| <span id="cb12-10"><a href="#cb12-10" aria-hidden="true" tabindex="-1"></a> <span class="dt">"best_threshold"</span><span class="fu">:</span> <span class="fl">0.415</span><span class="fu">,</span></span> |
| <span id="cb12-11"><a href="#cb12-11" aria-hidden="true" tabindex="-1"></a> <span class="dt">"tpr_at_1pct_fpr"</span><span class="fu">:</span> <span class="fl">0.823</span><span class="fu">,</span></span> |
| <span id="cb12-12"><a href="#cb12-12" aria-hidden="true" tabindex="-1"></a> <span class="dt">"platt_a"</span><span class="fu">:</span> <span class="fl">-8.234</span><span class="fu">,</span></span> |
| <span id="cb12-13"><a href="#cb12-13" aria-hidden="true" tabindex="-1"></a> <span class="dt">"platt_b"</span><span class="fu">:</span> <span class="fl">1.142</span><span class="fu">,</span></span> |
| <span id="cb12-14"><a href="#cb12-14" aria-hidden="true" tabindex="-1"></a> <span class="dt">"n"</span><span class="fu">:</span> <span class="dv">800</span><span class="fu">,</span></span> |
| <span id="cb12-15"><a href="#cb12-15" aria-hidden="true" tabindex="-1"></a> <span class="dt">"calibrated_at"</span><span class="fu">:</span> <span class="st">"2026-04-26T13:44Z"</span></span> |
| <span id="cb12-16"><a href="#cb12-16" aria-hidden="true" tabindex="-1"></a> <span class="fu">},</span></span> |
| <span id="cb12-17"><a href="#cb12-17" aria-hidden="true" tabindex="-1"></a> <span class="dt">"ru"</span><span class="fu">:</span> <span class="fu">{</span> <span class="er">...</span> <span class="fu">},</span></span> |
| <span id="cb12-18"><a href="#cb12-18" aria-hidden="true" tabindex="-1"></a> <span class="fu">},</span></span> |
| <span id="cb12-19"><a href="#cb12-19" aria-hidden="true" tabindex="-1"></a> <span class="er">...</span></span> |
| <span id="cb12-20"><a href="#cb12-20" aria-hidden="true" tabindex="-1"></a> <span class="fu">}</span></span> |
| <span id="cb12-21"><a href="#cb12-21" aria-hidden="true" tabindex="-1"></a><span class="fu">}</span></span></code></pre></div> |
| <p>Full file at <code>services/ml-services-hwai/calibration.json</code> |
| (v1.11 tag).</p> |
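<p>A minimal sketch of how these parameters are typically consumed at
inference time, assuming the standard Platt parameterization
<code>p = 1 / (1 + exp(a*s + b))</code>; the authoritative mapping is
whatever the service's calibration code implements:</p>
<div class="sourceCode"><pre
class="sourceCode python"><code class="sourceCode python">import json
import math

def platt(score: float, a: float, b: float) -> float:
    """Map a raw detector score to a calibrated probability."""
    return 1.0 / (1.0 + math.exp(a * score + b))

with open("services/ml-services-hwai/calibration.json") as f:
    cal = json.load(f)["detectors"]["ai_detect"]["en"]

p = platt(0.60, cal["platt_a"], cal["platt_b"])
# Assumes best_threshold applies to the calibrated probability.
label = "ai" if p &gt;= cal["best_threshold"] else "human"</code></pre></div>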
| <hr /> |
| <h2 id="appendix-d.-compute-timing">Appendix D. Compute timing</h2> |
| <table> |
| <colgroup> |
| <col style="width: 25%" /> |
| <col style="width: 25%" /> |
| <col style="width: 25%" /> |
| <col style="width: 25%" /> |
| </colgroup> |
| <thead> |
| <tr> |
| <th>Stage</th> |
| <th>Single-thread time</th> |
| <th>8-core time</th> |
| <th>Memory peak</th> |
| </tr> |
| </thead> |
| <tbody> |
| <tr> |
| <td>Corpus rebuild (8 sources)</td> |
| <td>12 sec</td> |
| <td>12 sec</td> |
| <td>800 MB</td> |
| </tr> |
| <tr> |
| <td>ai_detect calibration (n=800)</td> |
| <td>90 min</td> |
| <td>90 min</td> |
| <td>4 GB</td> |
| </tr> |
| <tr> |
| <td>desklib calibration (n=800)</td> |
| <td>27 min</td> |
| <td>27 min</td> |
| <td>6 GB</td> |
| </tr> |
| <tr> |
| <td>radar calibration (n=800)</td> |
| <td>90 min</td> |
| <td>90 min</td> |
| <td>5 GB</td> |
| </tr> |
| <tr> |
| <td>binoculars calibration (n=800)</td> |
<td>not run (dropped from EN ensemble)</td>
| <td>not run</td> |
| <td>n/a</td> |
| </tr> |
| <tr> |
| <td>Regression test gate</td> |
| <td>0.05 sec</td> |
| <td>0.05 sec</td> |
| <td>100 MB</td> |
| </tr> |
| <tr> |
| <td>Smoke evaluation (n=44)</td> |
| <td>50 min</td> |
| <td>50 min</td> |
| <td>12 GB</td> |
| </tr> |
| <tr> |
| <td>Adversarial evaluation (n=300)</td> |
| <td>22 min</td> |
| <td>22 min</td> |
| <td>12 GB</td> |
| </tr> |
| </tbody> |
| </table> |
<p>The full v1.11 release cycle took ~3 hours wall-clock on a Hetzner
CX43, at a marginal cost of roughly $0.05 in Hetzner compute time. The
same workload would have cost $50–200 on commercial GPU inference
platforms.</p>
| <hr /> |
| <h2 id="appendix-e.-release-notes-v1.9-v1.10-v1.11">Appendix E. Release |
| notes (v1.9 → v1.10 → v1.11)</h2> |
| <h3 id="v1.9-baseline-2026-04-22">v1.9 (baseline, 2026-04-22)</h3> |
| <ul> |
| <li>7-source corpus (no GPT-4o, no genre-targeted, no LiteLLM-gen)</li> |
| <li>Original RADAR-balanced weights (binoculars-dominant)</li> |
| <li>EN ensemble OOD: 0.524 (failed SHIP)</li> |
| <li>RU ensemble OOD: 0.827 (SHIP)</li> |
| </ul> |
| <h3 id="v1.10-2026-04-24">v1.10 (2026-04-24)</h3> |
| <ul> |
| <li>Added LiteLLM EN+RU gen + GPT-4o EN gen (4 sources, +3000 |
| samples)</li> |
<li>Tuned ensemble weights to be AUROC-proportional (desklib-dominant
on EN; see the sketch after this list)</li>
| <li>Tightened UNC bands (0.45/0.55 EN, 0.45/0.65 RU)</li> |
| <li>Dropped Binoculars from EN ensemble (Gap 7, latency 60s → 1.2s)</li> |
| <li>Adversarial AUROC EN: 0.984 (paired with cal_test in-distribution |
| human)</li> |
<li>EN ensemble OOD: 0.802 (warm); 0.897 (cold start, inflated by
desklib bias)</li>
| <li>RU ensemble OOD: 0.847</li> |
| </ul> |
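<p>A hypothetical sketch of the two v1.10 mechanics above, assuming
weights proportional to each detector's AUROC margin over chance and
the EN UNC band of 0.45/0.55 (the exact scheme is defined by the
weight-tuning code, and the numbers below are illustrative):</p>
<div class="sourceCode"><pre
class="sourceCode python"><code class="sourceCode python">def auroc_weights(aurocs: dict) -> dict:
    """Weight detectors by their AUROC margin over chance (0.5)."""
    margins = {d: max(a - 0.5, 0.0) for d, a in aurocs.items()}
    total = sum(margins.values()) or 1.0
    return {d: m / total for d, m in margins.items()}

def verdict(p: float, lo: float = 0.45, hi: float = 0.55) -> str:
    """EN UNC band: probabilities inside (lo, hi) abstain as UNC."""
    if p &lt;= lo:
        return "HUMAN"
    if p &gt;= hi:
        return "AI"
    return "UNC"

# Illustrative AUROCs and calibrated scores, not the shipped values.
w = auroc_weights({"desklib": 0.91, "ai_detect": 0.87, "radar": 0.78})
scores = {"desklib": 0.72, "ai_detect": 0.64, "radar": 0.55}
p_ensemble = sum(w[d] * s for d, s in scores.items())
print(verdict(p_ensemble))  # "AI" for these inputs</code></pre></div>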
| <h3 id="v1.11-this-release-2026-04-26">v1.11 (this release, |
| 2026-04-26)</h3> |
| <ul> |
| <li>Added genre-targeted EN AI generation (200 samples × 4 weak |
| genres)</li> |
| <li>Recalibrated ai_detect + desklib on expanded 8,540 train |
| samples</li> |
| <li>desklib EN cal_test AUROC: 0.893 → 0.913 (+0.020)</li> |
| <li>ai_detect RU cal_test AUROC: 0.732 → 0.756 (+0.024)</li> |
| <li>EN ensemble OOD: 0.821 (+0.019 vs v1.10)</li> |
| <li>EN ensemble Wrong rate: 8% → 4% (halved)</li> |
| <li>RU ensemble OOD: 0.837 (-0.010 vs v1.10, within noise)</li> |
| <li>Per-genre detector contribution analyzer added</li> |
| <li>Brand voice ingestion module shipped (Block 1)</li> |
<li>/citation-integrity endpoint shipped (Block 7; a step toward L3)</li>
| </ul> |
| </body> |
| </html> |
|
|