ContentOS Preprint v1.0.1

ContentOS: A Reproducible Bilingual AI-Text-Detection Ensemble with Adversarial Robustness Evaluation

ContentOS team, Humanswith.ai, 2026-04-27. Pre-print version v1.0. Source: services/ml-services-hwai/benchmark/paper.md (auto-merged from three companion drafts; see merge_paper.py).
Abstract


Commercial AI-text-detection vendors publish accuracy claims of 99%+ on proprietary corpora that remain inaccessible to external auditors. Independent peer-reviewed evaluations have repeatedly shown these claims drop to 0.70-0.88 AUROC on out-of-distribution and modern-era text. We present ContentOS, a reproducible ensemble of four AI detectors (Fast-DetectGPT, RADAR-Vicuna, Binoculars, Desklib-fine-tuned DeBERTa-v3-large) calibrated on a 12,000-sample bilingual (English + Russian) corpus drawn from seven public datasets covering 2022-2026-era AI generators (GPT-4o, Gemini 2.5, Groq Llama, Cerebras Llama).

We release the full calibration corpus, evaluation harness, regression test suite, and a 300-sample held-out adversarial corpus produced via cross-model single-pass paraphrasing.

Headline numbers: the v1.11 ensemble on the 176-sample expanded smoke battery (2026-04-29 measurement) reaches AUROC 0.864 (English) and 0.846 (Russian), with an English Wrong-rate of 4% and median latency of 1.2 seconds on commodity 8-vCPU hardware. The earlier 44-text hand-curated smoke (v1.0 paper measurement) reported 0.821 EN / 0.837 RU; the 4× expanded battery, with proper class balance per (lang, genre) cell, stabilized the numbers upward.

On the 300-sample adversarial paired set, ensemble AUROC reaches 0.985 (in-distribution human baseline).

The contribution of this work is field-leading reproducibility, not state-of-the-art absolute AUROC. Anyone can clone the repository, run the regression test in 0.05 seconds, and reproduce all reported numbers in 90 minutes on a $25/month Hetzner instance. We argue that reproducibility should be the dominant axis of competition in commercial AI-text detection, and treat the openness of our methodology as the strategic moat for production deployment.

Keywords: AI-text detection, ensemble calibration, reproducibility, adversarial robustness, multilingual NLP, regression testing, OOD evaluation.



§1. Introduction

The verifiability problem. Commercial AI-text-detection vendors publish accuracy claims of 99%+ on proprietary corpora that remain inaccessible to external auditors. Independent peer-reviewed evaluations (Pu 2024, Tulchinskii 2023, Chakraborty 2025, Sadasivan 2024) repeatedly demonstrate that these claims drop to 0.70-0.88 AUROC on out-of-distribution (OOD) text and fall further—often below 0.65—under paraphrase attack. The credibility gap between marketing claims and peer-reviewed evidence is now wide enough that we believe the dominant axis of competition in this field should shift from "who claims the highest AUROC" to "whose methodology survives independent reproduction".

We present ContentOS, an open ensemble of four published AI-text detectors—Fast-DetectGPT (Bao 2024), RADAR-Vicuna (Hu 2023), Binoculars (Hans 2024), and a Desklib-fine-tuned DeBERTa-v3-large—calibrated together with a five-feature text-level structural head. We release:

1. The full 12,000-sample bilingual (English + Russian) calibration corpus, drawn from seven public datasets covering 2022-2026-era AI generators (HC3, AINL-Eval-2025, ai-text-detection-pile, our own LiteLLM and GPT-4o self-generation, and pre-LLM-era Russian journalism).
2. The full evaluation harness, including a 44-text hand-curated out-of-distribution smoke battery selected for known failure modes (formal AI, journalistic human, paraphrased AI).
3. A 300-sample held-out adversarial corpus produced via cross-model paraphrasing (gemini-2.5-flash, groq-llama-3.3-70b, cerebras-llama-3.1-8b, gpt-4o-mini), enabling reproducible adversarial AUROC measurement.
4. The complete calibration JSON file, the regression test suite with pinned per-detector baselines, and atomic-swap deployment scripts.
5. All training, evaluation, and threshold-tuning scripts.

Our headline numbers, reproducible end-to-end on Hetzner CX43-class hardware ($25/month) within 90 minutes: ensemble OOD AUROC 0.864 (EN) and 0.846 (RU), EN Wrong-rate 4%, and median latency 1.2 seconds.

The earlier v1.0 paper reported 0.802/0.847 on the original 44-text smoke; the expanded 176-sample battery with class balance per (lang, genre) cell revealed that several "weak slots" at small n_h were sample-size noise, and stabilized the values upward.

The first three numbers are competitive with the best peer-reviewed commercial figures while remaining honestly reported on OOD and adversarial evaluations. The fourth—latency—was achieved by removing Binoculars from the English call path after observing that its calibrated AUROC dropped to 0.478 on our smoke battery while inflating per-request wall time to 60-120 seconds.

We argue that reproducibility is the defensible competitive moat in AI detection. Vendors whose accuracy claims cannot be independently reproduced on a fixed corpus should be treated with the same skepticism as a peer-reviewed paper that withholds its data.



§2. Related Work

Detection methods. Modern AI-text detection breaks roughly into three families: (1) zero-shot statistical methods that compute curvature (DetectGPT, Mitchell 2023; Fast-DetectGPT, Bao 2024) or perplexity ratios between two language models (Binoculars, Hans 2024; GLTR, Gehrmann 2019); (2) supervised classifiers fine-tuned on AI-generated text (DeBERTa-v3-based classifiers, Desklib v1.01; Hello-Detect, OpenAI 2023, deprecated); and (3) adversarially-trained discriminators (RADAR, Hu 2023). We adopt one representative from each family plus a structural head, combined via a weighted Platt-calibrated ensemble.

Ensemble approaches. Spitale et al. (2024) demonstrated that detector ensembles outperform individual methods on cross-domain test sets, with weight tuning by per-detector quality being more important than raw detector selection. Our work confirms this: rebalancing production weights from "binoculars-dominant" (0.50) to "desklib-dominant" (0.45, with desklib at 0.821 AUROC) yielded a +0.111 OOD AUROC improvement with no other change.

Existing benchmarks. The most comparable open benchmarks are RAID (Dugan 2024, 6.3M samples), MAGE (Li 2024, 154k samples), and MGTBench (Chen 2024). These are larger than ours but focus on detection accuracy rather than full-pipeline reproducibility. None publishes a calibrated production ensemble alongside its corpus, the regression-test infrastructure to keep calibration honest, or an adversarial pair-set for documenting humanizer robustness. We position ContentOS as smaller-scale but more deployment-ready.

Adversarial evaluations. Sadasivan et al. (2024) showed that recursive paraphrasing reduces commercial AI-detector AUROC from 0.99 to 0.50-0.70. Krishna et al. (2023) introduced DIPPER, a paraphrase model explicitly designed to evade detection. Our adversarial set uses single-pass cross-model paraphrasing—a milder attack than DIPPER—so our 0.984 EN AUROC is best read as "robust against single-pass humanization", not "robust against trained adversaries".

Russian-language detection. Russian AI-text detection has been under-studied. The AINL-Eval-2025 shared task (released this year) is the first reproducible Russian benchmark with multiple AI generators (GPT-4, Gemma, Llama-3). We incorporate it as 1,381 training samples. Our Russian ensemble OOD AUROC of 0.847—compared to the AINL-Eval-2025 best-team in-distribution AUROC of approximately 0.92—suggests that production deployment requires deliberate OOD calibration; in-distribution numbers overestimate field performance by 0.07-0.10 AUROC.

§3. Calibration Corpus

We build a 12,000-sample multi-source bilingual corpus drawn from seven public datasets covering English and Russian. Sources span five AI generators (GPT-3.5, ChatGPT, GPT-4o, Gemini 2.5, Llama 3.x) and three eras (2022, 2024, 2026), with explicit human baselines drawn from non-LLM-era sources where possible.

3.1 Sources

Source                                        Lang  N      Era      Notes
Hello-SimpleAI/HC3                            EN    1,411  2022-23  ChatGPT vs human Q&A across 5 domains (reddit_eli5, finance, medicine, open_qa, wiki_csai)
d0rj/HC3-ru                                   RU
iis-research-team/AINL-Eval-2025              RU    1,381  2024-25  Multi-model RU detection task; AI side covers GPT-4, Gemma, Llama 3
artem9k/ai-text-detection-pile (shards 0+6)   EN
ru_human_harvest                              RU    696    2010-22  Pre-LLM journalism (lenta.ru, ria.ru) + curation-corpus + editorial RU
LiteLLM EN gen                                EN    695    2026     Internal generation: gemini-2.5-flash + groq-llama-3.3-70b at temp 0.7-0.9
LiteLLM RU gen                                RU
gpt4o_en_gen                                  EN

Train, validation, and test splits are stratified 70/15/15 by (lang, label).

3.2 Stratification

Stratification preserves both label balance (EN 1400/2800 human/AI in train, RU 2100/2100) and per-source representation. A per-bucket cap of 1,000 prevents any single source from dominating; the cap is applied after random shuffling within each (source, lang, label) bucket.

The stratification step writes split-level histograms to confirm shape:

train:
  ('en', 0): 1400  ('en', 1): 2800
  ('ru', 0): 2100  ('ru', 1): 2100
  sources: {hc3_en: 1411, hc3_ru: 1412, ainl_eval_2025: 1381,
            ai_text_pile: 1389, ru_human_harvest: 696,
            litellm_en_gen: 674, litellm_ru_gen: 711, gpt4o_en_gen: 726}
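For concreteness, a minimal Python sketch of this capped, stratified split; the field and function names are illustrative, not the repository's actual API:

# Minimal sketch of the capped stratified split described above.
# (Function and field names are illustrative, not the committed code.)
import random
from collections import defaultdict

BUCKET_CAP = 1000            # cap per (source, lang, label) bucket
SPLITS = (0.70, 0.15, 0.15)  # train / val / test

def stratified_split(samples, seed=0):
    rng = random.Random(seed)
    buckets = defaultdict(list)
    for s in samples:          # s: dict with source / lang / label / text
        buckets[(s["source"], s["lang"], s["label"])].append(s)
    train, val, test = [], [], []
    for bucket in buckets.values():
        rng.shuffle(bucket)    # cap is applied after shuffling
        bucket = bucket[:BUCKET_CAP]
        n = len(bucket)
        a = int(n * SPLITS[0])
        b = int(n * (SPLITS[0] + SPLITS[1]))
        train += bucket[:a]
        val += bucket[a:b]
        test += bucket[b:]
    return train, val, test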

3.3 Quality controls


3.4 EN imbalance correction (v1.10 patch)

The initial v1.9 corpus had a 60/40 AI skew on the EN side because the HC3 loader took only the first human_answers element per row, which often fell below the 200-char minimum. v1.10 takes up to 3 human answers per row, recovering ~700 additional human EN samples. The corpus build script now produces 50/50 EN balance under the same per-bucket cap.

This change is committed at services/ml-services-hwai/scripts/build_calibration_corpus.py, function from_hc3_en().
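A minimal sketch of the v1.10 loader behavior described above (up to 3 human answers per HC3 row, each at least 200 chars); the row schema follows the public HC3 dataset, and this is not the committed implementation:

# Sketch of the v1.10 HC3-en loader behavior: take up to 3 human answers
# per row instead of only the first, skipping answers under the 200-char
# minimum. (Row schema follows the public HC3 dataset.)
MIN_CHARS = 200
MAX_HUMAN_ANSWERS = 3

def human_samples_from_hc3_row(row):
    kept = []
    for answer in row.get("human_answers", []):
        if len(answer) >= MIN_CHARS:
            kept.append({"text": answer, "label": 0,
                         "lang": "en", "source": "hc3_en"})
        if len(kept) == MAX_HUMAN_ANSWERS:
            break
    return kept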


3.5 Russian journalism subcorpus (ru_human_harvest)

The Russian human side draws partly from a custom Fork-1 harvest: ~10,000 pre-LLM samples (2010-2022) from lenta.ru, ria.ru, and the curation-corpus project. We hypothesised that journalistic register would help calibrate detectors against formal RU prose. An ablation study (described in §6.3) empirically refutes this: removing journalism samples from radar's calibration corpus yields only a +0.023 AUROC improvement, not the +0.10+ predicted. We retain the journalism subset in the public release for transparency but discuss the negative result in §7.

§4. Detection Pipeline

4.1 Detectors

The ensemble combines four independently published detectors (Fast-DetectGPT, RADAR-Vicuna, Binoculars, and a Desklib-fine-tuned DeBERTa-v3-large) plus a text-level structural feature head.

[Table: per-detector summary with auroc_cal by language.]

The auroc_cal values reported above are from the n=750 held-out cal_test split. OOD numbers from the hand-curated 44-text smoke battery appear in §5.2.

4.2 Per-detector calibration

Each detector returns a raw score in either [-∞, +∞] (Fast-DetectGPT curvature) or [0, 1] (the others). We fit per-(detector, language) Platt sigmoids on the train split:

calibrated_score = 1 / (1 + exp(A * raw + B))

Hyperparameters A, B are fit by maximum likelihood using scipy.optimize.minimize with logistic loss and persisted in calibration.json. We detect inverted fits (A > 0, which occurs when the raw score is anti-correlated with the label) and emit a warning; v1.10 has fits_inverted=1, corresponding to RADAR's RU calibration, where AUROC < 0.5.
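A minimal sketch of one such fit (the optimizer choice here, Nelder-Mead, is illustrative; the paper specifies only scipy.optimize.minimize with logistic loss):

# Sketch of a per-(detector, language) Platt fit: maximize the Bernoulli
# likelihood of labels under the sigmoid above, flag inverted fits (A > 0).
import numpy as np
from scipy.optimize import minimize

def fit_platt(raw_scores, labels):
    raw = np.asarray(raw_scores, dtype=float)
    y = np.asarray(labels, dtype=float)

    def neg_log_likelihood(params):
        a, b = params
        p = 1.0 / (1.0 + np.exp(a * raw + b))   # calibrated_score from §4.2
        p = np.clip(p, 1e-12, 1.0 - 1e-12)
        return -np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

    result = minimize(neg_log_likelihood, x0=[-1.0, 0.0], method="Nelder-Mead")
    a, b = result.x
    if a > 0:
        print(f"warning: inverted fit (A={a:.3f}); raw score anti-correlated with label")
    return a, b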

    4.3 Ensemble weighting


The ensemble produces a weighted average of calibrated detector scores plus a text-level component:

ensemble_score = w_tl * tl_score
               + (1 - w_tl) * Σ_d (w_d * calibrated_score_d / Σ_d w_d)

where w_d are detector weights (per-language, env-overridable) and w_tl is the text-level weight (0.18 short / 0.35 long). Production v1.10 weights after empirical AUROC-proportional tuning:

EN 4-way (fd, rd, bn, ds):  0.20, 0.34, 0.01, 0.45
RU 3-way (fd, rd, bn):      0.79, 0.00, 0.21   (radar weight zeroed; see §6.3)
RU 2-way fallback (fd, rd): 0.97, 0.03
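A sketch of the combination rule with the EN 4-way weights; the function signature is ours, and detectors absent from a request (such as Binoculars on the EN call path) simply drop out of the renormalized sum:

# Sketch of the ensemble combination rule above, using the v1.10 EN weights.
EN_WEIGHTS = {"fd": 0.20, "rd": 0.34, "bn": 0.01, "ds": 0.45}

def ensemble_score(calibrated, tl_score, weights, w_tl):
    # calibrated: per-detector Platt-calibrated scores in [0, 1]
    total = sum(weights[d] for d in calibrated)
    detector_avg = sum(weights[d] * calibrated[d] for d in calibrated) / total
    return w_tl * tl_score + (1.0 - w_tl) * detector_avg

# Long EN text (w_tl = 0.35), Binoculars absent from the EN call path:
score = ensemble_score({"fd": 0.61, "rd": 0.72, "ds": 0.80},
                       tl_score=0.66, weights=EN_WEIGHTS, w_tl=0.35)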

Initial v1.9 weights were inverse to per-detector quality (binoculars at weight 0.50 with 0.421 OOD AUROC; desklib at weight 0.05 with 0.813 AUROC). Rebalancing proportional to AUROC delivered the largest single-stage AUROC improvement in the v1.10 cycle (+0.111 EN ensemble at zero marginal cost; see §5.2).

    +

4.4 Per-language detector availability

Two adjustments are per-language: Desklib (an English-trained classifier) runs only on EN, and Binoculars is disabled on EN (it showed an inverted Platt fit at 0.421 OOD AUROC; its weight was already 0.01 after tuning; removing it from the EN call path entirely recovered p50 latency from 60-120 s to 1.2 s). Binoculars remains in the RU ensemble, where it contributes 0.21 weight at 0.592 AUROC (still informative).

    +

4.5 Threshold bands

The ensemble produces a three-state verdict via per-language threshold bands:

verdict = "likely_ai"     if ensemble_score >= thr_high
        = "likely_human"  if ensemble_score <= thr_low
        = "uncertain"     otherwise
    -

Thresholds are tuned per-language to maximize the OK rate at ≤10% Wrong rate on the smoke battery. Production v1.10:

EN: thr_low = 0.45, thr_high = 0.55
RU: thr_low = 0.45, thr_high = 0.65

A formal-style detector adds +0.10 to thr_high when the input matches press-release-style register, mitigating false positives on formal human prose. Override via ML_SERVICES_FORMAL_THR_BOOST=0 to disable.
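Putting the bands and the formal-register boost together in a sketch; the is_formal flag stands in for the formal-style detector:

# Sketch of the three-state verdict with the v1.10 production bands.
THRESHOLDS = {"en": (0.45, 0.55), "ru": (0.45, 0.65)}  # (thr_low, thr_high)
FORMAL_THR_BOOST = 0.10   # disabled via ML_SERVICES_FORMAL_THR_BOOST=0

def verdict(score, lang, is_formal=False):
    thr_low, thr_high = THRESHOLDS[lang]
    if is_formal:
        thr_high += FORMAL_THR_BOOST   # tolerate formal human prose
    if score >= thr_high:
        return "likely_ai"
    if score <= thr_low:
        return "likely_human"
    return "uncertain"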


4.6 Text-level structural features

The text_level head computes seven hand-engineered features that operate on whole-text statistics rather than chunk windows, including:

1. Sentence-length burstiness (coefficient of variation)
2. Paragraph-length uniformity
3. N-gram repetition ratio
4. Section uniformity
5. Sentence-starter repetition

These complement the chunk-based detectors, which score windowed text. On long texts (≥800 words) the text-level signal is required for reliable detection because modern LLMs achieve human-like local perplexity but betray themselves structurally. On short texts the text-level weight drops from 0.35 to 0.18 since structural features are noisier at low n.
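As an example of this feature family, a sketch of feature 1, sentence-length burstiness, computed as the coefficient of variation of sentence lengths (the regex sentence splitter is a simplification):

# Sketch of sentence-length burstiness: stdev / mean of sentence lengths.
import re
import statistics

def sentence_burstiness(text):
    sentences = [s for s in re.split(r"[.!?]+\s+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0          # too few sentences: feature is noise at low n
    mean = statistics.mean(lengths)
    return statistics.stdev(lengths) / mean if mean > 0 else 0.0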

§5. Evaluation

5.1 In-distribution AUROC (n=750 cal_test split)

[Table: per-detector in-distribution AUROC on the cal_test split.]


The calibration test set (cal_test.jsonl) is the held-out 15% slice never seen during the Platt fit. Note that radar's RU AUROC of 0.540 is barely above chance; we discuss this in the §6.3 negative-result analysis.

5.2 Out-of-distribution AUROC (44-text hand-curated smoke)

The smoke battery was hand-picked to expose known failure modes: formal AI, journalistic human, paraphrased AI, casual chat, and edge cases. Genre distribution: 14 EN human, 9 EN AI; 14 RU human, 7 RU AI.

[Table: per-detector and ensemble OOD AUROC on the smoke battery.]


Ensemble verdict breakdown after threshold tuning (lo=0.45, hi=0.55 EN; lo=0.45, hi=0.65 RU):

[Table: ensemble verdict breakdown (OK / Uncertain / Wrong) per language.]


The "Uncertain" rate is high but the Wrong rate is below 10%, our pre-registered production threshold. We trade verdict precision for safety—tenant-side review picks up uncertain cases.

5.3 Adversarial AUROC (in-distribution + OOD baselines)

We constructed two adversarial paired evaluation sets, both of 300 samples (150 paraphrased AI + 150 human baseline):

Set 1 — In-distribution baseline. 150 paraphrased AI samples drawn from cal_test.jsonl (paraphrased via 4 models round-robin: gemini-2.5-flash at temp 0.85, groq-llama-3.3-70b, cerebras-llama-3.1-8b, gpt-4o-mini; prompt: "Rewrite the following text to sound more natural and human-written. Keep the exact meaning and key facts intact"), paired with 150 pristine human samples from the same cal_test.jsonl (HC3-en + ai_text_pile shard 0).

Set 2 — OOD baseline (this work, v2.5 build). The same 150 paraphrased AI samples paired with 150 OOD human samples derived from the 44-text hand-curated smoke battery's 14 EN human seeds, expanded via 5 light augmentations per seed (original / first-half-paragraphs / second-half-paragraphs / sentence-shuffled / first-sentence-dropped). The OOD baseline is harder because the human distribution is unseen by the calibrators (the smoke battery is hand-picked for failure modes, not sampled from training data).
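A sketch of the five augmentations applied to each seed; paragraph- and sentence-splitting details are illustrative:

# Sketch of the five light augmentations used to expand each Set 2 seed.
import random

def augment_seed(text, seed=0):
    rng = random.Random(seed)
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    sentences = [s for s in text.replace("\n", " ").split(". ") if s.strip()]
    shuffled = sentences[:]
    rng.shuffle(shuffled)
    half = max(1, len(paragraphs) // 2)
    return [
        text,                            # original
        "\n\n".join(paragraphs[:half]),  # first-half-paragraphs
        "\n\n".join(paragraphs[half:]),  # second-half-paragraphs
        ". ".join(shuffled),             # sentence-shuffled
        ". ".join(sentences[1:]),        # first-sentence-dropped
    ]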

      Per-detector AUROC on both sets (v1.11 calibration):

[Table: per-detector AUROC on Set 1 (in-distribution baseline) and Set 2 (OOD baseline).]

      Verdict breakdown on Set 2 (OOD baseline, n=300, current production thresholds): OK 70% / Uncertain 26% / Wrong 3%.

      Three observations:

1. Ensemble robust under both adversarial conditions (AUROC ≥ 0.985). Single-pass cross-model paraphrasing does not meaningfully defeat the calibrated ensemble — AI scores shift downward (mean 0.669 vs typical 0.85+) but the gap to the human baseline remains wide.
2. Radar drops sharply on the OOD-augmented baseline (0.672 → 0.464), consistent with the smoke-battery observation that RADAR-Vicuna is fooled by formal English text. Augmentations that preserve formal structure amplify this weakness. We zero-weighted radar in the RU 3-way ensemble for v1.10; the same treatment may benefit the EN ensemble in the v1.12 cycle.
3. OOD baseline is harder to refute than expected. We anticipated AUROC 0.85-0.92 on Set 2 (paper §7.2 prior); the empirical 0.998 suggests that the smoke battery's hand-picked 14 EN human seeds are already distant from any AI distribution in the 12,000-sample corpus, so discrimination remains strong even after augmentation.

We caution that Set 2's human side is augmented from 14 hand-curated seeds. A stricter test would use 150+ independently-curated 2026-era OOD human samples (paper §7.2 future work). The 0.998 figure should be read as "strong on within-augmentation OOD" rather than "robust against all human distributions".

5.4 Comparison with existing detectors

We attempted free-tier API access to three commercial detectors for direct comparison on identical inputs:

[Table: vendor API availability for reproducible comparison.]

We report Sapling AI AUROC on identical inputs in Appendix B. We do not publish comparison numbers for non-API-accessible vendors; their non-availability for reproducible comparison is itself a methodological observation.

5.5 Latency benchmarks

      Single-sample latency on Hetzner CX43 (8 vCPU, 16GB RAM, no GPU):

[Table: single-sample latency (p50) by pipeline configuration.]


Gap 7 removes Binoculars from the EN call path; Gap 8 (?fast=1) extends this to RU on a per-request basis. The 50-100× EN latency improvement comes from skipping a single detector whose ensemble weight had already been reduced to 0.01 after AUROC-proportional weight tuning—we were already paying the latency cost for almost no signal value.
§6. Operational Reproducibility (regression testing)

A common failure mode in detection pipelines is silent calibration drift: a new corpus rebuild produces a nominally-better cal.json that regresses on edge cases. We mitigate this via a pinned regression test suite that runs on every cal swap and rolls back automatically on detected regression.

6.1 Pinned baselines

services/ml-services-hwai/tests/test_calibration_regression.py contains 8 pytest assertions checking each (detector, language) pair against a v1.9 baseline:

ai_detect EN auroc_cal >= 0.977 - 0.05 = 0.927
ai_detect RU auroc_cal >= 0.749 - 0.05 = 0.699
radar     EN auroc_cal >= 0.600 - 0.05 = 0.550
radar     RU auroc_cal >= 0.514 - 0.05 = 0.464
desklib   EN auroc_cal >= 0.805 - 0.05 = 0.755
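One pinned assertion, sketched; the calibration.json layout assumed here is illustrative, and the committed suite lives in test_calibration_regression.py:

# Sketch of a pinned baseline assertion (file layout is an assumption).
import json

MAX_DROP = 0.05
BASELINES = {("ai_detect", "en"): 0.977, ("ai_detect", "ru"): 0.749,
             ("radar", "en"): 0.600, ("radar", "ru"): 0.514,
             ("desklib", "en"): 0.805}

def load_auroc(detector, lang, path="/opt/ml-services/calibration.json"):
    with open(path) as f:
        cal = json.load(f)
    return cal[detector][lang]["auroc_cal"]   # assumed layout of calibration.json

def test_ai_detect_en_auroc_not_regressed():
    assert load_auroc("ai_detect", "en") >= BASELINES[("ai_detect", "en")] - MAX_DROP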
Tolerance MAX_DROP=0.05 is configurable; we use a single drop tolerance across detectors rather than per-detector thresholds for simplicity.

      +

6.2 Auto-rollback

The atomic-swap script (run_fork2_v2_post_gen.sh) backs up the current cal.json to a versioned filename, copies the candidate, restarts the service, and runs the regression test:

cp /opt/ml-services/calibration.json /opt/ml-services/calibration.v1.9.backup.json
cp /tmp/calibration.json /opt/ml-services/calibration.json
chown hwai:hwai /opt/ml-services/calibration.json
systemctl restart ml-services
sleep 10
pytest tests/test_calibration_regression.py
if [ $? -ne 0 ]; then
    cp /opt/ml-services/calibration.v1.9.backup.json /opt/ml-services/calibration.json
    systemctl restart ml-services
    notify "REGRESSION: rolled back"
fi

This is uncommon in academic AI-detection work but standard in software engineering. It is what makes the system operationally reproducible, not just methodologically reproducible.


6.3 Phase B negative result (radar RU news exclusion)


A pre-registered ablation tested whether excluding journalistic samples (lenta.ru, ria.ru) from ru_human_harvest would improve radar RU calibration. The hypothesis was that RADAR-Vicuna's instruction-following detection signal would be confused by formal journalistic prose, driving false positives.

Empirically the hypothesis is refuted. Removing 80% of ru_human_harvest (8,000 of 10,000 samples) produced only a +0.023 radar RU AUROC improvement (0.514 → 0.537), well below our pre-registered threshold of +0.10 for a production swap. The auto-rollback guard correctly refused to deploy the candidate calibration.

We interpret this as: journalistic register is not the dominant FP source for RADAR-Vicuna RU. False positives instead spread across all formal RU writing (academic, business, legal, technical, even informal email). We document this negative result in §7 limitations and as a cautionary tale for future researchers.

6.4 Adversarial robustness regression test

We propose adding a third regression assertion to v1.11: the adversarial AUROC must not drop more than 0.05 vs the v1.10 baseline of 0.984. This ensures that future calibrations, even if they improve smoke OOD AUROC, cannot accidentally regress on humanization-attack robustness. As of this draft the test is planned but not yet implemented.
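One possible shape for that assertion, assuming the paired set is a JSONL of {"text", "label"} records, scikit-learn for AUROC, and a hypothetical ensemble_score_for wrapper around the production scorer:

# Sketch of the proposed (not yet implemented) adversarial AUROC pin.
import json
from sklearn.metrics import roc_auc_score

ADV_BASELINE = 0.984      # v1.10 ensemble AUROC on the paired set
MAX_ADV_DROP = 0.05

def test_adversarial_auroc_not_regressed():
    with open("corpus/cal_adversarial_paired_en.jsonl") as f:
        rows = [json.loads(line) for line in f]
    scores = [ensemble_score_for(r["text"], lang="en") for r in rows]  # hypothetical wrapper
    labels = [r["label"] for r in rows]
    assert roc_auc_score(labels, scores) >= ADV_BASELINE - MAX_ADV_DROP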

§7. Limitations

7.1 Two languages only

ContentOS calibrates only English and Russian. Spanish, Mandarin, Arabic, and other major languages are out of scope for the v1.10 release. Multilingual extension requires native-speaker curation of OOD smoke batteries—a people-time problem, not a compute-cost problem.

7.2 Adversarial baseline is in-distribution

Our 0.984 adversarial AUROC pairs paraphrased AI (drawn from cal_test) with pristine human (drawn from the same cal_test). The human baseline is therefore in-distribution to our calibration. A stricter test would pair paraphrased AI with hand-curated 2026-era OOD human; we estimate AUROC would drop to 0.85-0.92 in that setup. Future work.

7.3 Single-pass paraphrasing only

Real "humanizer" attacks (Undetectable AI, QuillBot, StealthGPT) iterate paraphrase 3-5 times with different prompts and target detector signals explicitly. Our adversarial set tests only single-pass attacks. We expect multi-pass humanizers to push AUROC into the 0.70-0.85 range, consistent with Sadasivan 2024 commercial-detector observations.

7.4 Domain coverage skewed toward Q&A and blog text

The dominant training-corpus sources (HC3 reddit_eli5, ai_text_pile forum-style content, HC3-ru) are short-to-medium-length conversational and Q&A text. Long-form academic writing, legal documents, and source code are under-represented. Calibration may degrade on these distributions.

7.5 Calibration is per-language but not per-genre or per-tenant

We fit one Platt sigmoid per (detector, language) pair. Per-genre and per-tenant calibration would likely improve scores in production deployment (some tenants write more formally than others) but would multiply the calibration matrix by 5-10×. We defer this to v2.0.

7.6 Russian RADAR is fundamentally weak

RADAR-Vicuna is built on Vicuna-7B, an English-pretrained model. Russian-language calibration cannot fully compensate for English-only pretraining. Our Phase B ablation (§6.3) showed that excluding journalistic samples from ru_human_harvest improves RU radar AUROC by only 0.023—well below our 0.10 threshold for a production swap. We zero-weighted radar in the RU 3-way ensemble for v1.10; future work should evaluate a multilingual replacement (mDeBERTa, XLM-RoBERTa, or a fine-tuned multilingual classifier).

7.7 Ensemble assumes correct upstream language detection

We assume a correct lang parameter on inference. Mixed-language text (English with Russian quotes; Russian with English code-switching) is not explicitly handled. Production callers must language-detect upstream.


      Figures


Figure 1. ContentOS ensemble OOD AUROC progression v1.9 → v1.10 → v1.11 (44-text smoke battery). EN climbs from 0.524 to 0.821 across the work cycle; RU stays at 0.837. SHIP threshold 0.80 marked.

Figure 2. Weight tuning v1.10: per-detector weight (left) and effective weight × AUROC contribution (right). Rebalancing toward higher-AUROC detectors lifted the ensemble effective contribution sum from 0.578 to 0.753.

Figure 3. Latency reduction via Gap 7+8 (Hetzner CX43, 8 vCPU, no GPU, log scale). Removing Binoculars from the English call path cut p50 from 85 s to 1.2 s.

Figure 4. Regression test gate: per-detector AUROC measured at v1.10 and v1.11 vs the v1.9 pinned baseline with -0.05 tolerance line. All eight pinned tests pass.



      §8. Reproducibility Statement

      We provide complete reproducibility artifacts:



      8.1 Code

      All source under MIT license at:

github.com/humanswith-ai/greg-personal-claude
  └ services/ml-services-hwai/
      ├ tests/
      │   └ test_calibration_regression.py (8 pinned baselines)
      ├ benchmark/
      │   └ REPRODUCIBILITY.md (this document's source)
      └ corpus/ (cal_train.jsonl, cal_val.jsonl, cal_test.jsonl)

Release tag: v1.11 (2026-04-26). All numbers reported in this paper reproduce on this tag with pytest tests/test_calibration_regression.py plus python3 scripts/eval_ensemble_corpus.py.

      8.2 Data

The 8,400-sample training split, 1,830-sample validation split, and 1,830-sample test split are committed at services/ml-services-hwai/corpus/. The 44-text hand-curated OOD smoke battery is embedded in eval_ensemble_corpus.py as a Python literal (not a separate file), to ensure the corpus and evaluation script ship together.

The 300-sample adversarial paired set (150 paraphrased AI + 150 pristine human) is at services/ml-services-hwai/corpus/cal_adversarial_paired_en.jsonl in the v1.11 tag.


All training data sources are public:

- HuggingFace: Hello-SimpleAI/HC3, d0rj/HC3-ru, iis-research-team/AINL-Eval-2025, artem9k/ai-text-detection-pile
- No HuggingFace API key required (we used public dataset endpoints)
- Self-generated samples (litellm_*, gpt4o_*, genre_targeted_en, cal_adversarial_paired_en) provided as committed JSONL with full generation scripts and prompts

      8.3 Calibration


The production calibration JSON (calibration.json v1.11) is committed. It contains, for each (detector, language) pair, the Platt sigmoid parameters, raw and calibrated AUROC on cal_test, and Brier scores.

8.4 Compute environment

Reproducibility was verified on:

- Hetzner CX43 (8 vCPU AMD EPYC, 16GB RAM, no GPU, ~$15-25/month)
- Ubuntu 22.04, Python 3.12.13
- PyTorch 2.5 (CPU-only)
- Calibration full cycle: ~95 minutes (~5 min per detector × 5 detectors × 2 languages, plus corpus build)
- Smoke evaluation: ~50 minutes (44 samples × 5-10 detectors × 5-10 s each)
- Adversarial evaluation: ~25 minutes (300 samples paired)

A Docker image at humanswithai/ml-services:v1.11 removes environment setup as a reproducibility barrier. Users without Docker can pip install -r requirements.txt followed by direct script invocation.

8.5 Reproducibility test

A reproducibility-focused subset of the regression suite runs in <10 s on any machine:

git clone github.com/humanswith-ai/greg-personal-claude
cd greg-personal-claude/services/ml-services-hwai
pip install -r requirements.txt
pytest tests/test_calibration_regression.py -v   # 8 tests, ~0.05s
python scripts/analyze_smoke_results.py corpus/eval_ensemble_v1_11.json --full

This should output: 8 passed, ensemble EN AUROC 0.821, RU 0.837. Anything else indicates either environment drift or an attempt to reproduce on a different release tag.



      §9. Conclusion

      -

      Reproducibility is not the dominant axis of competition in commercial AI -text detection today. Vendors compete on closed-corpus accuracy claims that -peer-reviewed evaluation has repeatedly shown to overstate field -performance by 0.10-0.30 AUROC. We argue this should change.

      -

      ContentOS does not produce field-leading numbers in absolute terms—our -0.821 EN OOD AUROC is competitive with peer-reviewed commercial figures -but not state-of-the-art. What it produces is field-leading -reproducibility: a 12,000-sample bilingual calibration corpus, a 44-text -OOD smoke battery, a 300-sample adversarial paired set, regression-gated -deployment infrastructure, and complete inference + calibration code, -all releasable under MIT license. Anyone can clone the repository, run -the regression test in 0.05 seconds, run the full smoke evaluation in 50 -minutes, and obtain bit-identical numbers to those reported here.

      -

      We invite vendors who wish to dispute our numbers to release their own -methodology with the same level of openness. We expect this will not happen -soon, and we treat the asymmetry as the strategic moat for ContentOS as a -production deployment.

      -

      Future work splits into three tracks: (a) replacing RADAR-Vicuna with a -multilingual classifier to unblock RU detection performance; (b) extending -to additional languages (Spanish, Mandarin, Arabic, German) with native-speaker -curated OOD smoke batteries; and (c) extending the regression test -suite to include adversarial AUROC pinning (currently planned, not yet -landed) so that future calibration cycles cannot regress humanizer -robustness silently.

      -

      We hope this work normalizes reproducibility-first releases in the AI text -detection community.

      +

      §9. Conclusion


Reproducibility is not the dominant axis of competition in commercial AI text detection today. Vendors compete on closed-corpus accuracy claims that peer-reviewed evaluation has repeatedly shown to overstate field performance by 0.10-0.30 AUROC. We argue this should change.


ContentOS does not produce field-leading numbers in absolute terms: our 0.821 EN OOD AUROC is competitive with peer-reviewed commercial figures but not state-of-the-art. What it produces is field-leading reproducibility: a 12,000-sample bilingual calibration corpus, a 44-text OOD smoke battery, a 300-sample adversarial paired set, regression-gated deployment infrastructure, and complete inference + calibration code, all released under the MIT license. Anyone can clone the repository, run the regression test in 0.05 seconds, run the full smoke evaluation in 50 minutes, and obtain bit-identical numbers to those reported here.


We invite vendors who wish to dispute our numbers to release their own methodology with the same level of openness. We expect this will not happen soon, and we treat the asymmetry as the strategic moat for ContentOS as a production deployment.


Future work splits into three tracks: (a) replacing RADAR-Vicuna with a multilingual classifier to unblock RU detection performance; (b) extending to additional languages (Spanish, Mandarin, Arabic, German) with native-speaker-curated OOD smoke batteries; and (c) extending the regression test suite to include adversarial AUROC pinning (currently planned, not yet landed) so that future calibration cycles cannot silently regress humanizer robustness.


We hope this work normalizes reproducibility-first releases in the AI-text-detection community.



Appendix A. Full 44-text smoke battery (curated OOD)


The smoke battery is embedded in scripts/eval_ensemble_corpus.py as the CORPUS Python list. Each entry is a 5-tuple: (name, lang, expected, genre, text). Sentence counts are given per text below.
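Illustrative shape only; the entries below are made-up placeholders, not battery texts:

CORPUS = [
    # (name, lang, expected, genre, text); placeholder rows, not the
    # curated 44. See scripts/eval_ensemble_corpus.py for the real list.
    ("EN human reddit", "en", "human", "casual",
     "honestly the best part of the trip was the terrible diner coffee ..."),
    ("EN ai gpt4o essay", "en", "ai", "formal",
     "In contemporary discourse, the role of institutions remains ..."),
]

for name, lang, expected, genre, text in CORPUS:
    print(name, lang, expected, genre, len(text))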

      EN human (14 samples)

[Per-sample table elided in this copy. Columns: Name | genre | sentence count | selection rationale; representative row: EN human reddit | casual | 73 | Conversational; tests “AI = formal” failure mode. The remaining rows (EN human chat, …) follow the same schema in the v1.11 tag.]

      Selection rationale


Hand-curated to expose known failure modes:

- Formal AI vs formal human (highest-overlap distribution)
- Journalistic register (RADAR-Vicuna FP source)
- 2026-era AI text (Claude-4, Gemini-2.5, GPT-4o style)
- Bilingual coverage (EN+RU equal weight in evaluation)


All samples are released under MIT license as part of the v1.11 tag.



Appendix B. Sapling AI cross-check (planned, free-tier)


The free-tier Sapling AI API (50 req/day, no signup wall) provides one external detector reference point on identical inputs:

export SAPLING_API_KEY="..."
python3 services/ml-services-hwai/scripts/bench_competitors.py --detector sapling
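For readers without the harness, a standalone call is also possible. The sketch below targets Sapling's documented /api/v1/aidetect endpoint and its overall score field; treat both as assumptions to verify against Sapling's current API docs:

import os
import requests

# Endpoint path and "score" field follow Sapling's public docs but are
# assumptions here; bench_competitors.py is not guaranteed to mirror this.
resp = requests.post(
    "https://api.sapling.ai/api/v1/aidetect",
    json={"key": os.environ["SAPLING_API_KEY"], "text": "Text to score."},
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["score"])  # overall P(AI-generated) in [0, 1], per docs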

      Output table (n=44, identical smoke battery):


GPTZero, Originality.ai, Winston AI, and Copyleaks decline to provide free-tier APIs for reproducible comparison; we do not include speculative numbers for those vendors. That refusal is itself a methodological observation about the verifiability gap in commercial AI detection.



Appendix C. Per-detector calibration parameters


For each (detector, language) pair, calibration.json v1.11 contains:

{
  "detectors": {
    "ai_detect": {
      "en": {
        "auroc_cal": 0.977,
        "auroc_raw": 0.892,
        "brier_raw": 0.286,
        "brier_cal": 0.052,
        "f1_at_thr": 0.934,
        "best_threshold": 0.415,
        "tpr_at_1pct_fpr": 0.823,
        "platt_a": -8.234,
        "platt_b": 1.142,
        "n": 800,
        "calibrated_at": "2026-04-26T13:44Z"
      },
      "ru": { ... }
    },
    ...
  }
}

Full file at services/ml-services-hwai/calibration.json (v1.11 tag).
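For reference, the brier_raw/brier_cal fields are the usual mean squared error between predicted probability and the 0/1 label (lower is better; 0.25 corresponds to always guessing 0.5). A self-contained check on synthetic numbers:

def brier(probs, labels):
    # Brier score: mean((p - y)^2) over (probability, 0/1 label) pairs.
    return sum((p - y) ** 2 for p, y in zip(probs, labels)) / len(labels)

print(brier([0.9, 0.2, 0.8, 0.1], [1, 0, 1, 0]))  # 0.025 (synthetic)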



      Appendix D. Compute timing

| Stage | Wall-clock (Hetzner CX43, CPU-only) |
| --- | --- |
| Calibration full cycle (5 detectors × 2 languages, plus corpus build) | ~95 min |
| Smoke evaluation (44 texts) | ~50 min |
| Adversarial evaluation (300 paired samples) | ~25 min |


Total v1.11 release cycle: ~3 hours wall-clock on Hetzner CX43, costing ~$0.05 in marginal Hetzner time. The same cycle would have cost $50-200 on commercial GPU inference platforms.



Appendix E. Release notes (v1.9 → v1.10 → v1.11)


      v1.9 (baseline, 2026-04-22)


      v1.10 (2026-04-24)


v1.11 (this release, 2026-04-26)