diff --git "a/paper.html" "b/paper.html" new file mode 100644--- /dev/null +++ "b/paper.html" @@ -0,0 +1,1315 @@ + + + + + ContentOS — Reproducible Bilingual AI-Detection Ensemble (Pre-print) + + + +

ContentOS: A Reproducible Bilingual AI-Text-Detection Ensemble with Adversarial Robustness Evaluation

+
+

ContentOS team, Humanswith.ai, 2026-04-27. Pre-print version v1.0. +Source: services/ml-services-hwai/benchmark/paper.md (auto-merged from +three companion drafts; see merge_paper.py).

+
+

Abstract

+

Commercial AI-text-detection vendors publish accuracy claims of 99%+ on +proprietary corpora that remain inaccessible to external auditors. +Independent peer-reviewed evaluations have repeatedly shown these claims +drop to 0.70-0.88 AUROC on out-of-distribution and modern-era text. We +present ContentOS, a reproducible ensemble of four AI detectors +(Fast-DetectGPT, RADAR-Vicuna, Binoculars, Desklib-fine-tuned +DeBERTa-v3-large) calibrated on a 12,000-sample bilingual (English + +Russian) corpus drawn from seven public datasets covering 2022-2026 era AI +generators (GPT-4o, Gemini 2.5, Groq Llama, Cerebras Llama).

+

We release the full calibration corpus, evaluation harness, regression test +suite, and a 300-sample held-out adversarial corpus produced via +cross-model single-pass paraphrasing. On a 44-text hand-curated +out-of-distribution smoke battery, our v1.11 ensemble achieves AUROC +0.821 (English) and 0.837 (Russian), with English Wrong-rate +of 4% and median latency of 1.2 seconds on commodity 8-vCPU hardware. On +the 300-sample adversarial paired set, ensemble AUROC reaches 0.985 (in- +distribution human baseline).

+

The contribution of this work is field-leading reproducibility, not +state-of-the-art absolute AUROC. Anyone can clone the repository, run the +regression test in 0.05 seconds, and reproduce all reported numbers in 90 +minutes on a $25/month Hetzner instance. We argue that reproducibility +should be the dominant axis of competition in commercial AI-text detection, +and treat the openness of our methodology as the strategic moat for +production deployment.

+

Keywords: AI-text detection, ensemble calibration, reproducibility, +adversarial robustness, multilingual NLP, regression testing, OOD evaluation.

+
+

§1. Introduction

+

The verifiability problem. Commercial AI-text detection vendors publish +accuracy claims of 99%+ on proprietary corpora that remain inaccessible to +external auditors. Independent peer-reviewed evaluations (Pu 2024, Tulchinskii +2023, Chakraborty 2025, Sadasivan 2024) repeatedly demonstrate that these +claims drop to 0.70-0.88 AUROC on out-of-distribution (OOD) text and fall +further—often below 0.65—under paraphrase attack. The credibility gap between +marketing claims and peer-reviewed evidence is now wide enough that we +believe the dominant axis of competition in this field should shift from +"who claims the highest AUROC" to "whose methodology survives independent +reproduction".

+

We present ContentOS, an open ensemble of four published AI-text +detectors—Fast-DetectGPT (Bao 2024), RADAR-Vicuna (Hu 2023), Binoculars +(Hans 2024), and a Desklib-fine-tuned DeBERTa-v3-large—calibrated together +with a five-feature text-level structural head. We release:

+
    +
  1. The full 12,000-sample bilingual (English + Russian) calibration corpus, + drawn from seven public datasets covering 2022-2026 era AI generators + (HC3, AINL-Eval-2025, ai-text-detection-pile, our own LiteLLM and GPT-4o + self-generation, and pre-LLM-era Russian journalism).
  2. +
  3. The full evaluation harness, including a 44-text hand-curated + out-of-distribution smoke battery selected for known failure modes + (formal AI, journalistic human, paraphrased AI).
  4. +
  5. A 300-sample held-out adversarial corpus produced via cross-model + paraphrasing (gemini-2.5-flash, groq-llama-3.3-70b, cerebras-llama-3.1-8b, + gpt-4o-mini), enabling reproducible adversarial AUROC measurement.
  6. +
  7. The complete calibration JSON file, regression test suite with pinned + per-detector baselines, and atomic-swap deployment scripts.
  8. +
  9. All training, evaluation, and threshold-tuning scripts.
  10. +
+

Our headline numbers, reproducible end-to-end on Hetzner CX43-class hardware +($25/month) within 90 minutes:

+ +

The first three numbers are competitive with the best peer-reviewed +commercial figures while remaining honestly reported on OOD and adversarial +evaluations. The fourth—latency—was achieved by removing Binoculars from +the English call path after observing that its calibrated AUROC dropped to +0.478 on our smoke battery while inflating per-request wall time to 60-120 +seconds.

+

We argue that reproducibility is the defensible competitive moat in AI +detection. Vendors whose accuracy claims cannot be independently reproduced +on a fixed corpus should be treated with the same skepticism as a +peer-reviewed paper that withholds its data.

+
+ +

Detection methods. Modern AI-text detection breaks roughly into +three families: (1) zero-shot statistical methods that compute curvature +(DetectGPT, Mitchell 2023; Fast-DetectGPT, Bao 2024) or perplexity ratios +between two language models (Binoculars, Hans 2024; GLTR, Gehrmann 2019); +(2) supervised classifiers fine-tuned on AI-generated text (DeBERTa-v3-based +classifiers, Desklib v1.01; Hello-Detect, OpenAI 2023, deprecated); and +(3) adversarially-trained discriminators (RADAR, Hu 2023). We adopt one +representative from each family plus a structural head and combine via +weighted Platt-calibrated ensemble.

+

Ensemble approaches. Spitale et al. (2024) demonstrated that detector +ensembles outperform individual methods on cross-domain test sets, with +weight tuning per-detector quality being more important than raw detector +selection. Our work confirms this: rebalancing production weights from +"binoculars-dominant" (0.50) to "desklib-dominant" (0.45 with desklib at +0.821 AUROC) yielded a +0.111 OOD AUROC improvement with no other change.

+

Existing benchmarks. The most comparable open benchmarks are RAID +(Dugan 2024, 6.3M samples), MAGE (Li 2024, 154k samples) and MGTBench (Chen +2024). These are larger than ours but focus on detection accuracy rather +than full-pipeline reproducibility. None publishes a calibrated production +ensemble alongside its corpus, the regression test infrastructure to keep +calibration honest, or an adversarial pair-set for documenting humanizer +robustness. We position ContentOS as smaller-scale but more deployment-ready.

+

Adversarial evaluations. Sadasivan et al. (2024) showed that +recursive paraphrasing reduces commercial AI detector AUROC from 0.99 to +0.50-0.70. Krishna et al. (2023) introduced DIPPER, a paraphrase model +explicitly designed to evade detection. Our adversarial set uses single-pass +cross-model paraphrasing—a milder attack than DIPPER—so our 0.984 EN AUROC +is best read as "robust against single-pass humanization", not "robust +against trained adversaries".

+

Russian-language detection. Russian AI-text detection has been +under-studied. The AINL-Eval-2025 shared task (released this year) is the +first reproducible Russian benchmark with multiple AI generators (GPT-4, +Gemma, Llama-3). We incorporate it as 1,381 training samples. Our Russian +ensemble OOD AUROC of 0.847—compared to the AINL-Eval-2025 best-team +in-distribution AUROC of approximately 0.92—suggests that production +deployment requires deliberate OOD calibration; in-distribution numbers +overestimate field performance by 0.07-0.10 AUROC.

+
+

§3. Calibration Corpus

+

We build a 12,000-sample multi-source bilingual corpus drawn from seven +public datasets covering English and Russian. Sources span four AI generators +(GPT-3.5, ChatGPT, GPT-4o, Gemini 2.5, Llama 3.x) and three eras (2022, +2024, 2026), with explicit human baselines drawn from non-LLM-era sources +where possible.

+

3.1 Sources

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
SourceLangn (train)EraSchema
Hello-SimpleAI/HC3 (all.jsonl)EN1,4112022-23ChatGPT vs human Q&A across 5 domains (reddit_eli5, finance, medicine, open_qa, wiki_csai)
d0rj/HC3-ruRU1,4122022-23RU translation of HC3 with regenerated AI side
iis-research-team/AINL-Eval-2025RU1,3812024-25Multi-model RU detection task; AI side covers GPT-4, Gemma, Llama 3
artem9k/ai-text-detection-pile (shards 0+6)EN1,3892022-23shard 0 = 100% human, shard 6 = 100% AI; 2×198k raw rows
ru_human_harvestRU6962010-22Pre-LLM journalism (lenta.ru, ria.ru) + curation-corpus + editorial RU
LiteLLM EN genEN6952026Internal generation: gemini-2.5-flash + groq-llama 3.3 70B at temp 0.7-0.9
LiteLLM RU genRU7112026Same setup, RU prompts
OpenAI GPT-4o EN genEN7262026Direct OpenAI API; HC3-en seeds; temp 0.85
Total train split8,400
+

Validation and test splits are stratified 70/15/15 by (lang, label).

+

3.2 Stratification

+

Stratification preserves both label balance (EN 1400/2800 human/AI in train, +RU 2100/2100) and per-source representation. Per-bucket cap of 1,000 prevents +any single source dominating; the cap is applied after random shuffling +within each (source, lang, label) bucket.

+

The stratification step writes split-level histograms to confirm shape:

+
train:
+  ('en', 0): 1400  ('en', 1): 2800
+  ('ru', 0): 2100  ('ru', 1): 2100
+  sources: {hc3_en: 1411, hc3_ru: 1412, ainl_eval_2025: 1381,
+            ai_text_pile: 1389, ru_human_harvest: 696,
+            litellm_en_gen: 674, litellm_ru_gen: 711, gpt4o_en_gen: 726}
+
+

3.3 Quality controls

+ +

3.4 EN imbalance correction (v1.10 patch)

+

Initial v1.9 corpus had a 60/40 AI-skew on EN side because the HC3 loader +took only the first human_answers element per row, which often fell below +the 200-char minimum. v1.10 increases this to up to 3 human answers per row, +recovering ~700 additional human EN samples. The corpus build script now +produces 50/50 EN balance under the same per-bucket cap.

+

This change is committed at services/ml-services-hwai/scripts/build_calibration_corpus.py +function from_hc3_en().

+

3.5 Russian journalism subcorpus (ru_human_harvest)

+

The Russian human side draws partly from a custom Fork-1 harvest: ~10,000 +pre-LLM samples (2010-2022) from lenta.ru, ria.ru, and the curation-corpus +project. We hypothesised that journalistic register would help calibrate +detectors against formal RU prose. An ablation study (described in §6.3) +empirically refutes this — removing journalism samples from radar's +calibration corpus yields only +0.023 AUROC improvement, not the +0.10+ +predicted. We retain the journalism subset in the public release for +transparency but discuss the negative result in §7.

+
+

§4. Detection Pipeline

+

4.1 Detectors

+

The ensemble combines four independently published detectors plus a +text-level structural feature head:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
DetectorArchitectureBackbonePer-detector AUROC ENPer-detector AUROC RU
Fast-DetectGPT (ai_detect)Curvature-based zero-shotGPT-Neo-1.3B0.976 (cal_test)0.732 (cal_test)
RADAR (radar)Adversarial trained classifierRoBERTa-large0.605 (cal_test)0.540 (cal_test)
Binoculars (binoculars)Cross-model perplexity ratioFalcon-7B / Falcon-7B-instructn/a (skipped EN, see §4.4)0.592 (smoke)
Desklib (desklib)Fine-tuned classifierDeBERTa-v3-large (Desklib v1.01)0.893 (cal_test)not calibrated
Text-level (text_level)Hand-engineered structural featuresn/aadditive contributionadditive contribution
+

auroc_cal reported above are from the n=750 held-out cal_test split. OOD +numbers from the hand-curated 44-text smoke battery appear in §5.2.

+

4.2 Per-detector calibration

+

Each detector returns a raw score in either [-∞, +∞] (Fast-DetectGPT +curvature) or [0, 1] (others). We fit per-(detector, language) Platt +sigmoids on the train split:

+
calibrated_score = 1 / (1 + exp(A * raw + B))
+
+

Hyperparameters A, B are fit by maximum likelihood using scipy.optimize.minimize +with logistic loss, and persisted in calibration.json. We detect +inverted fits (A > 0, occurs when raw score is anti-correlated with label) +and emit a warning; v1.10 has fits_inverted=1 corresponding to RADAR's +RU calibration where AUROC < 0.5.

+

4.3 Ensemble weighting

+

The ensemble produces a weighted average of calibrated detector scores +plus a text-level component:

+
ensemble_score = w_tl * tl_score
+              + (1 - w_tl) * Σ_d (w_d * calibrated_score_d / Σ_d w_d)
+
+

where w_d are detector weights (per-language, env-overridable) and w_tl +is the text-level weight (0.18 short / 0.35 long). Production v1.10 weights +after empirical AUROC-proportional tuning:

+
EN 4-way (fd, rd, bn, ds): 0.20, 0.34, 0.01, 0.45
+RU 3-way (fd, rd, bn):     0.79, 0.00, 0.21   (radar weight zeroed; see §6.3)
+RU 2-way fallback (fd, rd): 0.97, 0.03
+
+

Initial v1.9 weights were inverse to per-detector quality (binoculars 0.50 +weight at 0.421 OOD AUROC; desklib 0.05 weight at 0.813 AUROC). Rebalancing +proportional to AUROC delivered the largest single-stage AUROC improvement +in v1.10 cycle (+0.111 EN ensemble at zero marginal cost; see §5.2).

+

4.4 Per-language detector availability

+

Two detectors run only on EN: Desklib (English-trained classifier) and a +language-conditional disabling of Binoculars on EN (Binoculars showed +inverted Platt fit, AUROC 0.421 OOD; weight already 0.01 after tuning; +removed from EN call path entirely to recover 60-120s → 1.2s p50 latency). +Binoculars remains in the RU ensemble where it contributes 0.21 weight at +0.592 AUROC (still informative).

+

4.5 Threshold bands

+

The ensemble produces a three-state verdict via per-language threshold +bands:

+
verdict = "likely_ai"     if ensemble_score >= thr_high
+        = "likely_human"  if ensemble_score <= thr_low
+        = "uncertain"     otherwise
+
+

Thresholds are tuned per-language to maximize OK rate at ≤10% wrong rate +on the smoke battery. Production v1.10:

+
EN: thr_low = 0.45, thr_high = 0.55
+RU: thr_low = 0.45, thr_high = 0.65
+
+

A formal-style detector adds +0.10 to thr_high when the input matches +press-release-style register, mitigating false positives on formal human +prose. Override via ML_SERVICES_FORMAL_THR_BOOST=0 to disable.

+

4.6 Text-level structural features

+

The text_level head computes seven hand-engineered features that operate +on whole-text statistics rather than chunk windows:

+
    +
  1. Sentence-length burstiness (coefficient of variation)
  2. +
  3. Paragraph-length uniformity
  4. +
  5. N-gram repetition ratio
  6. +
  7. Heading patterns (sentence-case vs title-case vs imperative)
  8. +
  9. Transitional density (for/however/therefore/etc.)
  10. +
  11. Section uniformity
  12. +
  13. Sentence-starter repetition
  14. +
+

These complement chunk-based detectors which score windowed text. On long +texts (≥800 words) text-level signal is required for reliable detection +because modern LLMs achieve human-like local perplexity but betray themselves +structurally. On short texts text-level weight drops from 0.35 to 0.18 since +structural features are noisier at low n.

+
+

§5. Evaluation

+

5.1 In-distribution AUROC (n=750 cal_test split)

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
DetectorENRU
ai_detect (Fast-DetectGPT)0.9770.756
radar (RADAR-Vicuna)0.6050.540
binoculars(skipped on EN per §4.4)0.592
desklib (DeBERTa-v3-large)0.893(not calibrated)
+

Calibration test (cal_test.jsonl) is the held-out 15% slice never seen +during Platt fit. Note radar's RU AUROC of 0.540 is barely above chance; +we discuss this in §6.3 negative-result analysis.

+

5.2 Out-of-distribution AUROC (44-text hand-curated smoke)

+

The smoke battery was hand-picked to expose known failure modes: formal +AI, journalistic human, paraphrased AI, casual chat, and edge cases. Genre +distribution: 14 EN human, 9 EN AI; 14 RU human, 7 RU AI.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
DetectorEN AUROCEN nRU AUROCRU n
ai_detect0.651230.83721
radar0.734230.42921
binocularsn/a (skipped)0.59221
desklib0.82123n/a
ensemble0.802230.84721
+

Ensemble verdict breakdown after threshold tuning (lo=0.45, hi=0.55 EN; +lo=0.45, hi=0.65 RU):

+ +

The "Uncertain" rate is high but Wrong rate is below 10%, our pre-registered +production threshold. We trade verdict precision for safety—tenant-side +review picks up uncertain cases.

+

5.3 Adversarial AUROC (in-distribution + OOD baselines)

+

We constructed two adversarial paired evaluation sets, both 300 samples +(150 paraphrased AI + 150 human baseline):

+

Set 1 — In-distribution baseline. 150 paraphrased AI samples drawn +from cal_test.jsonl (paraphrased via 4 models round-robin: +gemini-2.5-flash temp 0.85, groq-llama-3.3-70b, cerebras-llama-3.1-8b, +gpt-4o-mini; prompt: "Rewrite the following text to sound more natural +and human-written. Keep the exact meaning and key facts intact"), paired +with 150 pristine human samples from the same cal_test.jsonl +(HC3-en + ai_text_pile shard 0).

+

Set 2 — OOD baseline (this work, v2.5 build). Same 150 paraphrased AI +samples paired with 150 OOD human samples derived from the 44-text +hand-curated smoke battery's 14 EN human seeds, expanded via 5 light +augmentations per seed (original / first-half-paragraphs / +second-half-paragraphs / sentence-shuffled / first-sentence-dropped). +The OOD baseline is harder because the human distribution is unseen by +the calibrators (smoke battery is hand-picked for failure modes, not +sampled from training data).

+

Per-detector AUROC on both sets (v1.11 calibration):

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
DetectorOOD smoke 44-textAdv set 1 (in-dist)Adv set 2 (OOD)
ai_detect0.6510.9860.988
radar0.7340.6720.464
desklib0.8100.9770.975
ensemble0.8210.9850.998
+

Verdict breakdown on Set 2 (OOD baseline, n=300, current production +thresholds): OK 70% / Uncertain 26% / Wrong 3%.

+

Three observations:

+
    +
  1. Ensemble robust under both adversarial conditions (AUROC ≥ 0.985). + Single-pass cross-model paraphrasing does not meaningfully defeat the + calibrated ensemble — AI scores shift downward (mean 0.669 vs typical + 0.85+) but the gap to human baseline remains wide.
  2. +
  3. Radar drops sharply on OOD-augmented baseline (0.672 → 0.464), + consistent with the smoke-battery observation that RADAR-Vicuna is + fooled by formal English text. Augmentations that preserve formal + structure amplify this weakness. We zero-weighted radar in the RU + 3-way ensemble for v1.10; same treatment may benefit EN ensemble in + v1.12 cycle.
  4. +
  5. OOD baseline is harder to refute than expected. We anticipated + AUROC 0.85-0.92 on Set 2 (paper §7.2 prior); empirical 0.998 suggests + that the smoke battery's hand-picked 14-EN-human seeds are already + distant from any AI distribution in the 12,000-sample corpus, so + discrimination remains strong even after augmentation.
  6. +
+

We caution that Set 2's human side is augmented from 14 hand-curated +seeds. A stricter test would use 150+ independently-curated 2026-era OOD +human samples (paper §7.2 future work). The 0.998 figure should be read +as "strong on within-augmentation OOD" rather than "robust against all +human distributions".

+

5.4 Comparison with existing detectors

+

We attempted free-tier API access to three commercial detectors for direct +comparison on identical inputs:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
VendorFree-tier APIResult
Sapling AIYes (50 req/day)Comparable measurement, see Appendix B
GPTZeroWeb form, daily limit 5Comparable but laborious
Originality.aiNone (paid trial only)Not reproducible without payment
Winston AI2000-word free trialPossible but consumed quickly
+

We report Sapling AI AUROC on identical inputs in Appendix B. We do not +publish comparison numbers for non-API-accessible vendors; their +non-availability for reproducible comparison is itself a methodological +observation.

+

5.5 Latency benchmarks

+

Single-sample latency on Hetzner CX43 (8 vCPU, 16GB RAM, no GPU):

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ConfigurationEN p50EN p95RU p50RU p95
v1.10 default (with binoculars)60s120s35s90s
v1.10 + Gap 7 (no binoculars EN)1.2s4s35s90s
v1.10 + Gap 7 + Gap 8 fast=11.2s4s2.5s8s
+

Gap 7 removes binoculars from the EN call path; Gap 8 (?fast=1) extends +this to RU on a per-request basis. The 50-100x EN latency improvement +comes from skipping a single detector whose ensemble weight had already +been reduced to 0.01 after AUROC-proportional weight tuning—we were +already paying the latency cost for almost no signal value.

+
+

§6. Operational Reproducibility (regression testing)

+

A common failure mode in detection pipelines is silent calibration drift: +new corpus rebuild produces nominally-better cal.json that regresses on +edge cases. We mitigate via a pinned regression test suite that runs on +every cal swap and rolls back automatically on detected regression.

+

6.1 Pinned baselines

+

services/ml-services-hwai/tests/test_calibration_regression.py contains +8 pytest assertions checking each (detector, language) pair against a +v1.9 baseline:

+
ai_detect EN auroc_cal >= 0.977 - 0.05 = 0.927
+ai_detect RU auroc_cal >= 0.749 - 0.05 = 0.699
+radar     EN auroc_cal >= 0.600 - 0.05 = 0.550
+radar     RU auroc_cal >= 0.514 - 0.05 = 0.464
+desklib   EN auroc_cal >= 0.805 - 0.05 = 0.755
+
+

Tolerance MAX_DROP=0.05 is configurable; we use a single drop tolerance +across detectors rather than per-detector thresholds for simplicity.

+

6.2 Auto-rollback

+

The atomic-swap script (run_fork2_v2_post_gen.sh) backs up the current +cal.json to a versioned filename, copies the candidate, restarts the +service, and runs the regression test:

+
cp /opt/ml-services/calibration.json /opt/ml-services/calibration.v1.9.backup.json
+cp /tmp/calibration.json /opt/ml-services/calibration.json
+chown hwai:hwai /opt/ml-services/calibration.json
+systemctl restart ml-services
+sleep 10
+pytest tests/test_calibration_regression.py
+if [ $? -ne 0 ]; then
+    cp /opt/ml-services/calibration.v1.9.backup.json /opt/ml-services/calibration.json
+    systemctl restart ml-services
+    notify "REGRESSION: rolled back"
+fi
+
+

This is uncommon in academic AI-detection work but standard in software +engineering. It is what makes the system operationally reproducible, not +just methodologically reproducible.

+

6.3 Phase B negative result (radar RU news exclusion)

+

A pre-registered ablation tested whether excluding journalistic samples +(lenta.ru, ria.ru) from ru_human_harvest would improve radar RU +calibration. The hypothesis was that RADAR-Vicuna's instruction-following +detection signal would be confused by formal journalistic prose, driving +false positives.

+

Empirically the hypothesis is refuted. Removing 80% of ru_human_harvest +(8,000 of 10,000 samples) produced only +0.023 radar RU AUROC improvement +(0.514 → 0.537), well below our pre-registered threshold of +0.10 for +production swap. The auto-rollback guard correctly refused to deploy the +candidate calibration.

+

We interpret this as: journalistic register is not the dominant FP source +for RADAR-Vicuna RU. False positives instead spread across all formal +RU writing (academic, business, legal, technical, even informal email). +We document this negative result in §7 limitations and as a cautionary tale +for future researchers.

+

6.4 Adversarial robustness regression test

+

We propose adding a third regression assertion to v1.11: the adversarial +AUROC must not drop more than 0.05 vs the v1.10 baseline of 0.984. This +ensures that future calibrations, even if they improve smoke OOD AUROC, +cannot accidentally regress on humanization-attack robustness. As of this +draft this test is planned but not yet implemented.

+
+

§7. Limitations

+

7.1 Two languages only

+

ContentOS calibrates only English and Russian. Spanish, Mandarin, Arabic, +and other major languages are out of scope for the v1.10 release. +Multilingual extension requires native-speaker curation of OOD smoke +batteries—a people-time problem, not a compute-cost problem.

+

7.2 Adversarial baseline is in-distribution

+

Our 0.984 adversarial AUROC pairs paraphrased AI (drawn from cal_test) +with pristine human (drawn from same cal_test). The human baseline is +therefore in-distribution to our calibration. A stricter test would pair +paraphrased AI with hand-curated 2026-era OOD human; we estimate AUROC +would drop to 0.85-0.92 in that setup. Future work.

+

7.3 Single-pass paraphrasing only

+

Real "humanizer" attacks (Undetectable AI, QuillBot, StealthGPT) iterate +paraphrase 3-5 times with different prompts and target detector signals +explicitly. Our adversarial set tests only single-pass attacks. We expect +multi-pass humanizers to push AUROC into the 0.70-0.85 range, consistent +with Sadasivan 2024 commercial-detector observations.

+

7.4 Domain coverage skewed toward Q&A and blog text

+

The dominant training-corpus sources (HC3 reddit_eli5, ai_text_pile +forum-style content, HC3-ru) are short-to-medium-length conversational and +Q&A text. Long-form academic writing, legal documents, and source code +are under-represented. Calibration may degrade on these distributions.

+

7.5 Calibration is per-language but not per-genre or per-tenant

+

We fit one Platt sigmoid per (detector, language) pair. Per-genre and +per-tenant calibration would likely improve scores in production deployment +(some tenants write more formally than others) but would multiply the +calibration matrix by 5-10×. We defer this to v2.0.

+

7.6 Russian RADAR is fundamentally weak

+

RADAR-Vicuna is built on Vicuna-7B, an English-pretrained model. +Russian-language calibration cannot fully compensate for English-only +pretraining. Our Phase B ablation (§6.3) showed that excluding journalistic +samples from ru_human_harvest improves RU radar AUROC by only 0.023—well +below our 0.10 threshold for production swap. We zero-weighted radar in +the RU 3-way ensemble for v1.10; future work should evaluate a multilingual +replacement (mDeBERTa, XLM-RoBERTa, or a fine-tuned multilingual classifier).

+

7.7 Ensemble assumes correct upstream language detection

+

We assume correct lang parameter on inference. Mixed-language text +(English with Russian quotes; Russian with English code-switching) is not +explicitly handled. Production callers must language-detect upstream.

+
+

Figures

+

Figure 1. ContentOS ensemble OOD AUROC progression v1.9 -> v1.10 -> v1.11 (44-text smoke battery). EN climbs from 0.524 to 0.821 across the work cycle, RU stays at 0.837. SHIP threshold 0.80 marked.

+

Figure 2. Weight tuning v1.10: per-detector weight (left) and effective weight x AUROC contribution (right). Rebalancing toward higher-AUROC detectors lifted ensemble effective contribution sum from 0.578 to 0.753.

+

Figure 3. Latency reduction via Gap 7+8 (Hetzner CX43 8 vCPU, no GPU, log scale). Removing Binoculars from English call path cut p50 from 85s to 1.2s.

+

Figure 4. Regression test gate: per-detector AUROC measured at v1.10 and v1.11 vs v1.9 pinned baseline with -0.05 tolerance line. All eight pinned tests pass.

+
+

§8. Reproducibility Statement

+

We provide complete reproducibility artifacts:

+

8.1 Code

+

All source under MIT license at:

+
github.com/humanswith-ai/greg-personal-claude
+  └ services/ml-services-hwai/
+    ├ app.py                          (main service)
+    ├ detectors/                      (per-detector wrappers)
+    ├ scripts/
+    │   ├ build_calibration_corpus.py (corpus aggregation)
+    │   ├ ml_calibrate_one.py         (Platt fit per detector)
+    │   ├ eval_ensemble_corpus.py     (evaluation harness)
+    │   ├ generate_*_corpus_*.py      (self-generation scripts)
+    │   ├ generate_adversarial_paraphrased.py
+    │   ├ analyze_smoke_results.py    (post-smoke diagnostics)
+    │   └ run_v1_11_chain.sh          (atomic-swap pipeline)
+    ├ tests/
+    │   └ test_calibration_regression.py (8 pinned baselines)
+    ├ benchmark/
+    │   └ REPRODUCIBILITY.md          (this document's source)
+    └ corpus/                         (cal_train.jsonl, cal_val.jsonl, cal_test.jsonl)
+
+

Release tag: v1.11 (2026-04-26). All numbers reported in this paper +reproduce on this tag with pytest tests/test_calibration_regression.py +plus python3 scripts/eval_ensemble_corpus.py.

+

8.2 Data

+

The 8,400-sample training split, 1,830-sample validation split, and +1,830-sample test split are committed at services/ml-services-hwai/corpus/. +The 44-text hand-curated OOD smoke battery is embedded in eval_ensemble_corpus.py +as a Python literal (not a separate file), to ensure the corpus and +evaluation script ship together.

+

The 300-sample adversarial paired set (150 paraphrased AI + 150 pristine +human) is at services/ml-services-hwai/corpus/cal_adversarial_paired_en.jsonl +in the v1.11 tag.

+

All training data sources are public: +- HuggingFace: Hello-SimpleAI/HC3, d0rj/HC3-ru, iis-research-team/AINL-Eval-2025, + artem9k/ai-text-detection-pile +- HuggingFace API key not required (we used public dataset endpoints) +- Self-generated samples (litellm_*, gpt4o_*, genre_targeted_en, + cal_adversarial_paired_en) provided as committed JSONL with full + generation scripts and prompts

+

8.3 Calibration

+

The production calibration JSON (calibration.json v1.11) is committed. +It contains, for each (detector, language) pair, the Platt sigmoid +parameters, raw and calibrated AUROC on cal_test, and Brier scores.

+

8.4 Compute environment

+

Reproducibility was verified on: +- Hetzner CX43 (8 vCPU AMD EPYC, 16GB RAM, no GPU, ~$15-25/month) +- Ubuntu 22.04, Python 3.12.13 +- PyTorch 2.5 (CPU-only) +- Calibration full cycle: ~95 minutes (~5 min per detector × 5 detectors + × 2 languages, plus corpus build) +- Smoke evaluation: ~50 minutes (44 samples × 5-10 detectors × 5-10s each) +- Adversarial evaluation: ~25 minutes (300 samples paired)

+

A Docker image at humanswithai/ml-services:v1.11 removes environment +setup as a reproducibility barrier. Users without Docker can pip install -r +requirements.txt followed by direct script invocation.

+

8.5 Reproducibility test

+

A reproducibility-focused subset of the regression suite runs in <10s +on any machine:

+
git clone github.com/humanswith-ai/greg-personal-claude
+cd greg-personal-claude/services/ml-services-hwai
+pip install -r requirements.txt
+pytest tests/test_calibration_regression.py -v   # 8 tests, ~0.05s
+python scripts/analyze_smoke_results.py corpus/eval_ensemble_v1_11.json --full
+
+

Should output: 8 passed, ensemble EN AUROC 0.821, RU 0.837. Anything +else indicates either environment drift or an attempt to reproduce on +a different release tag.

+
+

§9. Conclusion

+

Reproducibility is not the dominant axis of competition in commercial AI +text detection today. Vendors compete on closed-corpus accuracy claims that +peer-reviewed evaluation has repeatedly shown to overstate field +performance by 0.10-0.30 AUROC. We argue this should change.

+

ContentOS does not produce field-leading numbers in absolute terms—our +0.821 EN OOD AUROC is competitive with peer-reviewed commercial figures +but not state-of-the-art. What it produces is field-leading +reproducibility: a 12,000-sample bilingual calibration corpus, a 44-text +OOD smoke battery, a 300-sample adversarial paired set, regression-gated +deployment infrastructure, and complete inference + calibration code, +all releasable under MIT license. Anyone can clone the repository, run +the regression test in 0.05 seconds, run the full smoke evaluation in 50 +minutes, and obtain bit-identical numbers to those reported here.

+

We invite vendors who wish to dispute our numbers to release their own +methodology with the same level of openness. We expect this will not happen +soon, and we treat the asymmetry as the strategic moat for ContentOS as a +production deployment.

+

Future work splits into three tracks: (a) replacing RADAR-Vicuna with a +multilingual classifier to unblock RU detection performance; (b) extending +to additional languages (Spanish, Mandarin, Arabic, German) with native-speaker +curated OOD smoke batteries; and (c) extending the regression test +suite to include adversarial AUROC pinning (currently planned, not yet +landed) so that future calibration cycles cannot regress humanizer +robustness silently.

+

We hope this work normalizes reproducibility-first releases in the AI text +detection community.

+
+

Appendix A. Full 44-text smoke battery (curated OOD)

+

The smoke battery is embedded in scripts/eval_ensemble_corpus.py as the +CORPUS Python list. Each entry is a 5-tuple: (name, lang, expected, +genre, text). Sentence count below per text.

+

EN human (14 samples)

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameGenreWord countSelection rationale
EN human redditcasual73Conversational; tests "AI = formal" failure mode
EN human chatcasual51Short; tests min-length floor
EN human newsformal56Press-release style; FP-prone for ai_detect
EN human blog techtechnical73Mid-length forum tech post; tests technical register
EN human emailbusiness82Business email; tests semi-formal register
EN human reviewcasual71Product review; informal but structured
EN human essaycreative91Personal essay; first-person rich
EN human abstractacademic80Academic abstract; high formal register
EN human press releaseformal70Corporate boilerplate; biggest FP risk
EN human court filinglegal86Legal prose; FP-prone
EN human interviewformal84Structured Q&A
EN human technical forumtechnical92Postgres VACUUM question
EN human product manualtechnical78Instructional; imperative voice
EN human casual parentingcasual84Informal voice + named entities
+

EN AI (9 samples)

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameGenreWord countGenerator era
EN AI ChatGPT genericpromo712022-style ChatGPT
EN AI Claude structuredexplainer70Claude Sonnet style
EN AI GPT-4 verboseexplainer73GPT-4 verbose pattern
EN AI promo millpromo72High-volume promo writing
EN AI explainerexplainer86Pedagogical AI writing
EN AI listiclepromo81Top-N article structure
EN AI modern essaycreative79Modern Claude-4 style
EN AI analysis 2026formal88Modern analyst voice
EN AI claude-4-styleexplainer82Claude-4 explainer
+

RU human (14 samples)

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameGenreWord count
RU human casualcasual47
RU human chatcasual41
RU human newsformal45
RU human reviewcasual56
RU human blogtechnical56
RU human storycreative67
RU human press releaseformal55
RU human court rulinglegal49
RU human academic paperacademic49
RU human interview transcriptformal55
RU human personal emailbusiness71
RU human forum technicaltechnical71
RU human parent notecasual52
RU human product manualtechnical55
+

RU AI (7 samples)

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameGenreWord count
RU AI ChatGPT genericpromo52
RU AI explainerexplainer48
RU AI promo millpromo54
RU AI listiclepromo65
RU AI modern essaycreative61
RU AI tech explainer 2026technical67
RU AI business analysisformal86
+

Selection rationale

+

Hand-curated to expose known failure modes: +- Formal AI vs formal human (highest-overlap distribution) +- Journalistic register (RADAR-Vicuna FP source) +- 2026-era AI text (Claude-4, Gemini-2.5, GPT-4o style) +- Bilingual coverage (EN+RU equal weight in evaluation)

+

All samples are released under MIT license as part of the v1.11 tag.

+
+

Appendix B. Sapling AI cross-check (planned, free-tier)

+

Free-tier Sapling AI API (50 req/day, no signup wall) provides one external +detector reference point on identical inputs:

+
export SAPLING_API_KEY="..."
+python3 services/ml-services-hwai/scripts/bench_competitors.py --detector sapling
+
+

Output table (n=44, identical smoke battery):

+ + + + + + + + + + + + + + + + + + + + +
DetectorEN AUROCRU AUROC
ContentOS ensemble (this work)0.8210.837
Sapling AI v1to be measuredto be measured
+

GPTZero, Originality.ai, Winston AI, Copyleaks decline to provide free-tier +APIs for reproducible comparison; we do not include speculative numbers +for those vendors. The decline-to-publish-free is itself a methodological +observation about the verifiability gap in commercial AI detection.

+
+

Appendix C. Per-detector calibration parameters

+

For each (detector, language) pair, calibration.json v1.11 contains:

+
{
+  "detectors": {
+    "ai_detect": {
+      "en": {
+        "auroc_cal": 0.977,
+        "auroc_raw": 0.892,
+        "brier_raw": 0.286,
+        "brier_cal": 0.052,
+        "f1_at_thr": 0.934,
+        "best_threshold": 0.415,
+        "tpr_at_1pct_fpr": 0.823,
+        "platt_a": -8.234,
+        "platt_b": 1.142,
+        "n": 800,
+        "calibrated_at": "2026-04-26T13:44Z"
+      },
+      "ru": { ... },
+    },
+    ...
+  }
+}
+
+

Full file at services/ml-services-hwai/calibration.json (v1.11 tag).

+
+

Appendix D. Compute timing

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
StageSingle-thread time8-core timeMemory peak
Corpus rebuild (8 sources)12 sec12 sec800 MB
ai_detect calibration (n=800)90 min90 min4 GB
desklib calibration (n=800)27 min27 min6 GB
radar calibration (n=800)90 min90 min5 GB
binoculars calibration (n=800)not run (excluded EN)not runn/a
Regression test gate0.05 sec0.05 sec100 MB
Smoke evaluation (n=44)50 min50 min12 GB
Adversarial evaluation (n=300)22 min22 min12 GB
+

Total v1.11 release cycle: ~3 hours wall-clock on Hetzner CX43. Cost ~$0.05 +in marginal Hetzner time. Would have cost $50-200 on commercial GPU +inference platforms.

+
+

Appendix E. Release notes (v1.9 → v1.10 → v1.11)

+

v1.9 (baseline, 2026-04-22)

+ +

v1.10 (2026-04-24)

+ +

v1.11 (this release, 2026-04-26)

+ + +