diff --git "a/paper.html" "b/paper.html" new file mode 100644--- /dev/null +++ "b/paper.html" @@ -0,0 +1,1315 @@ + + +
+ +++ContentOS team, Humanswith.ai, 2026-04-27. Pre-print version v1.0. +Source:
+services/ml-services-hwai/benchmark/paper.md(auto-merged from +three companion drafts; seemerge_paper.py).
Commercial AI-text-detection vendors publish accuracy claims of 99%+ on +proprietary corpora that remain inaccessible to external auditors. +Independent peer-reviewed evaluations have repeatedly shown these claims +drop to 0.70-0.88 AUROC on out-of-distribution and modern-era text. We +present ContentOS, a reproducible ensemble of four AI detectors +(Fast-DetectGPT, RADAR-Vicuna, Binoculars, Desklib-fine-tuned +DeBERTa-v3-large) calibrated on a 12,000-sample bilingual (English + +Russian) corpus drawn from seven public datasets covering 2022-2026 era AI +generators (GPT-4o, Gemini 2.5, Groq Llama, Cerebras Llama).
+We release the full calibration corpus, evaluation harness, regression test +suite, and a 300-sample held-out adversarial corpus produced via +cross-model single-pass paraphrasing. On a 44-text hand-curated +out-of-distribution smoke battery, our v1.11 ensemble achieves AUROC +0.821 (English) and 0.837 (Russian), with English Wrong-rate +of 4% and median latency of 1.2 seconds on commodity 8-vCPU hardware. On +the 300-sample adversarial paired set, ensemble AUROC reaches 0.985 (in- +distribution human baseline).
+The contribution of this work is field-leading reproducibility, not +state-of-the-art absolute AUROC. Anyone can clone the repository, run the +regression test in 0.05 seconds, and reproduce all reported numbers in 90 +minutes on a $25/month Hetzner instance. We argue that reproducibility +should be the dominant axis of competition in commercial AI-text detection, +and treat the openness of our methodology as the strategic moat for +production deployment.
+Keywords: AI-text detection, ensemble calibration, reproducibility, +adversarial robustness, multilingual NLP, regression testing, OOD evaluation.
+The verifiability problem. Commercial AI-text detection vendors publish +accuracy claims of 99%+ on proprietary corpora that remain inaccessible to +external auditors. Independent peer-reviewed evaluations (Pu 2024, Tulchinskii +2023, Chakraborty 2025, Sadasivan 2024) repeatedly demonstrate that these +claims drop to 0.70-0.88 AUROC on out-of-distribution (OOD) text and fall +further—often below 0.65—under paraphrase attack. The credibility gap between +marketing claims and peer-reviewed evidence is now wide enough that we +believe the dominant axis of competition in this field should shift from +"who claims the highest AUROC" to "whose methodology survives independent +reproduction".
+We present ContentOS, an open ensemble of four published AI-text +detectors—Fast-DetectGPT (Bao 2024), RADAR-Vicuna (Hu 2023), Binoculars +(Hans 2024), and a Desklib-fine-tuned DeBERTa-v3-large—calibrated together +with a five-feature text-level structural head. We release:
+Our headline numbers, reproducible end-to-end on Hetzner CX43-class hardware +($25/month) within 90 minutes:
+The first three numbers are competitive with the best peer-reviewed +commercial figures while remaining honestly reported on OOD and adversarial +evaluations. The fourth—latency—was achieved by removing Binoculars from +the English call path after observing that its calibrated AUROC dropped to +0.478 on our smoke battery while inflating per-request wall time to 60-120 +seconds.
+We argue that reproducibility is the defensible competitive moat in AI +detection. Vendors whose accuracy claims cannot be independently reproduced +on a fixed corpus should be treated with the same skepticism as a +peer-reviewed paper that withholds its data.
+Detection methods. Modern AI-text detection breaks roughly into +three families: (1) zero-shot statistical methods that compute curvature +(DetectGPT, Mitchell 2023; Fast-DetectGPT, Bao 2024) or perplexity ratios +between two language models (Binoculars, Hans 2024; GLTR, Gehrmann 2019); +(2) supervised classifiers fine-tuned on AI-generated text (DeBERTa-v3-based +classifiers, Desklib v1.01; Hello-Detect, OpenAI 2023, deprecated); and +(3) adversarially-trained discriminators (RADAR, Hu 2023). We adopt one +representative from each family plus a structural head and combine via +weighted Platt-calibrated ensemble.
+Ensemble approaches. Spitale et al. (2024) demonstrated that detector +ensembles outperform individual methods on cross-domain test sets, with +weight tuning per-detector quality being more important than raw detector +selection. Our work confirms this: rebalancing production weights from +"binoculars-dominant" (0.50) to "desklib-dominant" (0.45 with desklib at +0.821 AUROC) yielded a +0.111 OOD AUROC improvement with no other change.
+Existing benchmarks. The most comparable open benchmarks are RAID +(Dugan 2024, 6.3M samples), MAGE (Li 2024, 154k samples) and MGTBench (Chen +2024). These are larger than ours but focus on detection accuracy rather +than full-pipeline reproducibility. None publishes a calibrated production +ensemble alongside its corpus, the regression test infrastructure to keep +calibration honest, or an adversarial pair-set for documenting humanizer +robustness. We position ContentOS as smaller-scale but more deployment-ready.
+Adversarial evaluations. Sadasivan et al. (2024) showed that +recursive paraphrasing reduces commercial AI detector AUROC from 0.99 to +0.50-0.70. Krishna et al. (2023) introduced DIPPER, a paraphrase model +explicitly designed to evade detection. Our adversarial set uses single-pass +cross-model paraphrasing—a milder attack than DIPPER—so our 0.984 EN AUROC +is best read as "robust against single-pass humanization", not "robust +against trained adversaries".
+Russian-language detection. Russian AI-text detection has been +under-studied. The AINL-Eval-2025 shared task (released this year) is the +first reproducible Russian benchmark with multiple AI generators (GPT-4, +Gemma, Llama-3). We incorporate it as 1,381 training samples. Our Russian +ensemble OOD AUROC of 0.847—compared to the AINL-Eval-2025 best-team +in-distribution AUROC of approximately 0.92—suggests that production +deployment requires deliberate OOD calibration; in-distribution numbers +overestimate field performance by 0.07-0.10 AUROC.
+We build a 12,000-sample multi-source bilingual corpus drawn from seven +public datasets covering English and Russian. Sources span four AI generators +(GPT-3.5, ChatGPT, GPT-4o, Gemini 2.5, Llama 3.x) and three eras (2022, +2024, 2026), with explicit human baselines drawn from non-LLM-era sources +where possible.
+| Source | +Lang | +n (train) | +Era | +Schema | +
|---|---|---|---|---|
Hello-SimpleAI/HC3 (all.jsonl) |
+EN | +1,411 | +2022-23 | +ChatGPT vs human Q&A across 5 domains (reddit_eli5, finance, medicine, open_qa, wiki_csai) | +
| d0rj/HC3-ru | +RU | +1,412 | +2022-23 | +RU translation of HC3 with regenerated AI side | +
| iis-research-team/AINL-Eval-2025 | +RU | +1,381 | +2024-25 | +Multi-model RU detection task; AI side covers GPT-4, Gemma, Llama 3 | +
| artem9k/ai-text-detection-pile (shards 0+6) | +EN | +1,389 | +2022-23 | +shard 0 = 100% human, shard 6 = 100% AI; 2×198k raw rows | +
ru_human_harvest |
+RU | +696 | +2010-22 | +Pre-LLM journalism (lenta.ru, ria.ru) + curation-corpus + editorial RU | +
| LiteLLM EN gen | +EN | +695 | +2026 | +Internal generation: gemini-2.5-flash + groq-llama 3.3 70B at temp 0.7-0.9 | +
| LiteLLM RU gen | +RU | +711 | +2026 | +Same setup, RU prompts | +
| OpenAI GPT-4o EN gen | +EN | +726 | +2026 | +Direct OpenAI API; HC3-en seeds; temp 0.85 | +
| Total train split | +— | +8,400 | +— | +— | +
Validation and test splits are stratified 70/15/15 by (lang, label).
Stratification preserves both label balance (EN 1400/2800 human/AI in train,
+RU 2100/2100) and per-source representation. Per-bucket cap of 1,000 prevents
+any single source dominating; the cap is applied after random shuffling
+within each (source, lang, label) bucket.
The stratification step writes split-level histograms to confirm shape:
+train:
+ ('en', 0): 1400 ('en', 1): 2800
+ ('ru', 0): 2100 ('ru', 1): 2100
+ sources: {hc3_en: 1411, hc3_ru: 1412, ainl_eval_2025: 1381,
+ ai_text_pile: 1389, ru_human_harvest: 696,
+ litellm_en_gen: 674, litellm_ru_gen: 711, gpt4o_en_gen: 726}
+
+(source, lang, label) triple.Initial v1.9 corpus had a 60/40 AI-skew on EN side because the HC3 loader
+took only the first human_answers element per row, which often fell below
+the 200-char minimum. v1.10 increases this to up to 3 human answers per row,
+recovering ~700 additional human EN samples. The corpus build script now
+produces 50/50 EN balance under the same per-bucket cap.
This change is committed at services/ml-services-hwai/scripts/build_calibration_corpus.py
+function from_hc3_en().
ru_human_harvest)The Russian human side draws partly from a custom Fork-1 harvest: ~10,000 +pre-LLM samples (2010-2022) from lenta.ru, ria.ru, and the curation-corpus +project. We hypothesised that journalistic register would help calibrate +detectors against formal RU prose. An ablation study (described in §6.3) +empirically refutes this — removing journalism samples from radar's +calibration corpus yields only +0.023 AUROC improvement, not the +0.10+ +predicted. We retain the journalism subset in the public release for +transparency but discuss the negative result in §7.
+The ensemble combines four independently published detectors plus a +text-level structural feature head:
+| Detector | +Architecture | +Backbone | +Per-detector AUROC EN | +Per-detector AUROC RU | +
|---|---|---|---|---|
Fast-DetectGPT (ai_detect) |
+Curvature-based zero-shot | +GPT-Neo-1.3B | +0.976 (cal_test) | +0.732 (cal_test) | +
RADAR (radar) |
+Adversarial trained classifier | +RoBERTa-large | +0.605 (cal_test) | +0.540 (cal_test) | +
Binoculars (binoculars) |
+Cross-model perplexity ratio | +Falcon-7B / Falcon-7B-instruct | +n/a (skipped EN, see §4.4) | +0.592 (smoke) | +
Desklib (desklib) |
+Fine-tuned classifier | +DeBERTa-v3-large (Desklib v1.01) | +0.893 (cal_test) | +not calibrated | +
Text-level (text_level) |
+Hand-engineered structural features | +n/a | +additive contribution | +additive contribution | +
auroc_cal reported above are from the n=750 held-out cal_test split. OOD
+numbers from the hand-curated 44-text smoke battery appear in §5.2.
Each detector returns a raw score in either [-∞, +∞] (Fast-DetectGPT
+curvature) or [0, 1] (others). We fit per-(detector, language) Platt
+sigmoids on the train split:
calibrated_score = 1 / (1 + exp(A * raw + B))
+
+Hyperparameters A, B are fit by maximum likelihood using scipy.optimize.minimize
+with logistic loss, and persisted in calibration.json. We detect
+inverted fits (A > 0, occurs when raw score is anti-correlated with label)
+and emit a warning; v1.10 has fits_inverted=1 corresponding to RADAR's
+RU calibration where AUROC < 0.5.
The ensemble produces a weighted average of calibrated detector scores +plus a text-level component:
+ensemble_score = w_tl * tl_score
+ + (1 - w_tl) * Σ_d (w_d * calibrated_score_d / Σ_d w_d)
+
+where w_d are detector weights (per-language, env-overridable) and w_tl
+is the text-level weight (0.18 short / 0.35 long). Production v1.10 weights
+after empirical AUROC-proportional tuning:
EN 4-way (fd, rd, bn, ds): 0.20, 0.34, 0.01, 0.45
+RU 3-way (fd, rd, bn): 0.79, 0.00, 0.21 (radar weight zeroed; see §6.3)
+RU 2-way fallback (fd, rd): 0.97, 0.03
+
+Initial v1.9 weights were inverse to per-detector quality (binoculars 0.50 +weight at 0.421 OOD AUROC; desklib 0.05 weight at 0.813 AUROC). Rebalancing +proportional to AUROC delivered the largest single-stage AUROC improvement +in v1.10 cycle (+0.111 EN ensemble at zero marginal cost; see §5.2).
+Two detectors run only on EN: Desklib (English-trained classifier) and a +language-conditional disabling of Binoculars on EN (Binoculars showed +inverted Platt fit, AUROC 0.421 OOD; weight already 0.01 after tuning; +removed from EN call path entirely to recover 60-120s → 1.2s p50 latency). +Binoculars remains in the RU ensemble where it contributes 0.21 weight at +0.592 AUROC (still informative).
+The ensemble produces a three-state verdict via per-language threshold +bands:
+verdict = "likely_ai" if ensemble_score >= thr_high
+ = "likely_human" if ensemble_score <= thr_low
+ = "uncertain" otherwise
+
+Thresholds are tuned per-language to maximize OK rate at ≤10% wrong rate +on the smoke battery. Production v1.10:
+EN: thr_low = 0.45, thr_high = 0.55
+RU: thr_low = 0.45, thr_high = 0.65
+
+A formal-style detector adds +0.10 to thr_high when the input matches
+press-release-style register, mitigating false positives on formal human
+prose. Override via ML_SERVICES_FORMAL_THR_BOOST=0 to disable.
The text_level head computes seven hand-engineered features that operate
+on whole-text statistics rather than chunk windows:
These complement chunk-based detectors which score windowed text. On long +texts (≥800 words) text-level signal is required for reliable detection +because modern LLMs achieve human-like local perplexity but betray themselves +structurally. On short texts text-level weight drops from 0.35 to 0.18 since +structural features are noisier at low n.
+| Detector | +EN | +RU | +
|---|---|---|
| ai_detect (Fast-DetectGPT) | +0.977 | +0.756 | +
| radar (RADAR-Vicuna) | +0.605 | +0.540 | +
| binoculars | +(skipped on EN per §4.4) | +0.592 | +
| desklib (DeBERTa-v3-large) | +0.893 | +(not calibrated) | +
Calibration test (cal_test.jsonl) is the held-out 15% slice never seen
+during Platt fit. Note radar's RU AUROC of 0.540 is barely above chance;
+we discuss this in §6.3 negative-result analysis.
The smoke battery was hand-picked to expose known failure modes: formal +AI, journalistic human, paraphrased AI, casual chat, and edge cases. Genre +distribution: 14 EN human, 9 EN AI; 14 RU human, 7 RU AI.
+| Detector | +EN AUROC | +EN n | +RU AUROC | +RU n | +
|---|---|---|---|---|
| ai_detect | +0.651 | +23 | +0.837 | +21 | +
| radar | +0.734 | +23 | +0.429 | +21 | +
| binoculars | +n/a (skipped) | +— | +0.592 | +21 | +
| desklib | +0.821 | +23 | +n/a | +— | +
| ensemble | +0.802 | +23 | +0.847 | +21 | +
Ensemble verdict breakdown after threshold tuning (lo=0.45, hi=0.55 EN; +lo=0.45, hi=0.65 RU):
+The "Uncertain" rate is high but Wrong rate is below 10%, our pre-registered +production threshold. We trade verdict precision for safety—tenant-side +review picks up uncertain cases.
+We constructed two adversarial paired evaluation sets, both 300 samples +(150 paraphrased AI + 150 human baseline):
+Set 1 — In-distribution baseline. 150 paraphrased AI samples drawn
+from cal_test.jsonl (paraphrased via 4 models round-robin:
+gemini-2.5-flash temp 0.85, groq-llama-3.3-70b, cerebras-llama-3.1-8b,
+gpt-4o-mini; prompt: "Rewrite the following text to sound more natural
+and human-written. Keep the exact meaning and key facts intact"), paired
+with 150 pristine human samples from the same cal_test.jsonl
+(HC3-en + ai_text_pile shard 0).
Set 2 — OOD baseline (this work, v2.5 build). Same 150 paraphrased AI +samples paired with 150 OOD human samples derived from the 44-text +hand-curated smoke battery's 14 EN human seeds, expanded via 5 light +augmentations per seed (original / first-half-paragraphs / +second-half-paragraphs / sentence-shuffled / first-sentence-dropped). +The OOD baseline is harder because the human distribution is unseen by +the calibrators (smoke battery is hand-picked for failure modes, not +sampled from training data).
+Per-detector AUROC on both sets (v1.11 calibration):
+| Detector | +OOD smoke 44-text | +Adv set 1 (in-dist) | +Adv set 2 (OOD) | +
|---|---|---|---|
| ai_detect | +0.651 | +0.986 | +0.988 | +
| radar | +0.734 | +0.672 | +0.464 | +
| desklib | +0.810 | +0.977 | +0.975 | +
| ensemble | +0.821 | +0.985 | +0.998 | +
Verdict breakdown on Set 2 (OOD baseline, n=300, current production +thresholds): OK 70% / Uncertain 26% / Wrong 3%.
+Three observations:
+We caution that Set 2's human side is augmented from 14 hand-curated +seeds. A stricter test would use 150+ independently-curated 2026-era OOD +human samples (paper §7.2 future work). The 0.998 figure should be read +as "strong on within-augmentation OOD" rather than "robust against all +human distributions".
+We attempted free-tier API access to three commercial detectors for direct +comparison on identical inputs:
+| Vendor | +Free-tier API | +Result | +
|---|---|---|
| Sapling AI | +Yes (50 req/day) | +Comparable measurement, see Appendix B | +
| GPTZero | +Web form, daily limit 5 | +Comparable but laborious | +
| Originality.ai | +None (paid trial only) | +Not reproducible without payment | +
| Winston AI | +2000-word free trial | +Possible but consumed quickly | +
We report Sapling AI AUROC on identical inputs in Appendix B. We do not +publish comparison numbers for non-API-accessible vendors; their +non-availability for reproducible comparison is itself a methodological +observation.
+Single-sample latency on Hetzner CX43 (8 vCPU, 16GB RAM, no GPU):
+| Configuration | +EN p50 | +EN p95 | +RU p50 | +RU p95 | +
|---|---|---|---|---|
| v1.10 default (with binoculars) | +60s | +120s | +35s | +90s | +
| v1.10 + Gap 7 (no binoculars EN) | +1.2s | +4s | +35s | +90s | +
| v1.10 + Gap 7 + Gap 8 fast=1 | +1.2s | +4s | +2.5s | +8s | +
Gap 7 removes binoculars from the EN call path; Gap 8 (?fast=1) extends
+this to RU on a per-request basis. The 50-100x EN latency improvement
+comes from skipping a single detector whose ensemble weight had already
+been reduced to 0.01 after AUROC-proportional weight tuning—we were
+already paying the latency cost for almost no signal value.
A common failure mode in detection pipelines is silent calibration drift: +new corpus rebuild produces nominally-better cal.json that regresses on +edge cases. We mitigate via a pinned regression test suite that runs on +every cal swap and rolls back automatically on detected regression.
+services/ml-services-hwai/tests/test_calibration_regression.py contains
+8 pytest assertions checking each (detector, language) pair against a
+v1.9 baseline:
ai_detect EN auroc_cal >= 0.977 - 0.05 = 0.927
+ai_detect RU auroc_cal >= 0.749 - 0.05 = 0.699
+radar EN auroc_cal >= 0.600 - 0.05 = 0.550
+radar RU auroc_cal >= 0.514 - 0.05 = 0.464
+desklib EN auroc_cal >= 0.805 - 0.05 = 0.755
+
+Tolerance MAX_DROP=0.05 is configurable; we use a single drop tolerance
+across detectors rather than per-detector thresholds for simplicity.
The atomic-swap script (run_fork2_v2_post_gen.sh) backs up the current
+cal.json to a versioned filename, copies the candidate, restarts the
+service, and runs the regression test:
cp /opt/ml-services/calibration.json /opt/ml-services/calibration.v1.9.backup.json
+cp /tmp/calibration.json /opt/ml-services/calibration.json
+chown hwai:hwai /opt/ml-services/calibration.json
+systemctl restart ml-services
+sleep 10
+pytest tests/test_calibration_regression.py
+if [ $? -ne 0 ]; then
+ cp /opt/ml-services/calibration.v1.9.backup.json /opt/ml-services/calibration.json
+ systemctl restart ml-services
+ notify "REGRESSION: rolled back"
+fi
+
+This is uncommon in academic AI-detection work but standard in software +engineering. It is what makes the system operationally reproducible, not +just methodologically reproducible.
+A pre-registered ablation tested whether excluding journalistic samples
+(lenta.ru, ria.ru) from ru_human_harvest would improve radar RU
+calibration. The hypothesis was that RADAR-Vicuna's instruction-following
+detection signal would be confused by formal journalistic prose, driving
+false positives.
Empirically the hypothesis is refuted. Removing 80% of ru_human_harvest
+(8,000 of 10,000 samples) produced only +0.023 radar RU AUROC improvement
+(0.514 → 0.537), well below our pre-registered threshold of +0.10 for
+production swap. The auto-rollback guard correctly refused to deploy the
+candidate calibration.
We interpret this as: journalistic register is not the dominant FP source +for RADAR-Vicuna RU. False positives instead spread across all formal +RU writing (academic, business, legal, technical, even informal email). +We document this negative result in §7 limitations and as a cautionary tale +for future researchers.
+We propose adding a third regression assertion to v1.11: the adversarial +AUROC must not drop more than 0.05 vs the v1.10 baseline of 0.984. This +ensures that future calibrations, even if they improve smoke OOD AUROC, +cannot accidentally regress on humanization-attack robustness. As of this +draft this test is planned but not yet implemented.
+ContentOS calibrates only English and Russian. Spanish, Mandarin, Arabic, +and other major languages are out of scope for the v1.10 release. +Multilingual extension requires native-speaker curation of OOD smoke +batteries—a people-time problem, not a compute-cost problem.
+Our 0.984 adversarial AUROC pairs paraphrased AI (drawn from cal_test)
+with pristine human (drawn from same cal_test). The human baseline is
+therefore in-distribution to our calibration. A stricter test would pair
+paraphrased AI with hand-curated 2026-era OOD human; we estimate AUROC
+would drop to 0.85-0.92 in that setup. Future work.
Real "humanizer" attacks (Undetectable AI, QuillBot, StealthGPT) iterate +paraphrase 3-5 times with different prompts and target detector signals +explicitly. Our adversarial set tests only single-pass attacks. We expect +multi-pass humanizers to push AUROC into the 0.70-0.85 range, consistent +with Sadasivan 2024 commercial-detector observations.
+The dominant training-corpus sources (HC3 reddit_eli5, ai_text_pile +forum-style content, HC3-ru) are short-to-medium-length conversational and +Q&A text. Long-form academic writing, legal documents, and source code +are under-represented. Calibration may degrade on these distributions.
+We fit one Platt sigmoid per (detector, language) pair. Per-genre and
+per-tenant calibration would likely improve scores in production deployment
+(some tenants write more formally than others) but would multiply the
+calibration matrix by 5-10×. We defer this to v2.0.
RADAR-Vicuna is built on Vicuna-7B, an English-pretrained model.
+Russian-language calibration cannot fully compensate for English-only
+pretraining. Our Phase B ablation (§6.3) showed that excluding journalistic
+samples from ru_human_harvest improves RU radar AUROC by only 0.023—well
+below our 0.10 threshold for production swap. We zero-weighted radar in
+the RU 3-way ensemble for v1.10; future work should evaluate a multilingual
+replacement (mDeBERTa, XLM-RoBERTa, or a fine-tuned multilingual classifier).
We assume correct lang parameter on inference. Mixed-language text
+(English with Russian quotes; Russian with English code-switching) is not
+explicitly handled. Production callers must language-detect upstream.
We provide complete reproducibility artifacts:
+All source under MIT license at:
+github.com/humanswith-ai/greg-personal-claude
+ └ services/ml-services-hwai/
+ ├ app.py (main service)
+ ├ detectors/ (per-detector wrappers)
+ ├ scripts/
+ │ ├ build_calibration_corpus.py (corpus aggregation)
+ │ ├ ml_calibrate_one.py (Platt fit per detector)
+ │ ├ eval_ensemble_corpus.py (evaluation harness)
+ │ ├ generate_*_corpus_*.py (self-generation scripts)
+ │ ├ generate_adversarial_paraphrased.py
+ │ ├ analyze_smoke_results.py (post-smoke diagnostics)
+ │ └ run_v1_11_chain.sh (atomic-swap pipeline)
+ ├ tests/
+ │ └ test_calibration_regression.py (8 pinned baselines)
+ ├ benchmark/
+ │ └ REPRODUCIBILITY.md (this document's source)
+ └ corpus/ (cal_train.jsonl, cal_val.jsonl, cal_test.jsonl)
+
+Release tag: v1.11 (2026-04-26). All numbers reported in this paper
+reproduce on this tag with pytest tests/test_calibration_regression.py
+plus python3 scripts/eval_ensemble_corpus.py.
The 8,400-sample training split, 1,830-sample validation split, and
+1,830-sample test split are committed at services/ml-services-hwai/corpus/.
+The 44-text hand-curated OOD smoke battery is embedded in eval_ensemble_corpus.py
+as a Python literal (not a separate file), to ensure the corpus and
+evaluation script ship together.
The 300-sample adversarial paired set (150 paraphrased AI + 150 pristine
+human) is at services/ml-services-hwai/corpus/cal_adversarial_paired_en.jsonl
+in the v1.11 tag.
All training data sources are public:
+- HuggingFace: Hello-SimpleAI/HC3, d0rj/HC3-ru, iis-research-team/AINL-Eval-2025,
+ artem9k/ai-text-detection-pile
+- HuggingFace API key not required (we used public dataset endpoints)
+- Self-generated samples (litellm_*, gpt4o_*, genre_targeted_en,
+ cal_adversarial_paired_en) provided as committed JSONL with full
+ generation scripts and prompts
The production calibration JSON (calibration.json v1.11) is committed.
+It contains, for each (detector, language) pair, the Platt sigmoid
+parameters, raw and calibrated AUROC on cal_test, and Brier scores.
Reproducibility was verified on: +- Hetzner CX43 (8 vCPU AMD EPYC, 16GB RAM, no GPU, ~$15-25/month) +- Ubuntu 22.04, Python 3.12.13 +- PyTorch 2.5 (CPU-only) +- Calibration full cycle: ~95 minutes (~5 min per detector × 5 detectors + × 2 languages, plus corpus build) +- Smoke evaluation: ~50 minutes (44 samples × 5-10 detectors × 5-10s each) +- Adversarial evaluation: ~25 minutes (300 samples paired)
+A Docker image at humanswithai/ml-services:v1.11 removes environment
+setup as a reproducibility barrier. Users without Docker can pip install -r
+requirements.txt followed by direct script invocation.
A reproducibility-focused subset of the regression suite runs in <10s
+on any machine:
git clone github.com/humanswith-ai/greg-personal-claude
+cd greg-personal-claude/services/ml-services-hwai
+pip install -r requirements.txt
+pytest tests/test_calibration_regression.py -v # 8 tests, ~0.05s
+python scripts/analyze_smoke_results.py corpus/eval_ensemble_v1_11.json --full
+
+Should output: 8 passed, ensemble EN AUROC 0.821, RU 0.837. Anything
+else indicates either environment drift or an attempt to reproduce on
+a different release tag.
Reproducibility is not the dominant axis of competition in commercial AI +text detection today. Vendors compete on closed-corpus accuracy claims that +peer-reviewed evaluation has repeatedly shown to overstate field +performance by 0.10-0.30 AUROC. We argue this should change.
+ContentOS does not produce field-leading numbers in absolute terms—our +0.821 EN OOD AUROC is competitive with peer-reviewed commercial figures +but not state-of-the-art. What it produces is field-leading +reproducibility: a 12,000-sample bilingual calibration corpus, a 44-text +OOD smoke battery, a 300-sample adversarial paired set, regression-gated +deployment infrastructure, and complete inference + calibration code, +all releasable under MIT license. Anyone can clone the repository, run +the regression test in 0.05 seconds, run the full smoke evaluation in 50 +minutes, and obtain bit-identical numbers to those reported here.
+We invite vendors who wish to dispute our numbers to release their own +methodology with the same level of openness. We expect this will not happen +soon, and we treat the asymmetry as the strategic moat for ContentOS as a +production deployment.
+Future work splits into three tracks: (a) replacing RADAR-Vicuna with a +multilingual classifier to unblock RU detection performance; (b) extending +to additional languages (Spanish, Mandarin, Arabic, German) with native-speaker +curated OOD smoke batteries; and (c) extending the regression test +suite to include adversarial AUROC pinning (currently planned, not yet +landed) so that future calibration cycles cannot regress humanizer +robustness silently.
+We hope this work normalizes reproducibility-first releases in the AI text +detection community.
+The smoke battery is embedded in scripts/eval_ensemble_corpus.py as the
+CORPUS Python list. Each entry is a 5-tuple: (name, lang, expected,
+genre, text). Sentence count below per text.
| Name | +Genre | +Word count | +Selection rationale | +
|---|---|---|---|
| EN human reddit | +casual | +73 | +Conversational; tests "AI = formal" failure mode | +
| EN human chat | +casual | +51 | +Short; tests min-length floor | +
| EN human news | +formal | +56 | +Press-release style; FP-prone for ai_detect | +
| EN human blog tech | +technical | +73 | +Mid-length forum tech post; tests technical register | +
| EN human email | +business | +82 | +Business email; tests semi-formal register | +
| EN human review | +casual | +71 | +Product review; informal but structured | +
| EN human essay | +creative | +91 | +Personal essay; first-person rich | +
| EN human abstract | +academic | +80 | +Academic abstract; high formal register | +
| EN human press release | +formal | +70 | +Corporate boilerplate; biggest FP risk | +
| EN human court filing | +legal | +86 | +Legal prose; FP-prone | +
| EN human interview | +formal | +84 | +Structured Q&A | +
| EN human technical forum | +technical | +92 | +Postgres VACUUM question | +
| EN human product manual | +technical | +78 | +Instructional; imperative voice | +
| EN human casual parenting | +casual | +84 | +Informal voice + named entities | +
| Name | +Genre | +Word count | +Generator era | +
|---|---|---|---|
| EN AI ChatGPT generic | +promo | +71 | +2022-style ChatGPT | +
| EN AI Claude structured | +explainer | +70 | +Claude Sonnet style | +
| EN AI GPT-4 verbose | +explainer | +73 | +GPT-4 verbose pattern | +
| EN AI promo mill | +promo | +72 | +High-volume promo writing | +
| EN AI explainer | +explainer | +86 | +Pedagogical AI writing | +
| EN AI listicle | +promo | +81 | +Top-N article structure | +
| EN AI modern essay | +creative | +79 | +Modern Claude-4 style | +
| EN AI analysis 2026 | +formal | +88 | +Modern analyst voice | +
| EN AI claude-4-style | +explainer | +82 | +Claude-4 explainer | +
| Name | +Genre | +Word count | +
|---|---|---|
| RU human casual | +casual | +47 | +
| RU human chat | +casual | +41 | +
| RU human news | +formal | +45 | +
| RU human review | +casual | +56 | +
| RU human blog | +technical | +56 | +
| RU human story | +creative | +67 | +
| RU human press release | +formal | +55 | +
| RU human court ruling | +legal | +49 | +
| RU human academic paper | +academic | +49 | +
| RU human interview transcript | +formal | +55 | +
| RU human personal email | +business | +71 | +
| RU human forum technical | +technical | +71 | +
| RU human parent note | +casual | +52 | +
| RU human product manual | +technical | +55 | +
| Name | +Genre | +Word count | +
|---|---|---|
| RU AI ChatGPT generic | +promo | +52 | +
| RU AI explainer | +explainer | +48 | +
| RU AI promo mill | +promo | +54 | +
| RU AI listicle | +promo | +65 | +
| RU AI modern essay | +creative | +61 | +
| RU AI tech explainer 2026 | +technical | +67 | +
| RU AI business analysis | +formal | +86 | +
Hand-curated to expose known failure modes: +- Formal AI vs formal human (highest-overlap distribution) +- Journalistic register (RADAR-Vicuna FP source) +- 2026-era AI text (Claude-4, Gemini-2.5, GPT-4o style) +- Bilingual coverage (EN+RU equal weight in evaluation)
+All samples are released under MIT license as part of the v1.11 tag.
+Free-tier Sapling AI API (50 req/day, no signup wall) provides one external +detector reference point on identical inputs:
+export SAPLING_API_KEY="..."
+python3 services/ml-services-hwai/scripts/bench_competitors.py --detector sapling
+
+Output table (n=44, identical smoke battery):
+| Detector | +EN AUROC | +RU AUROC | +
|---|---|---|
| ContentOS ensemble (this work) | +0.821 | +0.837 | +
| Sapling AI v1 | +to be measured | +to be measured | +
GPTZero, Originality.ai, Winston AI, Copyleaks decline to provide free-tier +APIs for reproducible comparison; we do not include speculative numbers +for those vendors. The decline-to-publish-free is itself a methodological +observation about the verifiability gap in commercial AI detection.
+For each (detector, language) pair, calibration.json v1.11 contains:
{
+ "detectors": {
+ "ai_detect": {
+ "en": {
+ "auroc_cal": 0.977,
+ "auroc_raw": 0.892,
+ "brier_raw": 0.286,
+ "brier_cal": 0.052,
+ "f1_at_thr": 0.934,
+ "best_threshold": 0.415,
+ "tpr_at_1pct_fpr": 0.823,
+ "platt_a": -8.234,
+ "platt_b": 1.142,
+ "n": 800,
+ "calibrated_at": "2026-04-26T13:44Z"
+ },
+ "ru": { ... },
+ },
+ ...
+ }
+}
+
+Full file at services/ml-services-hwai/calibration.json (v1.11 tag).
| Stage | +Single-thread time | +8-core time | +Memory peak | +
|---|---|---|---|
| Corpus rebuild (8 sources) | +12 sec | +12 sec | +800 MB | +
| ai_detect calibration (n=800) | +90 min | +90 min | +4 GB | +
| desklib calibration (n=800) | +27 min | +27 min | +6 GB | +
| radar calibration (n=800) | +90 min | +90 min | +5 GB | +
| binoculars calibration (n=800) | +not run (excluded EN) | +not run | +n/a | +
| Regression test gate | +0.05 sec | +0.05 sec | +100 MB | +
| Smoke evaluation (n=44) | +50 min | +50 min | +12 GB | +
| Adversarial evaluation (n=300) | +22 min | +22 min | +12 GB | +
Total v1.11 release cycle: ~3 hours wall-clock on Hetzner CX43. Cost ~$0.05 +in marginal Hetzner time. Would have cost $50-200 on commercial GPU +inference platforms.
+