---
language:
- en
- ru
license: mit
tags:
- ai-text-detection
- reproducibility
- bilingual
- adversarial-robustness
- calibration
---

# ContentOS — Reproducible Bilingual AI-Text-Detection Ensemble

**Pre-print v1.0 (2026-04-27)**

This repository contains the open pre-print and supporting artifacts for ContentOS, a reproducible English+Russian AI-text-detection ensemble.

## Authors and affiliation

- **Gregory Shevchenko** — author, Humanswith.ai (founder)
- **Humanswith.ai team** — methodology, calibration, evaluation infrastructure

ContentOS is a Humanswith.ai product. This pre-print is published under the author's personal HuggingFace account; the supporting code repository is maintained under the organization account (see "Code repository" below).

- Author profile: https://huggingface.co/gshevchenko
- Organization: https://humanswith.ai
- Contact for collaboration: open a Discussion on this dataset

## Code repository

Public benchmark + evaluation scripts: **https://github.com/humanswith-ai/contentos-benchmark**

The repo includes a regression test suite (8 pinned baselines, runs in 0.05 s), partial-tolerant streaming-CSV evaluation scripts, a per-genre AUROC analyzer, and the calibration JSON shape for the v1.11 production state.

## Headline numbers (v1.11 production, 2026-04-29 measurement)

| Metric | EN | RU |
|---|---|---|
| OOD AUROC (176-sample expanded smoke) | **0.864** | **0.846** |
| Wrong-rate | 4% | 9% |
| p50 latency (EN ensemble) | **1.2 s** | — |
| Adversarial AUROC (n=300, OOD-paired) | **0.998** | — |

The earlier v1.0 paper reported 0.802 / 0.847 on the original 44-text smoke battery; the 4× expanded battery, class-balanced per (lang, genre) cell, stabilized the numbers upward. Per-genre details are in the [companion repo](https://github.com/humanswith-ai/contentos-benchmark).
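The core of a per-genre AUROC breakdown like the one described above can be sketched with the rank-sum (Mann-Whitney U) formulation; this is an illustrative stand-in, not the benchmark's actual analyzer, and the `(genre, label, score)` record layout is an assumption rather than the contentos-benchmark schema:

```python
# Hypothetical per-genre AUROC sketch; field layout (genre, label, score)
# is an assumption for illustration, not the actual benchmark schema.
from collections import defaultdict

def auroc(labels, scores):
    """AUROC via the rank-sum (Mann-Whitney U) formulation.
    labels: 1 = AI-generated, 0 = human; scores: detector outputs."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        return float("nan")  # undefined when only one class is present
    # Count (AI, human) pairs where the AI text outscores the human one;
    # ties contribute half a pair.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def per_genre_auroc(records):
    """records: iterable of (genre, label, score) tuples."""
    by_genre = defaultdict(lambda: ([], []))
    for genre, label, score in records:
        by_genre[genre][0].append(label)
        by_genre[genre][1].append(score)
    return {g: auroc(ys, ss) for g, (ys, ss) in by_genre.items()}

# Example: a perfectly separated "news" cell yields AUROC 1.0.
records = [("news", 1, 0.9), ("news", 0, 0.2),
           ("news", 1, 0.7), ("news", 0, 0.4)]
print(per_genre_auroc(records))  # {'news': 1.0}
```

The pairwise formulation is O(n²) per genre but dependency-free; on class-balanced per-(lang, genre) cells of smoke-battery size that cost is negligible.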
## Files

- `paper.pdf` — full pre-print (~6,000 words, 9 sections + 5 appendices)
- `paper.html` — self-contained HTML version with embedded figures
- `paper.md` — source markdown
- `figures/` — 4 figures (PNG + SVG)
- `REPRODUCIBILITY.md` — open methodology, how to reproduce in 90 minutes

## Reproducibility

The full methodology and calibration corpus description are documented in `REPRODUCIBILITY.md`, which is sufficient for independent re-implementation of the ensemble.

A public mirror with the evaluation scripts (`eval_ensemble_corpus.py`, 8 pinned regression tests, atomic-swap deploy with 30-second rollback) will be released within ~2 weeks, following the v1.12 RU recalibration chain. Target reproduction infrastructure: Hetzner CX43 (8 vCPU, no GPU, ~€14/month) or equivalent.

For early access before the public mirror, please open a Discussion on this dataset.

## Cite as

```bibtex
@misc{contentos2026,
  title={ContentOS: A Reproducible Bilingual AI-Text-Detection Ensemble with Adversarial Robustness Evaluation},
  author={Humanswith.ai team},
  year={2026},
  url={https://huggingface.co/datasets/gshevchenko/contentos-preprint},
}
```

## License

MIT for code, methodology, and corpus aggregation. Underlying data sources retain their original licenses (HC3, AINL-Eval-2025, ai-text-detection-pile).