<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta charset="utf-8" />
<meta name="generator" content="pandoc" />
<meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
<title>ContentOS Preprint v1.0.2</title>
<style>
/* Default styles provided by pandoc.
** See https://pandoc.org/MANUAL.html#variables-for-html for config info.
*/
html {
color: #1a1a1a;
background-color: #fdfdfd;
}
body {
margin: 0 auto;
max-width: 36em;
padding-left: 50px;
padding-right: 50px;
padding-top: 50px;
padding-bottom: 50px;
hyphens: auto;
overflow-wrap: break-word;
text-rendering: optimizeLegibility;
font-kerning: normal;
}
@media (max-width: 600px) {
body {
font-size: 0.9em;
padding: 12px;
}
h1 {
font-size: 1.8em;
}
}
@media print {
html {
background-color: white;
}
body {
background-color: transparent;
color: black;
font-size: 12pt;
}
p, h2, h3 {
orphans: 3;
widows: 3;
}
h2, h3, h4 {
page-break-after: avoid;
}
}
p {
margin: 1em 0;
}
a {
color: #1a1a1a;
}
a:visited {
color: #1a1a1a;
}
img {
max-width: 100%;
}
svg {
height: auto;
max-width: 100%;
}
h1, h2, h3, h4, h5, h6 {
margin-top: 1.4em;
}
h5, h6 {
font-size: 1em;
font-style: italic;
}
h6 {
font-weight: normal;
}
ol, ul {
padding-left: 1.7em;
margin-top: 1em;
}
li > ol, li > ul {
margin-top: 0;
}
blockquote {
margin: 1em 0 1em 1.7em;
padding-left: 1em;
border-left: 2px solid #e6e6e6;
color: #606060;
}
code {
white-space: pre-wrap;
font-family: Menlo, Monaco, Consolas, 'Lucida Console', monospace;
font-size: 85%;
margin: 0;
hyphens: manual;
}
pre {
margin: 1em 0;
overflow: auto;
}
pre code {
padding: 0;
overflow: visible;
overflow-wrap: normal;
}
.sourceCode {
background-color: transparent;
overflow: visible;
}
hr {
border: none;
border-top: 1px solid #1a1a1a;
height: 1px;
margin: 1em 0;
}
table {
margin: 1em 0;
border-collapse: collapse;
width: 100%;
overflow-x: auto;
display: block;
font-variant-numeric: lining-nums tabular-nums;
}
table caption {
margin-bottom: 0.75em;
}
tbody {
margin-top: 0.5em;
border-top: 1px solid #1a1a1a;
border-bottom: 1px solid #1a1a1a;
}
th {
border-top: 1px solid #1a1a1a;
padding: 0.25em 0.5em 0.25em 0.5em;
}
td {
padding: 0.125em 0.5em 0.25em 0.5em;
}
header {
margin-bottom: 4em;
text-align: center;
}
#TOC li {
list-style: none;
}
#TOC ul {
padding-left: 1.3em;
}
#TOC > ul {
padding-left: 0;
}
#TOC a:not(:hover) {
text-decoration: none;
}
span.smallcaps{font-variant: small-caps;}
div.columns{display: flex; gap: min(4vw, 1.5em);}
div.column{flex: auto; overflow-x: auto;}
div.hanging-indent{margin-left: 1.5em; text-indent: -1.5em;}
/* The extra [class] is a hack that increases specificity enough to
override a similar rule in reveal.js */
ul.task-list[class]{list-style: none;}
ul.task-list li input[type="checkbox"] {
font-size: inherit;
width: 0.8em;
margin: 0 0.8em 0.2em -1.6em;
vertical-align: middle;
}
.display.math{display: block; text-align: center; margin: 0.5rem auto;}
/* CSS for syntax highlighting */
html { -webkit-text-size-adjust: 100%; }
pre > code.sourceCode { white-space: pre; position: relative; }
pre > code.sourceCode > span { display: inline-block; line-height: 1.25; }
pre > code.sourceCode > span:empty { height: 1.2em; }
.sourceCode { overflow: visible; }
code.sourceCode > span { color: inherit; text-decoration: inherit; }
div.sourceCode { margin: 1em 0; }
pre.sourceCode { margin: 0; }
@media screen {
div.sourceCode { overflow: auto; }
}
@media print {
pre > code.sourceCode { white-space: pre-wrap; }
pre > code.sourceCode > span { text-indent: -5em; padding-left: 5em; }
}
pre.numberSource code
{ counter-reset: source-line 0; }
pre.numberSource code > span
{ position: relative; left: -4em; counter-increment: source-line; }
pre.numberSource code > span > a:first-child::before
{ content: counter(source-line);
position: relative; left: -1em; text-align: right; vertical-align: baseline;
border: none; display: inline-block;
-webkit-touch-callout: none; -webkit-user-select: none;
-khtml-user-select: none; -moz-user-select: none;
-ms-user-select: none; user-select: none;
padding: 0 4px; width: 4em;
color: #aaaaaa;
}
pre.numberSource { margin-left: 3em; border-left: 1px solid #aaaaaa; padding-left: 4px; }
div.sourceCode
{ }
@media screen {
pre > code.sourceCode > span > a:first-child::before { text-decoration: underline; }
}
code span.al { color: #ff0000; font-weight: bold; } /* Alert */
code span.an { color: #60a0b0; font-weight: bold; font-style: italic; } /* Annotation */
code span.at { color: #7d9029; } /* Attribute */
code span.bn { color: #40a070; } /* BaseN */
code span.bu { color: #008000; } /* BuiltIn */
code span.cf { color: #007020; font-weight: bold; } /* ControlFlow */
code span.ch { color: #4070a0; } /* Char */
code span.cn { color: #880000; } /* Constant */
code span.co { color: #60a0b0; font-style: italic; } /* Comment */
code span.cv { color: #60a0b0; font-weight: bold; font-style: italic; } /* CommentVar */
code span.do { color: #ba2121; font-style: italic; } /* Documentation */
code span.dt { color: #902000; } /* DataType */
code span.dv { color: #40a070; } /* DecVal */
code span.er { color: #ff0000; font-weight: bold; } /* Error */
code span.ex { } /* Extension */
code span.fl { color: #40a070; } /* Float */
code span.fu { color: #06287e; } /* Function */
code span.im { color: #008000; font-weight: bold; } /* Import */
code span.in { color: #60a0b0; font-weight: bold; font-style: italic; } /* Information */
code span.kw { color: #007020; font-weight: bold; } /* Keyword */
code span.op { color: #666666; } /* Operator */
code span.ot { color: #007020; } /* Other */
code span.pp { color: #bc7a00; } /* Preprocessor */
code span.sc { color: #4070a0; } /* SpecialChar */
code span.ss { color: #bb6688; } /* SpecialString */
code span.st { color: #4070a0; } /* String */
code span.va { color: #19177c; } /* Variable */
code span.vs { color: #4070a0; } /* VerbatimString */
code span.wa { color: #60a0b0; font-weight: bold; font-style: italic; } /* Warning */
</style>
</head>
<body>
<header id="title-block-header">
<h1 class="title">ContentOS Preprint v1.0.2</h1>
</header>
<h1
id="contentos-a-reproducible-bilingual-ai-text-detection-ensemble-with-adversarial-robustness-evaluation">ContentOS:
A Reproducible Bilingual AI-Text-Detection Ensemble with Adversarial
Robustness Evaluation</h1>
<blockquote>
<p>ContentOS team, Humanswith.ai, 2026-04-27. Pre-print version v1.0.
Source: <code>services/ml-services-hwai/benchmark/paper.md</code>
(auto-merged from three companion drafts; see
<code>merge_paper.py</code>).</p>
</blockquote>
<h2 id="abstract">Abstract</h2>
<p>Commercial AI-text-detection vendors publish accuracy claims of 99%+
on proprietary corpora that remain inaccessible to external auditors.
Independent peer-reviewed evaluations have repeatedly shown that measured
performance drops to 0.70-0.88 AUROC on out-of-distribution and modern-era text. We
present <strong>ContentOS</strong>, a reproducible ensemble of four AI
detectors (Fast-DetectGPT, RADAR-Vicuna, Binoculars, Desklib-fine-tuned
DeBERTa-v3-large) calibrated on a 12,000-sample bilingual (English +
Russian) corpus drawn from seven public datasets covering 2022-2026 era
AI generators (GPT-4o, Gemini 2.5, Groq Llama, Cerebras Llama).</p>
<p>We release the full calibration corpus, evaluation harness,
regression test suite, and a 300-sample held-out adversarial corpus
produced via cross-model single-pass paraphrasing.</p>
<p><strong>Headline numbers — v1.11 ensemble on 176-sample expanded
smoke battery (2026-04-29 measurement):</strong> AUROC <strong>0.864
(English)</strong> and <strong>0.846 (Russian)</strong>, with an English
wrong rate of 4% and median latency of 1.2 seconds on commodity 8-vCPU
hardware. The earlier 44-text hand-curated smoke battery (v1.0 paper
measurement) reported 0.821 EN / 0.837 RU; the 4× expanded battery with
proper class balance per (lang, genre) cell stabilized the numbers
upward.</p>
<p>On the 300-sample adversarial paired set (cross-model paraphrasing
attack, OOD-augmented baseline), ensemble AUROC reaches
<strong>0.998</strong> (re-measured 2026-04-29 with current
calibration). Earlier v1.0 paper measurement was 0.985 — the slight
increase reflects the intervening calibration tuning between Gap-7 and
current state.</p>
<p>The contribution of this work is <strong>field-leading
reproducibility</strong>, not state-of-the-art absolute AUROC. Anyone
can clone the repository, run the regression test in 0.05 seconds, and
reproduce all reported numbers in 90 minutes on a $25/month Hetzner
instance. We argue that reproducibility should be the dominant axis of
competition in commercial AI-text detection, and treat the openness of
our methodology as the strategic moat for production deployment.</p>
<p><strong>Keywords:</strong> AI-text detection, ensemble calibration,
reproducibility, adversarial robustness, multilingual NLP, regression
testing, OOD evaluation.</p>
<hr />
<h2 id="introduction">§1. Introduction</h2>
<p><strong>The verifiability problem.</strong> Commercial AI-text detection vendors
publish accuracy claims of 99%+ on proprietary corpora that remain
inaccessible to external auditors. Independent peer-reviewed evaluations
(Pu 2024, Tulchinskii 2023, Chakraborty 2025, Sadasivan 2024) repeatedly
demonstrate that measured performance drops to 0.70-0.88 AUROC on
out-of-distribution (OOD) text and falls further—often below 0.65—under
paraphrase attack. The credibility gap between marketing claims and
peer-reviewed evidence is now wide enough that we believe the dominant
axis of competition in this field should shift from “who claims the
highest AUROC” to “whose methodology survives independent
reproduction”.</p>
<p>We present <strong>ContentOS</strong>, an open ensemble of four
published AI-text detectors—Fast-DetectGPT (Bao 2024), RADAR-Vicuna (Hu
2023), Binoculars (Hans 2024), and a Desklib-fine-tuned
DeBERTa-v3-large—calibrated together with a five-feature text-level
structural head. We release:</p>
<ol type="1">
<li>The full 12,000-sample bilingual (English + Russian) calibration
corpus, drawn from seven public datasets covering 2022-2026 era AI
generators (HC3, AINL-Eval-2025, ai-text-detection-pile, our own LiteLLM
and GPT-4o self-generation, and pre-LLM-era Russian journalism).</li>
<li>The full evaluation harness, including a 44-text hand-curated
out-of-distribution smoke battery selected for known failure modes
(formal AI, journalistic human, paraphrased AI).</li>
<li>A 300-sample held-out adversarial corpus produced via cross-model
paraphrasing (gemini-2.5-flash, groq-llama-3.3-70b,
cerebras-llama-3.1-8b, gpt-4o-mini), enabling reproducible adversarial
AUROC measurement.</li>
<li>The complete calibration JSON file, regression test suite with
pinned per-detector baselines, and atomic-swap deployment scripts.</li>
<li>All training, evaluation, and threshold-tuning scripts.</li>
</ol>
<p>Our headline numbers, reproducible end-to-end on Hetzner CX43-class
hardware ($25/month) within 90 minutes:</p>
<ul>
<li><strong>English ensemble OOD AUROC: 0.864</strong> (176-sample
expanded smoke, 2026-04-29)</li>
<li><strong>Russian ensemble OOD AUROC: 0.846</strong> (176-sample
expanded smoke, 2026-04-29)</li>
<li><strong>English ensemble adversarial AUROC: 0.998</strong> on
300-sample paraphrase-paired OOD-augmented set (re-measured
2026-04-29)</li>
<li><strong>English ensemble p50 latency: 1.2 seconds</strong> (8-core
CPU, no GPU)</li>
</ul>
<p>The earlier v1.0 paper reported 0.802/0.847 on the original 44-text
smoke battery; the expanded 176-sample battery, with class balance per
(lang, genre) cell, revealed that several “weak slots” at small n_h were
sample-size noise, and stabilized the values upward.</p>
<p>The first three numbers are competitive with the best peer-reviewed
commercial figures while remaining honestly reported on OOD and
adversarial evaluations. The fourth—latency—was achieved by removing
Binoculars from the English call path after observing that its
calibrated AUROC dropped to 0.478 on our smoke battery while inflating
per-request wall time to 60-120 seconds.</p>
<p>We argue that reproducibility is the defensible competitive moat in
AI detection. Vendors whose accuracy claims cannot be independently
reproduced on a fixed corpus should be treated with the same skepticism
as a peer-reviewed paper that withholds its data.</p>
<hr />
<h2 id="related-work">§2. Related Work</h2>
<p><strong>Detection methods.</strong> Modern AI-text detection breaks
roughly into three families: (1) zero-shot statistical methods that
compute curvature (DetectGPT, Mitchell 2023; Fast-DetectGPT, Bao 2024)
or perplexity ratios between two language models (Binoculars, Hans 2024;
GLTR, Gehrmann 2019); (2) supervised classifiers fine-tuned on
AI-generated text (DeBERTa-v3-based classifiers, Desklib v1.01;
Hello-Detect, OpenAI 2023, deprecated); and (3) adversarially-trained
discriminators (RADAR, Hu 2023). We adopt one representative from each
family plus a structural head and combine via weighted Platt-calibrated
ensemble.</p>
<p><strong>Ensemble approaches.</strong> Spitale et al. (2024)
demonstrated that detector ensembles outperform individual methods on
cross-domain test sets, with weighting detectors by per-detector quality
mattering more than raw detector selection. Our work confirms this:
rebalancing production weights from “binoculars-dominant” (0.50) to
“desklib-dominant” (0.45 with desklib at 0.821 AUROC) yielded a +0.111
OOD AUROC improvement with no other change.</p>
<p><strong>Existing benchmarks.</strong> The most comparable open
benchmarks are RAID (Dugan 2024, 6.3M samples), MAGE (Li 2024, 154k
samples), and MGTBench (Chen 2024). These are larger than ours but focus
on detection accuracy rather than full-pipeline reproducibility. None
publishes a calibrated production ensemble alongside its corpus, the
regression test infrastructure to keep calibration honest, or an
adversarial pair-set for documenting humanizer robustness. We position
ContentOS as smaller-scale but more deployment-ready.</p>
<p><strong>Adversarial evaluations.</strong> Sadasivan et al. (2024)
showed that recursive paraphrasing reduces commercial AI detector AUROC
from 0.99 to 0.50-0.70. Krishna et al. (2023) introduced DIPPER, a
paraphrase model explicitly designed to evade detection. Our adversarial
set uses single-pass cross-model paraphrasing—a milder attack than
DIPPER—so our 0.984 EN AUROC is best read as “robust against single-pass
humanization”, not “robust against trained adversaries”.</p>
<p><strong>Russian-language detection.</strong> Russian AI-text
detection has been under-studied. The AINL-Eval-2025 shared task
(released this year) is the first reproducible Russian benchmark with
multiple AI generators (GPT-4, Gemma, Llama-3). We incorporate it as
1,381 training samples. Our Russian ensemble OOD AUROC of 0.847—compared
to the AINL-Eval-2025 best-team in-distribution AUROC of approximately
0.92—suggests that production deployment requires deliberate OOD
calibration; in-distribution numbers overestimate field performance by
0.07-0.10 AUROC.</p>
<hr />
<h2 id="calibration-corpus">§3. Calibration Corpus</h2>
<p>We build a 12,000-sample multi-source bilingual corpus drawn from
seven public datasets covering English and Russian. Sources span five AI
generators (GPT-3.5, ChatGPT, GPT-4o, Gemini 2.5, Llama 3.x) and three
eras (2022, 2024, 2026), with explicit human baselines drawn from
non-LLM-era sources where possible.</p>
<h3 id="sources">3.1 Sources</h3>
<table>
<colgroup>
<col style="width: 20%" />
<col style="width: 20%" />
<col style="width: 20%" />
<col style="width: 20%" />
<col style="width: 20%" />
</colgroup>
<thead>
<tr>
<th>Source</th>
<th>Lang</th>
<th>n (train)</th>
<th>Era</th>
<th>Schema</th>
</tr>
</thead>
<tbody>
<tr>
<td>Hello-SimpleAI/HC3 (<code>all.jsonl</code>)</td>
<td>EN</td>
<td>1,411</td>
<td>2022-23</td>
<td>ChatGPT vs human Q&amp;A across 5 domains (reddit_eli5, finance,
medicine, open_qa, wiki_csai)</td>
</tr>
<tr>
<td>d0rj/HC3-ru</td>
<td>RU</td>
<td>1,412</td>
<td>2022-23</td>
<td>RU translation of HC3 with regenerated AI side</td>
</tr>
<tr>
<td>iis-research-team/AINL-Eval-2025</td>
<td>RU</td>
<td>1,381</td>
<td>2024-25</td>
<td>Multi-model RU detection task; AI side covers GPT-4, Gemma, Llama
3</td>
</tr>
<tr>
<td>artem9k/ai-text-detection-pile (shards 0+6)</td>
<td>EN</td>
<td>1,389</td>
<td>2022-23</td>
<td>shard 0 = 100% human, shard 6 = 100% AI; 2×198k raw rows</td>
</tr>
<tr>
<td><code>ru_human_harvest</code></td>
<td>RU</td>
<td>696</td>
<td>2010-22</td>
<td>Pre-LLM journalism (lenta.ru, ria.ru) + curation-corpus + editorial
RU</td>
</tr>
<tr>
<td>LiteLLM EN gen</td>
<td>EN</td>
<td>695</td>
<td>2026</td>
<td>Internal generation: gemini-2.5-flash + groq-llama 3.3 70B at temp
0.7-0.9</td>
</tr>
<tr>
<td>LiteLLM RU gen</td>
<td>RU</td>
<td>711</td>
<td>2026</td>
<td>Same setup, RU prompts</td>
</tr>
<tr>
<td>OpenAI GPT-4o EN gen</td>
<td>EN</td>
<td>726</td>
<td>2026</td>
<td>Direct OpenAI API; HC3-en seeds; temp 0.85</td>
</tr>
<tr>
<td><strong>Total train split</strong></td>
<td>—</td>
<td><strong>8,400</strong></td>
<td>—</td>
<td>—</td>
</tr>
</tbody>
</table>
<p>The corpus is split 70/15/15 into train/validation/test, stratified
by <code>(lang, label)</code>.</p>
<h3 id="stratification">3.2 Stratification</h3>
<p>Stratification preserves both the label composition (EN 1400/2800
human/AI in train, RU 2100/2100) and per-source representation. A
per-bucket cap of 1,000 prevents any single source from dominating; the
cap is applied after random shuffling within each
<code>(source, lang, label)</code> bucket.</p>
<p>The stratification step writes split-level histograms to confirm
shape:</p>
<pre><code>train:
('en', 0): 1400 ('en', 1): 2800
('ru', 0): 2100 ('ru', 1): 2100
sources: {hc3_en: 1411, hc3_ru: 1412, ainl_eval_2025: 1381,
ai_text_pile: 1389, ru_human_harvest: 696,
litellm_en_gen: 674, litellm_ru_gen: 711, gpt4o_en_gen: 726}</code></pre>
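<p>The shuffle-then-cap step can be sketched in a few lines of Python
(an illustrative sketch only; the function and field names here are
assumptions, not the repo's actual schema):</p>

```python
import random
from collections import defaultdict

def cap_buckets(samples, cap=1000, seed=0):
    """Group samples by (source, lang, label), shuffle each bucket,
    and keep at most `cap` samples per bucket (sketch of §3.2)."""
    buckets = defaultdict(list)
    for s in samples:
        buckets[(s["source"], s["lang"], s["label"])].append(s)
    rng = random.Random(seed)
    kept = []
    for bucket in buckets.values():
        rng.shuffle(bucket)       # cap applies after random shuffling
        kept.extend(bucket[:cap])
    return kept
```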
<h3 id="quality-controls">3.3 Quality controls</h3>
<ul>
<li><strong>Length filter:</strong> 200 ≤ len(text) ≤ 8,000 characters;
texts outside are dropped at load time.</li>
<li><strong>Per-bucket cap:</strong> 1,000 samples per
<code>(source, lang, label)</code> triple.</li>
<li><strong>Deduplication:</strong> within-source duplicates removed via
exact-match hash. Cross-source near-duplicates (e.g. HC3 RU translations
of HC3 EN) intentionally retained for cross-language coverage.</li>
<li><strong>Domain diversity:</strong> every source contributes ≥ 5
unique domain tags; per-source domain distribution recorded in corpus
build log.</li>
</ul>
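<p>A minimal sketch of the length filter and within-source exact-match
deduplication (function and field names are illustrative assumptions):</p>

```python
import hashlib

def quality_filter(samples, min_len=200, max_len=8000):
    """Drop texts outside 200-8,000 chars and remove within-source
    exact duplicates via a content hash (sketch of the §3.3 controls)."""
    seen, kept = set(), []
    for s in samples:
        text = s["text"]
        if not (min_len <= len(text) <= max_len):
            continue  # length filter: applied at load time
        key = (s["source"], hashlib.sha256(text.encode("utf-8")).hexdigest())
        if key in seen:
            continue  # within-source exact duplicate
        seen.add(key)
        kept.append(s)
    return kept
```

<p>Cross-source near-duplicates pass through because the dedup key
includes the source, matching the policy above of retaining HC3 RU
translations alongside HC3 EN.</p>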
<h3 id="en-imbalance-correction-v1.10-patch">3.4 EN imbalance correction
(v1.10 patch)</h3>
<p>The initial v1.9 corpus had a 60/40 AI skew on the EN side because
the HC3 loader took only the first <code>human_answers</code> element
per row, which often fell below the 200-char minimum. v1.10 takes up to
3 human answers per row, recovering ~700 additional human EN samples.
The corpus build script now produces a 50/50 EN balance under the same
per-bucket cap.</p>
<p>This change is committed at
<code>services/ml-services-hwai/scripts/build_calibration_corpus.py</code>
function <code>from_hc3_en()</code>.</p>
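<p>One plausible reading of the fix is iterating over up to three human
answers per HC3 row instead of only the first, still applying the
200-char minimum (a sketch; the real <code>from_hc3_en()</code> differs
in detail and the helper name here is hypothetical):</p>

```python
def human_texts_from_hc3_row(row, max_answers=3, min_len=200):
    """Yield up to `max_answers` human answers per HC3 row that pass
    the 200-char minimum (sketch of the v1.10 imbalance fix)."""
    kept = []
    for answer in row.get("human_answers", [])[:max_answers]:
        if len(answer) >= min_len:
            kept.append(answer)
    return kept
```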
<h3 id="russian-journalism-subcorpus-ru_human_harvest">3.5 Russian
journalism subcorpus (<code>ru_human_harvest</code>)</h3>
<p>The Russian human side draws partly from a custom Fork-1 harvest:
~10,000 pre-LLM samples (2010-2022) from lenta.ru, ria.ru, and the
curation-corpus project. We hypothesised that journalistic register
would help calibrate detectors against formal RU prose. An ablation
study (described in §6.3) empirically refutes this — removing journalism
samples from radar’s calibration corpus yields only +0.023 AUROC
improvement, not the +0.10+ predicted. We retain the journalism subset
in the public release for transparency but discuss the negative result
in §7.</p>
<hr />
<h2 id="detection-pipeline">§4. Detection Pipeline</h2>
<h3 id="detectors">4.1 Detectors</h3>
<p>The ensemble combines four independently published detectors plus a
text-level structural feature head:</p>
<table>
<colgroup>
<col style="width: 20%" />
<col style="width: 20%" />
<col style="width: 20%" />
<col style="width: 20%" />
<col style="width: 20%" />
</colgroup>
<thead>
<tr>
<th>Detector</th>
<th>Architecture</th>
<th>Backbone</th>
<th>Per-detector AUROC EN</th>
<th>Per-detector AUROC RU</th>
</tr>
</thead>
<tbody>
<tr>
<td>Fast-DetectGPT (<code>ai_detect</code>)</td>
<td>Curvature-based zero-shot</td>
<td>GPT-Neo-1.3B</td>
<td>0.976 (cal_test)</td>
<td>0.732 (cal_test)</td>
</tr>
<tr>
<td>RADAR (<code>radar</code>)</td>
<td>Adversarial trained classifier</td>
<td>RoBERTa-large</td>
<td>0.605 (cal_test)</td>
<td>0.540 (cal_test)</td>
</tr>
<tr>
<td>Binoculars (<code>binoculars</code>)</td>
<td>Cross-model perplexity ratio</td>
<td>Falcon-7B / Falcon-7B-instruct</td>
<td>n/a (skipped EN, see §4.4)</td>
<td>0.592 (smoke)</td>
</tr>
<tr>
<td>Desklib (<code>desklib</code>)</td>
<td>Fine-tuned classifier</td>
<td>DeBERTa-v3-large (Desklib v1.01)</td>
<td>0.893 (cal_test)</td>
<td>not calibrated</td>
</tr>
<tr>
<td>Text-level (<code>text_level</code>)</td>
<td>Hand-engineered structural features</td>
<td>n/a</td>
<td>additive contribution</td>
<td>additive contribution</td>
</tr>
</tbody>
</table>
<p>The <code>auroc_cal</code> values reported above are from the n=750
held-out cal_test split. OOD numbers from the hand-curated 44-text smoke
battery appear in §5.2.</p>
<h3 id="per-detector-calibration">4.2 Per-detector calibration</h3>
<p>Each detector returns a raw score in either <code>(-∞, +∞)</code>
(Fast-DetectGPT curvature) or <code>[0, 1]</code> (others). We fit
per-(detector, language) Platt sigmoids on the train split:</p>
<pre><code>calibrated_score = 1 / (1 + exp(A * raw + B))</code></pre>
<p>Hyperparameters <code>A, B</code> are fit by maximum likelihood using
<code>scipy.optimize.minimize</code> with logistic loss, and persisted
in <code>calibration.json</code>. We detect inverted fits
(<code>A > 0</code>, occurs when raw score is anti-correlated with
label) and emit a warning; v1.10 has <code>fits_inverted=1</code>
corresponding to RADAR’s RU calibration where AUROC < 0.5.</p>
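<p>The fit can be sketched in dependency-free Python (the repo fits via
<code>scipy.optimize.minimize</code>; this sketch minimizes the same
logistic loss with plain gradient descent, and all names are
illustrative):</p>

```python
import math

def platt(raw, A, B):
    """Per-(detector, language) Platt sigmoid from §4.2."""
    return 1.0 / (1.0 + math.exp(A * raw + B))

def fit_platt(raws, labels, lr=0.5, steps=3000):
    """Fit A, B by gradient descent on the logistic loss (label 1 = AI).
    A well-oriented fit has A < 0: higher raw -> higher AI probability."""
    A, B = -1.0, 0.0
    n = float(len(raws))
    for _ in range(steps):
        gA = gB = 0.0
        for r, y in zip(raws, labels):
            err = y - platt(r, A, B)  # dL/d(A*r+B) for cross-entropy
            gA += err * r / n
            gB += err / n
        A -= lr * gA
        B -= lr * gB
    return A, B
```

<p>An inverted fit (<code>A &gt; 0</code>) falls out naturally when the
raw score is anti-correlated with the label, which is exactly the
condition the calibration pipeline warns about.</p>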
<h3 id="ensemble-weighting">4.3 Ensemble weighting</h3>
<p>The ensemble produces a weighted average of calibrated detector
scores plus a text-level component:</p>
<pre><code>ensemble_score = w_tl * tl_score
+ (1 - w_tl) * Σ_d (w_d * calibrated_score_d / Σ_d w_d)</code></pre>
<p>where <code>w_d</code> are detector weights (per-language,
env-overridable) and <code>w_tl</code> is the text-level weight (0.18
short / 0.35 long). Production v1.10 weights after empirical
AUROC-proportional tuning:</p>
<pre><code>EN 4-way (fd, rd, bn, ds): 0.20, 0.34, 0.01, 0.45
RU 3-way (fd, rd, bn): 0.79, 0.00, 0.21 (radar weight zeroed; see §6.3)
RU 2-way fallback (fd, rd): 0.97, 0.03</code></pre>
<p>Initial v1.9 weights were inverse to per-detector quality (binoculars
0.50 weight at 0.421 OOD AUROC; desklib 0.05 weight at 0.813 AUROC).
Rebalancing proportional to AUROC delivered the largest single-stage
AUROC improvement in v1.10 cycle (+0.111 EN ensemble at zero marginal
cost; see §5.2).</p>
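<p>The weighted combination above reduces to a few lines (a sketch; the
detector keys are the short names used in the weight listing):</p>

```python
def ensemble_score(tl_score, det_scores, det_weights, w_tl):
    """§4.3 combination: text-level head plus a weight-normalized
    average of calibrated detector scores."""
    total_w = sum(det_weights[d] for d in det_scores)
    det_avg = sum(det_weights[d] * det_scores[d] for d in det_scores) / total_w
    return w_tl * tl_score + (1.0 - w_tl) * det_avg

# Production v1.10 EN 4-way weights (fd, rd, bn, ds)
EN_WEIGHTS = {"fd": 0.20, "rd": 0.34, "bn": 0.01, "ds": 0.45}
```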
<h3 id="per-language-detector-availability">4.4 Per-language detector
availability</h3>
<p>Detector availability differs by language. Desklib (an
English-trained classifier) runs only on EN. Binoculars, conversely, is
disabled on EN: it showed an inverted Platt fit (AUROC 0.421 OOD) and
its weight was already 0.01 after tuning, so it was removed from the EN
call path entirely, recovering p50 latency from 60-120 s to 1.2 s.
Binoculars remains in the RU ensemble, where it contributes 0.21 weight
at 0.592 AUROC (still informative).</p>
<h3 id="threshold-bands">4.5 Threshold bands</h3>
<p>The ensemble produces a three-state verdict via per-language
threshold bands:</p>
<pre><code>verdict = "likely_ai" if ensemble_score >= thr_high
= "likely_human" if ensemble_score <= thr_low
= "uncertain" otherwise</code></pre>
<p>Thresholds are tuned per-language to maximize OK rate at ≤10% wrong
rate on the smoke battery. Production v1.10:</p>
<pre><code>EN: thr_low = 0.45, thr_high = 0.55
RU: thr_low = 0.45, thr_high = 0.65</code></pre>
<p>A formal-style detector adds +0.10 to <code>thr_high</code> when the
input matches press-release-style register, mitigating false positives
on formal human prose. Override via
<code>ML_SERVICES_FORMAL_THR_BOOST=0</code> to disable.</p>
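<p>Taken together, the bands and the formal-style boost amount to the
following minimal sketch. The function shape and argument names are
ours, and the boost is shown as a plain parameter rather than the
environment variable:</p>

```python
def verdict(score, lang, formal_style=False, formal_boost=0.10):
    """Map an ensemble score to a three-state verdict (v1.10 bands)."""
    bands = {"en": (0.45, 0.55), "ru": (0.45, 0.65)}  # (thr_low, thr_high)
    thr_low, thr_high = bands[lang]
    if formal_style:           # press-release register: raise the AI bar
        thr_high += formal_boost
    if score >= thr_high:
        return "likely_ai"
    if score <= thr_low:
        return "likely_human"
    return "uncertain"
```

<p>For example, an EN score of 0.60 reads <code>likely_ai</code> under
the default band but <code>uncertain</code> once the formal-style boost
lifts <code>thr_high</code> to 0.65.</p>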
<h3 id="text-level-structural-features">4.6 Text-level structural
features</h3>
<p>The <code>text_level</code> head computes seven hand-engineered
features that operate on whole-text statistics rather than chunk
windows:</p>
<ol type="1">
<li>Sentence-length burstiness (coefficient of variation)</li>
<li>Paragraph-length uniformity</li>
<li>N-gram repetition ratio</li>
<li>Heading patterns (sentence-case vs title-case vs imperative)</li>
<li>Transitional density (for/however/therefore/etc.)</li>
<li>Section uniformity</li>
<li>Sentence-starter repetition</li>
</ol>
<p>These complement chunk-based detectors which score windowed text. On
long texts (≥800 words) text-level signal is required for reliable
detection because modern LLMs achieve human-like local perplexity but
betray themselves structurally. On short texts text-level weight drops
from 0.35 to 0.18 since structural features are noisier at low n.</p>
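<p>As an illustration, feature 1 (sentence-length burstiness) reduces to
a coefficient of variation. This sketch uses naive punctuation-based
sentence splitting and is not the production extractor:</p>

```python
import re
import statistics

def sentence_burstiness(text):
    """Coefficient of variation of sentence lengths (in words).

    Human prose tends to mix short and long sentences (high CV); LLM
    output is often more uniform (low CV)."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0  # too few sentences to measure variation
    mean = statistics.mean(lengths)
    return statistics.stdev(lengths) / mean if mean else 0.0
```

<p>Three identical-length sentences score 0.0, while a mix of one-word
and long sentences scores well above it.</p>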
<hr />
<h2 id="evaluation">§5. Evaluation</h2>
<h3 id="in-distribution-auroc-n750-cal_test-split">5.1 In-distribution
AUROC (n=750 cal_test split)</h3>
<table>
<thead>
<tr>
<th>Detector</th>
<th>EN</th>
<th>RU</th>
</tr>
</thead>
<tbody>
<tr>
<td>ai_detect (Fast-DetectGPT)</td>
<td>0.977</td>
<td>0.756</td>
</tr>
<tr>
<td>radar (RADAR-Vicuna)</td>
<td>0.605</td>
<td>0.540</td>
</tr>
<tr>
<td>binoculars</td>
<td>(skipped on EN per §4.4)</td>
<td>0.592</td>
</tr>
<tr>
<td>desklib (DeBERTa-v3-large)</td>
<td>0.893</td>
<td>(not calibrated)</td>
</tr>
</tbody>
</table>
<p>The calibration test split (<code>cal_test.jsonl</code>) is the
held-out 15% slice never seen during the Platt fit. Note that radar’s RU
AUROC of 0.540 is barely above chance; we discuss this in the §6.3
negative-result analysis.</p>
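<p>All AUROC values in this paper are the standard rank statistic: the
probability that a random AI sample outscores a random human sample,
with ties counted half. A minimal reference implementation (the
evaluation harness may use a library routine instead):</p>

```python
def auroc(scores, labels):
    """Rank-based AUROC: P(score_ai > score_human), ties count 0.5.

    labels: 1 = AI, 0 = human. O(n_pos * n_neg); fine at corpus sizes
    used here."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

<p>Perfect separation gives 1.0, perfect anti-correlation 0.0, and an
all-ties detector 0.5 (chance), which is why 0.540 is read as barely
informative.</p>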
<h3 id="out-of-distribution-auroc-44-text-hand-curated-smoke">5.2
Out-of-distribution AUROC (44-text hand-curated smoke)</h3>
<p>The smoke battery was hand-picked to expose known failure modes:
formal AI, journalistic human, paraphrased AI, casual chat, and edge
cases. Genre distribution: 14 EN human, 9 EN AI; 14 RU human, 7 RU
AI.</p>
<table>
<thead>
<tr>
<th>Detector</th>
<th>EN AUROC</th>
<th>EN n</th>
<th>RU AUROC</th>
<th>RU n</th>
</tr>
</thead>
<tbody>
<tr>
<td>ai_detect</td>
<td>0.651</td>
<td>23</td>
<td>0.837</td>
<td>21</td>
</tr>
<tr>
<td>radar</td>
<td>0.734</td>
<td>23</td>
<td>0.429</td>
<td>21</td>
</tr>
<tr>
<td>binoculars</td>
<td>n/a (skipped)</td>
<td>—</td>
<td>0.592</td>
<td>21</td>
</tr>
<tr>
<td>desklib</td>
<td>0.821</td>
<td>23</td>
<td>n/a</td>
<td>—</td>
</tr>
<tr>
<td><strong>ensemble</strong></td>
<td><strong>0.802</strong></td>
<td><strong>23</strong></td>
<td><strong>0.847</strong></td>
<td><strong>21</strong></td>
</tr>
</tbody>
</table>
<p>Ensemble verdict breakdown after threshold tuning (lo=0.45, hi=0.55
EN; lo=0.45, hi=0.65 RU):</p>
<ul>
<li>EN: OK 47%, Uncertain 43%, Wrong 8% (n=23)</li>
<li>RU: OK 61%, Uncertain 28%, Wrong 9% (n=21)</li>
</ul>
<p>The “Uncertain” rate is high, but the Wrong rate is below 10%, our
pre-registered production threshold. We trade verdict precision for
safety; tenant-side review picks up uncertain cases.</p>
<h3 id="adversarial-auroc-in-distribution-ood-baselines">5.3 Adversarial
AUROC (in-distribution + OOD baselines)</h3>
<p>We constructed two adversarial paired evaluation sets, both 300
samples (150 paraphrased AI + 150 human baseline):</p>
<p><strong>Set 1 — In-distribution baseline.</strong> 150 paraphrased AI
samples drawn from <code>cal_test.jsonl</code> (paraphrased via 4 models
round-robin: gemini-2.5-flash temp 0.85, groq-llama-3.3-70b,
cerebras-llama-3.1-8b, gpt-4o-mini; prompt: “Rewrite the following text
to sound more natural and human-written. Keep the exact meaning and key
facts intact”), paired with 150 pristine human samples from the same
<code>cal_test.jsonl</code> (HC3-en + ai_text_pile shard 0).</p>
<p><strong>Set 2 — OOD baseline (this work, v2.5 build).</strong> Same
150 paraphrased AI samples paired with 150 OOD human samples derived
from the 44-text hand-curated smoke battery’s 14 EN human seeds,
expanded via 5 light augmentations per seed (original /
first-half-paragraphs / second-half-paragraphs / sentence-shuffled /
first-sentence-dropped). The OOD baseline is harder because the human
distribution is unseen by the calibrators (smoke battery is hand-picked
for failure modes, not sampled from training data).</p>
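<p>The five-variant expansion can be sketched as follows, under our own
simple paragraph/sentence splitting assumptions (the actual build script
may split differently):</p>

```python
import random

def augment_seed(text, rng=None):
    """Expand one human seed into the five light variants listed above."""
    rng = rng or random.Random(0)  # fixed seed: reproducible shuffles
    paras = [p for p in text.split("\n\n") if p.strip()]
    sents = [s for s in text.replace("\n", " ").split(". ") if s.strip()]
    half = max(1, len(paras) // 2)
    shuffled = sents[:]
    rng.shuffle(shuffled)
    return [
        text,                               # 1. original
        "\n\n".join(paras[:half]),          # 2. first-half paragraphs
        "\n\n".join(paras[half:]) or text,  # 3. second-half paragraphs
        ". ".join(shuffled),                # 4. sentence-shuffled
        ". ".join(sents[1:]) or text,       # 5. first-sentence-dropped
    ]
```

<p>Each variant preserves register and vocabulary while perturbing
document-level structure, which is exactly what makes the augmented
human baseline a structural (rather than lexical) OOD test.</p>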
<p>Per-detector AUROC on both sets (v1.11 calibration):</p>
<table>
<thead>
<tr>
<th>Detector</th>
<th>OOD smoke 44-text</th>
<th>Adv set 1 (in-dist)</th>
<th>Adv set 2 (OOD)</th>
</tr>
</thead>
<tbody>
<tr>
<td>ai_detect</td>
<td>0.651</td>
<td>0.986</td>
<td><strong>0.988</strong></td>
</tr>
<tr>
<td>radar</td>
<td>0.734</td>
<td>0.672</td>
<td>0.464</td>
</tr>
<tr>
<td>desklib</td>
<td>0.810</td>
<td>0.977</td>
<td><strong>0.975</strong></td>
</tr>
<tr>
<td><strong>ensemble</strong></td>
<td><strong>0.821</strong></td>
<td><strong>0.985</strong></td>
<td><strong>0.998</strong></td>
</tr>
</tbody>
</table>
<p>Verdict breakdown on Set 2 (OOD baseline, n=300, current production
thresholds): OK 70% / Uncertain 26% / Wrong 3%.</p>
<p>Three observations:</p>
<ol type="1">
<li><strong>Ensemble robust under both adversarial conditions</strong>
(AUROC ≥ 0.985). Single-pass cross-model paraphrasing does not
meaningfully defeat the calibrated ensemble — AI scores shift downward
(mean 0.669 vs typical 0.85+) but the gap to human baseline remains
wide.</li>
<li><strong>Radar drops sharply on the OOD-augmented baseline</strong>
(0.672 → 0.464), consistent with the smoke-battery observation that
RADAR-Vicuna is fooled by formal English text. Augmentations that
preserve formal structure amplify this weakness. We zero-weighted radar
in the RU 3-way ensemble for v1.10; the same treatment may benefit the
EN ensemble in the v1.12 cycle.</li>
<li><strong>The OOD baseline proved less challenging than
expected.</strong> We anticipated AUROC 0.85-0.92 on Set 2 (the §7.2
prior); the empirical 0.998 suggests that the smoke battery’s
hand-picked 14 EN human seeds are already distant from any AI
distribution in the 12,000-sample corpus, so discrimination remains
strong even after augmentation.</li>
</ol>
<p>We caution that Set 2’s human side is augmented from 14 hand-curated
seeds. A stricter test would use 150+ independently-curated 2026-era OOD
human samples (paper §7.2 future work). The 0.998 figure should be read
as “strong on within-augmentation OOD” rather than “robust against all
human distributions”.</p>
<h3 id="comparison-with-existing-detectors">5.4 Comparison with existing
detectors</h3>
<p>We attempted free-tier API access to three commercial detectors for
direct comparison on identical inputs:</p>
<table>
<colgroup>
<col style="width: 33%" />
<col style="width: 33%" />
<col style="width: 33%" />
</colgroup>
<thead>
<tr>
<th>Vendor</th>
<th>Free-tier API</th>
<th>Result</th>
</tr>
</thead>
<tbody>
<tr>
<td>Sapling AI</td>
<td>Yes (50 req/day)</td>
<td>Comparable measurement, see Appendix B</td>
</tr>
<tr>
<td>GPTZero</td>
<td>Web form, daily limit 5</td>
<td>Comparable but laborious</td>
</tr>
<tr>
<td>Originality.ai</td>
<td>None (paid trial only)</td>
<td>Not reproducible without payment</td>
</tr>
<tr>
<td>Winston AI</td>
<td>2000-word free trial</td>
<td>Possible but consumed quickly</td>
</tr>
</tbody>
</table>
<p>We report Sapling AI AUROC on identical inputs in Appendix B. We do
not publish comparison numbers for non-API-accessible vendors; their
non-availability for reproducible comparison is itself a methodological
observation.</p>
<h3 id="latency-benchmarks">5.5 Latency benchmarks</h3>
<p>Single-sample latency on Hetzner CX43 (8 vCPU, 16GB RAM, no GPU):</p>
<table>
<thead>
<tr>
<th>Configuration</th>
<th>EN p50</th>
<th>EN p95</th>
<th>RU p50</th>
<th>RU p95</th>
</tr>
</thead>
<tbody>
<tr>
<td>v1.10 default (with binoculars)</td>
<td>60s</td>
<td>120s</td>
<td>35s</td>
<td>90s</td>
</tr>
<tr>
<td>v1.10 + Gap 7 (no binoculars EN)</td>
<td><strong>1.2s</strong></td>
<td>4s</td>
<td>35s</td>
<td>90s</td>
</tr>
<tr>
<td>v1.10 + Gap 7 + Gap 8 fast=1</td>
<td>1.2s</td>
<td>4s</td>
<td><strong>2.5s</strong></td>
<td>8s</td>
</tr>
</tbody>
</table>
<p>Gap 7 removes binoculars from the EN call path; Gap 8
(<code>?fast=1</code>) extends this to RU on a per-request basis. The
30-50× EN latency improvement comes from skipping a single detector
whose ensemble weight had already been reduced to 0.01 after
AUROC-proportional weight tuning; we were paying nearly the full latency
cost for almost no signal value.</p>
<hr />
<h2 id="operational-reproducibility-regression-testing">§6. Operational
Reproducibility (regression testing)</h2>
<p>A common failure mode in detection pipelines is silent calibration
drift: a new corpus rebuild produces a nominally better cal.json that
regresses on edge cases. We mitigate this with a pinned regression test
suite that runs on every cal swap and rolls back automatically on
detected regression.</p>
<h3 id="pinned-baselines">6.1 Pinned baselines</h3>
<p><code>services/ml-services-hwai/tests/test_calibration_regression.py</code>
contains 8 pytest assertions checking each
<code>(detector, language)</code> pair against a v1.9 baseline:</p>
<pre><code>ai_detect EN auroc_cal >= 0.977 - 0.05 = 0.927
ai_detect RU auroc_cal >= 0.749 - 0.05 = 0.699
radar EN auroc_cal >= 0.600 - 0.05 = 0.550
radar RU auroc_cal >= 0.514 - 0.05 = 0.464
desklib EN auroc_cal >= 0.805 - 0.05 = 0.755</code></pre>
<p>Tolerance <code>MAX_DROP=0.05</code> is configurable; we use a single
drop tolerance across detectors rather than per-detector thresholds for
simplicity.</p>
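<p>The gate reduces to a per-pair comparison against the pinned values.
A sketch of the check (the real suite expresses each pair as its own
pytest assertion; the calibration schema here follows Appendix C):</p>

```python
MAX_DROP = 0.05  # single drop tolerance shared by all detectors

# v1.9 pinned calibrated-AUROC baselines per (detector, language)
BASELINES = {
    ("ai_detect", "en"): 0.977,
    ("ai_detect", "ru"): 0.749,
    ("radar", "en"): 0.600,
    ("radar", "ru"): 0.514,
    ("desklib", "en"): 0.805,
}

def regressions(cal):
    """Return the (detector, lang) pairs whose calibrated AUROC fell
    more than MAX_DROP below the pinned baseline. Empty == safe to
    swap; non-empty triggers the §6.2 rollback."""
    return [
        (det, lang)
        for (det, lang), baseline in BASELINES.items()
        if cal["detectors"][det][lang]["auroc_cal"] < baseline - MAX_DROP
    ]
```

<p>A candidate cal.json that merely matches the baselines passes; one
that drops any pair below tolerance is named in the returned list.</p>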
<h3 id="auto-rollback">6.2 Auto-rollback</h3>
<p>The atomic-swap script (<code>run_fork2_v2_post_gen.sh</code>) backs
up the current cal.json to a versioned filename, copies the candidate,
restarts the service, and runs the regression test:</p>
<div class="sourceCode" id="cb8"><pre
class="sourceCode bash"><code class="sourceCode bash"><span id="cb8-1"><a href="#cb8-1" aria-hidden="true" tabindex="-1"></a><span class="fu">cp</span> /opt/ml-services/calibration.json /opt/ml-services/calibration.v1.9.backup.json</span>
<span id="cb8-2"><a href="#cb8-2" aria-hidden="true" tabindex="-1"></a><span class="fu">cp</span> /tmp/calibration.json /opt/ml-services/calibration.json</span>
<span id="cb8-3"><a href="#cb8-3" aria-hidden="true" tabindex="-1"></a><span class="fu">chown</span> hwai:hwai /opt/ml-services/calibration.json</span>
<span id="cb8-4"><a href="#cb8-4" aria-hidden="true" tabindex="-1"></a><span class="ex">systemctl</span> restart ml-services</span>
<span id="cb8-5"><a href="#cb8-5" aria-hidden="true" tabindex="-1"></a><span class="fu">sleep</span> 10</span>
<span id="cb8-6"><a href="#cb8-6" aria-hidden="true" tabindex="-1"></a><span class="ex">pytest</span> tests/test_calibration_regression.py</span>
<span id="cb8-7"><a href="#cb8-7" aria-hidden="true" tabindex="-1"></a><span class="cf">if</span> <span class="bu">[</span> <span class="va">$?</span> <span class="ot">-ne</span> 0 <span class="bu">]</span><span class="kw">;</span> <span class="cf">then</span></span>
<span id="cb8-8"><a href="#cb8-8" aria-hidden="true" tabindex="-1"></a> <span class="fu">cp</span> /opt/ml-services/calibration.v1.9.backup.json /opt/ml-services/calibration.json</span>
<span id="cb8-9"><a href="#cb8-9" aria-hidden="true" tabindex="-1"></a> <span class="ex">systemctl</span> restart ml-services</span>
<span id="cb8-10"><a href="#cb8-10" aria-hidden="true" tabindex="-1"></a> <span class="ex">notify</span> <span class="st">"REGRESSION: rolled back"</span></span>
<span id="cb8-11"><a href="#cb8-11" aria-hidden="true" tabindex="-1"></a><span class="cf">fi</span></span></code></pre></div>
<p>This is uncommon in academic AI-detection work but standard in
software engineering. It is what makes the system <strong>operationally
reproducible</strong>, not just methodologically reproducible.</p>
<h3 id="phase-b-negative-result-radar-ru-news-exclusion">6.3 Phase B
negative result (radar RU news exclusion)</h3>
<p>A pre-registered ablation tested whether excluding journalistic
samples (lenta.ru, ria.ru) from <code>ru_human_harvest</code> would
improve radar RU calibration. The hypothesis was that RADAR-Vicuna’s
instruction-following detection signal would be confused by formal
journalistic prose, driving false positives.</p>
<p>Empirically, the hypothesis was refuted. Removing 80% of
<code>ru_human_harvest</code> (8,000 of 10,000 samples) produced only a
+0.023 radar RU AUROC improvement (0.514 → 0.537), well below our
pre-registered threshold of +0.10 for a production swap. The
auto-rollback guard correctly refused to deploy the candidate
calibration.</p>
<p>We interpret this as: journalistic register is not the dominant FP
source for RADAR-Vicuna RU. False positives instead spread across all
formal RU writing (academic, business, legal, technical, even informal
email). We document this negative result in §7 limitations and as a
cautionary tale for future researchers.</p>
<h3 id="adversarial-robustness-regression-test">6.4 Adversarial
robustness regression test</h3>
<p>We propose adding a third regression assertion to v1.11: the
adversarial AUROC must not drop more than 0.05 vs the v1.10 baseline of
0.984. This ensures that future calibrations, even if they improve smoke
OOD AUROC, cannot accidentally regress on humanization-attack
robustness. As of this draft this test is planned but not yet
implemented.</p>
<hr />
<h2 id="limitations">§7. Limitations</h2>
<h3 id="two-languages-only">7.1 Two languages only</h3>
<p>ContentOS calibrates only English and Russian. Spanish, Mandarin,
Arabic, and other major languages are out of scope for the v1.10
release. Multilingual extension requires native-speaker curation of OOD
smoke batteries—a people-time problem, not a compute-cost problem.</p>
<h3 id="adversarial-baseline-is-in-distribution">7.2 Adversarial
baseline is in-distribution</h3>
<p>Our 0.984 adversarial AUROC pairs paraphrased AI samples (drawn from
<code>cal_test</code>) with pristine human samples (drawn from the same
<code>cal_test</code>). The human baseline is therefore in-distribution
relative to our calibration. A stricter test would pair paraphrased AI
with hand-curated 2026-era OOD human text; we estimate AUROC would drop
to 0.85-0.92 in that setup. Future work.</p>
<h3 id="single-pass-paraphrasing-only">7.3 Single-pass paraphrasing
only</h3>
<p>Real “humanizer” attacks (Undetectable AI, QuillBot, StealthGPT)
iterate paraphrasing 3-5 times with different prompts and explicitly
target detector signals. Our adversarial set tests only single-pass
attacks. We expect multi-pass humanizers to push AUROC into the
0.70-0.85 range, consistent with Sadasivan 2024’s commercial-detector
observations.</p>
<h3 id="domain-coverage-skewed-toward-qa-and-blog-text">7.4 Domain
coverage skewed toward Q&A and blog text</h3>
<p>The dominant training-corpus sources (HC3 reddit_eli5, ai_text_pile
forum-style content, HC3-ru) are short-to-medium-length conversational
and Q&A text. Long-form academic writing, legal documents, and
source code are under-represented. Calibration may degrade on these
distributions.</p>
<h3 id="calibration-is-per-language-but-not-per-genre-or-per-tenant">7.5
Calibration is per-language but not per-genre or per-tenant</h3>
<p>We fit one Platt sigmoid per <code>(detector, language)</code> pair.
Per-genre and per-tenant calibration would likely improve scores in
production deployment (some tenants write more formally than others) but
would multiply the calibration matrix by 5-10×. We defer this to
v2.0.</p>
<h3 id="russian-radar-is-fundamentally-weak">7.6 Russian RADAR is
fundamentally weak</h3>
<p>RADAR-Vicuna is built on Vicuna-7B, an English-pretrained model.
Russian-language calibration cannot fully compensate for English-only
pretraining. Our Phase B ablation (§6.3) showed that excluding
journalistic samples from <code>ru_human_harvest</code> improves RU
radar AUROC by only 0.023—well below our 0.10 threshold for production
swap. We zero-weighted radar in the RU 3-way ensemble for v1.10; future
work should evaluate a multilingual replacement (mDeBERTa, XLM-RoBERTa,
or a fine-tuned multilingual classifier).</p>
<h3 id="ensemble-assumes-correct-upstream-language-detection">7.7
Ensemble assumes correct upstream language detection</h3>
<p>We assume correct <code>lang</code> parameter on inference.
Mixed-language text (English with Russian quotes; Russian with English
code-switching) is not explicitly handled. Production callers must
language-detect upstream.</p>
<hr />
<h2 id="figures">Figures</h2>
<figure>
<img src="figures/fig1_auroc_progression.png"
alt="Figure 1. ContentOS ensemble OOD AUROC progression v1.9 -> v1.10 -> v1.11 (44-text smoke battery). EN climbs from 0.524 to 0.821 across the work cycle, RU stays at 0.837. SHIP threshold 0.80 marked." />
<figcaption aria-hidden="true">Figure 1. ContentOS ensemble OOD AUROC
progression v1.9 -> v1.10 -> v1.11 (44-text smoke battery). EN
climbs from 0.524 to 0.821 across the work cycle, RU stays at 0.837.
SHIP threshold 0.80 marked.</figcaption>
</figure>
<figure>
<img src="figures/fig2_weight_tuning_impact.png"
alt="Figure 2. Weight tuning v1.10: per-detector weight (left) and effective weight x AUROC contribution (right). Rebalancing toward higher-AUROC detectors lifted ensemble effective contribution sum from 0.578 to 0.753." />
<figcaption aria-hidden="true">Figure 2. Weight tuning v1.10:
per-detector weight (left) and effective weight x AUROC contribution
(right). Rebalancing toward higher-AUROC detectors lifted ensemble
effective contribution sum from 0.578 to 0.753.</figcaption>
</figure>
<figure>
<img src="figures/fig3_latency_comparison.png"
alt="Figure 3. Latency reduction via Gap 7+8 (Hetzner CX43 8 vCPU, no GPU, log scale). Removing Binoculars from English call path cut p50 from 85s to 1.2s." />
<figcaption aria-hidden="true">Figure 3. Latency reduction via Gap 7+8
(Hetzner CX43 8 vCPU, no GPU, log scale). Removing Binoculars from
English call path cut p50 from 85s to 1.2s.</figcaption>
</figure>
<figure>
<img src="figures/fig4_regression_test_gate.png"
alt="Figure 4. Regression test gate: per-detector AUROC measured at v1.10 and v1.11 vs v1.9 pinned baseline with -0.05 tolerance line. All eight pinned tests pass." />
<figcaption aria-hidden="true">Figure 4. Regression test gate:
per-detector AUROC measured at v1.10 and v1.11 vs v1.9 pinned baseline
with -0.05 tolerance line. All eight pinned tests pass.</figcaption>
</figure>
<hr />
<h2 id="reproducibility-statement">§8. Reproducibility Statement</h2>
<p>We provide complete reproducibility artifacts:</p>
<h3 id="code">8.1 Code</h3>
<p>All source under MIT license at:</p>
<pre><code>github.com/humanswith-ai/greg-personal-claude
└ services/ml-services-hwai/
├ app.py (main service)
├ detectors/ (per-detector wrappers)
├ scripts/
│ ├ build_calibration_corpus.py (corpus aggregation)
│ ├ ml_calibrate_one.py (Platt fit per detector)
│ ├ eval_ensemble_corpus.py (evaluation harness)
│ ├ generate_*_corpus_*.py (self-generation scripts)
│ ├ generate_adversarial_paraphrased.py
│ ├ analyze_smoke_results.py (post-smoke diagnostics)
│ └ run_v1_11_chain.sh (atomic-swap pipeline)
├ tests/
│ └ test_calibration_regression.py (8 pinned baselines)
├ benchmark/
│ └ REPRODUCIBILITY.md (this document's source)
└ corpus/ (cal_train.jsonl, cal_val.jsonl, cal_test.jsonl)</code></pre>
<p>Release tag: <code>v1.11</code> (2026-04-26). All numbers reported in
this paper reproduce on this tag with
<code>pytest tests/test_calibration_regression.py</code> plus
<code>python3 scripts/eval_ensemble_corpus.py</code>.</p>
<h3 id="data">8.2 Data</h3>
<p>The 8,400-sample training split, 1,830-sample validation split, and
1,830-sample test split are committed at
<code>services/ml-services-hwai/corpus/</code>. The 44-text hand-curated
OOD smoke battery is embedded in <code>eval_ensemble_corpus.py</code> as
a Python literal (not a separate file), to ensure the corpus and
evaluation script ship together.</p>
<p>The 300-sample adversarial paired set (150 paraphrased AI + 150
pristine human) is at
<code>services/ml-services-hwai/corpus/cal_adversarial_paired_en.jsonl</code>
in the v1.11 tag.</p>
<p>All training data sources are public:</p>
<ul>
<li>HuggingFace: <code>Hello-SimpleAI/HC3</code>,
<code>d0rj/HC3-ru</code>,
<code>iis-research-team/AINL-Eval-2025</code>,
<code>artem9k/ai-text-detection-pile</code></li>
<li>No HuggingFace API key required (we used public dataset
endpoints)</li>
<li>Self-generated samples (<code>litellm_*</code>,
<code>gpt4o_*</code>, <code>genre_targeted_en</code>,
<code>cal_adversarial_paired_en</code>) provided as committed JSONL with
full generation scripts and prompts</li>
</ul>
<h3 id="calibration">8.3 Calibration</h3>
<p>The production calibration JSON (<code>calibration.json</code> v1.11)
is committed. It contains, for each <code>(detector, language)</code>
pair, the Platt sigmoid parameters, raw and calibrated AUROC on
cal_test, and Brier scores.</p>
<h3 id="compute-environment">8.4 Compute environment</h3>
<p>Reproducibility was verified on:</p>
<ul>
<li>Hetzner CX43 (8 vCPU AMD EPYC, 16GB RAM, no GPU,
~$15-25/month)</li>
<li>Ubuntu 22.04, Python 3.12.13</li>
<li>PyTorch 2.5 (CPU-only)</li>
<li>Calibration full cycle: ~95 minutes (~5 min per detector × 5
detectors × 2 languages, plus corpus build)</li>
<li>Smoke evaluation: ~50 minutes (44 samples × 5-10 detectors × 5-10s
each)</li>
<li>Adversarial evaluation: ~25 minutes (300 samples paired)</li>
</ul>
<p>A Docker image at <code>humanswithai/ml-services:v1.11</code> removes
environment setup as a reproducibility barrier. Users without Docker can
<code>pip install -r requirements.txt</code> followed by direct script
invocation.</p>
<h3 id="reproducibility-test">8.5 Reproducibility test</h3>
<p>A reproducibility-focused subset of the regression suite runs in
<code><10s</code> on any machine:</p>
<div class="sourceCode" id="cb10"><pre
class="sourceCode bash"><code class="sourceCode bash"><span id="cb10-1"><a href="#cb10-1" aria-hidden="true" tabindex="-1"></a><span class="fu">git</span> clone github.com/humanswith-ai/greg-personal-claude</span>
<span id="cb10-2"><a href="#cb10-2" aria-hidden="true" tabindex="-1"></a><span class="bu">cd</span> greg-personal-claude/services/ml-services-hwai</span>
<span id="cb10-3"><a href="#cb10-3" aria-hidden="true" tabindex="-1"></a><span class="ex">pip</span> install <span class="at">-r</span> requirements.txt</span>
<span id="cb10-4"><a href="#cb10-4" aria-hidden="true" tabindex="-1"></a><span class="ex">pytest</span> tests/test_calibration_regression.py <span class="at">-v</span> <span class="co"># 8 tests, ~0.05s</span></span>
<span id="cb10-5"><a href="#cb10-5" aria-hidden="true" tabindex="-1"></a><span class="ex">python</span> scripts/analyze_smoke_results.py corpus/eval_ensemble_v1_11.json <span class="at">--full</span></span></code></pre></div>
<p>Should output: <code>8 passed</code>, ensemble EN AUROC
<code>0.821</code>, RU <code>0.837</code>. Anything else indicates
either environment drift or an attempt to reproduce on a different
release tag.</p>
<hr />
<h2 id="conclusion">§9. Conclusion</h2>
<p>Reproducibility is not the dominant axis of competition in commercial
AI text detection today. Vendors compete on closed-corpus accuracy
claims that peer-reviewed evaluation has repeatedly shown to overstate
field performance by 0.10-0.30 AUROC. We argue this should change.</p>
<p>ContentOS does not produce field-leading numbers in absolute
terms—our 0.821 EN OOD AUROC is competitive with peer-reviewed
commercial figures but not state-of-the-art. What it produces is
<strong>field-leading reproducibility</strong>: a 12,000-sample
bilingual calibration corpus, a 44-text OOD smoke battery, a 300-sample
adversarial paired set, regression-gated deployment infrastructure, and
complete inference + calibration code, all releasable under MIT license.
Anyone can clone the repository, run the regression test in 0.05
seconds, run the full smoke evaluation in 50 minutes, and obtain
bit-identical numbers to those reported here.</p>
<p>We invite vendors who wish to dispute our numbers to release their
own methodology with the same level of openness. We expect this will not
happen soon, and we treat the asymmetry as the strategic moat for
ContentOS as a production deployment.</p>
<p>Future work splits into three tracks: (a) replacing RADAR-Vicuna with
a multilingual classifier to unblock RU detection performance; (b)
extending to additional languages (Spanish, Mandarin, Arabic, German)
with native-speaker curated OOD smoke batteries; and (c) extending the
regression test suite to include adversarial AUROC pinning (currently
planned, not yet landed) so that future calibration cycles cannot
regress humanizer robustness silently.</p>
<p>We hope this work normalizes reproducibility-first releases in the AI
text detection community.</p>
<hr />
<h2 id="appendix-a.-full-44-text-smoke-battery-curated-ood">Appendix A.
Full 44-text smoke battery (curated OOD)</h2>
<p>The smoke battery is embedded in
<code>scripts/eval_ensemble_corpus.py</code> as the <code>CORPUS</code>
Python list. Each entry is a 5-tuple:
<code>(name, lang, expected, genre, text)</code>. Word counts below are
per text.</p>
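<p>For reference, an entry has the following shape. The text field below
is a placeholder, not one of the actual samples, and we assume
<code>expected</code> takes values like <code>"human"</code> /
<code>"ai"</code>:</p>

```python
# (name, lang, expected, genre, text) -- one 5-tuple per battery entry.
# The real texts live in scripts/eval_ensemble_corpus.py; this entry
# is a placeholder illustrating the schema only.
CORPUS = [
    ("EN human news", "en", "human", "formal",
     "Placeholder press-release-style paragraph of roughly 56 words."),
]

def validate(corpus):
    """Basic schema check over battery entries."""
    for name, lang, expected, genre, text in corpus:
        assert lang in ("en", "ru") and expected in ("human", "ai")
    return len(corpus)
```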
<h3 id="en-human-14-samples">EN human (14 samples)</h3>
<table>
<colgroup>
<col style="width: 25%" />
<col style="width: 25%" />
<col style="width: 25%" />
<col style="width: 25%" />
</colgroup>
<thead>
<tr>
<th>Name</th>
<th>Genre</th>
<th>Word count</th>
<th>Selection rationale</th>
</tr>
</thead>
<tbody>
<tr>
<td>EN human reddit</td>
<td>casual</td>
<td>73</td>
<td>Conversational; tests “AI = formal” failure mode</td>
</tr>
<tr>
<td>EN human chat</td>
<td>casual</td>
<td>51</td>
<td>Short; tests min-length floor</td>
</tr>
<tr>
<td>EN human news</td>
<td>formal</td>
<td>56</td>
<td>Press-release style; FP-prone for ai_detect</td>
</tr>
<tr>
<td>EN human blog tech</td>
<td>technical</td>
<td>73</td>
<td>Mid-length forum tech post; tests technical register</td>
</tr>
<tr>
<td>EN human email</td>
<td>business</td>
<td>82</td>
<td>Business email; tests semi-formal register</td>
</tr>
<tr>
<td>EN human review</td>
<td>casual</td>
<td>71</td>
<td>Product review; informal but structured</td>
</tr>
<tr>
<td>EN human essay</td>
<td>creative</td>
<td>91</td>
<td>Personal essay; first-person rich</td>
</tr>
<tr>
<td>EN human abstract</td>
<td>academic</td>
<td>80</td>
<td>Academic abstract; high formal register</td>
</tr>
<tr>
<td>EN human press release</td>
<td>formal</td>
<td>70</td>
<td>Corporate boilerplate; biggest FP risk</td>
</tr>
<tr>
<td>EN human court filing</td>
<td>legal</td>
<td>86</td>
<td>Legal prose; FP-prone</td>
</tr>
<tr>
<td>EN human interview</td>
<td>formal</td>
<td>84</td>
<td>Structured Q&A</td>
</tr>
<tr>
<td>EN human technical forum</td>
<td>technical</td>
<td>92</td>
<td>Postgres VACUUM question</td>
</tr>
<tr>
<td>EN human product manual</td>
<td>technical</td>
<td>78</td>
<td>Instructional; imperative voice</td>
</tr>
<tr>
<td>EN human casual parenting</td>
<td>casual</td>
<td>84</td>
<td>Informal voice + named entities</td>
</tr>
</tbody>
</table>
<h3 id="en-ai-9-samples">EN AI (9 samples)</h3>
<table>
<thead>
<tr>
<th>Name</th>
<th>Genre</th>
<th>Word count</th>
<th>Generator era</th>
</tr>
</thead>
<tbody>
<tr>
<td>EN AI ChatGPT generic</td>
<td>promo</td>
<td>71</td>
<td>2022-style ChatGPT</td>
</tr>
<tr>
<td>EN AI Claude structured</td>
<td>explainer</td>
<td>70</td>
<td>Claude Sonnet style</td>
</tr>
<tr>
<td>EN AI GPT-4 verbose</td>
<td>explainer</td>
<td>73</td>
<td>GPT-4 verbose pattern</td>
</tr>
<tr>
<td>EN AI promo mill</td>
<td>promo</td>
<td>72</td>
<td>High-volume promo writing</td>
</tr>
<tr>
<td>EN AI explainer</td>
<td>explainer</td>
<td>86</td>
<td>Pedagogical AI writing</td>
</tr>
<tr>
<td>EN AI listicle</td>
<td>promo</td>
<td>81</td>
<td>Top-N article structure</td>
</tr>
<tr>
<td>EN AI modern essay</td>
<td>creative</td>
<td>79</td>
<td>Modern Claude-4 style</td>
</tr>
<tr>
<td>EN AI analysis 2026</td>
<td>formal</td>
<td>88</td>
<td>Modern analyst voice</td>
</tr>
<tr>
<td>EN AI claude-4-style</td>
<td>explainer</td>
<td>82</td>
<td>Claude-4 explainer</td>
</tr>
</tbody>
</table>
<h3 id="ru-human-14-samples">RU human (14 samples)</h3>
<table>
<thead>
<tr>
<th>Name</th>
<th>Genre</th>
<th>Word count</th>
</tr>
</thead>
<tbody>
<tr>
<td>RU human casual</td>
<td>casual</td>
<td>47</td>
</tr>
<tr>
<td>RU human chat</td>
<td>casual</td>
<td>41</td>
</tr>
<tr>
<td>RU human news</td>
<td>formal</td>
<td>45</td>
</tr>
<tr>
<td>RU human review</td>
<td>casual</td>
<td>56</td>
</tr>
<tr>
<td>RU human blog</td>
<td>technical</td>
<td>56</td>
</tr>
<tr>
<td>RU human story</td>
<td>creative</td>
<td>67</td>
</tr>
<tr>
<td>RU human press release</td>
<td>formal</td>
<td>55</td>
</tr>
<tr>
<td>RU human court ruling</td>
<td>legal</td>
<td>49</td>
</tr>
<tr>
<td>RU human academic paper</td>
<td>academic</td>
<td>49</td>
</tr>
<tr>
<td>RU human interview transcript</td>
<td>formal</td>
<td>55</td>
</tr>
<tr>
<td>RU human personal email</td>
<td>business</td>
<td>71</td>
</tr>
<tr>
<td>RU human forum technical</td>
<td>technical</td>
<td>71</td>
</tr>
<tr>
<td>RU human parent note</td>
<td>casual</td>
<td>52</td>
</tr>
<tr>
<td>RU human product manual</td>
<td>technical</td>
<td>55</td>
</tr>
</tbody>
</table>
<h3 id="ru-ai-7-samples">RU AI (7 samples)</h3>
<table>
<thead>
<tr>
<th>Name</th>
<th>Genre</th>
<th>Word count</th>
</tr>
</thead>
<tbody>
<tr>
<td>RU AI ChatGPT generic</td>
<td>promo</td>
<td>52</td>
</tr>
<tr>
<td>RU AI explainer</td>
<td>explainer</td>
<td>48</td>
</tr>
<tr>
<td>RU AI promo mill</td>
<td>promo</td>
<td>54</td>
</tr>
<tr>
<td>RU AI listicle</td>
<td>promo</td>
<td>65</td>
</tr>
<tr>
<td>RU AI modern essay</td>
<td>creative</td>
<td>61</td>
</tr>
<tr>
<td>RU AI tech explainer 2026</td>
<td>technical</td>
<td>67</td>
</tr>
<tr>
<td>RU AI business analysis</td>
<td>formal</td>
<td>86</td>
</tr>
</tbody>
</table>
<h3 id="selection-rationale">Selection rationale</h3>
<p>Hand-curated to expose known failure modes: - Formal AI vs formal
human (highest-overlap distribution) - Journalistic register
(RADAR-Vicuna FP source) - 2026-era AI text (Claude-4, Gemini-2.5,
GPT-4o style) - Bilingual coverage (EN+RU equal weight in
evaluation)</p>
<p>All samples are released under MIT license as part of the v1.11
tag.</p>
<hr />
<h2 id="appendix-b.-sapling-ai-cross-check-planned-free-tier">Appendix
B. Sapling AI cross-check (planned, free-tier)</h2>
<p>Free-tier Sapling AI API (50 req/day, no signup wall) provides one
external detector reference point on identical inputs:</p>
<div class="sourceCode" id="cb11"><pre
class="sourceCode bash"><code class="sourceCode bash"><span id="cb11-1"><a href="#cb11-1" aria-hidden="true" tabindex="-1"></a><span class="bu">export</span> <span class="va">SAPLING_API_KEY</span><span class="op">=</span><span class="st">"..."</span></span>
<span id="cb11-2"><a href="#cb11-2" aria-hidden="true" tabindex="-1"></a><span class="ex">python3</span> services/ml-services-hwai/scripts/bench_competitors.py <span class="at">--detector</span> sapling</span></code></pre></div>
<p>Output table (n=44, identical smoke battery):</p>
<table>
<thead>
<tr>
<th>Detector</th>
<th>EN AUROC</th>
<th>RU AUROC</th>
</tr>
</thead>
<tbody>
<tr>
<td>ContentOS ensemble (this work)</td>
<td>0.821</td>
<td>0.837</td>
</tr>
<tr>
<td>Sapling AI v1</td>
<td><em>to be measured</em></td>
<td><em>to be measured</em></td>
</tr>
</tbody>
</table>
<p>GPTZero, Originality.ai, Winston AI, and Copyleaks do not offer
free-tier APIs suitable for reproducible comparison, so we include no
speculative numbers for those vendors. That refusal to publish a
freely reproducible tier is itself a methodological observation about
the verifiability gap in commercial AI detection.</p>
<hr />
<h2 id="appendix-c.-per-detector-calibration-parameters">Appendix C.
Per-detector calibration parameters</h2>
<p>For each <code>(detector, language)</code> pair, calibration.json
v1.11 contains:</p>
<div class="sourceCode" id="cb12"><pre
class="sourceCode json"><code class="sourceCode json"><span id="cb12-1"><a href="#cb12-1" aria-hidden="true" tabindex="-1"></a><span class="fu">{</span></span>
<span id="cb12-2"><a href="#cb12-2" aria-hidden="true" tabindex="-1"></a> <span class="dt">"detectors"</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb12-3"><a href="#cb12-3" aria-hidden="true" tabindex="-1"></a> <span class="dt">"ai_detect"</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb12-4"><a href="#cb12-4" aria-hidden="true" tabindex="-1"></a> <span class="dt">"en"</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb12-5"><a href="#cb12-5" aria-hidden="true" tabindex="-1"></a> <span class="dt">"auroc_cal"</span><span class="fu">:</span> <span class="fl">0.977</span><span class="fu">,</span></span>
<span id="cb12-6"><a href="#cb12-6" aria-hidden="true" tabindex="-1"></a> <span class="dt">"auroc_raw"</span><span class="fu">:</span> <span class="fl">0.892</span><span class="fu">,</span></span>
<span id="cb12-7"><a href="#cb12-7" aria-hidden="true" tabindex="-1"></a> <span class="dt">"brier_raw"</span><span class="fu">:</span> <span class="fl">0.286</span><span class="fu">,</span></span>
<span id="cb12-8"><a href="#cb12-8" aria-hidden="true" tabindex="-1"></a> <span class="dt">"brier_cal"</span><span class="fu">:</span> <span class="fl">0.052</span><span class="fu">,</span></span>
<span id="cb12-9"><a href="#cb12-9" aria-hidden="true" tabindex="-1"></a> <span class="dt">"f1_at_thr"</span><span class="fu">:</span> <span class="fl">0.934</span><span class="fu">,</span></span>
<span id="cb12-10"><a href="#cb12-10" aria-hidden="true" tabindex="-1"></a> <span class="dt">"best_threshold"</span><span class="fu">:</span> <span class="fl">0.415</span><span class="fu">,</span></span>
<span id="cb12-11"><a href="#cb12-11" aria-hidden="true" tabindex="-1"></a> <span class="dt">"tpr_at_1pct_fpr"</span><span class="fu">:</span> <span class="fl">0.823</span><span class="fu">,</span></span>
<span id="cb12-12"><a href="#cb12-12" aria-hidden="true" tabindex="-1"></a> <span class="dt">"platt_a"</span><span class="fu">:</span> <span class="fl">-8.234</span><span class="fu">,</span></span>
<span id="cb12-13"><a href="#cb12-13" aria-hidden="true" tabindex="-1"></a> <span class="dt">"platt_b"</span><span class="fu">:</span> <span class="fl">1.142</span><span class="fu">,</span></span>
<span id="cb12-14"><a href="#cb12-14" aria-hidden="true" tabindex="-1"></a> <span class="dt">"n"</span><span class="fu">:</span> <span class="dv">800</span><span class="fu">,</span></span>
<span id="cb12-15"><a href="#cb12-15" aria-hidden="true" tabindex="-1"></a> <span class="dt">"calibrated_at"</span><span class="fu">:</span> <span class="st">"2026-04-26T13:44Z"</span></span>
<span id="cb12-16"><a href="#cb12-16" aria-hidden="true" tabindex="-1"></a> <span class="fu">},</span></span>
<span id="cb12-17"><a href="#cb12-17" aria-hidden="true" tabindex="-1"></a> <span class="dt">"ru"</span><span class="fu">:</span> <span class="fu">{</span> <span class="er">...</span> <span class="fu">},</span></span>
<span id="cb12-18"><a href="#cb12-18" aria-hidden="true" tabindex="-1"></a> <span class="fu">},</span></span>
<span id="cb12-19"><a href="#cb12-19" aria-hidden="true" tabindex="-1"></a> <span class="er">...</span></span>
<span id="cb12-20"><a href="#cb12-20" aria-hidden="true" tabindex="-1"></a> <span class="fu">}</span></span>
<span id="cb12-21"><a href="#cb12-21" aria-hidden="true" tabindex="-1"></a><span class="fu">}</span></span></code></pre></div>
<p>Full file at <code>services/ml-services-hwai/calibration.json</code>
(v1.11 tag).</p>
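<p>The <code>platt_a</code>/<code>platt_b</code> pair maps a raw
detector score to a calibrated probability. A minimal sketch of that
mapping, assuming the standard Platt parameterization
<code>p = 1 / (1 + exp(a*s + b))</code> (the exact form used in v1.11
is defined by the calibration code, not confirmed here):</p>

```python
import math

def platt_calibrate(raw_score: float, a: float, b: float) -> float:
    """Standard Platt scaling: map raw score s to P(AI) = 1/(1+exp(a*s+b))."""
    return 1.0 / (1.0 + math.exp(a * raw_score + b))

# Parameters from the ai_detect / en entry above.
A, B = -8.234, 1.142
low = platt_calibrate(0.1, A, B)   # low raw score -> low P(AI)
high = platt_calibrate(0.9, A, B)  # high raw score -> high P(AI)
```

With <code>a &lt; 0</code> the sigmoid is monotone increasing in the raw score, so higher raw scores still map to higher calibrated probabilities; calibration reshapes the probability scale (hence <code>brier_cal</code> &lt; <code>brier_raw</code>) rather than reordering samples.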
<hr />
<h2 id="appendix-d.-compute-timing">Appendix D. Compute timing</h2>
<table>
<colgroup>
<col style="width: 25%" />
<col style="width: 25%" />
<col style="width: 25%" />
<col style="width: 25%" />
</colgroup>
<thead>
<tr>
<th>Stage</th>
<th>Single-thread time</th>
<th>8-core time</th>
<th>Memory peak</th>
</tr>
</thead>
<tbody>
<tr>
<td>Corpus rebuild (8 sources)</td>
<td>12 sec</td>
<td>12 sec</td>
<td>800 MB</td>
</tr>
<tr>
<td>ai_detect calibration (n=800)</td>
<td>90 min</td>
<td>90 min</td>
<td>4 GB</td>
</tr>
<tr>
<td>desklib calibration (n=800)</td>
<td>27 min</td>
<td>27 min</td>
<td>6 GB</td>
</tr>
<tr>
<td>radar calibration (n=800)</td>
<td>90 min</td>
<td>90 min</td>
<td>5 GB</td>
</tr>
<tr>
<td>binoculars calibration (n=800)</td>
<td>not run (excluded from EN ensemble)</td>
<td>not run</td>
<td>n/a</td>
</tr>
<tr>
<td>Regression test gate</td>
<td>0.05 sec</td>
<td>0.05 sec</td>
<td>100 MB</td>
</tr>
<tr>
<td>Smoke evaluation (n=44)</td>
<td>50 min</td>
<td>50 min</td>
<td>12 GB</td>
</tr>
<tr>
<td>Adversarial evaluation (n=300)</td>
<td>22 min</td>
<td>22 min</td>
<td>12 GB</td>
</tr>
</tbody>
</table>
<p>The full v1.11 release cycle takes ~3 hours wall-clock on a Hetzner
CX43, at a marginal compute cost of ~$0.05; the same run would cost an
estimated $50-200 on commercial GPU inference platforms.</p>
<hr />
<h2 id="appendix-e.-release-notes-v1.9-v1.10-v1.11">Appendix E. Release
notes (v1.9 → v1.10 → v1.11)</h2>
<h3 id="v1.9-baseline-2026-04-22">v1.9 (baseline, 2026-04-22)</h3>
<ul>
<li>7-source corpus (no GPT-4o, no genre-targeted, no LiteLLM-gen)</li>
<li>Original RADAR-balanced weights (binoculars-dominant)</li>
<li>EN ensemble OOD: 0.524 (failed SHIP)</li>
<li>RU ensemble OOD: 0.827 (SHIP)</li>
</ul>
<h3 id="v1.10-2026-04-24">v1.10 (2026-04-24)</h3>
<ul>
<li>Added LiteLLM EN+RU gen + GPT-4o EN gen (4 sources, +3000
samples)</li>
<li>Tuned ensemble weights AUROC-proportional (desklib-dominant on
EN)</li>
<li>Tightened UNC bands (0.45/0.55 EN, 0.45/0.65 RU)</li>
<li>Dropped Binoculars from EN ensemble (Gap 7, latency 60s → 1.2s)</li>
<li>Adversarial AUROC EN: 0.984 (paired with cal_test in-distribution
human)</li>
<li>EN ensemble OOD: 0.802 (warm), 0.897 (cold-start desklib bias
inflated)</li>
<li>RU ensemble OOD: 0.847</li>
</ul>
<h3 id="v1.11-this-release-2026-04-26">v1.11 (this release,
2026-04-26)</h3>
<ul>
<li>Added genre-targeted EN AI generation (200 samples × 4 weak
genres)</li>
<li>Recalibrated ai_detect + desklib on expanded 8,540 train
samples</li>
<li>desklib EN cal_test AUROC: 0.893 → 0.913 (+0.020)</li>
<li>ai_detect RU cal_test AUROC: 0.732 → 0.756 (+0.024)</li>
<li>EN ensemble OOD: 0.821 (+0.019 vs v1.10)</li>
<li>EN ensemble Wrong rate: 8% → 4% (halved)</li>
<li>RU ensemble OOD: 0.837 (-0.010 vs v1.10, within noise)</li>
<li>Per-genre detector contribution analyzer added</li>
<li>Brand voice ingestion module shipped (Block 1)</li>
<li>/citation-integrity endpoint shipped (Block 7 step toward L3)</li>
</ul>
</body>
</html>
|