Datasets: add README landing page
README.md
CHANGED
@@ -33,23 +33,27 @@ is maintained under the organization account (see "Code repository" below).
 
 ## Code repository
 
-
-tooling will be released at:
+Public benchmark + evaluation scripts:
 
-
+**https://github.com/humanswith-ai/contentos-benchmark**
 
-
-
-
+The repo includes regression test suite (8 pinned baselines, 0.05s),
+streaming-CSV eval scripts (partial-tolerant), per-genre AUROC
+analyzer, and the calibration JSON shape for v1.11 production state.
 
-## Headline numbers (v1.11 production
+## Headline numbers (v1.11 production, 2026-04-29 measurement)
 
 | Metric | EN | RU |
 |---|---|---|
-| OOD AUROC (
+| OOD AUROC (176-sample expanded smoke) | **0.864** | **0.846** |
 | Wrong-rate | 4% | 9% |
 | p50 latency (EN ensemble) | **1.2 s** | — |
-| Adversarial AUROC (n=300, OOD) | **0.998** | — |
+| Adversarial AUROC (n=300, OOD-paired) | **0.998** | — |
+
+Earlier v1.0 paper reported 0.802 / 0.847 on the original 44-text
+smoke battery; the 4× expanded battery with class balance per
+(lang, genre) cell stabilized numbers upward. Per-genre details in
+the [companion repo](https://github.com/humanswith-ai/contentos-benchmark).
 
 ## Files
 
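The AUROC figures added in the table above are rank statistics: the probability that a randomly drawn positive example outscores a randomly drawn negative one. As a minimal sketch of how such a number is computed (pure Python with averaged ranks for ties; this is an illustration, not code from the benchmark repo):

```python
# Rank-based AUROC (Mann-Whitney U) over (score, label) pairs, with
# tied scores assigned the average of their 1-based ranks. Labels use
# 1 for the positive class (e.g. an OOD / AI-generated hit).

def auroc(scores, labels):
    """Probability that a random positive outscores a random negative."""
    pairs = sorted(zip(scores, labels))
    ranks = [0.0] * len(pairs)
    i = 0
    while i < len(pairs):
        # Find the run of tied scores starting at i.
        j = i
        while j + 1 < len(pairs) and pairs[j + 1][0] == pairs[i][0]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[k] = avg
        i = j + 1
    n_pos = sum(lab for _, lab in pairs)
    n_neg = len(pairs) - n_pos
    rank_sum = sum(r for r, (_, lab) in zip(ranks, pairs) if lab == 1)
    # Normalize the U statistic to [0, 1].
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

print(auroc([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1]))  # 0.75
```

On a balanced battery like the 176-sample smoke set, 0.864 therefore means roughly 86% of positive/negative score pairings are ordered correctly.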
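The diff also mentions "partial-tolerant" streaming-CSV eval scripts. One plausible reading, sketched here with assumed column names (`score`, `label`) and not taken from the actual repo, is an iterator that skips truncated or malformed rows rather than aborting a long evaluation run:

```python
import csv
import io

# Hypothetical partial-tolerant streaming pass over an eval CSV: rows
# that are incomplete or non-numeric (e.g. a partially written last
# line) are skipped, so a live or interrupted file can still be scored.

def stream_scores(fileobj, score_col="score", label_col="label"):
    """Yield (score, label) pairs, skipping malformed or partial rows."""
    for row in csv.DictReader(fileobj):
        try:
            yield float(row[score_col]), int(row[label_col])
        except (KeyError, TypeError, ValueError):
            continue  # tolerate partially written / garbled rows

data = "score,label\n0.91,1\n0.12,0\nbroken,\n0.77,1\n"
print(list(stream_scores(io.StringIO(data))))
# [(0.91, 1), (0.12, 0), (0.77, 1)]
```

The design choice is to degrade gracefully: a bad row costs one sample, not the whole metric.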