---
language:
- en
license: mit
pretty_name: Clarus Clinical Stability Benchmark
tags:
- clarusc64
- stability-reasoning
- clinical-benchmark
- tabular-reasoning
- system-stability
- trajectory-analysis
---
# Benchmark Documentation

**Core**
- benchmark_structure.md
- benchmark_matrix.md
- datasets.md

**Evaluation**
- evaluation_framework.md
- transfer_matrix.md
- clarus_score.md

**Robustness**
- missing_data_protocol.md
- imbalance_protocol.md
- robustness_suite.md

**Theory**
- stability_manifold.md
- stability_topology.md
- stability_mechanisms.md

**Results**
- baseline_results.md
- leaderboard.md
# Clarus Clinical Stability Benchmark
The Clarus Clinical Stability Benchmark evaluates whether machine learning models can detect **latent instability in complex clinical systems**.
Most tabular benchmarks reward models for learning correlations within a single dataset.
The Clarus benchmark instead evaluates whether models can infer instability from **interacting proxy signals across multiple physiological and operational regimes**.
Each dataset represents a simplified regime in which instability emerges from multi-variable interaction rather than single-variable thresholds.
---
# Benchmark Concept
In real clinical systems, deterioration rarely occurs because one measurement crosses a threshold.
Instead, instability emerges when several components drift simultaneously.
Examples include:
- circulatory compensation failure
- microvascular perfusion loss
- metabolic energy collapse
- respiratory control failure
- endocrine dysregulation
- thermoregulatory breakdown
- coagulation instability
- hospital operational overload
Each dataset exposes a different regime while keeping the underlying structure similar:
**instability arises from interacting system signals.**
The generative rules that determine the labels are intentionally not published.
Models must infer instability from observable proxies.
---
# Included Datasets
| Stability Regime | Dataset |
|---|---|
| Hemodynamic collapse | ClarusC64/clinical-hemodynamic-collapse-v0.1 |
| Sepsis trajectory instability | ClarusC64/clinical-sepsis-trajectory-instability-v0.1 |
| Intervention delay failure | ClarusC64/clinical-intervention-delay-failure-v0.1 |
| Organ coupling cascade | ClarusC64/clinical-organ-coupling-cascade-v0.1 |
| Recovery window detection | ClarusC64/clinical-recovery-window-detection-v0.1 |
| Ventilation–Perfusion instability | ClarusC64/clinical-ventilation-perfusion-instability-v0.1 |
| Hemorrhage compensation collapse | ClarusC64/clinical-hemorrhage-compensation-collapse-v0.1 |
| Electrolyte instability | ClarusC64/clinical-electrolyte-instability-v0.1 |
| Microcirculation instability | ClarusC64/clinical-microcirculation-instability-v0.1 |
| Endocrine instability | ClarusC64/clinical-endocrine-instability-v0.1 |
| Thermoregulation instability | ClarusC64/clinical-thermoregulation-instability-v0.1 |
| Cellular energy instability | ClarusC64/clinical-cellular-energy-instability-v0.1 |
| Respiratory drive instability | ClarusC64/clinical-respiratory-drive-instability-v0.1 |
| Coagulation instability | ClarusC64/clinical-coagulation-instability-v0.1 |
| Hospital operational collapse | ClarusC64/clinical-hospital-operational-collapse-v0.1 |
Each dataset repository contains:

```
data/train.csv
data/test.csv
scorer.py
README.md
```
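For illustration, one way to fetch a dataset repository locally is with the `huggingface_hub` client (an assumed tooling choice; any method that produces the files above works equally well):

```python
# Sketch: download one Clarus dataset repository to a local directory.
# Using huggingface_hub is an assumption; any download method that
# yields the files listed above is fine.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="ClarusC64/clinical-hemodynamic-collapse-v0.1",
    repo_type="dataset",
)
print(local_dir)  # contains data/train.csv, data/test.csv, scorer.py, README.md
```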
---
# Evaluation Protocol
Predictions must follow the format:

```
scenario_id,prediction
```

Example:

```
MC101,0
MC102,1
```

Evaluation is performed using the **scorer located in the dataset repository**.

Example:

```
python scorer.py --predictions predictions.csv --truth data/test.csv --output metrics.json
```

The `--truth` path refers to the dataset's local `data/test.csv` file.
Metrics reported include:
- accuracy
- precision
- recall
- f1
- confusion matrix
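The metrics can then be read back programmatically. A minimal sketch, assuming `metrics.json` is a flat JSON object whose keys mirror the list above (the authoritative schema is whatever each dataset's `scorer.py` writes):

```python
# Sketch: load scorer output. Key names below are assumptions;
# the exact schema is defined by each dataset's scorer.py.
import json

with open("metrics.json") as f:
    metrics = json.load(f)

print(metrics.get("accuracy"), metrics.get("f1"))
```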
---
# Benchmark Tasks
The benchmark supports three evaluation settings.
## 1 Single-Dataset Evaluation
Train and test on the same dataset.
Purpose:
Measure baseline performance within a single stability regime.
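A quick way to estimate within-regime performance is k-fold cross-validation on the training split. A minimal sketch, with an illustrative model choice:

```python
# Sketch: within-regime baseline via 5-fold cross-validation.
# The model choice is illustrative, not prescribed by the benchmark.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

train = pd.read_csv("data/train.csv")
X = train.drop(columns=["scenario_id", "label"])
y = train["label"]

scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5, scoring="f1")
print(scores.mean())
```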
---
## 2 Cross-Regime Transfer
Train on one regime and test on another.
Example:

```
Train → clinical-hemodynamic-collapse-v0.1
Test  → clinical-microcirculation-instability-v0.1
```
Purpose:
Determine whether models learn **general stability reasoning** rather than dataset-specific correlations.
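A minimal sketch of this setting, assuming the two regimes share a feature schema and have been downloaded into sibling directories (the local paths are illustrative):

```python
# Sketch: train on one regime, predict on another.
# Assumes both regimes expose the same feature columns;
# directory names are illustrative local paths.
import pandas as pd
from sklearn.linear_model import LogisticRegression

train = pd.read_csv("clinical-hemodynamic-collapse-v0.1/data/train.csv")
test = pd.read_csv("clinical-microcirculation-instability-v0.1/data/test.csv")

model = LogisticRegression(max_iter=1000)
model.fit(train.drop(columns=["scenario_id", "label"]), train["label"])

pred = model.predict(test.drop(columns=["scenario_id", "label"]))
pd.DataFrame(
    {"scenario_id": test["scenario_id"], "prediction": pred}
).to_csv("predictions.csv", index=False)
```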
---
## 3 Multi-Regime Training
Train on multiple datasets simultaneously.
Evaluate performance across all regimes.
Purpose:
Test whether models can learn shared stability representations across physiological systems.
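A sketch of pooled training, again assuming a shared feature schema across regimes (paths and model choice are illustrative):

```python
# Sketch: pool several regimes into one training set.
# Assumes a shared feature schema; paths are illustrative.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

paths = [
    "clinical-hemodynamic-collapse-v0.1/data/train.csv",
    "clinical-sepsis-trajectory-instability-v0.1/data/train.csv",
    "clinical-coagulation-instability-v0.1/data/train.csv",
]
train = pd.concat([pd.read_csv(p) for p in paths], ignore_index=True)

model = RandomForestClassifier(n_estimators=200)
model.fit(train.drop(columns=["scenario_id", "label"]), train["label"])
```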
---
# Dataset Design Principles
The Clarus datasets follow several explicit design rules.
### No Single-Feature Dominance
No observable variable strongly predicts the label independently.
Target:
`|correlation| < 0.30`
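This property can be audited directly on any training split (a sketch, assuming all feature columns are numeric):

```python
# Sketch: check that no single feature strongly predicts the label.
import pandas as pd

train = pd.read_csv("data/train.csv")
features = train.drop(columns=["scenario_id", "label"])

corr = features.corrwith(train["label"]).abs().sort_values(ascending=False)
print(corr.head())          # strongest single-feature correlations
assert (corr < 0.30).all()  # the stated design target
```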
---
### Interaction-Based Labels
Instability emerges from interactions between multiple variables rather than isolated thresholds.
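One way to probe this is to compare a linear model on raw features against the same model with pairwise interaction terms; any gain from the second pipeline suggests interaction structure. A sketch:

```python
# Sketch: compare a linear baseline with and without pairwise
# interaction terms. A gap between the two scores is evidence
# of interaction-based labels.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

train = pd.read_csv("data/train.csv")
X = train.drop(columns=["scenario_id", "label"])
y = train["label"]

raw = LogisticRegression(max_iter=1000)
interactions = make_pipeline(
    PolynomialFeatures(degree=2, interaction_only=True, include_bias=False),
    LogisticRegression(max_iter=1000),
)

print(cross_val_score(raw, X, y, cv=5).mean())
print(cross_val_score(interactions, X, y, cv=5).mean())
```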
---
### Adversarial Symmetry
Rows with nearly identical values may produce opposite labels.
This prevents trivial heuristics.
---
### Decoy Variables
Some variables appear meaningful but do not determine the label independently.
---
### Hidden Generative Logic
The dataset generator and rule equations are intentionally not published.
Models must infer instability from proxy signals.
---
# Baseline Results
Reference baseline experiments are provided in:
baseline_results.md
These establish approximate difficulty levels for common tabular models.
---
# Benchmark Architecture
The benchmark can be interpreted as observing a **shared stability manifold** through different clinical regimes.
Each dataset exposes a different control system while preserving the underlying concept of instability emerging from interacting signals.
Additional details are provided in:
stability_manifold.md
---
# Research Applications
The benchmark supports research into:
- system stability reasoning
- interaction-based tabular learning
- cross-domain generalization
- clinical early warning modeling
- infrastructure and system risk detection
---
# Quick Start
This example demonstrates how to evaluate a simple model on one Clarus dataset.
---
## 1 Install dependencies
Example environment:

```
pip install pandas scikit-learn
```
---
## 2 Load the dataset
Each dataset repository provides two splits:

```
data/train.csv
data/test.csv
```
---
## 3 Train a simple baseline model
Example using logistic regression:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Load the training split and separate features from the label.
train = pd.read_csv("data/train.csv")
X = train.drop(columns=["scenario_id", "label"])
y = train["label"]

# Fit a simple linear baseline.
model = LogisticRegression()
model.fit(X, y)
```
---
## 4 Generate predictions
```python
# Load the test split and generate predictions.
test = pd.read_csv("data/test.csv")
X_test = test.drop(columns=["scenario_id", "label"])
pred = model.predict(X_test)

# Write predictions in the required scenario_id,prediction format.
out = pd.DataFrame({
    "scenario_id": test["scenario_id"],
    "prediction": pred,
})
out.to_csv("predictions.csv", index=False)
```
---
## 5 Evaluate predictions
Run the official scorer:

```
python scorer.py --predictions predictions.csv --truth data/test.csv
```
The scorer returns:
- accuracy
- precision
- recall
- f1
- confusion matrix
# License
MIT