# Clarus Evaluation Framework
The Clarus Stability Benchmark evaluates whether machine learning models can detect latent instability dynamics rather than relying on simple correlations.
The framework measures model capability across multiple evaluation levels, each testing a progressively more difficult form of stability reasoning.
## Evaluation Levels
### Level 1 — Single Dataset Evaluation
Models are trained and evaluated on the same dataset.
Purpose:
Measure whether the model can detect instability patterns within a single system.
Procedure (sketched below):
- Train the model on `train.csv`
- Generate predictions for `test.csv`
- Evaluate using the dataset scorer
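A minimal sketch of this loop, assuming a scikit-learn-style classifier and that each CSV stores feature columns plus a binary `label` column (both are assumptions; the benchmark does not prescribe a model or confirm the column name):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Assumed layout: feature columns plus a binary "label" column
# (the actual column name in the benchmark CSVs may differ).
train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")

X_train, y_train = train.drop(columns=["label"]), train["label"]
X_test, y_test = test.drop(columns=["label"]), test["label"]

# Any classifier can stand in here; a random forest is only a baseline.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
predictions = model.predict(X_test)
```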
Metrics (see the scoring sketch below):
- accuracy
- precision
- recall
- F1 score
- confusion matrix
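These metrics map directly onto functions in scikit-learn's `sklearn.metrics` module; a sketch, continuing the `y_test` and `predictions` variables from the pipeline above:

```python
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, confusion_matrix)

print("accuracy: ", accuracy_score(y_test, predictions))
print("precision:", precision_score(y_test, predictions))
print("recall:   ", recall_score(y_test, predictions))
print("F1 score: ", f1_score(y_test, predictions))
print("confusion matrix:\n", confusion_matrix(y_test, predictions))
```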
Limitations:
High performance at this level does not necessarily indicate true stability reasoning.
Models may rely on dataset-specific correlations.
### Level 2 — Within-Domain Transfer
Models are trained on one dataset and evaluated on a different dataset within the same domain.
Example:
Train on: `protein-folding-pathway-instability`
Test on: `protein-aggregation-risk-instability`
Purpose:
Evaluate whether models can generalize stability reasoning across related systems.
Evaluation procedure (sketched below):
- Train on source dataset
- Predict on target dataset
- Score using target dataset scorer
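A sketch of this procedure as a reusable helper, under the same assumptions as the Level 1 sketch (a shared feature schema across datasets and a hypothetical `label` column); the file paths are illustrative:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

def transfer_f1(source_csv: str, target_csv: str) -> float:
    """Train on the source dataset, then score on the target dataset."""
    source = pd.read_csv(source_csv)
    target = pd.read_csv(target_csv)
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(source.drop(columns=["label"]), source["label"])
    preds = model.predict(target.drop(columns=["label"]))
    return f1_score(target["label"], preds)

score = transfer_f1("protein-folding-pathway-instability/train.csv",
                    "protein-aggregation-risk-instability/test.csv")
```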
### Level 3 — Cross-Domain Transfer
Models are trained on one system domain and evaluated on another.
Domains in the Clarus benchmark include:
- clinical systems
- molecular / protein systems
- quantum systems
Example transfer tasks (evaluated by the loop sketched after the table):

| Train Domain | Test Domain |
|---|---|
| clinical | protein |
| protein | quantum |
| quantum | clinical |
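Under the same assumptions, the cross-domain pairs above reduce to a loop over (train domain, test domain) pairs, reusing the `transfer_f1` helper from the Level 2 sketch. The directory names below are hypothetical placeholders, not confirmed dataset names:

```python
# Hypothetical representative dataset directory per domain.
domain_dirs = {
    "clinical": "clinical-deterioration-instability",
    "protein": "protein-folding-pathway-instability",
    "quantum": "quantum-state-instability",
}

for train_dom, test_dom in [("clinical", "protein"),
                            ("protein", "quantum"),
                            ("quantum", "clinical")]:
    score = transfer_f1(f"{domain_dirs[train_dom]}/train.csv",
                        f"{domain_dirs[test_dom]}/test.csv")
    print(f"{train_dom} -> {test_dom}: F1 = {score:.3f}")
```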
Purpose:
Determine whether models learn general stability geometry rather than domain-specific patterns.
## Robustness Evaluation
The benchmark includes robustness tests that simulate real-world system conditions.
### Missing Data Evaluation
Real systems often contain incomplete observations.
Trajectory datasets may include variants where timepoints are missing.
Variants include:
- missing t0
- missing t1
- missing t2
- random missing
Purpose:
Evaluate whether models can infer stability dynamics when observations are incomplete.
The prediction task remains unchanged.
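One simple baseline for handling a missing timepoint, assuming trajectories are stored as timepoint columns (`t0`, `t1`, `t2`) and missing observations appear as NaN; both the path and the column names are assumptions, not part of the benchmark spec:

```python
import pandas as pd

# Hypothetical path: the "missing t1" variant of a trajectory dataset.
df = pd.read_csv("missing-t1/train.csv")

timepoints = ["t0", "t1", "t2"]
# Linear interpolation across each trajectory, with forward/backward
# fill to cover missing values at the trajectory edges.
df[timepoints] = (df[timepoints]
                  .interpolate(axis=1)
                  .ffill(axis=1)
                  .bfill(axis=1))
```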
### Class Imbalance Evaluation
Many real-world systems exhibit rare failure events.
Datasets may include variants with different class distributions.
Supported imbalance regimes (a subsampling sketch follows the list):
- balanced (50 / 50)
- mild imbalance (70 / 30)
- severe imbalance (90 / 10)
- extreme imbalance (99 / 1)
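Where a variant is not shipped directly, a target prevalence can be simulated by subsampling the positive class. A sketch, assuming a binary `label` column in which 1 marks the unstable class:

```python
import pandas as pd

def make_imbalanced(df: pd.DataFrame, positive_fraction: float,
                    seed: int = 0) -> pd.DataFrame:
    """Subsample the positive (unstable) class to a target prevalence."""
    pos = df[df["label"] == 1]
    neg = df[df["label"] == 0]
    # n_pos / (n_pos + n_neg) = positive_fraction, solved for n_pos.
    n_pos = int(len(neg) * positive_fraction / (1 - positive_fraction))
    pos_sub = pos.sample(n=min(n_pos, len(pos)), random_state=seed)
    return pd.concat([neg, pos_sub]).sample(frac=1, random_state=seed)

# "train" here refers to the frame loaded in the Level 1 sketch.
severe = make_imbalanced(train, positive_fraction=0.10)  # 90 / 10 regime
```

Subsampling only the positive class leaves the negative examples intact, which mirrors the rare-failure setting described above.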
Purpose:
Evaluate whether models detect true instability rather than relying on class prevalence.
Accuracy alone is insufficient under class imbalance, as the worked example below illustrates.
Recommended metrics:
- precision
- recall
- F1 score
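A quick worked example of the problem: under the 99 / 1 regime, a degenerate model that always predicts "stable" reaches 99% accuracy while achieving zero recall and zero F1:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, recall_score

# 990 stable (0) and 10 unstable (1) examples: the 99 / 1 regime.
y_true = np.array([0] * 990 + [1] * 10)
y_pred = np.zeros_like(y_true)  # degenerate model: always "stable"

print(accuracy_score(y_true, y_pred))             # 0.99
print(recall_score(y_true, y_pred))               # 0.0
print(f1_score(y_true, y_pred, zero_division=0))  # 0.0
```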
## Transfer Stability Score
To summarize transfer performance, the benchmark defines the Transfer Stability Score (TSS).
Definition:
TSS = mean F1 score across all transfer evaluation tasks.
Interpretation:
High TSS indicates that the model has learned stability reasoning that generalizes across systems.
Low TSS suggests the model relies on dataset-specific patterns.
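Computing the TSS is then a one-liner over the per-task F1 scores; this sketch reuses the hypothetical `transfer_f1` helper from the Level 2 section, and the task list shown is illustrative rather than the full benchmark set:

```python
from statistics import mean

# (source, target) CSV pairs for the transfer evaluation tasks.
transfer_tasks = [
    ("protein-folding-pathway-instability/train.csv",
     "protein-aggregation-risk-instability/test.csv"),
    # ... remaining within-domain and cross-domain pairs ...
]

tss = mean(transfer_f1(src, tgt) for src, tgt in transfer_tasks)
print(f"Transfer Stability Score (TSS): {tss:.3f}")
```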
## Benchmark Objective
The Clarus benchmark evaluates whether models can detect instability dynamics across complex systems.
The benchmark tests five core capabilities:
- pattern detection within individual datasets
- interaction reasoning across variables
- trajectory reasoning over time
- robustness to incomplete observation
- cross-domain stability reasoning
Models that succeed across all levels demonstrate an ability to reason about latent stability geometry rather than rely on simple statistical correlations.