# Clarus Evaluation Framework
The Clarus Stability Benchmark evaluates whether machine learning models can detect **latent instability dynamics** rather than relying on simple correlations.
The evaluation framework measures model capability across multiple reasoning levels.
These levels test progressively more difficult forms of stability reasoning.
---
# Evaluation Levels
## Level 1 — Single Dataset Evaluation
Models are trained and evaluated on train/test splits of the same dataset.
Purpose:
Measure whether the model can detect instability patterns within a single system.
Procedure:
1. Train model on `train.csv`
2. Generate predictions for `test.csv`
3. Evaluate using the dataset scorer
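The three-step procedure can be sketched as follows. This is a minimal illustration, not the benchmark's reference implementation: it assumes each CSV holds numeric feature columns plus a binary `label` column (the column name is an assumption; consult the dataset schema), and uses logistic regression as a placeholder model.

```python
# Minimal Level 1 sketch. Assumes numeric features and a binary
# `label` column; both names are illustrative, not mandated.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def evaluate_single_dataset(train_df: pd.DataFrame,
                            test_df: pd.DataFrame,
                            label: str = "label"):
    """Train on the train split and predict labels for the test split."""
    model = LogisticRegression(max_iter=1000)
    model.fit(train_df.drop(columns=[label]), train_df[label])
    return model.predict(test_df.drop(columns=[label]))

# Typical usage (step 1 and 2); step 3 hands `preds` to the dataset scorer:
# preds = evaluate_single_dataset(pd.read_csv("train.csv"), pd.read_csv("test.csv"))
```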
Metrics:
- accuracy
- precision
- recall
- F1 score
- confusion matrix
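For the binary case (treating "unstable" as the positive class, 1), the listed metrics reduce to a few lines over the confusion-matrix counts. A minimal sketch:

```python
# Binary-classification metrics from first principles.
# Positive class = 1 (unstable), negative class = 0 (stable).
def confusion(y_true, y_pred):
    """Return (tp, fp, fn, tn) counts."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def scores(y_true, y_pred):
    tp, fp, fn, tn = confusion(y_true, y_pred)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}
```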
Limitations:
High performance at this level does not necessarily indicate true stability reasoning.
Models may rely on dataset-specific correlations.
---
## Level 2 — Within-Domain Transfer
Models are trained on one dataset and evaluated on a different dataset within the same domain.
Example:
Train on: `protein-folding-pathway-instability`
Test on: `protein-aggregation-risk-instability`
Purpose:
Evaluate whether models can generalize stability reasoning across related systems.
Evaluation procedure:
1. Train on source dataset
2. Predict on target dataset
3. Score using target dataset scorer
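The transfer procedure differs from Level 1 only in that training and scoring use different datasets. A hedged sketch, again assuming a shared feature schema and a binary `label` column (illustrative names), with F1 as the summary metric:

```python
# Within-domain transfer sketch: fit on the source dataset,
# score F1 on the target dataset. Assumes both datasets share
# the same feature columns and a binary `label` column.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

def transfer_f1(source_df: pd.DataFrame,
                target_df: pd.DataFrame,
                label: str = "label") -> float:
    model = LogisticRegression(max_iter=1000)
    model.fit(source_df.drop(columns=[label]), source_df[label])
    preds = model.predict(target_df.drop(columns=[label]))
    return f1_score(target_df[label], preds)
```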
---
## Level 3 — Cross-Domain Transfer
Models are trained on one system domain and evaluated on another.
Domains in the Clarus benchmark include:
- clinical systems
- molecular / protein systems
- quantum systems
Example transfer tasks (same-domain pairs are included as within-domain baselines for comparison):
| Train Domain | Test Domain |
|---|---|
| clinical | clinical |
| protein | protein |
| quantum | quantum |
| clinical | protein |
| protein | quantum |
| quantum | clinical |
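If the full ordered transfer matrix over the three domains is evaluated (an assumption; the table above lists only example pairs), the task list can be enumerated directly:

```python
# Enumerate all ordered (source, target) domain pairs.
# Same-domain pairs act as baselines; the rest are cross-domain transfers.
from itertools import product

domains = ["clinical", "protein", "quantum"]
tasks = list(product(domains, repeat=2))
cross_domain = [(s, t) for s, t in tasks if s != t]
```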
Purpose:
Determine whether models learn general **stability geometry** rather than domain-specific patterns.
---
# Robustness Evaluation
The benchmark includes robustness tests that simulate real-world system conditions.
---
## Missing Data Evaluation
Real systems often contain incomplete observations.
Trajectory datasets may include variants where timepoints are missing.
Variants include:
- missing t0
- missing t1
- missing t2
- random missing
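One way such variants can be constructed is by masking trajectory columns, as in this illustrative sketch. The column names `t0`/`t1`/`t2` mirror the variant names above; the helper functions are hypothetical, not part of the benchmark tooling.

```python
# Illustrative construction of missing-timepoint variants:
# mask one trajectory column, or random cells, leaving labels intact.
import random
import pandas as pd

def drop_timepoint(df: pd.DataFrame, timepoint: str) -> pd.DataFrame:
    """Return a copy with one entire timepoint column set to NaN."""
    out = df.copy()
    out[timepoint] = float("nan")
    return out

def drop_random(df: pd.DataFrame, timepoints=("t0", "t1", "t2"),
                frac: float = 0.3, seed: int = 0) -> pd.DataFrame:
    """Return a copy with ~frac of each timepoint column set to NaN."""
    rng = random.Random(seed)
    out = df.copy()
    for col in timepoints:
        mask = [rng.random() < frac for _ in range(len(out))]
        out.loc[mask, col] = float("nan")
    return out
```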
Purpose:
Evaluate whether models can infer stability dynamics when observations are incomplete.
The prediction task remains unchanged.
---
## Class Imbalance Evaluation
Many real-world systems exhibit rare failure events.
Datasets may include variants with different class distributions.
Supported imbalance regimes:
- balanced (50 / 50)
- mild imbalance (70 / 30)
- severe imbalance (90 / 10)
- extreme imbalance (99 / 1)
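An imbalance variant can be produced by downsampling one class to a target prevalence. In this sketch the positive (unstable) class is assumed to be the rare one, so `pos_frac` values of 0.5, 0.3, 0.1, and 0.01 correspond to the four regimes above; the helper is hypothetical.

```python
# Downsample the positive class so that positives make up ~pos_frac
# of the resulting dataset. Assumes binary labels in a `label` column.
import pandas as pd

def rebalance(df: pd.DataFrame, pos_frac: float,
              label: str = "label", seed: int = 0) -> pd.DataFrame:
    pos = df[df[label] == 1]
    neg = df[df[label] == 0]
    # Keep all negatives; pick n_pos so pos / (pos + neg) ≈ pos_frac.
    n_pos = int(round(pos_frac * len(neg) / (1 - pos_frac)))
    pos_sample = pos.sample(n=min(n_pos, len(pos)), random_state=seed)
    return pd.concat([neg, pos_sample]).sample(frac=1, random_state=seed)
```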
Purpose:
Evaluate whether models detect true instability rather than relying on class prevalence.
Accuracy alone is insufficient under imbalance: in the 99 / 1 regime, a model that always predicts the majority class reaches 99% accuracy while detecting no instability at all.
Recommended metrics:
- precision
- recall
- F1 score
---
# Transfer Stability Score
To summarize transfer performance, the benchmark defines the **Transfer Stability Score (TSS)**.
Definition:
TSS = mean F1 score across all transfer evaluation tasks.
Interpretation:
High TSS indicates that the model has learned stability reasoning that generalizes across systems.
Low TSS suggests the model relies on dataset-specific patterns.
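The definition above reduces to a simple average. A minimal sketch, assuming per-task F1 scores are collected in a mapping keyed by (source, target) pair:

```python
# Transfer Stability Score: mean F1 across all transfer evaluation tasks.
def transfer_stability_score(task_f1_scores: dict) -> float:
    """task_f1_scores maps (source, target) -> F1 on the target dataset."""
    if not task_f1_scores:
        raise ValueError("no transfer tasks scored")
    return sum(task_f1_scores.values()) / len(task_f1_scores)
```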
---
# Benchmark Objective
The Clarus benchmark evaluates whether models can detect instability dynamics across complex systems.
The benchmark tests five core capabilities:
1. pattern detection within individual datasets
2. interaction reasoning across variables
3. trajectory reasoning over time
4. robustness to incomplete observation
5. cross-domain stability reasoning
Models that succeed across all levels demonstrate the ability to reason about **latent stability geometry** rather than simple statistical correlations.