| auth_pressure (float64) | buffer (float64) | coupling (float64) | lag (float64) | reciprocity (float64) | drift (float64) | label_cascade (int64) |
|---|---|---|---|---|---|---|
| 0.22 | 0.86 | 0.18 | 0.20 | 0.88 | 0.10 | 0 |
| 0.35 | 0.78 | 0.30 | 0.28 | 0.84 | 0.12 | 0 |
| 0.48 | 0.66 | 0.44 | 0.38 | 0.78 | 0.18 | 0 |
| 0.56 | 0.60 | 0.55 | 0.44 | 0.74 | 0.22 | 0 |
| 0.62 | 0.52 | 0.64 | 0.52 | 0.66 | 0.28 | 1 |
| 0.70 | 0.44 | 0.72 | 0.60 | 0.58 | 0.34 | 1 |
| 0.76 | 0.36 | 0.80 | 0.66 | 0.52 | 0.40 | 1 |
| 0.82 | 0.30 | 0.86 | 0.72 | 0.46 | 0.46 | 1 |
| 0.68 | 0.48 | 0.76 | 0.58 | 0.62 | 0.32 | 1 |
# Clarus Adversarial Cascade Simulator v0.2

Adversarial boundary discovery for cascade-prone system configurations.

You provide a configuration. The simulator maps how close it is to systemic collapse.
## Interactive Demo

A live Gradio interface is available on Hugging Face Spaces.

Workflow:

- Input a baseline configuration (6 sliders)
- Score the configuration → view the risk assessment
- Run adversarial search → discover worst-case boundary states
- View the scenario pack → executable sandbox QA stress tests
- Apply the redesign → harden the configuration
- Re-score → confirm reduced cascade probability

The complete red-team workflow runs in the browser. No installation required.
## Core Configuration Variables

- `auth_pressure`: interaction pressure
- `buffer`: recovery margin
- `coupling`: cross-service dependency
- `lag`: latency exposure
- `reciprocity`: acknowledgement symmetry
- `drift`: configuration drift

All values are normalized to 0.0–1.0.
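As a quick illustration, a configuration is just a mapping of these six normalized variables. The `validate_config` helper below is a hypothetical sketch for range-checking inputs, not part of the simulator's API:

```python
def validate_config(cfg: dict) -> dict:
    """Check that all six variables are present and normalized to [0.0, 1.0].

    Hypothetical helper for illustration; not part of the simulator itself.
    """
    required = ["auth_pressure", "buffer", "coupling",
                "lag", "reciprocity", "drift"]
    for key in required:
        if key not in cfg:
            raise ValueError(f"missing variable: {key}")
        if not 0.0 <= cfg[key] <= 1.0:
            raise ValueError(f"{key} out of range: {cfg[key]}")
    return cfg

# Example baseline, matching the input configuration shown later in this README.
baseline = validate_config({
    "auth_pressure": 0.55, "buffer": 0.60, "coupling": 0.50,
    "lag": 0.40, "reciprocity": 0.75, "drift": 0.20,
})
```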
## What Search v0.2 Does

The simulator performs:

- multi-start search
- radius annealing
- gradient-informed stepping
- extraction of the top 5 boundary configurations
- delta L1 and L2 distance metrics
- a minimum-perturbation-to-RED estimate

This is collapse-surface discovery, not brute-force fuzzing.
It answers:

- Which nearby configurations are structurally dangerous?
- How far is the system from RED?
- Which parameters most strongly amplify instability?
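To make the search strategy concrete, here is a minimal sketch of multi-start, radius-annealed, gradient-informed search. Everything here is illustrative: `toy_risk` is an invented stand-in for the simulator's scorer, and the step sizes and annealing rate are assumed values, not the actual implementation.

```python
import random

def toy_risk(cfg: dict) -> float:
    """Toy stand-in for the cascade-probability scorer: pressure, coupling,
    lag and drift amplify risk; buffer and reciprocity dampen it."""
    raw = (cfg["auth_pressure"] + cfg["coupling"] + cfg["lag"] + cfg["drift"]
           - cfg["buffer"] - cfg["reciprocity"] + 2.0) / 4.0
    return min(max(raw, 0.0), 1.0)

def adversarial_search(cfg, steps=600, starts=8, radius=0.2, anneal=0.995):
    keys = sorted(cfg)
    candidates = []
    for _ in range(starts):
        # Multi-start: each restart begins from a random jitter of the baseline.
        current = {k: min(max(cfg[k] + random.uniform(-radius, radius), 0.0), 1.0)
                   for k in keys}
        r = radius
        for _ in range(steps // starts):
            # Gradient-informed stepping: probe each variable and keep
            # the direction that increases risk.
            base = toy_risk(current)
            grads = {}
            for k in keys:
                probe = dict(current)
                probe[k] = min(probe[k] + 1e-3, 1.0)
                grads[k] = toy_risk(probe) - base
            for k in keys:
                direction = 1.0 if grads[k] > 0 else -1.0
                current[k] = min(max(current[k] + direction * r * random.random(),
                                     0.0), 1.0)
            r *= anneal  # radius annealing: shrink the step size over time
        candidates.append((toy_risk(current), dict(current)))
    candidates.sort(key=lambda t: t[0], reverse=True)
    return candidates[:5]  # top 5 boundary configurations
```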
## Use Cases

### Pre-Deployment Validation

Test agent configurations before production rollout. Identify cascade vulnerabilities early.

### Red Team Automation

Automated boundary discovery. Comprehensive structural risk mapping. Faster than manual pentesting cycles.

### Architecture Comparison

Test multiple designs side by side. Rank them by cascade probability. Select the most resilient architecture.

### Continuous Monitoring

Integrate with CI/CD pipelines. Score every configuration change. Block risky deployments automatically.
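A CI gate along these lines takes only a few lines. In this sketch, `score_fn` stands in for the simulator's scorer and `RED_THRESHOLD` is an assumed cutoff, since the actual band boundaries are not published here:

```python
import json

RED_THRESHOLD = 0.70  # assumed RED cutoff; the real band boundaries may differ

def ci_gate(cfg: dict, score_fn) -> int:
    """Return a shell-style exit code: 1 blocks the deployment, 0 allows it."""
    probability = score_fn(cfg)
    result = {"cascade_probability": probability,
              "blocked": probability >= RED_THRESHOLD}
    print(json.dumps(result))  # machine-readable record for the pipeline log
    return 1 if result["blocked"] else 0
```

In a pipeline step this would be wired up as `sys.exit(ci_gate(cfg, score))`, so a RED-band configuration fails the build.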
## What Makes This Different

### vs Manual Red-Teaming

- Automated exploration
- Multi-start structural search
- Quantified cascade probability
- Measurable distance to failure

### vs Random Fuzzing

- Gradient-informed, not blind sampling
- Efficient boundary convergence
- Distance metrics, not pass/fail only

### vs Traditional Security Scanners

- Structural interaction modelling
- Detects cascade amplification
- Pre-deployment testing
- System-level risk assessment

This is configuration geometry, not vulnerability enumeration.
## Example Output

### Input Configuration

```json
{
  "auth_pressure": 0.55,
  "buffer": 0.60,
  "coupling": 0.50,
  "lag": 0.40,
  "reciprocity": 0.75,
  "drift": 0.20
}
```

### Risk Assessment

```json
{
  "cascade_probability": 0.32,
  "risk_band": "AMBER"
}
```
### Adversarial Search (600 steps, 8 starts)

```json
{
  "worst_case_probability": 0.87,
  "risk_band": "RED",
  "delta_l1": 0.45,
  "delta_l2": 0.23
}
```
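The delta metrics reported above are plain L1 and L2 distances between the baseline and the discovered worst case. A minimal sketch of how such values can be computed (the function name is illustrative):

```python
def delta_metrics(baseline: dict, worst: dict) -> dict:
    """L1 and L2 distance between two configurations over the baseline's keys."""
    diffs = [worst[k] - baseline[k] for k in baseline]
    delta_l1 = sum(abs(d) for d in diffs)        # total perturbation applied
    delta_l2 = sum(d * d for d in diffs) ** 0.5  # straight-line distance
    return {"delta_l1": round(delta_l1, 2), "delta_l2": round(delta_l2, 2)}
```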
### Scenario Pack (Sandbox QA Tests)

```json
{
  "scenario_id": "cross_service_dependency_stress",
  "steps": [
    "Increase concurrent workloads in sandbox",
    "Introduce controlled dependency fan-out",
    "Track retry amplification and error propagation"
  ],
  "stop_conditions": [
    "Retry storm signatures",
    "Cascading timeouts across services"
  ]
}
```
### Mitigation Pack

```json
{
  "change": "Reduce synchronous coupling",
  "action": "Add isolation pools or async boundaries",
  "expected_effect": "Lower propagation probability"
}
```
## Batch Testing Capability

You can rank multiple configurations:

```python
def batch_test(configs: list) -> list:
    """Score each configuration and rank by cascade probability, worst first.

    `score` is the simulator's scoring function (see app.py).
    """
    results = []
    for cfg in configs:
        risk = score(cfg)
        results.append((cfg, risk))
    results.sort(key=lambda x: x[1]["cascade_probability"], reverse=True)
    return results
```
Applications:

- Compare identity models
- Evaluate deployment variants
- Select the safest agent orchestration design
## Exportable Assessment Reports

Structured export supports documentation and compliance:

```python
def export_report(cfg, risk, boundary_configs, scenarios, mitigations, min_delta):
    """Bundle a complete assessment into one serializable report dict."""
    report = {
        "configuration": cfg,
        "risk_assessment": risk,
        "boundary_configs": boundary_configs,
        "test_scenarios": scenarios,
        "mitigations": mitigations,
        "min_delta_analysis": min_delta,
    }
    return report
```
Supports:

- Security review
- Governance reporting
- CI/CD validation records
## Distance to RED

The simulator estimates:

- the minimum perturbation required to enter RED
- sensitivity to coupling and buffer shifts
- the structural fragility margin

Conceptual extension: estimate time-to-RED under drift-velocity assumptions. This shifts the tool from reactive detection to proactive planning.
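Under those assumptions, the simplest form of that estimate is a margin divided by a rate. Both inputs in this sketch are hypothetical and would need calibration against real drift measurements:

```python
def time_to_red(min_delta_to_red: float, drift_velocity: float) -> float:
    """Rough planning horizon: how long until drift alone pushes the
    configuration across the RED boundary, assuming a constant drift rate.
    Units are whatever the drift velocity is measured in (e.g. delta per day).
    """
    if drift_velocity <= 0.0:
        return float("inf")  # no drift toward RED under current assumptions
    return min_delta_to_red / drift_velocity
```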
## Files

- `data/train.csv`: synthetic configuration dataset
- `data/tester.csv`: evaluation dataset
- `scorer.py`: prints accuracy, precision, recall, F1, and the confusion matrix
- `app.py`: Gradio interface implementing scoring, adversarial search, redesign, and re-testing
## Scope

This repository demonstrates structural cascade geometry using synthetic data. It does not claim telemetry calibration. Small samples reveal the structure; production-scale data determines operational exposure.
## License

MIT
## Structural Note

This dataset identifies a measurable coupling pattern associated with systemic instability. The sample demonstrates the geometry; production-scale data determines operational exposure.
## What Production Deployment Enables

- 50K–1M row calibrated datasets
- Multi-node coupling analysis
- Real-time coherence monitoring
- Early warning before cascade events
- Collapse-surface modelling
- CI/CD integration
## Enterprise & Research Collaboration

Clarus develops production-scale coherence monitoring infrastructure for complex AI and institutional systems.

Contact: team@clarusinvariant.com

Instability is detectable. Governance determines whether it propagates.