# aria_llm/__init__.py
"""
ARIA: Adaptive Reliability & Integrity Attachment (v0.3)
=========================================================
A lightweight, attachable module for LLMs that addresses four structural
failures identified in frontier AI systems:

1. Compound Error Accumulation (R^n decay)
2. Median Trap / Lack of "Taste" (drift toward statistical average)
3. Semantic Drift (forgetting the "why" in long-horizon tasks)
4. Logic Looping (repeating failed approaches)
ARIA works like LoRA, but instead of adapting weights for a task, it hooks into
the inference pipeline to monitor and correct these failure modes in real time.

v0.3 changes:
- Calibration profile save/load (JSON)
- Auto-tune correction_scale from calibration variance
- GSM8K + code generation benchmarks
- SOTA comparison with ITI, CAA, CAST, RepE, A-LQR, ReflCtrl, ReProbe
Usage:

    from aria_llm import ARIA, ARIAConfig

    aria = ARIA.attach(model, tokenizer)
    output = model.generate(input_ids, max_new_tokens=500)
    print(aria.report_text())
    aria.detach()

    # Save calibration for reuse
    aria.save_calibration("profiles/gpt2_calibration.json")

    # Load on next run (skip calibration phase)
    aria2 = ARIA.attach(model, tokenizer)
    aria2.load_calibration("profiles/gpt2_calibration.json")
"""
__version__ = "0.3.0"
from aria_llm.config import ARIAConfig
from aria_llm.core import ARIA
from aria_llm.detectors import (
    CompoundErrorDetector,
    SemanticDriftDetector,
    LogicLoopDetector,
    MedianTrapDetector,
)
from aria_llm.correctors import (
    SteeringCorrector,
    GoalAnchor,
    TrajectoryDiverger,
    TasteAmplifier,
)
from aria_llm.dashboard import ARIADashboard
__all__ = [
    "ARIA",
    "ARIAConfig",
    "CompoundErrorDetector",
    "SemanticDriftDetector",
    "LogicLoopDetector",
    "MedianTrapDetector",
    "SteeringCorrector",
    "GoalAnchor",
    "TrajectoryDiverger",
    "TasteAmplifier",
    "ARIADashboard",
]