mekosotto and Claude Opus 4.7 (1M context) committed on
Commit 1c727f2 · 1 Parent(s): 42366a8

feat(frontend): trust caption — precision-at-confidence below decision card

Surfaces the calibration bin from /predict/bbb under the confidence
progress bar, e.g. 'On the test set, predictions with ≥75% confidence
have a precision of 92% (n=146).' Silently skips when the response has
no calibration field (legacy models). The support == 0 path shows an
explicit hedge instead of misleading zeros.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

Files changed (1)
  1. src/frontend/app.py +18 -0
src/frontend/app.py CHANGED
@@ -482,6 +482,24 @@ def _render_prediction_card(result: dict) -> None:
     )
     st.progress(float(result["confidence"]))
 
+    # Trust caption — precision-at-confidence from the held-out 20% test split.
+    # Silently skip when the API response has no calibration field (legacy models).
+    calibration = result.get("calibration")
+    if calibration is not None:
+        threshold_pct = round(calibration["threshold"] * 100)
+        precision_pct = round(calibration["precision"] * 100)
+        support = calibration["support"]
+        if support == 0:
+            st.caption(
+                "📊 No held-out test samples in this confidence bin; "
+                "calibration information is unavailable."
+            )
+        else:
+            st.caption(
+                f"📊 On the test set, predictions with ≥{threshold_pct}% confidence "
+                f"have a precision of **{precision_pct}%** (n={support})."
+            )
+
     # SHAP attributions chart
     n_features = len(result["top_features"])
     st.markdown(
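For reference, a minimal sketch of the calibration payload shape the new caption logic reads. The field names (`threshold`, `precision`, `support`) are taken from the diff; the values below are illustrative only, not output of the real /predict/bbb endpoint:

```python
# Hypothetical /predict/bbb response fragment; values are made up for illustration.
result = {
    "confidence": 0.81,
    "calibration": {"threshold": 0.75, "precision": 0.92, "support": 146},
}

# Same rounding the diff applies before rendering the caption.
calibration = result.get("calibration")
threshold_pct = round(calibration["threshold"] * 100)  # e.g. 75
precision_pct = round(calibration["precision"] * 100)  # e.g. 92
support = calibration["support"]

caption = (
    f"On the test set, predictions with >={threshold_pct}% confidence "
    f"have a precision of {precision_pct}% (n={support})."
)
print(caption)
```

A `result.get("calibration")` returning `None` (the legacy-model case) would skip the caption entirely, which is why the diff guards with `if calibration is not None`.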