Text Classification
Transformers
ONNX
Safetensors
English
modernbert
rag
governance
hallucination-detection
epistemic-honesty
classification
fitz-gov
pyrrho
text-embeddings-inference
Instructions to use yafitzdev/pyrrho-modernbert-base-v1 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.

- Libraries
  - Transformers

How to use yafitzdev/pyrrho-modernbert-base-v1 with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="yafitzdev/pyrrho-modernbert-base-v1")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("yafitzdev/pyrrho-modernbert-base-v1")
model = AutoModelForSequenceClassification.from_pretrained("yafitzdev/pyrrho-modernbert-base-v1")
```

- Notebooks
  - Google Colab
  - Kaggle
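Once loaded, the pipeline takes a single string, so the `(query, retrieved contexts)` pair must be packed into one input. A minimal sketch; the blank-line concatenation format below is an assumption, not the card's documented preprocessing:

```python
def build_input(query: str, contexts: list[str]) -> str:
    """Join a query with its retrieved contexts into one classifier input.

    Assumption: the model expects the pair as a single concatenated string;
    check the card's preprocessing notes for the exact training-time format.
    """
    return query + "\n\n" + "\n\n".join(contexts)

def classify(query: str, contexts: list[str]) -> dict:
    """Run the verdict classifier on one (query, contexts) pair."""
    from transformers import pipeline  # deferred: heavyweight import

    pipe = pipeline("text-classification",
                    model="yafitzdev/pyrrho-modernbert-base-v1")
    return pipe(build_input(query, contexts), truncation=True)[0]

# Example call (downloads the model weights on first use):
# classify("When did the bridge open?",
#          ["The bridge opened to traffic in 1932.",
#           "A second deck was added in 1958."])
```

The returned dict has `label` and `score` keys, the pipeline's standard text-classification output.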
> Decide whether your retrieved sources support a confident answer, contradict each other, or simply don't contain it — **without an LLM call**.
Named for [Pyrrho of Elis](https://en.wikipedia.org/wiki/Pyrrho), the Greek philosopher whose school practiced suspension of judgment when evidence was insufficient.
This is a fine-tune of [`answerdotai/ModernBERT-base`](https://huggingface.co/answerdotai/ModernBERT-base) on [fitz-gov](https://github.com/yafitzdev/fitz-gov) v5.1 for **3-class RAG governance classification**: given a `(query, retrieved contexts)` pair, predicts one of:
| Verdict | Meaning |
|---|---|

Validated on the [fitz-gov](https://github.com/yafitzdev/fitz-gov) v5.1 eval split:
| Abstain recall | **92.94 ± 1.11** | 86.5 | **+6.44** |
| Macro F1 | 86.10 ± 0.80 | n/a | — |

Every margin is multiple standard deviations larger than seed noise — not a lucky-run artifact.
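Downstream, each verdict typically gates a different action in the RAG pipeline. A minimal routing sketch; the lowercase label strings are assumptions matching the three class names in the metrics (trustworthy, disputed, abstain) and should be checked against `id2label` in the model's `config.json`:

```python
# Map each verdict to a downstream action in a RAG pipeline.
# Label strings are assumptions; verify against the model's id2label.
ACTIONS = {
    "trustworthy": "generate_answer",   # sources support a confident answer
    "disputed": "surface_conflict",     # sources contradict each other
    "abstain": "decline_to_answer",     # sources don't contain the answer
}

def route(verdict: str) -> str:
    """Return the pipeline action for a predicted verdict."""
    try:
        return ACTIONS[verdict.lower()]
    except KeyError:
        raise ValueError(f"unexpected verdict: {verdict!r}") from None

print(route("abstain"))  # prints: decline_to_answer
```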
### Uncalibrated (argmax) numbers, for reference

| Metric | pyrrho v1 (uncalibrated) |
|---|---|
| Overall accuracy | 86.82 ± 1.54 |
| Macro F1 | 86.63 ± 1.32 |
| False-trustworthy rate | 8.33 ± 2.86 |
| Trustworthy recall | 83.23 ± 5.33 |
| Disputed recall | 93.09 ± 3.34 |
| Abstain recall | 88.81 ± 2.76 |
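The calibrated and argmax numbers differ only in how logits become a verdict. A sketch of both decision rules; the threshold value and fallback policy below are illustrative placeholders, since the card does not publish the actual calibration:

```python
import numpy as np

LABELS = ["trustworthy", "disputed", "abstain"]

def softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def argmax_verdict(logits: np.ndarray) -> str:
    # Uncalibrated rule: take the highest-scoring class as-is.
    return LABELS[int(np.argmax(logits))]

def thresholded_verdict(logits: np.ndarray, min_trust: float = 0.7) -> str:
    # Hypothetical calibrated rule: only emit "trustworthy" when its
    # probability clears a threshold; otherwise fall back to the more
    # probable of the two cautious classes. The calibration actually used
    # for the numbers above is not published here.
    p = softmax(logits)
    top = int(np.argmax(p))
    if LABELS[top] == "trustworthy" and p[top] < min_trust:
        return LABELS[1 + int(np.argmax(p[1:]))]
    return LABELS[top]

logits = np.array([1.2, 0.9, 1.1])
print(argmax_verdict(logits))       # -> trustworthy
print(thresholded_verdict(logits))  # -> abstain (0.38 confidence, below 0.7)
```

Trading some trustworthy recall for a lower false-trustworthy rate, as a threshold like this does, matches the direction of the calibrated deltas above, but the real rule may differ.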
---
## Known limitations
## Dataset
This model is trained and evaluated on [**fitz-gov V5.1**](https://github.com/yafitzdev/fitz-gov), a 2,980-case benchmark for RAG governance (epistemic honesty). The eval split (584 cases) is a stratified 20% hold-out from `tier1_core` (2,920 cases, 62.7% hard difficulty, 17 domains, 113+ subcategories).
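The stratified hold-out can be sketched in a few lines; the case fields and the single `label` stratification key below are illustrative assumptions, not the benchmark's actual split script:

```python
import random
from collections import defaultdict

def stratified_split(cases, key, test_frac=0.2, seed=0):
    """80/20 hold-out that preserves per-stratum proportions.

    `key` extracts the stratification value; the real fitz-gov split may
    stratify on more fields (difficulty, domain) than this sketch shows.
    """
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for case in cases:
        by_stratum[key(case)].append(case)
    train, held_out = [], []
    for stratum in by_stratum.values():
        rng.shuffle(stratum)
        n_eval = round(len(stratum) * test_frac)
        held_out.extend(stratum[:n_eval])
        train.extend(stratum[n_eval:])
    return train, held_out

# Toy stand-in for tier1_core; field names are illustrative assumptions.
cases = [{"id": i, "label": lab}
         for i, lab in enumerate(["trustworthy", "disputed", "abstain"] * 20)]
train, eval_split = stratified_split(cases, key=lambda c: c["label"])
print(len(train), len(eval_split))  # -> 48 12
```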
fitz-gov commit at training time: `3e1d22e22fdff726330a0d70503b07f73dacf817`