# lmprobe: Linear Probe on Qwen2.5-1.5B
A truth probe for 'X is larger than Y' statements. It reaches perfect accuracy (100%), indicating that this kind of structural/relational knowledge is robust at the 1.5B scale.
## Classes
- 0: false_statement
- 1: true_statement
## Usage

```python
from lmprobe import LinearProbe

probe = LinearProbe.from_hub("latent-lab/larger-than-truth-qwen2.5-1.5b", trust_classifier=True)
predictions = probe.predict(["your text here"])
```
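Continuing the snippet above, predictions can be mapped back to the class names listed under Classes. This is a minimal sketch that assumes `predict` returns the integer class ids 0 and 1 described there.

```python
# Sketch: map integer predictions back to the class names listed under "Classes".
# Assumes probe.predict returns integer class ids (0 or 1).
from lmprobe import LinearProbe

probe = LinearProbe.from_hub("latent-lab/larger-than-truth-qwen2.5-1.5b", trust_classifier=True)
id2label = {0: "false_statement", 1: "true_statement"}

texts = [
    "A whale is larger than a goldfish.",
    "A goldfish is larger than a whale.",
]
for text, pred in zip(texts, probe.predict(texts)):
    print(f"{id2label[int(pred)]}: {text}")
```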
## Probe Details
- Base model: Qwen/Qwen2.5-1.5B
- Model revision: 8faed761d45a263340a0528343f099c05c9a4323
- Layers: all (0–27, 28 layers)
- Pooling: last_token
- Classifier: logistic_regression
- Task: classification
- Random state: 42
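For intuition, the sketch below reconstructs the recipe these details describe: extract the last-token hidden state from one layer of Qwen/Qwen2.5-1.5B and fit a scikit-learn logistic regression on top. It is not the lmprobe implementation; the layer choice and the toy training examples are illustrative placeholders.

```python
# Minimal reconstruction of the probing recipe (last_token pooling +
# logistic_regression). Not the lmprobe implementation.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "Qwen/Qwen2.5-1.5B"
LAYER = 14  # one of the 28 transformer layers; lmprobe trains on all of them

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

def last_token_features(texts, layer=LAYER):
    """Hidden state of the last non-padding token at the chosen layer."""
    batch = tokenizer(texts, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**batch)
    # hidden_states[0] is the embedding output; layers 0-27 sit at indices 1-28
    hidden = out.hidden_states[layer + 1]
    last_idx = batch["attention_mask"].sum(dim=1) - 1
    return hidden[torch.arange(hidden.size(0)), last_idx].float().numpy()

# Toy stand-ins for the 792 + 792 training examples listed below
train_texts = [
    "An elephant is larger than a mouse.",
    "A mouse is larger than an elephant.",
]
train_labels = [1, 0]  # 1 = true_statement, 0 = false_statement

clf = LogisticRegression(max_iter=1000, random_state=42)
clf.fit(last_token_features(train_texts), train_labels)
print(clf.predict(last_token_features(["The sun is larger than the moon."])))
```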
## Evaluation
| Metric | Value |
|---|---|
| accuracy | 1.0000 |
| auroc | 1.0000 |
| f1 | 1.0000 |
| precision | 1.0000 |
| recall | 1.0000 |
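These values correspond to standard scikit-learn metrics. The sketch below shows how they can be computed, assuming you have the held-out labels, the probe's hard predictions, and its positive-class scores; the arrays are toy placeholders, not the actual 396-example evaluation split.

```python
# Compute the metrics reported above with scikit-learn. The arrays are
# placeholders; a real evaluation would use the held-out split.
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)

y_true = [1, 0, 1, 0]                # ground-truth class ids
y_pred = [1, 0, 1, 0]                # probe.predict(...) output
y_score = [0.98, 0.03, 0.91, 0.12]   # positive-class probability or decision score

print("accuracy :", accuracy_score(y_true, y_pred))
print("auroc    :", roc_auc_score(y_true, y_score))
print("f1       :", f1_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
```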
## Training Data
- Positive examples: 792
- Negative examples: 792
- Positive hash: sha256:4037705894f0a698a994bea48700aec726e8a67ae753295bf6fbf59339f3e4b1
- Negative hash: sha256:5212981b2ba107e1d5e4cc77e851ba81f91b56433b6f30fa6f02902b0d23399a
- Evaluation samples: 396
- Evaluation hash: sha256:dfff0237de05b45c8239ea3db1737da926b3b46da0969b663ae370d85b0611cc
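To check a local copy of the data against these hashes, a file-level SHA-256 can be computed as below. The exact artifact that was hashed (for example a JSONL file per split) and the file names are assumptions.

```python
# Hypothetical integrity check against the hashes listed above.
# File names are placeholders; the hashed artifact format is an assumption.
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return "sha256:" + h.hexdigest()

print(sha256_of_file("positive_examples.jsonl"))  # compare to the positive hash
print(sha256_of_file("negative_examples.jsonl"))  # compare to the negative hash
```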
## Reproducibility
- lmprobe version: 0.5.8
- Python: 3.12.3
- PyTorch: 2.10.0+cu128
- scikit-learn: 1.8.0
- transformers: 5.3.0
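A quick sanity check that a local environment matches these versions; the distribution names (in particular lmprobe) are assumptions about how the packages are published.

```python
# Compare installed package versions against the ones listed above.
# Distribution names, especially "lmprobe", are assumptions.
from importlib.metadata import PackageNotFoundError, version

pinned = {
    "lmprobe": "0.5.8",
    "torch": "2.10.0+cu128",
    "scikit-learn": "1.8.0",
    "transformers": "5.3.0",
}
for name, expected in pinned.items():
    try:
        installed = version(name)
    except PackageNotFoundError:
        installed = "not installed"
    status = "OK" if installed == expected else "MISMATCH"
    print(f"{name}: expected {expected}, found {installed} [{status}]")
```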