# Evangelism Retriever (MiniLM-L12-v2)

Part of Model 9: Evangelism & Apologetics Pipeline for bible.systems.
## Model Description

A fine-tuned `all-MiniLM-L12-v2` sentence transformer that retrieves relevant apologetics passages for a user query. It serves as the RAG retriever in the evangelism pipeline.
## Performance
- Cosine Pearson: 0.9011
- Cosine Spearman: 0.8577
- Training: 3 epochs with MultipleNegativesRankingLoss (MNRL)
## Pipeline Architecture

For non-evangelism intents, the retriever finds relevant passages from the apologetics corpus:

User Question -> [Intent Classifier] -> [Retriever] -> Top-5 passages -> [Generator]

The retriever encodes both queries and passages into 384-dimensional embeddings, then ranks passages by cosine similarity.
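The ranking step can be sketched with plain NumPy, using random vectors as stand-ins for the model's 384-dimensional embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for model outputs: one query and five passage embeddings,
# at the model's actual dimensionality of 384.
query_emb = rng.normal(size=384)
passage_embs = rng.normal(size=(5, 384))

# L2-normalise so that the dot product equals cosine similarity.
query_emb /= np.linalg.norm(query_emb)
passage_embs /= np.linalg.norm(passage_embs, axis=1, keepdims=True)

scores = passage_embs @ query_emb   # one cosine score per passage
ranking = np.argsort(scores)[::-1]  # passage indices, best first
```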
## Usage

```python
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("LoveJesus/evangelism-retriever-chirho")

query = "What evidence supports the resurrection?"
passages = [
    "Over 500 witnesses saw the risen Christ (1 Corinthians 15:6).",
    "The empty tomb was never disputed by Jesus' enemies.",
    "The disciples were transformed from fearful to bold after the resurrection.",
]

# The base model L2-normalises its embeddings, so the dot product
# below is the cosine similarity between query and passage.
query_emb = model.encode([query])
passage_embs = model.encode(passages)
scores = np.dot(passage_embs, query_emb.T).flatten()

# Print passages from most to least relevant.
for i in np.argsort(scores)[::-1]:
    print(f"  [{scores[i]:.3f}] {passages[i]}")
```
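For a larger corpus, it is worth encoding the passages once up front and reusing that index across queries. A minimal sketch under that assumption (the `build_index`/`retrieve` helpers and the `encode` callback are illustrative, not part of the model's API; the toy encoder merely stands in for `model.encode`):

```python
import numpy as np

def build_index(encode, passages):
    """Pre-encode the corpus once; later queries reuse this index."""
    embs = np.asarray(encode(passages), dtype=float)
    # Normalise defensively so dot products are true cosine similarities.
    return embs / np.linalg.norm(embs, axis=1, keepdims=True)

def retrieve(encode, index, passages, query, k=5):
    q = np.asarray(encode([query]), dtype=float)[0]
    q /= np.linalg.norm(q)
    scores = index @ q
    # argpartition finds the k best without fully sorting a large corpus.
    top = np.argpartition(scores, -k)[-k:]
    top = top[np.argsort(scores[top])[::-1]]
    return [(float(scores[i]), passages[i]) for i in top]

# Toy character-count encoder, only to exercise the helpers above:
def toy_encode(texts):
    return [[t.lower().count(c) for c in "abcdefghijklmnopqrstuvwxyz"]
            for t in texts]

corpus = ["aaa", "bbb", "abc"]
index = build_index(toy_encode, corpus)
results = retrieve(toy_encode, index, corpus, "aab", k=2)
```

With the real model, pass `model.encode` in place of `toy_encode`.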
## Training Data
10,622 query-passage pairs from apologetics Q&A, creation science evidence, historical evidence, miracle testimonies, and Spurgeon sermons.
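MNRL treats each (query, passage) pair in a batch as a positive and every other passage in the batch as an in-batch negative. A minimal NumPy sketch of the loss, assuming the sentence-transformers default similarity scale of 20 (the function name and shapes here are illustrative):

```python
import numpy as np

def mnrl_loss(query_embs, passage_embs, scale=20.0):
    """MultipleNegativesRankingLoss over one batch of aligned pairs."""
    q = query_embs / np.linalg.norm(query_embs, axis=1, keepdims=True)
    p = passage_embs / np.linalg.norm(passage_embs, axis=1, keepdims=True)
    sims = scale * (q @ p.T)  # (batch, batch) scaled cosine similarities
    # Cross-entropy where the "label" for query i is passage i; every
    # other passage in the row acts as an in-batch negative.
    log_probs = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))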
## Related Models

- `LoveJesus/evangelism-intent-classifier-chirho` - Intent classifier
- `LoveJesus/evangelism-generator-chirho` - Response generator (Qwen3-14B LoRA)
- `LoveJesus/evangelism-dataset-chirho` - Training dataset