Evangelism Retriever (MiniLM-L12-v2)

Part of Model 9: Evangelism & Apologetics Pipeline for bible.systems.

Model Description

A sentence transformer fine-tuned from all-MiniLM-L12-v2 that retrieves relevant apologetics passages for a user query. It serves as the RAG retriever in the evangelism pipeline.

Performance

  • Cosine Pearson: 0.9011
  • Cosine Spearman: 0.8577
  • Training: 3 epochs with MultipleNegativesRankingLoss (MNRL)
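MultipleNegativesRankingLoss treats, for each query in a batch, its paired passage as the positive and every other passage in the batch as a negative. A minimal numpy sketch of the objective (the scale factor of 20 mirrors the sentence-transformers default; the function and its inputs are illustrative, not the training code used here):

```python
import numpy as np

def mnrl_loss(query_embs, passage_embs, scale=20.0):
    """In-batch-negatives loss: the positive passage for query i sits at
    row i of passage_embs; all other rows act as negatives."""
    # L2-normalize so the dot product is cosine similarity
    q = query_embs / np.linalg.norm(query_embs, axis=1, keepdims=True)
    p = passage_embs / np.linalg.norm(passage_embs, axis=1, keepdims=True)
    sims = scale * (q @ p.T)                      # (batch, batch) similarity matrix
    # Softmax cross-entropy with the diagonal (true pairs) as the target class
    logsumexp = np.log(np.exp(sims).sum(axis=1))
    return float(np.mean(logsumexp - np.diag(sims)))
```

When each query matches its own passage the loss is near zero; when the pairing is scrambled it grows, which is what drives the embeddings of true query-passage pairs together during training.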

Pipeline Architecture

For non-evangelism intents, the retriever finds relevant passages from the apologetics corpus:

User Question -> [Intent Classifier] -> [Retriever] -> Top-5 passages -> [Generator]

The retriever encodes both queries and passages into 384-dimensional embeddings, then uses cosine similarity for ranking.
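Once the embeddings are L2-normalized, ranking reduces to a single matrix product followed by a sort. A toy sketch with random vectors (only the 384-dimensional size comes from the model; the data is synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
query = rng.normal(size=(1, 384))     # one query embedding
corpus = rng.normal(size=(100, 384))  # 100 passage embeddings

# L2-normalize so the dot product equals cosine similarity
query /= np.linalg.norm(query, axis=1, keepdims=True)
corpus /= np.linalg.norm(corpus, axis=1, keepdims=True)

scores = (corpus @ query.T).ravel()   # cosine score per passage, shape (100,)
top5 = np.argsort(scores)[::-1][:5]   # indices of the 5 best passages
```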

Usage

from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("LoveJesus/evangelism-retriever-chirho")

query = "What evidence supports the resurrection?"
passages = [
    "Over 500 witnesses saw the risen Christ (1 Corinthians 15:6).",
    "The empty tomb was never disputed by Jesus' enemies.",
    "The disciples were transformed from fearful to bold after the resurrection.",
]

query_emb = model.encode([query], normalize_embeddings=True)
passage_embs = model.encode(passages, normalize_embeddings=True)
# With unit-length embeddings, the dot product equals cosine similarity
scores = np.dot(passage_embs, query_emb.T).flatten()

for i in np.argsort(scores)[::-1]:
    print(f"  [{scores[i]:.3f}] {passages[i]}")
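In a deployed pipeline the apologetics corpus should be encoded once and the embeddings reused across queries. A hedged sketch of that pattern (the helper name retrieve_top_k is ours, not part of the library; both inputs are assumed already L2-normalized, e.g. via normalize_embeddings=True):

```python
import numpy as np

def retrieve_top_k(query_emb, passage_embs, k=5):
    """Rank precomputed, L2-normalized passage embeddings against one
    L2-normalized query embedding; return (index, score) pairs, best first."""
    scores = passage_embs @ query_emb          # cosine scores, shape (n_passages,)
    top = np.argsort(scores)[::-1][:k]
    return [(int(i), float(scores[i])) for i in top]
```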

Training Data

10,622 query-passage pairs from apologetics Q&A, creation science evidence, historical evidence, miracle testimonies, and Spurgeon sermons.

Model Details

  • Model size: 33.4M params
  • Tensor type: F32 (Safetensors)