# creator-reranker-v2

A reranker fine-tuned with preference learning (RewardTrainer with a Bradley-Terry pairwise loss) on top of creator-reranker-v1.
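Preference learning here means the model is optimized on pairs rather than absolute labels. A minimal sketch of the Bradley-Terry pairwise loss in the form typically used by reward-model trainers (the function name is illustrative, not from this repo):

```python
import torch
import torch.nn.functional as F

def bradley_terry_loss(chosen_scores, rejected_scores):
    # Bradley-Terry pairwise loss: maximize the probability that the
    # chosen example outscores the rejected one, i.e. minimize
    # -log(sigmoid(s_chosen - s_rejected)) averaged over the batch.
    return -F.logsigmoid(chosen_scores - rejected_scores).mean()
```

The loss shrinks toward zero as the score margin between chosen and rejected grows, so the model learns relative preferences rather than absolute targets.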
## Usage

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

model = AutoModelForSequenceClassification.from_pretrained("YourHFUsername/creator-reranker-v2")
tokenizer = AutoTokenizer.from_pretrained("YourHFUsername/creator-reranker-v2")
model.eval()

def score(query, fact):
    """Return a relevance score for a (query, fact) pair; higher is better."""
    inputs = tokenizer(query, fact, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        return model(**inputs).logits.squeeze().item()

print(score("works in fashion and beauty", "beauty influencer, collaborates with luxury brands"))
```
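To rerank a list of candidate facts for one query, sort them by model score. A small self-contained helper (pass the `score` function from above as `score_fn`; the candidate strings are made up for illustration):

```python
def rerank(query, facts, score_fn):
    # Score every candidate fact against the query, then return the
    # facts ordered from most to least relevant.
    scored = [(score_fn(query, fact), fact) for fact in facts]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [fact for _, fact in scored]

# Example (hypothetical candidates):
# rerank("works in fashion and beauty",
#        ["beauty influencer", "tech reviewer", "fashion blogger"],
#        score_fn=score)
```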
## Training

- Model lineage: BAAI/bge-reranker-base → fine-tuned → creator-reranker-v1 → preference training → creator-reranker-v2
- Training method: RewardTrainer (Bradley-Terry pairwise loss)
- Pairs: ~3,755 preference pairs (score gap ≥ 0.3) derived from scored (query, fact, label) data
- Best checkpoint: epoch 1 (early stopping)
- Label range: 0.0 (undesirable) → 1.0 (desirable)
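Pair derivation from scored data can be sketched as follows, assuming rows of the form (query, fact, label) with labels in [0.0, 1.0]; for each query, any two facts whose label gap is at least 0.3 form a (chosen, rejected) pair. This is an illustrative reconstruction of the filtering rule above, not the repo's actual script:

```python
from collections import defaultdict
from itertools import combinations

def build_pairs(rows, min_gap=0.3):
    # Group scored facts by query.
    by_query = defaultdict(list)
    for query, fact, label in rows:
        by_query[query].append((fact, label))

    # Emit a (chosen, rejected) pair for every same-query fact pair
    # whose label gap meets the threshold.
    pairs = []
    for query, facts in by_query.items():
        for (f1, l1), (f2, l2) in combinations(facts, 2):
            if abs(l1 - l2) >= min_gap:
                chosen, rejected = (f1, f2) if l1 > l2 else (f2, f1)
                pairs.append({"query": query, "chosen": chosen, "rejected": rejected})
    return pairs
```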
## Model tree

Base model: Anas989898/creator-reranker-v1