creator-reranker-v2

A fine-tuned reranker trained with preference learning (RewardTrainer with Bradley-Terry pairwise loss) on top of creator-reranker-v1.

Usage

from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

model     = AutoModelForSequenceClassification.from_pretrained("YourHFUsername/creator-reranker-v2")
tokenizer = AutoTokenizer.from_pretrained("YourHFUsername/creator-reranker-v2")
model.eval()  # disable dropout for deterministic inference

def score(query, fact):
    """Return a relevance score for a (query, fact) pair; higher is better."""
    inputs = tokenizer(query, fact, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        return model(**inputs).logits.squeeze().item()

print(score("works in fashion and beauty", "beauty influencer, collaborates with luxury brands"))
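To rank a batch of candidate facts for one query, apply the score function above to each candidate and sort by the result. A minimal sketch (the rerank helper below is illustrative, not part of the released model):

```python
def rerank(query, facts, score_fn):
    """Sort candidate facts by descending relevance score for a query."""
    return sorted(facts, key=lambda fact: score_fn(query, fact), reverse=True)

# Usage with the model-backed score() defined above:
# rerank("works in fashion and beauty", candidate_facts, score)
```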

Training

  • Base model: BAAI/bge-reranker-base → fine-tuned → creator-reranker-v1 → preference training → creator-reranker-v2
  • Training method: RewardTrainer (Bradley-Terry pairwise loss)
  • Pairs: ~3,755 preference pairs (gap ≥ 0.3) derived from scored (query, fact, label) data
  • Best checkpoint: epoch 1 (early stopping)
  • Label range: 0.0 (undesirable) → 1.0 (desirable)
  • Model size: 0.3B params (F32, safetensors)
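The pair construction described above can be sketched as follows. The exact pairing scheme isn't documented, so this assumes one chosen/rejected pair for every two facts under the same query whose labels differ by at least 0.3; the helper name and dict keys are illustrative (RewardTrainer consumes chosen/rejected pairs):

```python
from collections import defaultdict

def build_preference_pairs(rows, min_gap=0.3):
    """rows: iterable of (query, fact, label) tuples with label in [0.0, 1.0].

    Emits {"query", "chosen", "rejected"} dicts where the chosen fact's
    label exceeds the rejected fact's label by at least min_gap.
    """
    by_query = defaultdict(list)
    for query, fact, label in rows:
        by_query[query].append((fact, label))

    pairs = []
    for query, items in by_query.items():
        for fact_hi, label_hi in items:
            for fact_lo, label_lo in items:
                if label_hi - label_lo >= min_gap:
                    pairs.append({"query": query, "chosen": fact_hi, "rejected": fact_lo})
    return pairs
```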

Model tree for Anas989898/creator-reranker-v2
