SetFit with BAAI/bge-base-en-v1.5

This is a SetFit model for text classification. It uses BAAI/bge-base-en-v1.5 as the Sentence Transformer embedding model and a LogisticRegression instance as the classification head.

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
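The contrastive step above works on sentence pairs built from the few-shot examples: two texts with the same label form a positive pair, two with different labels a negative pair. A minimal, simplified sketch of that pair construction (the actual library also applies sampling strategies such as the oversampling used for this model):

```python
import itertools

def make_contrastive_pairs(texts, labels):
    """Build (text_a, text_b, target) pairs for contrastive fine-tuning:
    same-label pairs get target 1.0, cross-label pairs get 0.0."""
    pairs = []
    for (t1, l1), (t2, l2) in itertools.combinations(zip(texts, labels), 2):
        pairs.append((t1, t2, 1.0 if l1 == l2 else 0.0))
    return pairs

# Toy few-shot set: two labels, two examples each
texts = ["keep, clearly notable", "meets the criteria", "delete, cruft", "non-notable"]
labels = [1, 1, 0, 0]
pairs = make_contrastive_pairs(texts, labels)
# 6 pairs total: 2 positives (within-label) and 4 negatives (cross-label)
```

The fine-tuned embedding model then pulls same-label texts together in embedding space, which is what lets the simple LogisticRegression head separate the classes from so few examples.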

Model Details

Model Description

Model Sources

Model Labels

Label Examples
Label 1
  • "Fails : 'It has been the subject of multiple non-trivial published works whose source is independent from the musician/ensemble itself and reliable"
  • 'cant see the point of this article per '
  • 'more non-notable 40K cruft. This is about as
Label 2
  • "doesn't just contain plot summary, therefore does not violate . &mdash"
  • "A fairly notable bill. Whether it will eventually pass isn't the question, it's whether it's notable. The article could be sourced better, I suppose, but it meets all of the criteria at "
  • "It clearly meets the criteria so should be kept. We currently don't have sources to pass WP:GNG but it is rule of thumb to find someone who passes WP:GNG or not. Resources are always not online they could be found in local newspapers and cricket books, magazines"
Label 3
  • 'to Vancouver School Board per precedent as stated at '
  • 'to Monk (season 4), per and . No delete reason has been articulated above that precludes a merge or redirection'
  • "The
Label 0
  • '"Reasonably well published" seems a bit subjective. The subject has been co-author on five papers, but lead author on only one. This is a fairly typical (i.e.

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference.

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("research-dump/bge-base-en-v1.5_wikipedia_stance_wikipedia_stance")
# Run inference
preds = model("fails  and ")

Training Details

Training Set Metrics

Training set  Min  Median  Max
Word count    2    39.499  482

Label  Training Sample Count
0      71
1      637
2      247
3      45
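The word-count row above can be reproduced for any training split with the standard library alone (a sketch; the toy texts below are placeholders, not the actual training data):

```python
from statistics import median

def word_count_metrics(texts):
    """Min / median / max whitespace-token counts, as reported in the table."""
    counts = [len(t.split()) for t in texts]
    return min(counts), median(counts), max(counts)

sample = ["delete per nom", "clearly meets the notability criteria at WP:GNG", "keep"]
print(word_count_metrics(sample))  # (1, 3, 7)
```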

Training Hyperparameters

  • batch_size: (8, 2)
  • num_epochs: (5, 5)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 10
  • body_learning_rate: (1e-05, 1e-05)
  • head_learning_rate: 5e-05
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: True
  • use_amp: True
  • warmup_proportion: 0.1
  • l2_weight: 0.01
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: False
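These hyperparameters map directly onto SetFit's `TrainingArguments`; the tuple values pair the embedding fine-tuning phase with the classifier phase. A configuration sketch of how this run could be set up (the dataset loading is omitted, since the card does not name the training split):

```python
from setfit import SetFitModel, Trainer, TrainingArguments

args = TrainingArguments(
    batch_size=(8, 2),               # (embedding phase, classifier phase)
    num_epochs=(5, 5),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=10,
    body_learning_rate=(1e-05, 1e-05),
    head_learning_rate=5e-05,
    margin=0.25,
    end_to_end=True,
    use_amp=True,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)

# model = SetFitModel.from_pretrained("BAAI/bge-base-en-v1.5")
# trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
# trainer.train()
```

The loss (CosineSimilarityLoss) and distance metric (cosine_distance) listed above are SetFit's defaults, so they need not be passed explicitly.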

Training Results

Epoch Step Training Loss Validation Loss
0.0004 1 0.2788 -
0.2 500 0.2355 0.1984
0.4 1000 0.1157 0.1949
0.6 1500 0.0543 0.2121
0.8 2000 0.0331 0.1751
1.0 2500 0.0244 0.1868
1.2 3000 0.0159 0.1976
1.4 3500 0.0153 0.1794
1.6 4000 0.0144 0.1921
1.8 4500 0.0127 0.1830
2.0 5000 0.0115 0.1822
2.2 5500 0.012 0.1753
2.4 6000 0.0096 0.1868
2.6 6500 0.0095 0.1771
2.8 7000 0.0092 0.2017
3.0 7500 0.0101 0.1865
3.2 8000 0.0086 0.1906
3.4 8500 0.01 0.1820
3.6 9000 0.0087 0.1864
3.8 9500 0.0093 0.1949
4.0 10000 0.0097 0.1906
4.2 10500 0.0097 0.1962
4.4 11000 0.0091 0.1925
4.6 11500 0.0086 0.1892
4.8 12000 0.0076 0.1964
5.0 12500 0.0096 0.1953

Framework Versions

  • Python: 3.12.7
  • SetFit: 1.1.1
  • Sentence Transformers: 3.4.1
  • Transformers: 4.48.2
  • PyTorch: 2.6.0+cu124
  • Datasets: 3.2.0
  • Tokenizers: 0.21.0

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}