SetFit with BAAI/bge-base-en-v1.5
This is a SetFit model that can be used for Text Classification. This SetFit model uses BAAI/bge-base-en-v1.5 as the Sentence Transformer embedding model. A LogisticRegression instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
- Fine-tuning a Sentence Transformer with contrastive learning.
- Training a classification head with features from the fine-tuned Sentence Transformer (see the sketch after this list).
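Both phases are handled by the setfit `Trainer`. The snippet below is a minimal sketch of that workflow, not the exact script used to train this model; the training texts and labels are placeholders.

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder few-shot data; replace with your own labeled examples.
train_dataset = Dataset.from_dict({
    "text": [
        "Reasoning:\nThe answer is grounded in the document.\nEvaluation: Good",
        "Reasoning:\nThe answer contradicts the document.\nEvaluation: Bad",
    ],
    "label": [1, 0],
})

# Start from the same Sentence Transformer body; a LogisticRegression head is attached by default.
model = SetFitModel.from_pretrained("BAAI/bge-base-en-v1.5")

trainer = Trainer(
    model=model,
    args=TrainingArguments(batch_size=16, num_epochs=1),
    train_dataset=train_dataset,
)
trainer.train()  # contrastive fine-tuning of the body, then fitting of the classification head
```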
Model Details
Model Description
Model Sources
Model Labels
| Label | Examples |
|:------|:---------|
| 0 | <ul><li>'Reasoning:\nThe provided answer "It provides a comprehensive understanding of the situation" partially aligns with the intended concept, suggesting a holistic evaluation. However, it lacks direct specificity in relation to the document's key aspects, which emphasize deciding malicious behavior and subsequent actions, such as remediation, based on a collective assessment of significant machines, behaviors, and users affected. The answer misses the emphasis on prioritization and context provided by evaluating all factors collectively.\n\nEvaluation: Bad'</li><li>'Reasoning:\nThe given document explicitly outlines the steps required to exclude a MalOp during the remediation process. The answer, however, incorrectly states that the information is not covered in the document and suggests referring to additional sources. This contradicts the clear instructions provided in the document.\n\nEvaluation: Bad'</li><li>'Reasoning:\nThe answer directly addresses the question by stating that a quarantined file should be un-quarantined before being submitted, which aligns with the information provided in the document. The specific instruction to un-quarantine the file first if it is quarantined is accurately reflected in the response.\nEvaluation: Good'</li></ul> |
| 1 | <ul><li>'Reasoning:\nThe answer is precise, correct, and directly addresses the question. It matches the information provided in the "Result" section of the document, which states that the computer will generate a dump file containing the entire contents of the sensor's RAM at the time of the failure.\n\nEvaluation: Good'</li><li>'Reasoning:\nThe provided answer "To identify cyber security threats" directly aligns with the information in the document which talks about the primary function of the platform being to identify cyber security threats using advanced technologies and methods.\nEvaluation: Good'</li><li>'Reasoning:\nThere is no fifth scenario detailed in the provided document, and the answer correctly identifies that the specific query is not covered in the provided information.\nEvaluation: Good'</li></ul> |
Evaluation
Metrics
| Label | Accuracy |
|:------|:---------|
| all   | 0.5493   |
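The figure above is the accuracy on the model's evaluation split. A comparable check on your own labeled data can be run with scikit-learn; the sketch below is illustrative, and the evaluation texts and labels are placeholders.

```python
from setfit import SetFitModel
from sklearn.metrics import accuracy_score

model = SetFitModel.from_pretrained(
    "Netta1994/setfit_baai_cybereason_gpt-4o_improved-cot_chat_few_shot_only_reasoning_1726752428.08"
)

# Placeholder held-out set; substitute your own texts and 0/1 labels.
eval_texts = ["Reasoning:\nThe answer is grounded in the document.\nEvaluation: Good"]
eval_labels = [1]

preds = model(eval_texts)
print(accuracy_score(eval_labels, [int(p) for p in preds]))
```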
Uses
Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel

# Download the model from the Hub
model = SetFitModel.from_pretrained("Netta1994/setfit_baai_cybereason_gpt-4o_improved-cot_chat_few_shot_only_reasoning_1726752428.08")
# Run inference
preds = model("Reasoning:\nThe answer directly addresses the question and is correctly grounded in the document. The percentage indeed refers to the total amount of successful completion of response actions.\nEvaluation: Good")
```
Training Details
Training Set Metrics
| Training set | Min | Median  | Max |
|:-------------|:----|:--------|:----|
| Word count   | 18  | 43.0580 | 94  |

| Label | Training Sample Count |
|:------|:----------------------|
| 0     | 34                    |
| 1     | 35                    |
Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (5, 5)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
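For reference, these values map onto setfit's `TrainingArguments`. The sketch below assumes the argument names of SetFit 1.1.0 and the `CosineSimilarityLoss` class from sentence_transformers; it is illustrative, not the exact training script.

```python
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(16, 16),              # (embedding phase, head phase)
    num_epochs=(5, 5),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    loss=CosineSimilarityLoss,
    margin=0.25,                      # margin (and distance_metric) only affect triplet-style losses
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)
```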
Training Results
| Epoch  | Step | Training Loss | Validation Loss |
|:-------|:-----|:--------------|:----------------|
| 0.0058 | 1    | 0.246         | -               |
| 0.2890 | 50   | 0.2593        | -               |
| 0.5780 | 100  | 0.2385        | -               |
| 0.8671 | 150  | 0.0897        | -               |
| 1.1561 | 200  | 0.004         | -               |
| 1.4451 | 250  | 0.0022        | -               |
| 1.7341 | 300  | 0.002         | -               |
| 2.0231 | 350  | 0.0017        | -               |
| 2.3121 | 400  | 0.0017        | -               |
| 2.6012 | 450  | 0.0014        | -               |
| 2.8902 | 500  | 0.0013        | -               |
| 3.1792 | 550  | 0.0013        | -               |
| 3.4682 | 600  | 0.0012        | -               |
| 3.7572 | 650  | 0.0012        | -               |
| 4.0462 | 700  | 0.0013        | -               |
| 4.3353 | 750  | 0.0012        | -               |
| 4.6243 | 800  | 0.0012        | -               |
| 4.9133 | 850  | 0.0011        | -               |
Framework Versions
- Python: 3.10.14
- SetFit: 1.1.0
- Sentence Transformers: 3.1.0
- Transformers: 4.44.0
- PyTorch: 2.4.1+cu121
- Datasets: 2.19.2
- Tokenizers: 0.19.1
Citation
BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
```