# SetFit Polarity Model with sentence-transformers/all-mpnet-base-v2
This is a SetFit model that can be used for Aspect Based Sentiment Analysis (ABSA). This SetFit model uses sentence-transformers/all-mpnet-base-v2 as the Sentence Transformer embedding model. A LogisticRegression instance is used for classification. In particular, this model is in charge of classifying aspect polarities.
The model has been trained using an efficient few-shot learning technique that involves:
- Fine-tuning a Sentence Transformer with contrastive learning.
- Training a classification head with features from the fine-tuned Sentence Transformer.
This model was trained within the context of a larger system for ABSA, which works as follows (the candidate-selection step is sketched in code after the list):
- Use a spaCy model to select possible aspect span candidates.
- Use a SetFit model to filter these possible aspect span candidates.
- Use this SetFit model to classify the filtered aspect span candidates.
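As a hedged illustration of the first step, the sketch below uses spaCy noun chunks as candidate aspect spans, the heuristic SetFit's ABSA pipeline builds on; the example sentence and printed output are illustrative:

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("I charge it at night because of the good battery life.")

# Noun chunks serve as candidate aspect spans; the aspect SetFit model then
# filters them, and this polarity model classifies the spans that remain.
candidates = [chunk.text for chunk in doc.noun_chunks]
print(candidates)  # e.g. ['I', 'it', 'night', 'the good battery life']
```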
## Model Details

### Model Description

- **Model Type:** SetFit
- **Sentence Transformer body:** sentence-transformers/all-mpnet-base-v2
- **Classification head:** a LogisticRegression instance
- **spaCy Model:** en_core_web_sm
- **SetFitABSA Aspect Model:** joshuasundance/setfit-absa-all-MiniLM-L6-v2-laptops-aspect
- **SetFitABSA Polarity Model:** joshuasundance/setfit-absa-all-mpnet-base-v2-laptops-polarity
- **Number of Classes:** 4 (negative, neutral, positive, conflict)

### Model Sources

- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
### Model Labels

Each example is formatted as `aspect span (with surrounding context):full sentence`.

| Label | Examples |
|:---------|:---------|
| neutral | <ul><li>'skip taking the cord with me because:I charge it at night and skip taking the cord with me because of the good battery life.'</li><li>'The tech guy then said the:The tech guy then said the service center does not do 1-to-1 exchange and I have to direct my concern to the "sales" team, which is the retail shop which I bought my netbook from.'</li><li>'all dark, power light steady, hard:One night I turned the freaking thing off after using it, the next day I turn it on, no GUI, screen all dark, power light steady, hard drive light steady and not flashing as it usually does.'</li></ul> |
| positive | <ul><li>'of the good battery life.:I charge it at night and skip taking the cord with me because of the good battery life.'</li><li>'is of high quality, has a:it is of high quality, has a killer GUI, is extremely stable, is highly expandable, is bundled with lots of very good applications, is easy to use, and is absolutely gorgeous.'</li><li>'has a killer GUI, is extremely:it is of high quality, has a killer GUI, is extremely stable, is highly expandable, is bundled with lots of very good applications, is easy to use, and is absolutely gorgeous.'</li></ul> |
| negative | <ul><li>'then said the service center does not do:The tech guy then said the service center does not do 1-to-1 exchange and I have to direct my concern to the "sales" team, which is the retail shop which I bought my netbook from.'</li><li>'concern to the "sales" team, which is:The tech guy then said the service center does not do 1-to-1 exchange and I have to direct my concern to the "sales" team, which is the retail shop which I bought my netbook from.'</li><li>'on, no GUI, screen all:One night I turned the freaking thing off after using it, the next day I turn it on, no GUI, screen all dark, power light steady, hard drive light steady and not flashing as it usually does.'</li></ul> |
| conflict | <ul><li>'-No backlit keyboard, but not:-No backlit keyboard, but not an issue for me.'</li><li>"to replace the battery once, but:I did have to replace the battery once, but that was only a couple months ago and it's been working perfect ever since."</li></ul> |
## Evaluation

### Metrics

| Label   | Accuracy |
|:--------|:---------|
| **all** | 0.7008   |
## Uses

### Direct Use for Inference

First install the SetFit library with its ABSA extra (which pulls in spaCy), and download the spaCy English pipeline used for aspect candidate extraction:

```bash
pip install "setfit[absa]"
python -m spacy download en_core_web_sm
```
Then you can load this model and run inference:

```python
from setfit import AbsaModel

# The first checkpoint extracts and filters aspect span candidates;
# the second (this model) classifies the polarity of each remaining span.
model = AbsaModel.from_pretrained(
    "joshuasundance/setfit-absa-all-MiniLM-L6-v2-laptops-aspect",
    "joshuasundance/setfit-absa-all-mpnet-base-v2-laptops-polarity",
    spacy_model="en_core_web_sm",
)
preds = model("This laptop meets every expectation and Windows 7 is great!")
```
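For a single sentence, the prediction is a list with one `span`/`polarity` dict per extracted aspect. A quick way to inspect it (the printed output is illustrative, not a recorded run):

```python
# Each prediction pairs an extracted aspect span with its polarity label.
for pred in preds:
    print(pred["span"], "->", pred["polarity"])
# e.g. Windows 7 -> positive
```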
## Training Details

### Training Set Metrics
| Training set | Min | Median  | Max |
|:-------------|:----|:--------|:----|
| Word count   | 3   | 25.5873 | 48  |

| Label    | Training Sample Count |
|:---------|:----------------------|
| conflict | 2                     |
| negative | 45                    |
| neutral  | 30                    |
| positive | 49                    |
### Training Hyperparameters

The run used the following settings; a hedged sketch of how they map onto SetFit's training API follows the list.
- batch_size: (128, 128)
- num_epochs: (5, 5)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: True
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
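A minimal sketch, assuming SetFit 1.0's ABSA training API: the two-row inline dataset is a placeholder (the real model was trained on laptop-review data), and checkpoint-selection arguments (`load_best_model_at_end`, evaluation steps) are omitted for brevity. `AbsaTrainer` trains the aspect and polarity models together, applying `args` to both unless a separate `polarity_args` is passed.

```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import AbsaModel, AbsaTrainer, TrainingArguments

# Placeholder training rows: each pairs a sentence with one aspect span,
# its polarity label, and the occurrence index of the span in the text.
train_dataset = Dataset.from_dict({
    "text": ["The battery life is great.", "The screen died after a week."],
    "span": ["battery life", "screen"],
    "label": ["positive", "negative"],
    "ordinal": [0, 0],
})

model = AbsaModel.from_pretrained(
    "sentence-transformers/all-MiniLM-L6-v2",   # embedding body for the aspect filter
    "sentence-transformers/all-mpnet-base-v2",  # embedding body for the polarity model
    spacy_model="en_core_web_sm",
)

args = TrainingArguments(
    batch_size=(128, 128),                # (embedding phase, classifier phase)
    num_epochs=(5, 5),
    sampling_strategy="oversampling",
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    loss=CosineSimilarityLoss,
    end_to_end=False,
    use_amp=True,                         # mixed precision; needs a CUDA GPU
    warmup_proportion=0.1,
    seed=42,
)

trainer = AbsaTrainer(model, args=args, train_dataset=train_dataset)
trainer.train()
```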
### Training Results

| Epoch      | Step   | Training Loss | Validation Loss |
|:----------:|:------:|:-------------:|:---------------:|
| 0.0120     | 1      | 0.2721        | -               |
| **0.6024** | **50** | **0.0894**    | **0.2059**      |
| 1.2048     | 100    | 0.0014        | 0.2309          |
| 1.8072     | 150    | 0.0006        | 0.2359          |
| 2.4096     | 200    | 0.0005        | 0.2373          |
| 3.0120     | 250    | 0.0004        | 0.2364          |
| 3.6145     | 300    | 0.0003        | 0.2371          |

- The bold row denotes the saved checkpoint: the step with the lowest validation loss, consistent with `load_best_model_at_end: True`.
### Framework Versions
- Python: 3.11.7
- SetFit: 1.0.3
- Sentence Transformers: 2.3.0
- spaCy: 3.7.2
- Transformers: 4.37.2
- PyTorch: 2.1.2+cu118
- Datasets: 2.16.1
- Tokenizers: 0.15.1
## Citation

### BibTeX

```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
```