# SetFit Polarity Model with sentence-transformers/paraphrase-mpnet-base-v2
This is a SetFit model that can be used for Aspect Based Sentiment Analysis (ABSA). It uses sentence-transformers/paraphrase-mpnet-base-v2 as the Sentence Transformer embedding model and a LogisticRegression instance as the classification head. Within the ABSA pipeline, this model is responsible for classifying aspect polarities.
The model has been trained using an efficient few-shot learning technique that involves:
- Fine-tuning a Sentence Transformer with contrastive learning.
- Training a classification head with features from the fine-tuned Sentence Transformer (see the sketch after this list).
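To make the second step concrete, here is a minimal, hedged sketch of training a LogisticRegression head on Sentence Transformer embeddings. This is not the library's internal implementation; the texts and labels are placeholders, written in the `span in context:full text` format that appears in the Model Labels table below.

```python
# Minimal sketch (not SetFit's internals) of training a classification head
# on Sentence Transformer embeddings.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# Placeholder few-shot examples in the "span in context:full text" format.
texts = [
    "the price:But the price could be cheaper.",
    "the battery:The 4680 battery will enable faster charging.",
]
labels = ["negative", "positive"]

# Step 1 would fine-tune this encoder with contrastive learning (omitted here);
# step 2 fits a LogisticRegression head on its embeddings.
encoder = SentenceTransformer("sentence-transformers/paraphrase-mpnet-base-v2")
features = encoder.encode(texts)
head = LogisticRegression().fit(features, labels)

print(head.predict(encoder.encode(["the range:The range keeps improving."])))
```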
This model was trained within the context of a larger system for ABSA, which works as follows:
- Use a spaCy model to select possible aspect span candidates (illustrated in the sketch after this list).
- Use a SetFit model to filter these possible aspect span candidates.
- Use this SetFit model to classify the filtered aspect span candidates.
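As an illustration of the first step, a spaCy model can propose noun chunks as candidate aspect spans. This is only a sketch of the idea, not necessarily the exact extraction logic used by the SetFit ABSA pipeline, and `en_core_web_sm` is an assumed model choice:

```python
# Illustrative candidate-span selection with spaCy noun chunks
# (assumes `python -m spacy download en_core_web_sm` has been run).
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The price increase also increases profit for Tesla.")
candidates = [chunk.text for chunk in doc.noun_chunks]
print(candidates)  # e.g. ['The price increase', 'profit', 'Tesla']
```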
## Model Details

### Model Description

### Model Sources

### Model Labels
Each example below pairs the aspect span (with some surrounding context) and the full text, joined by a colon.

| Label    | Examples |
|:---------|:---------|
| neutral  | <ul><li>"i.e. just more profit for $TSLA:I'm pretty sure, all an EV tax incentive will do, is raise the price of Teslas, at least for the next few years.\n\ni.e. just more profit for $TSLA\nAs if demand wasn't abundant enough already."</li><li>"increase also increase profit and Tesla the:C'mon @SaraEisen you know as well as I do that the price increases in $TSLA vehicles is not related to any weakness in the stock today. It's purely macro today. Plus, price increase also increase profit and Tesla the only Auto maker that is making high margins on sales."</li><li>"when choosing a car. They just:The key thing people get wrong when thinking EV competition is bad for Tesla:\n\nVast majority of consumers aren't prioritizing the environment when choosing a car. They just want the best product & technology, which Tesla offers and just happens to be an EV\n\n$TSLA https://t.co/w3cKqeJkQW"</li></ul> |
| negative | <ul><li>"is raise the price of Teslas,:I'm pretty sure, all an EV tax incentive will do, is raise the price of Teslas, at least for the next few years.\n\ni.e. just more profit for $TSLA\nAs if demand wasn't abundant enough already."</li><li>'"The price of batteries for:"The price of batteries for electric vehicles looks set to rise in 2022 after many years of sharp decline. The supplies of lithium and other raw materials fail to keep up with huge demand." $NIO $TSLA $XPEV $LI\n\nhttps://t.co/2CAJCxTC2C'</li><li>'. But the price could be cheaper:C’mon @elonmusk! Australians are busting to buy EVs & the best one is @Tesla imho. But the price could be cheaper, if you built a #gigafactory in Australia. 70% of the lithium in the cars is #aussie so why not set up a #gigafactorydownunder? All the talent and minerals are here!'</li></ul> |
| positive | <ul><li>'a $30k car with $70k:John Hennessey gets a $TSLA Plaid. \nA retired OEM executive describes Tesla as a $30k car with $70k in batteries. \nThe perfect description of a Tesla https://t.co/m5J5m3AuMJ'</li><li>"want the best product &:The key thing people get wrong when thinking EV competition is bad for Tesla:\n\nVast majority of consumers aren't prioritizing the environment when choosing a car. They just want the best product & technology, which Tesla offers and just happens to be an EV\n\n$TSLA https://t.co/w3cKqeJkQW"</li><li>"the most important product on Earth;:Tesla's 4680 battery (and it's manufactuing process) will end up being the most important product on Earth; it's that important. It will enable massive scale & cost reductions over time. It will enable faster charging times & longer range EVs, both will drive adoption.\n\n$TSLA"</li></ul> |
## Evaluation

### Metrics

## Uses

### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import AbsaModel

# Download the aspect and polarity models from the Hugging Face Hub
model = AbsaModel.from_pretrained(
    "NazmusAshrafi/setfit-MiniLM-mpnet-absa-tesla-tweet-aspect",
    "NazmusAshrafi/setfit-MiniLM-mpnet-absa-tesla-tweet-polarity",
)
# Run inference
preds = model("The food was great, but the venue is just way too busy.")
```
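Each input sentence yields a list of aspect predictions. With recent SetFit versions these take the form `{'span': ..., 'polarity': ...}`, so the example above might produce spans such as "food" and "venue" with their predicted polarities; the exact output format depends on the installed SetFit version.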
## Training Details

### Training Set Metrics
| Training set | Min | Median  | Max |
|:-------------|:----|:--------|:----|
| Word count   | 26  | 46.2121 | 61  |
| Label    | Training Sample Count |
|:---------|:----------------------|
| negative | 11                    |
| neutral  | 12                    |
| positive | 10                    |
### Training Hyperparameters
- batch_size: (16, 2)
- num_epochs: (1, 16)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
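Assuming this model was trained with SetFit's `AbsaTrainer`, the hyperparameters above map onto `setfit.TrainingArguments` roughly as in the sketch below; tuple values pair the (embedding, classifier) training phases. The training data shown is a placeholder, not the actual Tesla-tweet dataset:

```python
# Hedged sketch: mapping the listed hyperparameters onto SetFit's
# TrainingArguments; not the exact script used to train this model.
from datasets import Dataset
from setfit import AbsaModel, AbsaTrainer, TrainingArguments

# Placeholder ABSA training data; "ordinal" disambiguates repeated spans.
train_dataset = Dataset.from_dict({
    "text": [
        "The price could be cheaper, but the battery is great.",
        "The price could be cheaper, but the battery is great.",
    ],
    "span": ["price", "battery"],
    "label": ["negative", "positive"],
    "ordinal": [0, 0],
})

model = AbsaModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
args = TrainingArguments(
    batch_size=(16, 2),
    num_epochs=(1, 16),
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    sampling_strategy="oversampling",
    warmup_proportion=0.1,
    end_to_end=False,
    seed=42,
)
trainer = AbsaTrainer(model, args=args, train_dataset=train_dataset)
trainer.train()
```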
### Training Results
| Epoch  | Step | Training Loss | Validation Loss |
|:-------|:-----|:--------------|:----------------|
| 0.0217 | 1    | 0.186         | -               |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.2.2
- spaCy: 3.6.1
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.16.1
- Tokenizers: 0.15.1
## Citation

### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
  doi = {10.48550/ARXIV.2209.11055},
  url = {https://arxiv.org/abs/2209.11055},
  author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
  title = {Efficient Few-Shot Learning Without Prompts},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```