Overview
This SpaRTA adapter specializes the google/gemma-2b base model for sentiment classification of English sentences.
Adapter Description
PEFT method: SpaRTA @ 99.8% sparsity
Base Model: google/gemma-2b
Task: Text Classification (Sentiment Analysis)
Language: English
Inputs and Outputs
Input:
A text string containing the sentence to be classified as having positive or negative sentiment. No special formatting is needed; the raw sentence suffices.
Output:
One of two tokens representing the sentiment class of the input: token id 0 for negative sentiment, token id 1 for positive sentiment.
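As a minimal sketch of the output convention above (the mapping follows this card's description; the helper function itself is hypothetical, not part of the adapter's API):

```python
# Map the adapter's output token id to a human-readable sentiment label.
# The id-to-label convention (0 = negative, 1 = positive) follows the card;
# the function name is illustrative only.
ID2LABEL = {0: "negative", 1: "positive"}

def decode_sentiment(token_id: int) -> str:
    """Translate a predicted token id into its sentiment class."""
    if token_id not in ID2LABEL:
        raise ValueError(f"Unexpected token id {token_id}; expected 0 or 1.")
    return ID2LABEL[token_id]

print(decode_sentiment(1))
```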
How to Use
For instructions on how to load and use this adapter to classify input sentences, see https://pypi.org/project/peft-sparta/.
Training Details
Training Procedure
The adapter was trained on the SST-2 dataset at 99.8% sparsity: 99.8% of the Gemma-2B model parameters were frozen, and only the remaining 0.2% were trained. The trainable parameters were chosen at random from the self-attention value (Wv) and output (Wo) projection matrices, giving approximately 5 million trainable parameters in total.
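The random parameter-selection step can be sketched as follows. This is an illustrative NumPy mock-up of drawing a 0.2% trainable mask over toy Wv/Wo matrices, not the actual SpaRTA implementation, and the matrix shapes are placeholder values:

```python
import numpy as np

rng = np.random.default_rng(0)
SPARSITY = 0.998  # fraction of parameters that stay frozen

# Toy stand-ins for the self-attention value (Wv) and output (Wo)
# projection matrices; real Gemma-2B shapes are much larger.
weights = {
    "Wv": rng.standard_normal((2048, 256)),
    "Wo": rng.standard_normal((256, 2048)),
}

# For each matrix, draw a random boolean mask marking the ~0.2% of
# entries that remain trainable; all other entries stay frozen.
masks = {name: rng.random(w.shape) >= SPARSITY for name, w in weights.items()}

total = sum(w.size for w in weights.values())
trainable = sum(int(m.sum()) for m in masks.values())
print(f"trainable fraction: {trainable / total:.4%}")
```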
Training Data
The Gemma-2B model was fine-tuned with SpaRTA on the SST-2 (Stanford Sentiment Treebank) dataset. We used 66,349 examples of the training split for training and held out the remainder for validation; the dataset's validation split of 872 examples, which was never seen during training, was used for testing.
Intended Use
Binary sentiment classification (positive/negative).
Performance (on Test set)
- Balanced accuracy: 96.9%
- Per class accuracy:
- negative sentiment: 96.5%
- positive sentiment: 97.3%
- MCC: 0.938
- F1-score: 0.970
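For reference, the metrics above can all be derived from a binary confusion matrix. The sketch below computes them in pure Python from a hypothetical confusion matrix; the counts are illustrative placeholders, not the adapter's actual test results:

```python
import math

# Hypothetical binary confusion matrix (counts are illustrative only):
tp, fn = 432, 12   # positive examples: correctly / incorrectly classified
tn, fp = 413, 15   # negative examples: correctly / incorrectly classified

# Per-class accuracies and their mean (balanced accuracy).
pos_acc = tp / (tp + fn)
neg_acc = tn / (tn + fp)
balanced_acc = (pos_acc + neg_acc) / 2

# Matthews correlation coefficient (MCC).
mcc = (tp * tn - fp * fn) / math.sqrt(
    (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
)

# F1-score for the positive class.
f1 = 2 * tp / (2 * tp + fp + fn)

print(f"balanced accuracy: {balanced_acc:.3f}, MCC: {mcc:.3f}, F1: {f1:.3f}")
```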