# sst2_FullFT

LLaMA model fully fine-tuned on the SST-2 (Stanford Sentiment Treebank) sentiment classification dataset.
- Method: Full Fine-Tuning (no LoRA adapters; rank not applicable)
- Task: SST-2 (binary sentiment classification)
- Base Model: LLaMA 1B (Meta)
- Optimizer: AdamW
- Batch Size: 32
- Max Sequence Length: 128 tokens
- Tokenizer: LLaMA-1B tokenizer
Trained using the 🤗 Transformers Trainer API.
## Usage

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")
model = AutoModelForSequenceClassification.from_pretrained("emirhanboge/LLaMA_1B_sst2_FullFT")
```
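Once the model and tokenizer are loaded, a minimal inference sketch might look like the following. The label mapping (0 = negative, 1 = positive) is the common SST-2 convention and is an assumption here; verify it against `model.config.id2label` on the actual checkpoint. Because running the classifier requires downloading the weights, the post-processing step is demonstrated on example logits so the snippet is self-contained.

```python
# Hypothetical SST-2 label mapping (assumption; verify via model.config.id2label).
id2label = {0: "negative", 1: "positive"}

def predict_label(logits):
    """Map raw classifier logits (a sequence of floats) to an SST-2 label."""
    best = max(range(len(logits)), key=lambda i: logits[i])
    return id2label[best]

# With model/tokenizer loaded as above, inference would look like:
#   inputs = tokenizer("a gorgeous, witty, seductive movie.", return_tensors="pt",
#                      truncation=True, max_length=128)
#   with torch.no_grad():
#       logits = model(**inputs).logits[0]
#   print(predict_label(logits.tolist()))

# Demonstrated on example logits so the snippet runs without the checkpoint:
print(predict_label([-1.2, 2.3]))  # → positive
```

The `max_length=128` in the commented call mirrors the training configuration listed above.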