# mnli_FullFT

A LLaMA model fully fine-tuned on the MNLI (Multi-Genre Natural Language Inference) dataset for sequence classification.

  • Fine-Tuning Method: Full fine-tuning (no LoRA adapters; rank not applicable)
  • Tasks: MNLI
  • Base Model: Llama 3.2 1B (Meta)
  • Optimizer: AdamW
  • Batch Size: 32
  • Max Sequence Length: 128 tokens
  • Tokenizer: Llama 3.2 1B tokenizer
  • Trained using the 🤗 Transformers Trainer API

## Usage

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# The fine-tuned classifier is loaded from this repository; the tokenizer
# comes from the base Llama 3.2 1B repository (gated, requires access).
model = AutoModelForSequenceClassification.from_pretrained("emirhanboge/LLaMA_1B_mnli_FullFT")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")
```
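Once the model and tokenizer are loaded, inference on a premise/hypothesis pair is a standard sequence-classification forward pass. A minimal sketch is below; the label order assumes the usual GLUE MNLI convention (0 = entailment, 1 = neutral, 2 = contradiction), which is not stated on this card, so verify it against `model.config.id2label` after loading.

```python
import torch

# Assumed GLUE MNLI label order; check model.config.id2label to confirm.
MNLI_LABELS = ["entailment", "neutral", "contradiction"]

def classify(premise: str, hypothesis: str, model, tokenizer) -> str:
    """Predict the MNLI label for a premise/hypothesis pair."""
    # Encode the pair; max_length matches the 128-token training limit above.
    inputs = tokenizer(
        premise, hypothesis,
        truncation=True, max_length=128, return_tensors="pt",
    )
    with torch.no_grad():
        logits = model(**inputs).logits
    return MNLI_LABELS[int(logits.argmax(dim=-1))]

# Usage (downloads the checkpoint, so not executed here):
# label = classify("A man is playing guitar.", "A person makes music.",
#                  model, tokenizer)
```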