# GG Team Instruction-Tuned Adapters (LLaMA 3.2-3B)

This repository provides a collection of PEFT (LoRA) adapters trained on various instruction-tuning datasets on top of the base model LLaMA 3.2-3B. The adapters were developed by the GG Team for CSE476 @ Arizona State University.

## Adapter Variants

| Folder | Dataset(s) Used | Description |
|--------|-----------------|-------------|
| `llama-3.2-3B-sft` | Alpaca | Fine-tuned only on the original Alpaca dataset |
| `llama-3.2-3B-sft-dolly` | Alpaca + Dolly | Fine-tuned on Alpaca plus Databricks' Dolly dataset |
| `llama-3.2-3B-sft-FLAN` | Alpaca + Dolly + FLAN | Fine-tuned on the Alpaca, Dolly, and FLAN mix |
| `sft_a_d` | Alpaca + Dolly | Combined-dataset fine-tuning (Alpaca + Dolly) |
| `sft_a_d1` | Alpaca (cleaned) + Dolly | Combined-dataset fine-tuning (cleaned Alpaca + Dolly) |
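
All variants are instruction-tuned on Alpaca-style data, so prompts most likely work best in the Alpaca template. The card does not document the exact training format, so treat the template below as an assumption; `format_prompt` is a hypothetical helper for illustration, not part of this repo:

```python
# Alpaca-style prompt template (assumed; the card does not specify the
# exact format these adapters were trained with).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def format_prompt(instruction: str) -> str:
    """Wrap a raw instruction in the (assumed) Alpaca training template."""
    return ALPACA_TEMPLATE.format(instruction=instruction)
```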

๐Ÿ› ๏ธ Usage (with peft)

Here's an example of loading one of the adapters using 🤗 Transformers and PEFT:

```python
import torch
from peft import PeftModel
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the base model and tokenizer.
# Note: LLaMA 3.2-3B lives at "meta-llama/Llama-3.2-3B" on the Hub
# (not "Meta-Llama-3-3B"; the Llama 3 series has no 3B checkpoint).
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-3B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B")

# Load an adapter: the adapters live in subfolders of the gg-cse476/gg repo,
# so select a variant with `subfolder` rather than appending it to the repo ID.
model = PeftModel.from_pretrained(base_model, "gg-cse476/gg", subfolder="sft_a_d")

# Inference
prompt = "Explain how a rocket works in simple terms."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
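
If you plan to deploy a single variant, you can optionally merge the LoRA weights into the base model so inference no longer requires `peft` at runtime. This is the standard PEFT merge workflow; the output directory name below is just an example:

```python
# Fold the LoRA weights into the base weights; the result is a plain
# transformers model that can be saved and served without peft installed.
merged = model.merge_and_unload()
merged.save_pretrained("llama-3.2-3B-sft-merged")
tokenizer.save_pretrained("llama-3.2-3B-sft-merged")
```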