# Yoruba LLaMA 8B v2 - LoRA Adapter

Fine-tuned Yoruba language adapter for LLaMA 3 8B by Johnson Pedia (Oduduwa AI).
## Quick Start

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

# Load Yoruba adapter
model = PeftModel.from_pretrained(base_model, "JohnsonPedia/yoruba_llama_8B_v2-lora-adapter")

# Chat!
prompt = "Bawo ni o ṣe wa?"  # "How are you doing?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
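Because the base model is LLaMA 3 8B *Instruct*, you will usually get better results by wrapping the prompt in the LLaMA 3 chat template rather than passing raw text. In practice, use `tokenizer.apply_chat_template(messages, add_generation_prompt=True)`; the helper below (`build_llama3_prompt` is a hypothetical name, not part of any library) is just a sketch of the string that template produces:

```python
# Sketch of the LLaMA 3 Instruct chat format. Prefer
# tokenizer.apply_chat_template(...) in real code; this helper only
# illustrates the special-token layout the model was trained on.
def build_llama3_prompt(user_message: str,
                        system: str = "You are a helpful Yoruba assistant.") -> str:
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt("Bawo ni o ṣe wa?")  # "How are you doing?"
print(prompt)
```

The trailing assistant header tells the model that the next tokens it generates are its own reply.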
## Works on CPU!

```python
# CPU inference (no GPU needed)
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    torch_dtype=torch.float32,  # float16 is poorly supported on CPU; use float32
    device_map="cpu",           # CPU only
)
```
## Model Details

- Base: LLaMA 3 8B Instruct
- Language: Yoruba (yo)
- Training: LoRA (r=16, alpha=16)
- Size: ~50MB adapter
- Use: Yoruba chatbot, Q&A, translation
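With r=16 and alpha=16, the LoRA update to each adapted weight matrix is a low-rank product `B @ A` scaled by `alpha/r` (here 1.0). A minimal numpy sketch of the arithmetic (the 4096×4096 dimensions are illustrative, not the model's actual shapes):

```python
import numpy as np

d_in, d_out, r, alpha = 4096, 4096, 16, 16  # r and alpha match this adapter; dims illustrative

A = np.random.randn(r, d_in) * 0.01  # LoRA "A" matrix (initialized small)
B = np.zeros((d_out, r))             # LoRA "B" matrix (initialized to zero)

delta_W = (alpha / r) * (B @ A)      # low-rank weight update, scaled by alpha/r = 1.0
print(delta_W.shape)                 # (4096, 4096)

# The adapter stores only A and B — far fewer parameters than a full matrix:
lora_params = A.size + B.size        # 131,072
full_params = d_in * d_out           # 16,777,216
print(f"{100 * lora_params / full_params:.1f}% of a full update")
```

That parameter ratio is why the adapter weighs in at ~50MB while the base model needs ~16GB.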
## What It Does

- Answers questions in Yoruba
- Uses proper diacritics (à, ẹ, ọ, ṣ, gb)
- Culturally appropriate responses
- Natural Yoruba conversations
## Requirements

- RAM: ~16GB (float16) or ~10GB (8-bit)
- Disk: 20GB
- GPU: Optional (works on CPU!)
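The RAM figures above line up with simple parameter arithmetic: 8B parameters at 2 bytes each (float16) is about 16GB of weights, and at 1 byte each (8-bit) about 8GB, with the rest being activation and KV-cache overhead. A quick sanity check:

```python
params = 8e9  # LLaMA 3 8B parameter count (approximate)

for name, bytes_per_param in [("float16", 2), ("int8", 1)]:
    gb = params * bytes_per_param / 1e9
    print(f"{name}: ~{gb:.0f} GB weights (plus activation/KV-cache overhead)")
```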
## Credits

Built with Unsloth • Based on Meta LLaMA 3 • By Johnson Pedia