# GrowAI: LLaMA-3-8B Agricultural Advisor v2 (Darija/French/Arabic)
Fine-tuned LoRA adapter for multilingual agricultural advising targeting Moroccan farmers. This is v2, trained on a significantly larger dataset than v1.
## Base model

`unsloth/llama-3-8b-bnb-4bit`
## Training
- Dataset: 1,598 samples across Darija, French, Modern Standard Arabic, English
- Topics: irrigation, crop disease, fertilization, pest control, soil management
- Method: QLoRA (4-bit NF4) with LoRA r=32, alpha=64, rslora=True
- Epochs: 5 | Train loss: 0.268 (vs v1: 1.081)
- Hardware: 1× NVIDIA A100 80GB (CINECA Leonardo HPC)
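The hyperparameters above can be expressed as a `peft`/`bitsandbytes` training configuration. A minimal sketch: the quantization and LoRA values (NF4, r=32, alpha=64, rsLoRA) come from this card, while the compute dtype, target modules, and dropout are assumptions not stated here.

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization for QLoRA, as described in this card
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumption: bf16 compute on the A100
)

# LoRA settings from this card: r=32, alpha=64, rank-stabilized LoRA enabled
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    use_rslora=True,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption: attention projections
    lora_dropout=0.0,  # assumption: not stated in this card
    task_type="CAUSAL_LM",
)
```

Both objects would then be passed to `from_pretrained(..., quantization_config=bnb_config)` and `get_peft_model(model, lora_config)` respectively.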
## Improvement over v1

| Metric | v1 | v2 |
|---|---|---|
| Dataset size | 446 samples | 1,598 samples |
| Train loss | 1.081 | 0.268 |
| LoRA rank | 16 | 32 |
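One detail worth noting about the rank change: with `use_rslora=True`, the LoRA update is scaled by `alpha / sqrt(r)` rather than the standard `alpha / r`, so the effective scaling stays much larger as the rank grows. Illustrative arithmetic only, using the v2 values from this card:

```python
import math

def lora_scaling(alpha: int, r: int, rslora: bool) -> float:
    """Scaling factor applied to the low-rank update BA."""
    return alpha / math.sqrt(r) if rslora else alpha / r

# v2 settings: alpha=64, r=32, rsLoRA enabled
print(lora_scaling(64, 32, rslora=True))   # ≈ 11.31
# Standard LoRA with the same hyperparameters would give:
print(lora_scaling(64, 32, rslora=False))  # 2.0
```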
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the 4-bit quantized Llama-3-8B base model
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/llama-3-8b-bnb-4bit",
    load_in_4bit=True,
    device_map="auto",
)

# Attach the GrowAI v2 LoRA adapter
model = PeftModel.from_pretrained(base, "Hishammaghraoui/growai-llama3-8b-agri-darija-v2")
tokenizer = AutoTokenizer.from_pretrained("Hishammaghraoui/growai-llama3-8b-agri-darija-v2")

# Darija: "How do I water the vegetables in summer?"
prompt = "كيفاش نسقي الخضرة فوقت الصيف؟"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=300, temperature=0.3, do_sample=True)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
## Project

Part of the MetaFarm GrowAI platform, a WhatsApp-based AI agricultural advisor for Moroccan farmers. GitHub: https://github.com/Hmauto/metafarm-cineca