# Darija Qwen2.5-1.5B – LoRA Adapter (Unsloth)
Model fine-tuned from Qwen/Qwen2.5-1.5B-Instruct for Moroccan Arabic (Darija). Trained with Unsloth (2x faster, 40% less VRAM).
## Dataset
- MBZUAI-Paris/Darija-SFT-Mixture
- 15,000 samples, selected and cleaned
## LoRA Configuration
- lora_rank: 8
- lora_alpha: 16
- lora_dropout: 0.05
- lora_target: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
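For intuition, the update these hyperparameters control can be sketched in plain Python. This is the standard LoRA formulation, W' = W + (alpha/r)·B·A, with toy matrices rather than real model weights:

```python
# LoRA update rule with the card's hyperparameters (toy example).
# W' = W + (alpha / r) * B @ A, where A is (r x d_in) and B is (d_out x r).
r, alpha = 8, 16
scaling = alpha / r  # 2.0 with this configuration

def matmul(X, Y):
    """Plain-Python matrix multiply, just for the illustration."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

d_in, d_out = 4, 4
# Frozen base weight W (identity for clarity) and trainable low-rank factors A, B.
W = [[1.0 if i == j else 0.0 for j in range(d_in)] for i in range(d_out)]
A = [[0.1] * d_in for _ in range(r)]    # r x d_in
B = [[0.05] * r for _ in range(d_out)]  # d_out x r

delta = matmul(B, A)  # d_out x d_in, rank at most r
W_adapted = [[W[i][j] + scaling * delta[i][j] for j in range(d_in)]
             for i in range(d_out)]
```

Only A and B (a few hundred values here) are trained; the base weight W stays frozen, which is what keeps the adapter small.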
## Usage
```python
from unsloth import FastLanguageModel
import torch

# Load the saved LoRA adapter; Unsloth resolves the base model automatically.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "/kaggle/working/darija_qwen_lora_adapter",
    max_seq_length = 512,
    dtype = torch.float16,
    load_in_4bit = True,
)

# Enable Unsloth's optimized inference mode.
FastLanguageModel.for_inference(model)
```
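A generation call can then follow the usual transformers chat-template pattern. This is a sketch continuing from the snippet above (it assumes `model` and `tokenizer` are already loaded); the prompt and sampling settings are illustrative, not taken from the card:

```python
# Assumes `model` and `tokenizer` from the loading snippet above.
messages = [{"role": "user", "content": "شنو هو الطقس اليوم؟"}]  # example Darija prompt
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize = True,
    add_generation_prompt = True,
    return_tensors = "pt",
).to(model.device)

outputs = model.generate(
    input_ids = inputs,
    max_new_tokens = 128,   # illustrative settings
    temperature = 0.7,
    do_sample = True,
)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens = True))
```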