# KIKI Embedded SFT - LoRA Adapter

Fine-tuned LoRA adapter for embedded-domain expertise, based on Qwen/Qwen3-8B.
Part of the KIKI Models Tuning pipeline for the FineFab platform.
## Training Details
| Parameter | Value |
|---|---|
| Base Model | Qwen/Qwen3-8B |
| Method | QLoRA (4-bit NF4) |
| LoRA Rank | 16 |
| Epochs | 3 |
| Dataset | 8344 examples |
| Domain | embedded |
## Usage

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then apply the LoRA adapter on top of it.
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-8B", device_map="auto")
model = PeftModel.from_pretrained(model, "clemsail/kiki-embedded-sft")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")
```
## License

Apache 2.0