# TinyLlama-1.1B-Chat MLX LoRA (Alpaca-100, r=16)
This repo contains a LoRA adapter for TinyLlama-1.1B-Chat, trained with Apple MLX (`mlx-lm`) on an M-series Mac.
- Base model: `TinyLlama/TinyLlama-1.1B-Chat-v1.0`
- Finetune type: LoRA (default `mlx-lm` settings)
- Data: 100-example subset of Alpaca, as JSONL with `prompt` and `completion` fields
- Steps: 200 iterations, batch size 8, learning rate 2e-4
- Trainable params: 0.417% (4.6M of 1.1B)
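The data layout described above (one JSON object per line, each with `prompt` and `completion` fields) can be sketched as follows. The example records and the `data/train.jsonl` path are illustrative, not taken from this repo:

```python
import json
from pathlib import Path

# Hypothetical records in the prompt/completion JSONL format described above;
# the real dataset is a 100-example Alpaca subset.
examples = [
    {"prompt": "Give three tips for staying healthy.",
     "completion": "1. Eat a balanced diet. 2. Exercise regularly. 3. Sleep well."},
    {"prompt": "What is the capital of France?",
     "completion": "The capital of France is Paris."},
]

out_dir = Path("data")
out_dir.mkdir(exist_ok=True)
with open(out_dir / "train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")  # one JSON object per line
```

Point `--data` at the directory containing `train.jsonl` (not at the file itself).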
## Usage (MLX)
```python
from mlx_lm import load, generate

# Option A: load the base model with the local adapter folder (fastest)
model, tokenizer = load(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    adapter_path="out/tinyllama_lora_r16",
)

# Option B: download the adapter from Hugging Face first (e.g. with
# huggingface_hub.snapshot_download) and pass the local path as adapter_path.

# Sample prompt for a quick sanity check
text = generate(model, tokenizer, prompt="Give three tips for staying healthy.", max_tokens=100)
print(text)
```
## Training (mlx-lm CLI)
```shell
python -m mlx_lm lora \
  --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 \
  --train \
  --data /path/to/data_dir_with_train_jsonl \
  --batch-size 8 \
  --iters 200 \
  --learning-rate 2e-4 \
  --adapter-path out/tinyllama_lora_r16
```
## Notes
- Trained on Apple Silicon (36 GB unified memory) using MLX.
- For stronger adaptation: increase `--iters` (e.g., 1000–2000) and/or raise the LoRA rank via a config file (e.g., r=32).
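Raising the rank is done through a YAML config passed to the CLI with `--config`. A minimal sketch, assuming the `lora_parameters` keys supported by recent `mlx-lm` releases (key names have changed between versions, so check the docs for your installed release):

```yaml
# Hypothetical lora_config.yaml -- key names assume a recent mlx-lm release.
model: "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
train: true
data: "/path/to/data_dir_with_train_jsonl"
batch_size: 8
iters: 1000            # longer run for stronger adaptation
learning_rate: 2e-4
adapter_path: "out/tinyllama_lora_r32"
lora_parameters:
  rank: 32             # raised from the default r=16 used here
  dropout: 0.0
  scale: 20.0
```

Then run `python -m mlx_lm lora --config lora_config.yaml`.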