# TinyLlama-Alpaca-Expert
This model is a fine-tuned version of TinyLlama-1.1B on the Alpaca-Cleaned dataset. It is designed to follow instructions concisely and accurately, demonstrating efficient fine-tuning on a single GPU.
## Model Details
- Developed by: Abdelrahman Mohamed
- Architecture: TinyLlama (1.1 Billion Parameters)
- Training Technique: QLoRA (4-bit Quantization) + LoRA (Low-Rank Adaptation)
- Fine-tuning Type: Supervised Fine-Tuning (SFT)
- Language: English (Primary)
## Training Configuration
The model was trained on a Tesla T4 GPU (Google Colab) with the following hyperparameters:
- Steps: 100
- Batch Size: 4 (with Gradient Accumulation = 4)
- Learning Rate: 2e-4
- Optimizer: Paged AdamW (32-bit)
- Rank (r): 16
- Alpha: 32
- Precision: FP16 (Mixed Precision)
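As a back-of-the-envelope illustration of what these hyperparameters imply (derived arithmetic, not output from the training script): the effective batch size, the LoRA scaling factor, and the number of trainable parameters LoRA adds to one projection layer can all be computed directly. The 2048-dimensional projection below assumes TinyLlama's hidden size.

```python
# Numbers implied by the hyperparameters above (illustrative arithmetic only).

batch_size = 4
grad_accum = 4
effective_batch = batch_size * grad_accum  # samples seen per optimizer step -> 16

lora_r = 16
lora_alpha = 32
lora_scaling = lora_alpha / lora_r  # LoRA updates are scaled by alpha / r -> 2.0

def lora_params(in_features: int, out_features: int, r: int) -> int:
    """Trainable parameters LoRA adds to one linear layer: A (in x r) + B (r x out)."""
    return in_features * r + r * out_features

# Example: one 2048 x 2048 projection (TinyLlama's hidden size is 2048)
full_params = 2048 * 2048
adapter_params = lora_params(2048, 2048, lora_r)
print(effective_batch, lora_scaling, adapter_params, adapter_params / full_params)
```

With r=16, the adapter trains only about 1.6% of the weights of each full projection, which is what makes fine-tuning feasible on a single T4.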
## Training Results
Training loss decreased substantially, indicating that the model learned the instruction-response format:
- Starting Loss: 1.5664
- Final Loss: 1.1798
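For context, that is roughly a 25% relative reduction in loss over 100 steps. The arithmetic:

```python
# Relative loss reduction from the reported start and final loss values
start_loss = 1.5664
final_loss = 1.1798
relative_drop = (start_loss - final_loss) / start_loss
print(f"{relative_drop:.1%}")  # -> 24.7%
```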
## Usage Instructions
Load and run the model with `transformers`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "[Your_Username]/[Your_Model_Name]"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Alpaca-style instruction template
prompt = "### Instruction:\nGive me a tip for learning Python faster.\n\n### Response:\n"

# Move inputs to the same device the model was placed on by device_map
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=150, temperature=0.7, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
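The prompt above follows the Alpaca template. A small helper for building such prompts (a convenience sketch, not part of the released model) that also covers the optional `### Input:` block used by Alpaca examples that carry context:

```python
def build_alpaca_prompt(instruction: str, input_text: str = "") -> str:
    """Format an instruction (and optional context) in the Alpaca style."""
    if input_text:
        return (
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

prompt = build_alpaca_prompt("Give me a tip for learning Python faster.")
```

Keeping the prompt format identical to the one used during fine-tuning matters: the model was trained to complete text after `### Response:\n`, so deviating from the template tends to degrade output quality.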
## Model Tree
- Model: abdoelmon/tinyllama-alpaca-full-elmon
- Base model: TinyLlama/TinyLlama-1.1B-Chat-v1.0