πŸ€– TinyLlama-Alpaca-Expert

This model is a fine-tuned version of TinyLlama-1.1B on the Alpaca-Cleaned dataset. It is designed to follow instructions concisely and accurately, demonstrating efficient fine-tuning on a single GPU.

πŸš€ Model Details

  • Developed by: Abdelrahman Mohamed
  • Architecture: TinyLlama (1.1 Billion Parameters)
  • Training Technique: QLoRA (4-bit Quantization) + LoRA (Low-Rank Adaptation)
  • Fine-tuning Type: Supervised Fine-Tuning (SFT)
  • Language: English (Primary)

πŸ› οΈ Training Configuration

The model was trained on a Tesla T4 GPU (Google Colab) with the following hyperparameters:

  • Steps: 100
  • Batch Size: 4 (gradient accumulation = 4, effective batch size 16)
  • Learning Rate: 2e-4
  • Optimizer: Paged AdamW (32-bit)
  • Rank (r): 16
  • Alpha: 32
  • Precision: FP16 (Mixed Precision)
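The QLoRA + LoRA setup above can be sketched with the `bitsandbytes` and `peft` libraries. This is a minimal sketch matching the listed hyperparameters; the dropout value and target modules are assumptions (typical choices), not values recorded on this card:

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# QLoRA: load the frozen base model in 4-bit NF4 quantization
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,  # FP16 compute, as in the card
)

# LoRA adapter with the rank/alpha listed above
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,                     # assumption: dropout not stated on the card
    target_modules=["q_proj", "v_proj"],   # assumption: common attention-projection targets
    task_type="CAUSAL_LM",
)
```

Both configs would then be passed to `AutoModelForCausalLM.from_pretrained(..., quantization_config=bnb_config)` and `peft.get_peft_model(model, lora_config)` respectively.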

πŸ“ˆ Training Results

The training loss decreased substantially (roughly a 25% relative reduction), indicating the model successfully learned the instruction-response format:

  • Starting Loss: 1.5664
  • Final Loss: 1.1798

πŸ’» Usage Instructions

You can load and run the model with the following snippet:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "[Your_Username]/[Your_Model_Name]"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Alpaca-style instruction template
prompt = "### Instruction:\nGive me a tip for learning Python faster.\n\n### Response:\n"

# Move inputs to whichever device the model was placed on
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=150, temperature=0.7, do_sample=True)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
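The prompt string above follows the Alpaca template. A small helper (hypothetical, not shipped with the model) can build such prompts for arbitrary instructions, including the template's optional input field:

```python
def build_alpaca_prompt(instruction: str, input_text: str = "") -> str:
    """Format an instruction (and optional input) in the Alpaca style."""
    if input_text:
        return (
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

print(build_alpaca_prompt("Give me a tip for learning Python faster."))
```

Keeping prompts in exactly this format matters: the model was fine-tuned on it, so deviating from the `### Instruction:` / `### Response:` markers degrades output quality.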