hoangtung386/TinyLlama-1.1B-qlora

A fine-tuned version of TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T, trained using QLoRA.

Model Details

  • Base Model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
  • Method: QLoRA (Quantized Low-Rank Adaptation); see the loading sketch below
  • Dataset: HuggingFaceH4/ultrachat_200k
  • Training Samples: 5,000
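
QLoRA trains low-rank adapters on top of a base model whose weights are quantized to 4 bits. A minimal sketch of the 4-bit loading step, assuming the usual bitsandbytes NF4 recipe (the exact quantization settings are not recorded in this card):

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization with bf16 compute and double quantization --
# the standard QLoRA recipe (assumed; not recorded in this card).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
    quantization_config=bnb_config,
    device_map="auto",
)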

Training Configuration

LoRA Config

r: 64
lora_alpha: 32
lora_dropout: 0.1
target_modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
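
These hyperparameters map directly onto peft's LoraConfig; a sketch (task_type is assumed, as it is not listed above):

from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=64,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",  # assumed; the standard value for causal LM tuning
)

# Wrap the 4-bit base model (from the sketch above) with trainable adapters.
peft_model = get_peft_model(base_model, lora_config)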

Training Args

learning_rate: 0.0002
epochs: 3
batch_size: 2
gradient_accumulation: 4
optimizer: paged AdamW
scheduler: cosine
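
These values correspond roughly to transformers' TrainingArguments as sketched below; the output path is illustrative, and the optimizer string assumes the 32-bit paged AdamW variant:

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="tinyllama-1.1b-qlora",  # illustrative path
    learning_rate=2e-4,
    num_train_epochs=3,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,      # effective batch size: 2 * 4 = 8
    optim="paged_adamw_32bit",          # assumed variant of paged AdamW
    lr_scheduler_type="cosine",
)

In a typical QLoRA run, these arguments, the adapter-wrapped model, and the dataset would then be handed to trl's SFTTrainer.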

Training Results

Metric         Value
Loss           1.2668
Runtime        7,698.13 s (~2.14 h)
Samples/sec    1.95
Steps          N/A

Usage

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned model and its tokenizer from the Hub.
tokenizer = AutoTokenizer.from_pretrained("hoangtung386/TinyLlama-1.1B-qlora")
model = AutoModelForCausalLM.from_pretrained("hoangtung386/TinyLlama-1.1B-qlora")

# TinyLlama chat format: user turn, end-of-sequence token, then the
# assistant turn the model should complete.
prompt = "<|user|>\nWhat is AI?</s>\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
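
If the repository hosts only the LoRA adapter rather than merged weights, the adapter can instead be applied to the base model explicitly (a sketch, assuming a PEFT-format adapter):

from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"
)
model = PeftModel.from_pretrained(base, "hoangtung386/TinyLlama-1.1B-qlora")
# Optionally fold the adapter into the base weights for adapter-free inference.
model = model.merge_and_unload()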

Framework Versions

  • Transformers: 4.41.2
  • PyTorch: 2.5.1+cu124
  • PEFT: 0.11.1
  • TRL: 0.9.4