# Adapter Checkpoint – LoRA (Llama-2-7b)
This repository contains a LoRA adapter checkpoint trained on top of
`meta-llama/Llama-2-7b-hf` for chat and instruction-following tasks.
## Repository Contents
| Path | Description |
|---|---|
| `adapter_config.json` | PEFT / LoRA configuration (rank, alpha, target modules, …) |
| `adapter_model.bin` | Serialised adapter weights |
| `examples/chat/zero_shot/prompt.json` | Zero-shot chat prompt template |
| `examples/chat/few_shot/prompt.json` | Few-shot chat prompt template |
## Quick Start
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "dongbobo/adapter-checkpoint-lora"

# Resolve the base model from the adapter's PEFT config.
config = PeftConfig.from_pretrained(repo_id)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
base = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)

# Attach the LoRA adapter on top of the base model.
model = PeftModel.from_pretrained(base, repo_id)
model.eval()

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Prompt Templates
Two ready-to-use prompt templates are provided for chat inference.
### Zero-Shot Template
No in-context examples – the model relies purely on its instruction tuning.
File: `examples/chat/zero_shot/prompt.json`

```json
{
  "template": {
    "system": "You are a helpful, respectful, and honest assistant ...",
    "user": "{{user_message}}",
    "assistant": ""
  }
}
```
### Few-Shot Template
Three canonical Q&A demonstrations are prepended to steer the model's style.
File: `examples/chat/few_shot/prompt.json`

```json
{
  "template": {
    "system": "You are a helpful, respectful, and honest assistant ...",
    "shots": ["..."],
    "user": "{{user_message}}",
    "assistant": ""
  }
}
```
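The templates are plain JSON; turning one into an actual model input is left to the caller. Below is a minimal sketch of such a renderer. The `render_prompt` helper is an assumption (it is not part of this repository), and it targets Llama-2's `[INST]` / `<<SYS>>` chat convention:

```python
# Hypothetical helper: render a template dict (shaped like prompt.json)
# into a Llama-2-style chat prompt. Few-shot demonstrations, if present
# under "shots", are prepended verbatim before the user message.
def render_prompt(template: dict, user_message: str) -> str:
    system = template["system"]
    shots = "\n".join(template.get("shots", []))
    body = f"{shots}\n{user_message}".strip() if shots else user_message
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{body} [/INST]"

template = {
    "system": "You are a helpful, respectful, and honest assistant.",
    "user": "{{user_message}}",
    "assistant": "",
}
print(render_prompt(template, "Hello, how are you?"))
```

The rendered string can then be passed straight to `tokenizer(...)` in place of the raw user message shown in Quick Start.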
## Adapter Configuration
Key hyperparameters (see `adapter_config.json` for the full schema):
| Parameter | Value |
|---|---|
| `peft_type` | `LORA` |
| `task_type` | `CAUSAL_LM` |
| `r` | 8 |
| `lora_alpha` | 16 |
| `lora_dropout` | 0.05 |
| `target_modules` | `q_proj`, `v_proj` |
| `bias` | `none` |
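With `r=8` and `lora_alpha=16`, the adapter update added to each targeted projection is `(lora_alpha / r) * B @ A`, i.e. scaled by 2. A pure-Python sketch of that arithmetic (illustrative tiny shapes, not the actual `q_proj` / `v_proj` weights):

```python
# Illustrative LoRA update: W' = W + (lora_alpha / r) * B @ A.
r, lora_alpha = 8, 16
scaling = lora_alpha / r  # scale applied to the low-rank product

def matmul(B, A):
    """Plain-Python matrix multiply for the (d_out x r) @ (r x d_in) product."""
    return [[sum(B[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))] for i in range(len(B))]

d_out, d_in = 4, 4
A = [[0.01] * d_in for _ in range(r)]  # (r x d_in), initialised with small values
B = [[0.0] * r for _ in range(d_out)]  # (d_out x r), initialised to zeros
delta = [[scaling * v for v in row] for row in matmul(B, A)]

# Because B starts at zero, the adapter contributes nothing before training:
print(delta[0][0])  # -> 0.0
```

This zero-initialisation of `B` is standard LoRA practice: the adapted model starts out behaving exactly like the base model.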
## License

Released under the Apache 2.0 license.