Introduction

This model is based on "LLaMA 2-7b". We fine-tuned it on the "Alpaca-GPT-4" dataset using LoRA (Low-Rank Adaptation), computing the loss only on the response part. The LoRA weights were then merged back into the base model.
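The card does not include the merge code itself; as a minimal sketch, merging a LoRA adapter folds the low-rank update (alpha / r) · B·A into the frozen weight matrix, so inference needs no extra parameters. The shapes and scaling below follow the standard LoRA formulation, not this repository's exact training configuration.

```python
import numpy as np

def merge_lora(W, A, B, alpha, r):
    """Return the merged weight W + (alpha / r) * B @ A.

    W: (d_out, d_in) frozen base weight
    A: (r, d_in) LoRA down-projection
    B: (d_out, r) LoRA up-projection (initialized to zero in LoRA)
    """
    return W + (alpha / r) * (B @ A)

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 16, 4, 8
W = rng.standard_normal((d_out, d_in))
A = rng.standard_normal((r, d_in))
B = np.zeros((d_out, r))  # B starts at zero, so the initial update is a no-op

merged = merge_lora(W, A, B, alpha, r)
print(np.allclose(merged, W))  # with B = 0, merging leaves W unchanged
```

In practice this is what `peft`'s `merge_and_unload()` does for each adapted layer before the weights are saved as a plain model.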

Details

Used Datasets

  • vicgalle/alpaca-gpt4
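"Training only the response part" is usually implemented by masking the prompt tokens out of the loss. A minimal sketch, assuming the common convention of label `-100` (the `ignore_index` of PyTorch's cross-entropy loss); the token ids below are illustrative placeholders, not real tokenizer output:

```python
# Prompt tokens get the ignore label -100, so the causal-LM loss is
# computed only on the response tokens.
IGNORE_INDEX = -100

def build_labels(prompt_ids, response_ids):
    """Concatenate prompt and response; mask prompt positions in the labels."""
    input_ids = list(prompt_ids) + list(response_ids)
    labels = [IGNORE_INDEX] * len(prompt_ids) + list(response_ids)
    return input_ids, labels

prompt_ids = [101, 2023, 2003]    # e.g. tokenized instruction + input
response_ids = [1996, 3437, 102]  # e.g. tokenized GPT-4 response
input_ids, labels = build_labels(prompt_ids, response_ids)
print(labels)  # [-100, -100, -100, 1996, 3437, 102]
```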
  • Model size: 7B params
  • Tensor type: BF16
  • Format: Safetensors

Model: Seungyoun/llama-2-7b-alpaca-gpt4