Llama 2: Open Foundation and Fine-Tuned Chat Models
Paper: arXiv:2307.09288
This model is an adapter trained with the QLoRA technique. You can access the Llama-2 paper by clicking here.
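Since this is a QLoRA adapter rather than a full model, it must be loaded on top of its Llama-2 base model. Below is a minimal sketch using the Hugging Face `transformers` and `peft` libraries; the repository ids are placeholders, not taken from this card, so substitute the actual base model and adapter ids.

```python
# Sketch: attach a QLoRA adapter to its Llama-2 base model with peft.
# Both repo ids below are hypothetical placeholders -- replace them with
# the real base-model id and this adapter's repo id.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"            # assumed base model
adapter_id = "your-username/your-qlora-adapter"  # placeholder adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# PeftModel.from_pretrained loads the LoRA weights and wraps the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
```

For inference you can optionally call `model.merge_and_unload()` to fold the adapter weights into the base model and drop the `peft` wrapper.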
| | Average | ARC (25-shot) | HellaSwag (10-shot) | MMLU (5-shot) | TruthfulQA (0-shot) |
|---|---|---|---|---|---|
| Scores | 67.3 | 66.38 | 84.51 | 62.75 | 55.57 |