Update dataset card with paper info and metadata

#1
by nielsr
Files changed (1)
  1. README.md +30 -706
README.md CHANGED
@@ -1,722 +1,46 @@
1
- <div align="center">
2
-
3
-
4
- # ⚡ LitGPT
5
-
6
- **20+ high-performance LLMs with recipes to pretrain, finetune, and deploy at scale.**
7
-
8
- <pre>
9
- ✅ From scratch implementations ✅ No abstractions ✅ Beginner friendly
10
- ✅ Flash attention ✅ FSDP ✅ LoRA, QLoRA, Adapter
11
- ✅ Reduce GPU memory (fp4/8/16/32) ✅ 1-1000+ GPUs/TPUs ✅ 20+ LLMs
12
- </pre>
13
-
14
-
15
  ---
16
-
17
-
18
- ![PyPI - Python Version](https://img.shields.io/pypi/pyversions/pytorch-lightning)
19
- ![cpu-tests](https://github.com/lightning-AI/lit-stablelm/actions/workflows/cpu-tests.yml/badge.svg) [![license](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/Lightning-AI/lit-stablelm/blob/master/LICENSE) [![Discord](https://img.shields.io/discord/1077906959069626439)](https://discord.gg/VptPCZkGNa)
20
-
21
- <p align="center">
22
- <a href="#quick-start">Quick start</a> •
23
- <a href="#choose-from-20-llms">Models</a> •
24
- <a href="#finetune-an-llm">Finetune</a> •
25
- <a href="#deploy-an-llm">Deploy</a> •
26
- <a href="#all-workflows">All workflows</a> •
27
- <a href="#state-of-the-art-features">Features</a> •
28
- <a href="#training-recipes">Recipes (YAML)</a> •
29
- <a href="https://lightning.ai/">Lightning AI</a> •
30
- <a href="#tutorials">Tutorials</a>
31
- </p>
32
-
33
- &nbsp;
34
-
35
- <a target="_blank" href="https://lightning.ai/lightning-ai/studios/litgpt-quick-start">
36
- <img src="https://pl-bolts-doc-images.s3.us-east-2.amazonaws.com/app-2/get-started-badge.svg" height="36px" alt="Get started"/>
37
- </a>
38
-
39
- &nbsp;
40
-
41
- </div>
42
-
43
- # Use, finetune, pretrain, and deploy LLMs Lightning fast ⚡⚡
44
- Every LLM is implemented from scratch with **no abstractions** and **full control**, making them blazing fast, minimal, and performant at enterprise scale.
45
-
46
- ✅ **Enterprise ready -** Apache 2.0 for unlimited enterprise use.</br>
47
- ✅ **Developer friendly -** Easy debugging with no abstraction layers and single file implementations.</br>
48
- ✅ **Optimized performance -** Models designed to maximize performance, reduce costs, and speed up training.</br>
49
- ✅ **Proven recipes -** Highly-optimized training/finetuning recipes tested at enterprise scale.</br>
50
-
51
- &nbsp;
52
-
53
- # Quick start
54
- Install LitGPT
55
- ```
56
- pip install 'litgpt[extra]'
57
- ```
58
-
59
- Load and use any of the [20+ LLMs](#choose-from-20-llms):
60
- ```python
61
- from litgpt import LLM
62
-
63
- llm = LLM.load("microsoft/phi-2")
64
- text = llm.generate("Fix the spelling: Every fall, the family goes to the mountains.")
65
- print(text)
66
- # Corrected Sentence: Every fall, the family goes to the mountains.
67
- ```
68
-
69
- &nbsp;
70
-
71
- ✅ Optimized for fast inference</br>
72
- ✅ Quantization</br>
73
- ✅ Runs on low-memory GPUs</br>
74
- ✅ No layers of internal abstractions</br>
75
- ✅ Optimized for production scale</br>
76
-
77
- <details>
78
- <summary>Advanced install options</summary>
79
-
80
- Install from source:
81
-
82
- ```bash
83
- git clone https://github.com/Lightning-AI/litgpt
84
- cd litgpt
85
- pip install -e '.[all]'
86
- ```
87
- </details>
88
-
89
- [Explore the full Python API docs](tutorials/python-api.md).
90
-
91
- &nbsp;
92
-
93
- ---
94
- # Choose from 20+ LLMs
95
- Every model is written from scratch to maximize performance and remove layers of abstraction:
96
-
97
- | Model | Model size | Author | Reference |
98
- |----|----|----|----|
99
- | Llama 3, 3.1, 3.2, 3.3 | 1B, 3B, 8B, 70B, 405B | Meta AI | [Meta AI 2024](https://github.com/meta-llama/llama3) |
100
- | Code Llama | 7B, 13B, 34B, 70B | Meta AI | [Rozière et al. 2023](https://arxiv.org/abs/2308.12950) |
101
- | CodeGemma | 7B | Google | [Google Team, Google Deepmind](https://ai.google.dev/gemma/docs/codegemma) |
102
- | Gemma 2 | 2B, 9B, 27B | Google | [Google Team, Google Deepmind](https://storage.googleapis.com/deepmind-media/gemma/gemma-2-report.pdf) |
103
- | Phi 4 | 14B | Microsoft Research | [Abdin et al. 2024](https://arxiv.org/abs/2412.08905) |
104
- | Qwen2.5 | 0.5B, 1.5B, 3B, 7B, 14B, 32B, 72B | Alibaba Group | [Qwen Team 2024](https://qwenlm.github.io/blog/qwen2.5/) |
105
- | Qwen2.5 Coder | 0.5B, 1.5B, 3B, 7B, 14B, 32B | Alibaba Group | [Hui, Binyuan et al. 2024](https://arxiv.org/abs/2409.12186) |
106
- | R1 Distill Llama | 8B, 70B | DeepSeek AI | [DeepSeek AI 2025](https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf) |
107
- | ... | ... | ... | ... |
108
-
109
- <details>
110
- <summary>See full list of 20+ LLMs</summary>
111
-
112
- &nbsp;
113
-
114
- #### All models
115
-
116
- | Model | Model size | Author | Reference |
117
- |----|----|----|----|
118
- | CodeGemma | 7B | Google | [Google Team, Google Deepmind](https://ai.google.dev/gemma/docs/codegemma) |
119
- | Code Llama | 7B, 13B, 34B, 70B | Meta AI | [Rozière et al. 2023](https://arxiv.org/abs/2308.12950) |
120
- | Falcon | 7B, 40B, 180B | TII UAE | [TII 2023](https://falconllm.tii.ae) |
121
- | Falcon 3 | 1B, 3B, 7B, 10B | TII UAE | [TII 2024](https://huggingface.co/blog/falcon3) |
122
- | FreeWilly2 (Stable Beluga 2) | 70B | Stability AI | [Stability AI 2023](https://stability.ai/blog/stable-beluga-large-instruction-fine-tuned-models) |
123
- | Function Calling Llama 2 | 7B | Trelis | [Trelis et al. 2023](https://huggingface.co/Trelis/Llama-2-7b-chat-hf-function-calling-v2) |
124
- | Gemma | 2B, 7B | Google | [Google Team, Google Deepmind](https://storage.googleapis.com/deepmind-media/gemma/gemma-report.pdf) |
125
- | Gemma 2 | 9B, 27B | Google | [Google Team, Google Deepmind](https://storage.googleapis.com/deepmind-media/gemma/gemma-2-report.pdf) |
126
- | Gemma 3 | 1B, 4B, 12B, 27B | Google | [Google Team, Google Deepmind](https://arxiv.org/pdf/2503.19786) |
127
- | Llama 2 | 7B, 13B, 70B | Meta AI | [Touvron et al. 2023](https://arxiv.org/abs/2307.09288) |
128
- | Llama 3.1 | 8B, 70B | Meta AI | [Meta AI 2024](https://github.com/meta-llama/llama3) |
129
- | Llama 3.2 | 1B, 3B | Meta AI | [Meta AI 2024](https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/) |
130
- | Llama 3.3 | 70B | Meta AI | [Meta AI 2024](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) |
131
- | Mathstral | 7B | Mistral AI | [Mistral AI 2024](https://mistral.ai/news/mathstral/) |
132
- | MicroLlama | 300M | Ken Wang | [MicroLlama repo](https://github.com/keeeeenw/MicroLlama) |
133
- | Mixtral MoE | 8x7B | Mistral AI | [Mistral AI 2023](https://mistral.ai/news/mixtral-of-experts/) |
134
- | Mistral | 7B, 123B | Mistral AI | [Mistral AI 2023](https://mistral.ai/news/announcing-mistral-7b/) |
135
- | Mixtral MoE | 8x22B | Mistral AI | [Mistral AI 2024](https://mistral.ai/news/mixtral-8x22b/) |
136
- | OLMo | 1B, 7B | Allen Institute for AI (AI2) | [Groeneveld et al. 2024](https://aclanthology.org/2024.acl-long.841/) |
137
- | OpenLLaMA | 3B, 7B, 13B | OpenLM Research | [Geng & Liu 2023](https://github.com/openlm-research/open_llama) |
138
- | Phi 1.5 & 2 | 1.3B, 2.7B | Microsoft Research | [Li et al. 2023](https://arxiv.org/abs/2309.05463) |
139
- | Phi 3 | 3.8B | Microsoft Research | [Abdin et al. 2024](https://arxiv.org/abs/2404.14219) |
140
- | Phi 4 | 14B | Microsoft Research | [Abdin et al. 2024](https://arxiv.org/abs/2412.08905) |
141
- | Phi 4 Mini Instruct | 3.8B | Microsoft Research | [Microsoft 2025](https://arxiv.org/abs/2503.01743) |
142
- | Phi 4 Mini Reasoning | 3.8B | Microsoft Research | [Xu, Peng et al. 2025](https://arxiv.org/abs/2504.21233) |
143
- | Phi 4 Reasoning | 3.8B | Microsoft Research | [Abdin et al. 2025](https://arxiv.org/abs/2504.21318) |
144
- | Phi 4 Reasoning Plus | 3.8B | Microsoft Research | [Abdin et al. 2025](https://arxiv.org/abs/2504.21318) |
145
- | Platypus | 7B, 13B, 70B | Lee et al. | [Lee, Hunter, and Ruiz 2023](https://arxiv.org/abs/2308.07317) |
146
- | Pythia | {14,31,70,160,410}M, {1,1.4,2.8,6.9,12}B | EleutherAI | [Biderman et al. 2023](https://arxiv.org/abs/2304.01373) |
147
- | Qwen2.5 | 0.5B, 1.5B, 3B, 7B, 14B, 32B, 72B | Alibaba Group | [Qwen Team 2024](https://qwenlm.github.io/blog/qwen2.5/) |
148
- | Qwen2.5 Coder | 0.5B, 1.5B, 3B, 7B, 14B, 32B | Alibaba Group | [Hui, Binyuan et al. 2024](https://arxiv.org/abs/2409.12186) |
149
- | Qwen2.5 1M (Long Context) | 7B, 14B | Alibaba Group | [Qwen Team 2025](https://qwenlm.github.io/blog/qwen2.5-1m/) |
150
- | Qwen2.5 Math | 1.5B, 7B, 72B | Alibaba Group | [An, Yang et al. 2024](https://arxiv.org/abs/2409.12122) |
151
- | QwQ | 32B | Alibaba Group | [Qwen Team 2025](https://qwenlm.github.io/blog/qwq-32b/) |
152
- | QwQ-Preview | 32B | Alibaba Group | [Qwen Team 2024](https://qwenlm.github.io/blog/qwq-32b-preview/) |
153
- | Qwen3 | 0.6B, 1.7B, 4B, 8B, 14B, 32B | Alibaba Group | [Qwen Team 2025](https://arxiv.org/abs/2505.09388/) |
154
- | Qwen3 MoE | 30B, 235B | Alibaba Group | [Qwen Team 2025](https://arxiv.org/abs/2505.09388/) |
155
- | R1 Distill Llama | 8B, 70B | DeepSeek AI | [DeepSeek AI 2025](https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf) |
156
- | SmolLM2 | 135M, 360M, 1.7B | Hugging Face | [Hugging Face 2024](https://github.com/huggingface/smollm) |
157
- | Salamandra | 2B, 7B | Barcelona Supercomputing Centre | [BSC-LTC 2024](https://github.com/BSC-LTC/salamandra) |
158
- | StableCode | 3B | Stability AI | [Stability AI 2023](https://stability.ai/blog/stablecode-llm-generative-ai-coding) |
159
- | StableLM | 3B, 7B | Stability AI | [Stability AI 2023](https://github.com/Stability-AI/StableLM) |
160
- | StableLM Zephyr | 3B | Stability AI | [Stability AI 2023](https://stability.ai/blog/stablecode-llm-generative-ai-coding) |
161
- | TinyLlama | 1.1B | Zhang et al. | [Zhang et al. 2023](https://github.com/jzhang38/TinyLlama) |
162
-
163
-
164
- **Tip**: You can list all available models by running the `litgpt download list` command.
165
-
166
-
167
- </details>
168
-
169
- &nbsp;
170
-
171
- ---
172
-
173
- # Workflows
174
-
175
- <p align="center">
176
- <a href="#finetune-an-llm">Finetune</a> •
177
- <a href="#pretrain-an-llm">Pretrain</a> •
178
- <a href="#continue-pretraining-an-llm">Continued pretraining</a> •
179
- <a href="#evaluate-an-llm">Evaluate</a> •
180
- <a href="#deploy-an-llm">Deploy</a> •
181
- <a href="#test-an-llm">Test</a>
182
- </p>
183
-
184
- &nbsp;
185
-
186
- Use the command line interface to run advanced workflows such as pretraining or finetuning on your own data.
187
-
188
-
189
- ## All workflows
190
- After installing LitGPT, select the model and workflow to run (finetune, pretrain, evaluate, deploy, etc...):
191
-
192
- ```bash
193
- # litgpt [action] [model]
194
- litgpt serve meta-llama/Llama-3.2-3B-Instruct
195
- litgpt finetune meta-llama/Llama-3.2-3B-Instruct
196
- litgpt pretrain meta-llama/Llama-3.2-3B-Instruct
197
- litgpt chat meta-llama/Llama-3.2-3B-Instruct
198
- litgpt evaluate meta-llama/Llama-3.2-3B-Instruct
199
- ```
200
-
201
- &nbsp;
202
-
203
- ----
204
-
205
- ## Finetune an LLM
206
-
207
- <div align="center">
208
- <a target="_blank" href="https://lightning.ai/lightning-ai/studios/litgpt-finetune">
209
- <img src="https://pl-bolts-doc-images.s3.us-east-2.amazonaws.com/app-2/run-on-studio.svg" height="36px" alt="Run on Studios"/>
210
- </a>
211
- </div>
212
-
213
- &nbsp;
214
-
215
- Finetuning is the process of taking a pretrained AI model and further training it on a smaller, specialized dataset tailored to a specific task or application.
216
-
217
-
218
- &nbsp;
219
-
220
- ```bash
221
- # 0) setup your dataset
222
- curl -L https://huggingface.co/datasets/ksaw008/finance_alpaca/resolve/main/finance_alpaca.json -o my_custom_dataset.json
223
-
224
- # 1) Finetune a model (auto downloads weights)
225
- litgpt finetune microsoft/phi-2 \
226
- --data JSON \
227
- --data.json_path my_custom_dataset.json \
228
- --data.val_split_fraction 0.1 \
229
- --out_dir out/custom-model
230
-
231
- # 2) Test the model
232
- litgpt chat out/custom-model/final
233
-
234
- # 3) Deploy the model
235
- litgpt serve out/custom-model/final
236
- ```
237
-
238
- [Read the full finetuning docs](tutorials/finetune.md)
239
-
240
- &nbsp;
241
-
242
- ----
243
-
244
- ## Deploy an LLM
245
-
246
- <div align="center">
247
- <a target="_blank" href="https://lightning.ai/lightning-ai/studios/litgpt-serve">
248
- <img src="https://pl-bolts-doc-images.s3.us-east-2.amazonaws.com/app-2/deploy-on-studios.svg" height="36px" alt="Deploy on Studios"/>
249
- </a>
250
- </div>
251
-
252
- &nbsp;
253
-
254
- Deploy a pretrained or finetuned LLM to use it in real-world applications. Deploying automatically sets up a web server that can be accessed by a website or app.
255
-
256
- ```bash
257
- # deploy an out-of-the-box LLM
258
- litgpt serve microsoft/phi-2
259
-
260
- # deploy your own trained model
261
- litgpt serve path/to/microsoft/phi-2/checkpoint
262
- ```
263
-
264
- <details>
265
- <summary>Show code to query server:</summary>
266
-
267
- &nbsp;
268
-
269
- Test the server in a separate terminal and integrate the model API into your AI product:
270
- ```python
271
- # 3) Use the server (in a separate Python session)
272
- import requests, json
273
- response = requests.post(
274
- "http://127.0.0.1:8000/predict",
275
- json={"prompt": "Fix typos in the following sentence: Example input"}
276
- )
277
- print(response.json()["output"])
278
- ```
279
- </details>
280
-
281
- [Read the full deploy docs](tutorials/deploy.md).
282
-
283
- &nbsp;
284
-
285
- ----
286
-
287
- ## Evaluate an LLM
288
- Evaluate an LLM to test how well it understands and generates text across a range of tasks. Simply put, we can measure how well it performs on benchmarks such as college-level chemistry or coding (MMLU, TruthfulQA, etc.).
289
-
290
- ```bash
291
- litgpt evaluate microsoft/phi-2 --tasks 'truthfulqa_mc2,mmlu'
292
- ```
293
-
294
- [Read the full evaluation docs](tutorials/evaluation.md).
295
-
296
- &nbsp;
297
-
298
- ----
299
-
300
- ## Test an LLM
301
-
302
- <div align="center">
303
- <a target="_blank" href="https://lightning.ai/lightning-ai/studios/litgpt-chat">
304
- <img src="https://pl-bolts-doc-images.s3.us-east-2.amazonaws.com/app-2/run-on-studio.svg" height="36px" alt="Run on Studios"/>
305
- </a>
306
- </div>
307
-
308
- &nbsp;
309
-
310
- Test how well the model works via an interactive chat. Use the `chat` command to chat, extract embeddings, etc...
311
-
312
- Here's an example showing how to use the Phi-2 LLM:
313
- ```bash
314
- litgpt chat microsoft/phi-2
315
-
316
- >> Prompt: What do Llamas eat?
317
- ```
318
-
319
- <details>
320
- <summary>Full code:</summary>
321
-
322
- &nbsp;
323
-
324
- ```bash
325
- # 1) List all supported LLMs
326
- litgpt download list
327
-
328
- # 2) Use a model (auto downloads weights)
329
- litgpt chat microsoft/phi-2
330
-
331
- >> Prompt: What do Llamas eat?
332
- ```
333
-
334
- The download of certain models requires an additional access token. You can read more about this in the [download](tutorials/download_model_weights.md#specific-models-and-access-tokens) documentation.
335
-
336
- </details>
337
-
338
- [Read the full chat docs](tutorials/inference.md).
339
-
340
- &nbsp;
341
-
342
- ----
343
-
344
- ## Pretrain an LLM
345
-
346
- <div align="center">
347
- <a target="_blank" href="https://lightning.ai/lightning-ai/studios/litgpt-pretrain">
348
- <img src="https://pl-bolts-doc-images.s3.us-east-2.amazonaws.com/app-2/run-on-studio.svg" height="36px" alt="Run on Studios"/>
349
- </a>
350
- </div>
351
-
352
- &nbsp;
353
-
354
- Pretraining is the process of teaching an AI model by exposing it to a large amount of data before it is fine-tuned for specific tasks.
355
-
356
- <details>
357
- <summary>Show code:</summary>
358
-
359
- &nbsp;
360
-
361
- ```bash
362
- mkdir -p custom_texts
363
- curl https://www.gutenberg.org/cache/epub/24440/pg24440.txt --output custom_texts/book1.txt
364
- curl https://www.gutenberg.org/cache/epub/26393/pg26393.txt --output custom_texts/book2.txt
365
-
366
- # 1) Download a tokenizer
367
- litgpt download EleutherAI/pythia-160m \
368
- --tokenizer_only True
369
-
370
- # 2) Pretrain the model
371
- litgpt pretrain EleutherAI/pythia-160m \
372
- --tokenizer_dir EleutherAI/pythia-160m \
373
- --data TextFiles \
374
- --data.train_data_path "custom_texts/" \
375
- --train.max_tokens 10_000_000 \
376
- --out_dir out/custom-model
377
-
378
- # 3) Test the model
379
- litgpt chat out/custom-model/final
380
- ```
381
- </details>
382
-
383
- [Read the full pretraining docs](tutorials/pretrain.md)
384
-
385
- &nbsp;
386
-
387
- ----
388
-
389
- ## Continue pretraining an LLM
390
-
391
- <div align="center">
392
- <a target="_blank" href="https://lightning.ai/lightning-ai/studios/litgpt-continue-pretraining">
393
- <img src="https://pl-bolts-doc-images.s3.us-east-2.amazonaws.com/app-2/run-on-studio.svg" height="36px" alt="Run on Studios"/>
394
- </a>
395
- </div>
396
-
397
- &nbsp;
398
-
399
- Continued pretraining is a form of finetuning that specializes an already pretrained model by training it further on custom data:
400
-
401
- <details>
402
- <summary>Show code:</summary>
403
-
404
- &nbsp;
405
-
406
- ```bash
407
- mkdir -p custom_texts
408
- curl https://www.gutenberg.org/cache/epub/24440/pg24440.txt --output custom_texts/book1.txt
409
- curl https://www.gutenberg.org/cache/epub/26393/pg26393.txt --output custom_texts/book2.txt
410
-
411
- # 1) Continue pretraining a model (auto downloads weights)
412
- litgpt pretrain EleutherAI/pythia-160m \
413
- --tokenizer_dir EleutherAI/pythia-160m \
414
- --initial_checkpoint_dir EleutherAI/pythia-160m \
415
- --data TextFiles \
416
- --data.train_data_path "custom_texts/" \
417
- --train.max_tokens 10_000_000 \
418
- --out_dir out/custom-model
419
-
420
- # 2) Test the model
421
- litgpt chat out/custom-model/final
422
- ```
423
-
424
- </details>
425
-
426
- [Read the full continued pretraining docs](tutorials/pretrain.md#continued-pretraining-on-custom-data)
427
-
428
- &nbsp;
429
-
430
- ----
431
-
432
- # State-of-the-art features
433
-
434
- ✅ State-of-the-art optimizations: Flash Attention v2, multi-GPU support via fully-sharded data parallelism, [optional CPU offloading](tutorials/oom.md#do-sharding-across-multiple-gpus), and [TPU and XLA support](extensions/xla).</br>
435
- ✅ [Pretrain](tutorials/pretrain.md), [finetune](tutorials/finetune.md), and [deploy](tutorials/inference.md)</br>
436
- ✅ Reduce compute requirements with low-precision settings: FP16, BF16, and FP16/FP32 mixed.</br>
437
- ✅ Lower memory requirements with [quantization](tutorials/quantize.md): 4-bit floats, 8-bit integers, and double quantization.</br>
438
- ✅ [Configuration files](config_hub) for great out-of-the-box performance.</br>
439
- ✅ Parameter-efficient finetuning: [LoRA](tutorials/finetune_lora.md), [QLoRA](tutorials/finetune_lora.md), [Adapter](tutorials/finetune_adapter.md), and [Adapter v2](tutorials/finetune_adapter.md).</br>
440
- ✅ [Exporting](tutorials/convert_lit_models.md) to other popular model weight formats.</br>
441
- ✅ Many popular datasets for [pretraining](tutorials/pretrain.md) and [finetuning](tutorials/prepare_dataset.md), and [support for custom datasets](tutorials/prepare_dataset.md#preparing-custom-datasets-for-instruction-finetuning).</br>
442
- ✅ Readable and easy-to-modify code to experiment with the latest research ideas.</br>
443
-
444
- &nbsp;
445
-
446
  ---
447
 
448
- # Training recipes
449
-
450
- LitGPT comes with validated recipes (YAML configs) to train models under different conditions. We've generated these recipes based on the parameters we found to perform the best for different training conditions.
451
-
452
- Browse all training recipes [here](config_hub).
453
-
454
- ### Example
455
-
456
- ```bash
457
- litgpt finetune \
458
- --config https://raw.githubusercontent.com/Lightning-AI/litgpt/main/config_hub/finetune/llama-2-7b/lora.yaml
459
- ```
460
- <details>
461
- <summary>✅ Use configs to customize training</summary>
462
-
463
- Configs let you customize training for all granular parameters like:
464
-
465
- ```yaml
466
- # The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
467
- checkpoint_dir: checkpoints/meta-llama/Llama-2-7b-hf
468
-
469
- # Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
470
- out_dir: out/finetune/qlora-llama2-7b
471
-
472
- # The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
473
- precision: bf16-true
474
-
475
- ...
476
- ```
477
- </details>
478
-
479
- <details>
480
- <summary>✅ Example: LoRA finetuning config</summary>
481
-
482
- &nbsp;
483
-
484
- ```yaml
485
- # The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
486
- checkpoint_dir: checkpoints/meta-llama/Llama-2-7b-hf
487
-
488
- # Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
489
- out_dir: out/finetune/qlora-llama2-7b
490
-
491
- # The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
492
- precision: bf16-true
493
-
494
- # If set, quantize the model with this algorithm. See ``tutorials/quantize.md`` for more information. (type: Optional[Literal['nf4', 'nf4-dq', 'fp4', 'fp4-dq', 'int8-training']], default: null)
495
- quantize: bnb.nf4
496
-
497
- # How many devices/GPUs to use. (type: Union[int, str], default: 1)
498
- devices: 1
499
-
500
- # How many nodes to use. (type: int, default: 1)
501
- num_nodes: 1
502
-
503
- # The LoRA rank. (type: int, default: 8)
504
- lora_r: 32
505
-
506
- # The LoRA alpha. (type: int, default: 16)
507
- lora_alpha: 16
508
-
509
- # The LoRA dropout value. (type: float, default: 0.05)
510
- lora_dropout: 0.05
511
-
512
- # Whether to apply LoRA to the query weights in attention. (type: bool, default: True)
513
- lora_query: true
514
-
515
- # Whether to apply LoRA to the key weights in attention. (type: bool, default: False)
516
- lora_key: false
517
-
518
- # Whether to apply LoRA to the value weights in attention. (type: bool, default: True)
519
- lora_value: true
520
-
521
- # Whether to apply LoRA to the output projection in the attention block. (type: bool, default: False)
522
- lora_projection: false
523
-
524
- # Whether to apply LoRA to the weights of the MLP in the attention block. (type: bool, default: False)
525
- lora_mlp: false
526
-
527
- # Whether to apply LoRA to output head in GPT. (type: bool, default: False)
528
- lora_head: false
529
-
530
- # Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
531
- data:
532
- class_path: litgpt.data.Alpaca2k
533
- init_args:
534
- mask_prompt: false
535
- val_split_fraction: 0.05
536
- prompt_style: alpaca
537
- ignore_index: -100
538
- seed: 42
539
- num_workers: 4
540
- download_dir: data/alpaca2k
541
-
542
- # Training-related arguments. See ``litgpt.args.TrainArgs`` for details
543
- train:
544
 
545
- # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
546
- save_interval: 200
547
 
548
- # Number of iterations between logging calls (type: int, default: 1)
549
- log_interval: 1
550
 
551
- # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
552
- global_batch_size: 8
553
 
554
- # Number of samples per data-parallel rank (type: int, default: 4)
555
- micro_batch_size: 2
556
 
557
- # Number of iterations with learning rate warmup active (type: int, default: 100)
558
- lr_warmup_steps: 10
559
 
560
- # Number of epochs to train on (type: Optional[int], default: 5)
561
- epochs: 4
562
 
563
- # Total number of tokens to train on (type: Optional[int], default: null)
564
- max_tokens:
565
 
566
- # Limits the number of optimizer steps to run (type: Optional[int], default: null)
567
- max_steps:
568
 
569
- # Limits the length of samples (type: Optional[int], default: null)
570
- max_seq_length: 512
571
 
572
- # Whether to tie the embedding weights with the language modeling head weights (type: Optional[bool], default: null)
573
- tie_embeddings:
574
-
575
- # (type: float, default: 0.0003)
576
- learning_rate: 0.0002
577
-
578
- # (type: float, default: 0.02)
579
- weight_decay: 0.0
580
-
581
- # (type: float, default: 0.9)
582
- beta1: 0.9
583
-
584
- # (type: float, default: 0.95)
585
- beta2: 0.95
586
-
587
- # (type: Optional[float], default: null)
588
- max_norm:
589
-
590
- # (type: float, default: 6e-05)
591
- min_lr: 6.0e-05
592
-
593
- # Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
594
- eval:
595
-
596
- # Number of optimizer steps between evaluation calls (type: int, default: 100)
597
- interval: 100
598
-
599
- # Number of tokens to generate (type: Optional[int], default: 100)
600
- max_new_tokens: 100
601
-
602
- # Number of iterations (type: int, default: 100)
603
- max_iters: 100
604
-
605
- # The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
606
- logger_name: csv
607
-
608
- # The random seed to use for reproducibility. (type: int, default: 1337)
609
- seed: 1337
610
- ```
611
- </details>
612
-
613
- <details>
614
- <summary>✅ Override any parameter in the CLI:</summary>
615
-
616
- ```bash
617
- litgpt finetune \
618
- --config https://raw.githubusercontent.com/Lightning-AI/litgpt/main/config_hub/finetune/llama-2-7b/lora.yaml \
619
- --lora_r 4
620
- ```
621
- </details>
622
-
623
- &nbsp;
624
-
625
- ----
626
-
627
- # Project highlights
628
-
629
- LitGPT powers many great AI projects, initiatives, challenges, and enterprises. Submit a pull request to have your project considered for a feature.
630
-
631
- <details>
632
- <summary>📊 SAMBA: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling</summary>
633
-
634
- The [Samba](https://github.com/microsoft/Samba) project by researchers at Microsoft is built on top of the LitGPT code base and combines state space models with sliding window attention, which outperforms pure state space models.
635
-
636
- </details>
637
-
638
- <details>
639
- <summary>🏆 NeurIPS 2023 Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day</summary>
640
-
641
- The LitGPT repository was the official starter kit for the [NeurIPS 2023 LLM Efficiency Challenge](https://llm-efficiency-challenge.github.io), which is a competition focused on finetuning an existing non-instruction tuned LLM for 24 hours on a single GPU.
642
-
643
- </details>
644
-
645
- <details>
646
- <summary>🦙 TinyLlama: An Open-Source Small Language Model</summary>
647
-
648
-
649
- LitGPT powered the [TinyLlama project](https://github.com/jzhang38/TinyLlama) and [TinyLlama: An Open-Source Small Language Model](https://arxiv.org/abs/2401.02385) research paper.
650
-
651
- </details>
652
-
653
- <details>
654
- <summary>🍪 MicroLlama: MicroLlama-300M</summary>
655
-
656
- [MicroLlama](https://github.com/keeeeenw/MicroLlama) is a 300M Llama model pretrained on 50B tokens powered by TinyLlama and LitGPT.
657
- </details>
658
-
659
- <details>
660
- <summary>🔬 Pre-training Small Base LMs with Fewer Tokens</summary>
661
-
662
- The research paper ["Pre-training Small Base LMs with Fewer Tokens"](https://arxiv.org/abs/2404.08634), which utilizes LitGPT, develops smaller base language models by inheriting a few transformer blocks from larger models and training on a tiny fraction of the data used by the larger models. It demonstrates that these smaller models can perform comparably to larger models despite using significantly less training data and resources.
663
-
664
- </details>
665
-
666
- &nbsp;
667
-
668
- ----
669
-
670
- # Community
671
-
672
- We welcome all individual contributors, regardless of their level of experience or hardware. Your contributions are valuable, and we are excited to see what you can accomplish in this collaborative and supportive environment.
673
-
674
- - [Request a feature](https://github.com/Lightning-AI/litgpt/issues)
675
- - [Submit your first contribution](https://lightning.ai/pages/community/tutorial/how-to-contribute-to-litgpt/)
676
- - [Join our Discord](https://discord.gg/VptPCZkGNa)
677
-
678
- &nbsp;
679
-
680
- # Tutorials
681
-
682
- 🚀 [Get started](tutorials/0_to_litgpt.md)</br>
683
- ⚡️ [Finetuning, incl. LoRA, QLoRA, and Adapters](tutorials/finetune.md)</br>
684
- 🤖 [Pretraining](tutorials/pretrain.md)</br>
685
- 💬 [Model evaluation](tutorials/evaluation.md)</br>
686
- 📘 [Supported and custom datasets](tutorials/prepare_dataset.md)</br>
687
- 🧹 [Quantization](tutorials/quantize.md)</br>
688
- 🤯 [Tips for dealing with out-of-memory (OOM) errors](tutorials/oom.md)</br>
689
- 🧑🏽‍💻 [Using cloud TPUs](extensions/xla)</br>
690
-
691
- &nbsp;
692
-
693
- ----
694
-
695
- ### Acknowledgments
696
-
697
- This implementation extends on [Lit-LLaMA](https://github.com/lightning-AI/lit-llama) and [nanoGPT](https://github.com/karpathy/nanoGPT), and it's **powered by [Lightning Fabric](https://lightning.ai/docs/fabric/stable/) ⚡**.
698
-
699
- - [@karpathy](https://github.com/karpathy) for [nanoGPT](https://github.com/karpathy/nanoGPT)
700
- - [@EleutherAI](https://github.com/EleutherAI) for [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) and the [Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness)
701
- - [@TimDettmers](https://github.com/TimDettmers) for [bitsandbytes](https://github.com/TimDettmers/bitsandbytes)
702
- - [@Microsoft](https://github.com/microsoft) for [LoRA](https://github.com/microsoft/LoRA)
703
- - [@tridao](https://github.com/tridao) for [Flash Attention 2](https://github.com/Dao-AILab/flash-attention)
704
-
705
- ### License
706
-
707
- LitGPT is released under the [Apache 2.0](https://github.com/Lightning-AI/litgpt/blob/main/LICENSE) license.
708
-
709
- ### Citation
710
-
711
- If you use LitGPT in your research, please cite the following work:
712
 
713
  ```bibtex
714
- @misc{litgpt-2023,
715
- author = {Lightning AI},
716
- title = {LitGPT},
717
- howpublished = {\url{https://github.com/Lightning-AI/litgpt}},
718
- year = {2023},
719
  }
720
- ```
721
-
722
- &nbsp;
  ---
+ license: mit
+ task_categories:
+ - text-generation
  ---

+ # When Attention Collapses: How Degenerate Layers in LLMs Enable Smaller, Stronger Models

+ This repository contains code and artifacts for **Inheritune**, an efficient training method for developing smaller, high-performing language models by inheriting knowledge from larger pre-trained models.

+ - **Paper:** [When Attention Collapses: How Degenerate Layers in LLMs Enable Smaller, Stronger Models](https://huggingface.co/papers/2404.08634)
+ - **GitHub:** [https://github.com/sanyalsunny111/LLM-Inheritune](https://github.com/sanyalsunny111/LLM-Inheritune)

+ ## Overview

+ Large Language Models (LLMs) often suffer from a structural inefficiency termed **attention collapse**, where attention matrices in deeper layers degenerate into near rank-one structures. These "lazy layers" are redundant and impair model efficiency.
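+ The "near rank-one" notion can be made concrete: if every query row of an attention matrix carries almost the same distribution, the matrix's spectral mass concentrates in its top singular value. Below is a minimal NumPy sketch of one way to quantify this; `rank_one_fraction` is a hypothetical helper for illustration, not code from this repository:

```python
import numpy as np

def rank_one_fraction(attn: np.ndarray) -> float:
    """Fraction of spectral mass carried by the top singular value.

    Values close to 1.0 indicate a near rank-one ("collapsed") attention
    matrix: every query attends with almost the same distribution.
    """
    s = np.linalg.svd(attn, compute_uv=False)
    return float(s[0] / s.sum())

# A collapsed matrix: every row is the same attention distribution.
row = np.array([0.7, 0.2, 0.1])
collapsed = np.tile(row, (3, 1))

# A healthy matrix: each row attends to a different position.
healthy = np.eye(3)

print(rank_one_fraction(collapsed))  # ~1.0
print(rank_one_fraction(healthy))    # ~0.33
```

+ A layer whose attention heads consistently score near 1.0 on a statistic like this contributes little beyond what a single shared distribution would.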

+ **Inheritune** addresses this by:
+ 1. **Inheriting** the potent early transformer layers from a larger pre-trained model.
+ 2. **Training** the inherited layers on a pretraining dataset.
+ 3. **Expanding** the model progressively until it reaches the desired performance.

+ This approach enables the creation of compact models that can match or even surpass the performance of their larger counterparts while using significantly fewer layers and resources.
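+ Step 1, inheriting the early blocks of a larger model, can be sketched in a few lines of PyTorch. This is a toy illustration under assumed names (`inherit_early_layers` is not the repository's API), with `nn.Linear` layers standing in for real transformer blocks:

```python
import copy

import torch
import torch.nn as nn

def inherit_early_layers(blocks: nn.ModuleList, n_inherit: int) -> nn.ModuleList:
    """Copy the first n_inherit blocks (weights included) from a larger model.

    The returned blocks seed a smaller model, which is then trained further
    and progressively expanded.
    """
    return nn.ModuleList(copy.deepcopy(blocks[i]) for i in range(n_inherit))

# Toy stand-in for a 12-block pre-trained model.
large_blocks = nn.ModuleList(nn.Linear(8, 8) for _ in range(12))

small_trunk = inherit_early_layers(large_blocks, n_inherit=3)
print(len(small_trunk))  # 3
# The inherited blocks start from the large model's trained weights.
assert torch.equal(small_trunk[0].weight, large_blocks[0].weight)
```

+ Deep-copying (rather than sharing) the blocks lets the smaller model update the inherited weights without mutating the source model.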

+ ## Repository Structure

+ The code in this project is adapted from [LitGPT](https://github.com/Lightning-AI/litgpt) and includes:
+ - `GPT2-experiments/`: GPT-2 training/analysis experiments for the paper.
+ - `lit-gpt/`: Code adapted from lit-gpt / small-LM training utilities.
+ - `analysis/`: Attention rank computation, softmax analysis, and plotting scripts.
+ - `attention-collapse-demo/`: Toy examples of attention collapse in simple settings.

+ ## Citation

+ If you find this work helpful, please consider citing:

  ```bibtex
+ @article{sanyal2025attentioncollpase,
+   title={When Attention Collapses: How Degenerate Layers in {LLM}s Enable Smaller, Stronger Models},
+   author={Sunny Sanyal and Ravid Shwartz-Ziv and Alexandros G. Dimakis and Sujay Sanghavi},
+   journal={Transactions on Machine Learning Research},
+   year={2025},
+   url={https://openreview.net/forum?id=2zQn0bUoPf}
  }
+ ```