Add files using upload-large-folder tool
- .github/ISSUE_TEMPLATE/ask-a-question.md +9 -0
- .github/ISSUE_TEMPLATE/bug-report.yaml +56 -0
- .github/ISSUE_TEMPLATE/feature-request.md +9 -0
- .github/workflows/check-links.yml +32 -0
- .github/workflows/cpu-tests.yml +145 -0
- .lightning/workflows/tests.yaml +55 -0
- config_hub/finetune/README.md +119 -0
- config_hub/finetune/falcon-7b/lora.yaml +131 -0
- config_hub/finetune/falcon-7b/qlora.yaml +133 -0
- config_hub/finetune/gemma-2b/full.yaml +102 -0
- config_hub/finetune/gemma-2b/lora.yaml +132 -0
- config_hub/finetune/gemma-2b/qlora.yaml +132 -0
- config_hub/finetune/gemma-7b/lora.yaml +132 -0
- config_hub/finetune/gemma-7b/qlora.yaml +132 -0
- config_hub/finetune/gemma2-2b/lora.yaml +132 -0
- config_hub/finetune/gemma2-2b/qlora.yaml +132 -0
- config_hub/finetune/gemma2-9b/lora.yaml +132 -0
- config_hub/finetune/gemma2-9b/qlora.yaml +132 -0
- config_hub/finetune/llama-2-7b/full.yaml +107 -0
- config_hub/finetune/llama-2-7b/lora.yaml +131 -0
- config_hub/finetune/llama-2-7b/qlora.yaml +133 -0
- config_hub/finetune/llama-3-8b/full.yaml +107 -0
- config_hub/finetune/llama-3-8b/lora.yaml +131 -0
- config_hub/finetune/llama-3-8b/qlora.yaml +133 -0
- config_hub/finetune/llama-3.1-8b/full.yaml +107 -0
- config_hub/finetune/llama-3.1-8b/lora.yaml +131 -0
- config_hub/finetune/llama-3.1-8b/qlora.yaml +133 -0
- config_hub/finetune/llama-3.2-1B/full.yaml +107 -0
- config_hub/finetune/llama-3.2-1B/lora.yaml +131 -0
- config_hub/finetune/llama-3.2-1B/qlora.yaml +133 -0
- config_hub/finetune/llama-3.2-3B/full.yaml +107 -0
- config_hub/finetune/llama-3.2-3B/lora.yaml +131 -0
- config_hub/finetune/llama-3.2-3B/qlora.yaml +133 -0
- config_hub/finetune/mistral-7b-v0.2/lora.yaml +131 -0
- config_hub/finetune/mistral-7b-v0.2/qlora.yaml +133 -0
- config_hub/finetune/mistral-7b/lora.yaml +131 -0
- config_hub/finetune/mistral-7b/qlora.yaml +133 -0
- config_hub/finetune/openllama/full_qa.yaml +101 -0
- config_hub/finetune/phi-2/full.yaml +101 -0
- config_hub/finetune/phi-2/lora.yaml +132 -0
- config_hub/finetune/phi-2/qlora.yaml +132 -0
- config_hub/finetune/phi-3/full.yaml +98 -0
- config_hub/finetune/phi-3/lora.yaml +129 -0
- config_hub/finetune/phi-3/qlora.yaml +129 -0
- config_hub/finetune/stablelm-base-alpha-3b/full.yaml +102 -0
- config_hub/finetune/stablelm-base-alpha-3b/lora.yaml +131 -0
- config_hub/finetune/stablelm-base-alpha-3b/qlora.yaml +133 -0
- config_hub/finetune/tiny-llama/full.yaml +102 -0
- config_hub/finetune/tiny-llama/full_qa.yaml +101 -0
- config_hub/finetune/tiny-llama/lora.yaml +132 -0
.github/ISSUE_TEMPLATE/ask-a-question.md
ADDED
@@ -0,0 +1,9 @@
+---
+name: Ask a Question
+about: Ask and answer questions related to LitGPT
+title: ''
+labels: question
+
+---
+
+Please describe your question here.
.github/ISSUE_TEMPLATE/bug-report.yaml
ADDED
@@ -0,0 +1,56 @@
+name: Bug Report
+description: Report errors related to LitGPT
+title: "Description"
+labels: bug
+body:
+  - type: markdown
+    attributes:
+      value: |
+        Thank you for taking the time to report an issue. Please fill out the details below to help us resolve it.
+
+  - type: textarea
+    id: bug_description
+    attributes:
+      label: Bug description
+      description: A description of the issue.
+      placeholder: |
+        Please provide a description of what the bug or issue is.
+    validations:
+      required: true
+
+  - type: input
+    attributes:
+      label: Reproduced in studio
+      description: >
+        Create a new Lightning Studio with code that reproduces the issue and share the link.
+        Also include all the relevant files and data required to reproduce the shared issue.
+        If the code does not crash, please add assert statements that show the actual and expected output.
+        A simple guide on how to create such a studio can be found [here](https://www.youtube.com/watch?v=YcW-2Zt_bFg&ab_channel=LightningAI).
+      placeholder: https://lightning.ai/...
+    validations:
+      required: false
+
+  - type: dropdown
+    id: operating_system
+    attributes:
+      label: What operating system are you using?
+      description: If applicable, please select the operating system where you experienced this issue.
+      options:
+        - "Unknown"
+        - "macOS"
+        - "Linux"
+        - "Windows"
+    validations:
+      required: true
+
+  - type: textarea
+    id: version
+    attributes:
+      label: LitGPT Version
+      description: |
+        Please provide details about your LitGPT version by running the following command in your terminal:
+        ```
+        pip show litgpt | grep Version:
+        ```
+    validations:
+      required: false
.github/ISSUE_TEMPLATE/feature-request.md
ADDED
@@ -0,0 +1,9 @@
+---
+name: Suggest a Feature
+about: Propose a new feature or enhancement
+title: ''
+labels: enhancement
+
+---
+
+Please describe the feature or enhancement along with the intended use case.
.github/workflows/check-links.yml
ADDED
@@ -0,0 +1,32 @@
+name: Check hyperlinks
+
+on:
+  push:
+    branches:
+      - main
+  pull_request:
+    branches:
+      - main
+
+jobs:
+  test:
+    runs-on: ubuntu-latest
+
+    steps:
+      - uses: actions/checkout@v4
+
+      - name: Set up Python
+        uses: actions/setup-python@v5
+        with:
+          python-version: "3.10"
+
+      - name: Install dependencies
+        run: |
+          python -m pip install --upgrade pip
+          pip install "mistune<3.1"  # a newer version is incompatible with nbconvert
+          pip install pytest pytest-check-links
+
+      - name: Check links
+        run: |
+          pytest --check-links README.md --check-links-ignore "http*"
+          pytest --check-links tutorials --check-links-ignore "http*"
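For what it's worth, the same link check can be reproduced locally before pushing; a sketch that mirrors the workflow steps above, assuming a working Python environment:

```bash
# Install the same pinned dependencies the workflow uses
pip install "mistune<3.1" pytest pytest-check-links

# Run the identical checks from the repository root
pytest --check-links README.md --check-links-ignore "http*"
pytest --check-links tutorials --check-links-ignore "http*"
```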
.github/workflows/cpu-tests.yml
ADDED
@@ -0,0 +1,145 @@
+name: CPU tests
+
+on:
+  push:
+    branches: [main]
+  pull_request_target:
+    branches: [main]
+    types: [opened, reopened, ready_for_review, labeled, synchronize]
+  pull_request: {} # todo
+  workflow_dispatch: {}
+
+# lock down all permissions by default
+permissions:
+  contents: read # needed to check out code
+  checks: write # needed for test results
+  pull-requests: read # needed for PR metadata
+  actions: read # needed to use actions
+  security-events: none
+  statuses: write # needed to update commit status
+
+concurrency:
+  group: ${{ github.workflow }}-${{ github.ref }}-${{ github.head_ref }}
+  cancel-in-progress: ${{ github.event_name == 'pull_request_target' }}
+
+defaults:
+  run:
+    shell: bash
+
+env:
+  HF_HOME: .cache-HF # define HF_HOME for caching
+  TRANSFORMERS_CACHE: .cache-HF/transformers
+  DATASETS_CACHE: .cache-HF/datasets
+  HF_DATASETS_CACHE: .cache-HF/datasets
+
+jobs:
+  testing-imports:
+    runs-on: ${{ matrix.os }}
+    if: github.event_name != 'pull_request_target'
+    strategy:
+      fail-fast: false
+      matrix:
+        os: ["ubuntu-22.04", "ubuntu-24.04", "macOS-14", "windows-2022"]
+        python-version: ["3.10"]
+    timeout-minutes: 10
+    steps:
+      - name: Checkout generic
+        uses: actions/checkout@v4
+      - uses: actions/setup-python@v5
+        with:
+          python-version: ${{ matrix.python-version }}
+
+      - name: Install minimal dependencies
+        run: |
+          pip install . -U
+          pip list
+
+      - name: Testing package imports
+        # make sure all modules are still importable with only the minimal dependencies available
+        run: |
+          modules=$(
+            find litgpt -type f -name "*.py" | \
+              sed 's/\.py$//' | sed 's/\//./g' | \
+              sed 's/.__init__//g' | xargs -I {} echo "import {};"
+          )
+          echo "$modules"
+          python -c "$modules"
+
+  pytester:
+    # skip the PR trigger if secrets are not shared, as is the case for all forked PRs
+    if: |
+      github.event_name != 'pull_request' ||
+      (
+        github.event_name == 'pull_request' &&
+        contains('OWNER,MEMBER,COLLABORATOR', github.event.pull_request.author_association)
+      )
+    runs-on: ${{ matrix.os }}
+    strategy:
+      fail-fast: false
+      matrix:
+        os: ["ubuntu-22.04"]
+        python-version: ["3.9", "3.10", "3.11", "3.12", "3.13"]
+        include:
+          - { os: "macOS-14", python-version: "3.9" }
+          - { os: "windows-2022", python-version: "3.9" }
+    timeout-minutes: 35
+    steps:
+      - name: Checkout generic
+        uses: actions/checkout@v4
+        if: github.event_name != 'pull_request_target'
+      - name: Checkout for `pull_request_target`
+        uses: actions/checkout@v4
+        if: github.event_name == 'pull_request_target'
+        with:
+          ref: ${{ github.event.pull_request.head.sha }}
+      - uses: actions/setup-python@v5
+        with:
+          python-version: ${{ matrix.python-version }}
+          cache: "pip"
+          cache-dependency-path: pyproject.toml
+
+      # add caching for HF models and tokenizers
+      - name: HF cache
+        uses: actions/cache@v4
+        continue-on-error: true
+        with:
+          path: .cache-HF
+          key: hf-cache_${{ runner.os }}-py${{ matrix.python-version }}
+          restore-keys: |
+            hf-cache_${{ runner.os }}-py${{ matrix.python-version }}
+            hf-cache_${{ runner.os }}-
+            hf-cache_
+
+      - name: Install dependencies
+        run: |
+          pip install '.[extra,compiler,test]' -U
+          pip list
+
+      - name: Run tests
+        env:
+          HF_TOKEN: ${{ secrets.HF_TOKEN }}
+        run: pytest -v litgpt/ tests/ --timeout=180 --durations=100
+
+      - name: Show cache
+        run: |
+          pip install -q py-tree
+          python -m py_tree -d 1 .cache-HF
+
+  testing-guardian:
+    runs-on: ubuntu-latest
+    needs: [pytester, testing-imports]
+    if: |
+      github.event_name == 'pull_request_target' ||
+      (
+        github.event_name == 'pull_request' &&
+        contains('OWNER,MEMBER,COLLABORATOR', github.event.pull_request.author_association)
+      )
+    steps:
+      - run: echo "${{ needs.pytester.result }}"
+      - name: failing...
+        if: needs.pytester.result == 'failure'
+        run: exit 1
+      - name: cancelled or skipped...
+        if: contains(fromJSON('["cancelled", "skipped"]'), needs.pytester.result)
+        timeout-minutes: 1
+        run: sleep 90
.lightning/workflows/tests.yaml
ADDED
@@ -0,0 +1,55 @@
+trigger:
+  push:
+    branches: ["main"]
+  pull_request:
+    branches: ["main"]
+
+image: "pytorchlightning/lightning-thunder:ubuntu24.04-cuda12.6.3-cudnn-fe1.10.0-py3.10-pt_2.7.1-dev"
+machine: "L4_X_4"
+timeout: "45" # minutes
+parametrize:
+  matrix:
+    dependency: ["", "compiler"]
+  include: []
+  exclude: []
+
+env:
+  SKIP_WITH_CI: "1" # skip single tests with CI
+  NCCL_DEBUG: "INFO"
+  NCCL_IGNORE_DISABLED_P2P: "1"
+  TORCH_VERSION: "2.7.1"
+  RUN_ONLY_CUDA_TESTS: "1" # run CUDA tests only
+
+run: |
+  whereis nvidia
+  nvidia-smi
+  python --version
+  pip --version
+  pip list
+  set -ex
+
+  pip install -q '.[extra,test]' "torch==${TORCH_VERSION}" cffi -U
+
+  if [ "${dependency}" == "compiler" ]; then
+    pip uninstall -y torchvision torchaudio
+    pip install -q '.[compiler,extra,test]' "torch==${TORCH_VERSION}"
+    python -c "from thunder.executors import nvfuser_available ; assert nvfuser_available(), 'nvFuser is missing!'"
+    python -c "from thunder.executors.triton_utils import triton_version ; assert triton_version() is not None, 'triton is missing!'"
+  fi
+
+  pip list
+  python -c "import torch ; gpus = torch.cuda.device_count() ; assert gpus >= 2, f'GPU: {gpus}'"
+  python -c "from torch import __version__ as ver ; assert str(ver).split('+')[0] == '$TORCH_VERSION', f'PyTorch: installed {ver} but expected $TORCH_VERSION'"
+
+  pytest -v --durations=100
+
+  wget https://raw.githubusercontent.com/Lightning-AI/utilities/main/scripts/run_standalone_tests.sh
+  PL_RUN_STANDALONE_TESTS=1 bash run_standalone_tests.sh "tests"
+
+  if [ "${dependency}" == "compiler" ]; then
+    pip uninstall -y lightning-thunder
+    # install thunder from source so that thunder.tests will be available
+    pip install -U "lightning-thunder[test] @ git+https://github.com/Lightning-AI/lightning-thunder.git" "torch==${TORCH_VERSION}"
+    # without the env var, it filters out all tests
+    RUN_ONLY_CUDA_TESTS=0 pytest tests/ext_thunder/test_thunder_networks.py -v
+  fi
config_hub/finetune/README.md
ADDED
@@ -0,0 +1,119 @@
+## Config files
+
+The table below lists the performance you can expect from the provided config files. Note that you can achieve lower memory consumption by lowering the micro-batch size as needed. In addition, you can lower the rank (`lora_r`) in the LoRA configuration files and disable LoRA for certain layers (for example, setting `lora_projection` and other LoRA layer-specific parameters to `false`).
+For more information on lowering the memory requirements, see [Dealing with out-of-memory (OOM) errors](../../tutorials/oom.md).
+The "Cost" column refers to the on-demand compute cost on [Lightning AI Studios, where these benchmarks were executed](https://lightning.ai/lightning-ai/studios/automated-benchmarks-for-litgpt).
+All experiments were conducted using bfloat-16 precision on the Alpaca2k dataset. The "Multitask score" refers to [MMLU](https://arxiv.org/abs/2009.03300).
+
+
+
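As a concrete illustration of the memory-saving options above, the config values can also be overridden directly on the command line. A sketch, assuming flag names mirror the config keys shown in the YAML files (the values here are illustrative, not benchmarked):

```bash
# Lower the micro-batch size, LoRA rank, and disable LoRA on the projection
# layer to reduce peak memory, overriding the values from the config file.
litgpt finetune lora \
  --config config_hub/finetune/phi-2/lora.yaml \
  --train.micro_batch_size 1 \
  --lora_r 8 \
  --lora_projection false
```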
+| Config | Model | Epochs | Max seq length | Micro batch size | Machine | Training runtime | Cost | Peak memory | Validation loss | Validation perplexity | Multitask score (MMLU) |
+| ------ | ----- | ------ | -------------- | ---------------- | ------- | ---------------- | ---- | ----------- | --------------- | --------------------- | ---------------------- |
+| falcon-7b/lora.yaml | falcon-7b | 4 | 512 | 1 | 1xA10G | 24.84 min | $0.7 | 16.69 GB | 0.945 | 2.573 | 26.2% |
+| falcon-7b/lora.yaml | falcon-7b | 4 | 512 | 1 | 4xA10G | 24.94 min | $2.0 | 16.69 GB | 0.945 | 2.573 | 26.4% |
+| falcon-7b/qlora.yaml | falcon-7b | 4 | 512 | 1 | 1xA10G | 50.85 min | $1.5 | 9.44 GB | 0.993 | 2.699 | 26.3% |
+| | | | | | | | | | | | |
+| gemma-2b/full.yaml | gemma-2b | 1 | 512 | 1 | 4xA10G | 14.06 min | $1.1 | 17.43 GB | 1.021 | 2.777 | 32.4% |
+| gemma-2b/lora.yaml | gemma-2b | 2 | 512 | 2 | 1xA10G | 9.41 min | $0.3 | 12.62 GB | 0.981 | 2.666 | 34.4% |
+| gemma-2b/lora.yaml | gemma-2b | 2 | 512 | 2 | 4xA10G | 9.41 min | $0.8 | 12.62 GB | 0.981 | 2.667 | 34.0% |
+| gemma-2b/qlora.yaml | gemma-2b | 2 | 512 | 2 | 1xA10G | 12.91 min | $0.4 | 11.58 GB | 1.085 | 2.959 | 36.4% |
+| | | | | | | | | | | | |
+| gemma-7b/lora.yaml | gemma-7b | 2 | 512 | 1 | 1xA10G | OOM | OOM | OOM | OOM | OOM | |
+| gemma-7b/lora.yaml | gemma-7b | 2 | 512 | 1 | 4xA10G | OOM | OOM | OOM | OOM | OOM | |
+| gemma-7b/qlora.yaml | gemma-7b | 2 | 512 | 1 | 1xA10G | 43.58 min | $1.3 | 17.18 GB | 0.973 | 2.646 | 62.45% |
+| | | | | | | | | | | | |
+| gemma2-2b/lora.yaml | gemma-2b | 2 | 512 | 2 | 1xA10G | 11.96 min | $0.4 | 14.31 GB | 0.951 | 2.589 | 23.84% |
+| gemma2-2b/qlora.yaml | gemma-2b | 2 | 512 | 2 | 1xA10G | 16.06 min | $0.5 | 13.52 GB | 0.983 | 2.673 | 24.12% |
+| | | | | | | | | | | | |
+| gemma2-9b/lora.yaml | gemma-2-9b | 2 | 512 | 1 | 1xA10G | OOM | OOM | OOM | OOM | OOM | |
+| gemma2-9b/lora.yaml | gemma-2-9b | 2 | 512 | 1 | 4xA10G | OOM | OOM | OOM | OOM | OOM | |
+| gemma2-9b/qlora.yaml | gemma-2-9b | 2 | 512 | 1 | 1xA10G | 50.01 min | $4.0 | 20.92 GB | 0.852 | 2.345 | 24.2% |
+| | | | | | | | | | | | |
+| llama-2-7b/full.yaml | llama-2-7b | 1 | 512 | 4 | 4xA10G | OOM | OOM | OOM | OOM | OOM | |
+| llama-2-7b/lora.yaml | llama-2-7b | 4 | 512 | 2 | 1xA10G | 32.82 min | $1.0 | 19.77 GB | 0.802 | 2.230 | 40.3% |
+| llama-2-7b/lora.yaml | llama-2-7b | 4 | 512 | 2 | 4xA10G | 32.83 min | $2.6 | 19.77 GB | 0.802 | 2.229 | 40.2% |
+| llama-2-7b/qlora.yaml | llama-2-7b | 4 | 512 | 2 | 1xA10G | 45.67 min | $1.4 | 13.68 GB | 0.814 | 2.258 | 38.6% |
+| | | | | | | | | | | | |
+| llama-3-8b/full.yaml | llama-3-8b | 1 | 512 | 4 | 4xA10G | OOM | OOM | OOM | OOM | OOM | |
+| llama-3-8b/lora.yaml | llama-3-8b | 2 | 512 | 1 | 1xA10G | 14.79 min | $0.4 | 19.73 GB | 0.888 | 2.431 | 62.4% |
+| llama-3-8b/lora.yaml | llama-3-8b | 2 | 512 | 1 | 4xA10G | 14.88 min | $1.2 | 19.73 GB | 0.889 | 2.432 | 62.5% |
+| llama-3-8b/qlora.yaml | llama-3-8b | 2 | 512 | 2 | 1xA10G | 22.24 min | $0.7 | 17.41 GB | 0.939 | 2.558 | 62.2% |
+| | | | | | | | | | | | |
+| llama-3.1-8b/full.yaml | llama-3.1-8b | 1 | 512 | 4 | 1xA10G | OOM | OOM | OOM | OOM | OOM | OOM |
+| llama-3.1-8b/lora.yaml | llama-3.1-8b | 2 | 512 | 1 | 1xA10G | 13.36 min | $1.1 | 19.73 GB | 0.878 | 2.406 | xx.xx |
+| llama-3.1-8b/qlora.yaml | llama-3.1-8b | 2 | 512 | 2 | 1xA10G | 21.81 min | $0.7 | 17.41 GB | 0.928 | 2.529 | xx.xx |
+| | | | | | | | | | | | |
+| llama-3.2-1b/full.yaml | llama-3.2-1b | 1 | 512 | 4 | 1xA10G | 2.01 min | $0.1 | 8.70 GB | 1.442 | 4.229 | 38.21% |
+| llama-3.2-1b/lora.yaml | llama-3.2-1b | 2 | 512 | 1 | 1xA10G | 4.17 min | $0.4 | 4.49 GB | 1.114 | 3.046 | 36.87% |
+| llama-3.2-1b/qlora.yaml | llama-3.2-1b | 2 | 512 | 2 | 1xA10G | 6.20 min | $0.6 | 5.53 GB | 1.201 | 3.322 | 36.49% |
+| | | | | | | | | | | | |
+| llama-3.2-3b/full.yaml | llama-3.2-3b | 1 | 512 | 4 | 1xA10G | 4.71 min | $0.4 | 16.51 GB | 1.255 | 3.509 | 54.69% |
+| llama-3.2-3b/lora.yaml | llama-3.2-3b | 2 | 512 | 1 | 1xA10G | 8.31 min | $0.8 | 9.67 GB | 0.973 | 2.647 | 54.77% |
+| llama-3.2-3b/qlora.yaml | llama-3.2-3b | 2 | 512 | 2 | 1xA10G | 14.89 min | $1.4 | 10.30 GB | 1.031 | 2.804 | 55.08% |
+| | | | | | | | | | | | |
+| mistral-7b-v0.2/lora.yaml | mistral-7b-v0.2 | 4 | 512 | 2 | 1xA10G | 31.00 min | $0.9 | 20.66 GB | 0.801 | 2.228 | 55.7% |
+| mistral-7b-v0.2/lora.yaml | mistral-7b-v0.2 | 4 | 512 | 2 | 4xA10G | 31.00 min | $2.5 | 20.66 GB | 0.802 | 2.229 | 55.5% |
+| mistral-7b-v0.2/qlora.yaml | mistral-7b-v0.2 | 4 | 512 | 2 | 1xA10G | 44.75 min | $1.3 | 14.29 GB | 0.813 | 2.255 | 56.5% |
+| | | | | | | | | | | | |
+| mistral-7b/lora.yaml | mistral-7b | 4 | 512 | 2 | 1xA10G | 31.01 min | $0.9 | 20.66 GB | 0.794 | 2.211 | 57.9% |
+| mistral-7b/lora.yaml | mistral-7b | 4 | 512 | 2 | 4xA10G | 31.03 min | $2.5 | 20.66 GB | 0.796 | 2.218 | 57.9% |
+| mistral-7b/qlora.yaml | mistral-7b | 4 | 512 | 2 | 1xA10G | 44.75 min | $1.3 | 14.29 GB | 0.803 | 2.231 | 57.9% |
+| | | | | | | | | | | | |
+| phi-2/full.yaml | phi-2 | 1 | 512 | 4 | 4xA10G | 11.87 min | $1.0 | 14.44 GB | 1.305 | 3.688 | 38.4% |
+| phi-2/lora.yaml | phi-2 | 1 | 512 | 4 | 1xA10G | 3.78 min | $0.1 | 13.98 GB | 0.819 | 2.269 | 53.0% |
+| phi-2/lora.yaml | phi-2 | 1 | 512 | 4 | 4xA10G | 3.78 min | $0.3 | 13.98 GB | 0.820 | 2.271 | 52.4% |
+| phi-2/qlora.yaml | phi-2 | 1 | 512 | 4 | 1xA10G | 4.51 min | $0.1 | 14.27 GB | 0.837 | 2.310 | 52.3% |
+| | | | | | | | | | | | |
+| phi-3/full.yaml | Phi-3-mini-4k-instruct | 1 | 512 | 4 | 1xA10G | 6.93 min | $0.2 | 17.01 GB | 0.714 | 2.043 | 69.81% |
+| phi-3/lora.yaml | Phi-3-mini-4k-instruct | 1 | 512 | 4 | 1xA10G | 6.46 min | $0.2 | 19.75 GB | 0.707 | 2.028 | 69.70% |
+| phi-3/qlora.yaml | Phi-3-mini-4k-instruct | 1 | 512 | 4 | 1xA10G | 7.47 min | $0.2 | 19.13 GB | 0.729 | 2.074 | 68.96% |
+| | | | | | | | | | | | |
+| stablelm-base-alpha-3b/full.yaml | stablelm-base-alpha-3b | 1 | 512 | 1 | 4xA10G | 70.13 min | $5.6 | 21.23 GB | 1.513 | 4.540 | 23.2% |
+| stablelm-base-alpha-3b/lora.yaml | stablelm-base-alpha-3b | 4 | 512 | 1 | 1xA10G | 13.07 min | $0.4 | 8.58 GB | 1.361 | 3.900 | 25.9% |
+| stablelm-base-alpha-3b/lora.yaml | stablelm-base-alpha-3b | 4 | 512 | 1 | 4xA10G | 13.16 min | $1.1 | 8.58 GB | 1.362 | 3.906 | 25.9% |
+| stablelm-base-alpha-3b/qlora.yaml | stablelm-base-alpha-3b | 4 | 512 | 1 | 1xA10G | 25.86 min | $0.8 | 5.24 GB | 1.388 | 4.009 | 26.1% |
+| | | | | | | | | | | | |
+| tiny-llama/full.yaml | tiny-llama | 1 | 512 | 4 | 1xA10G | 2.58 min | $0.1 | 14.10 GB | 1.088 | 2.968 | 24.6% |
+| tiny-llama/full.yaml | tiny-llama | 1 | 512 | 4 | 4xA10G | 2.57 min | $0.2 | 14.10 GB | 1.088 | 2.968 | 24.5% |
+| tiny-llama/lora.yaml | tiny-llama | 3 | 512 | 8 | 1xA10G | 8.09 min | $0.2 | 13.50 GB | 1.039 | 2.826 | 25.5% |
+| tiny-llama/qlora.yaml | tiny-llama | 3 | 512 | 8 | 1xA10G | 8.70 min | $0.3 | 16.24 GB | 1.056 | 2.874 | 25.3% |
+
+*OOM = Out of memory
+
+
+
+## Extending the context length
+
+If you require a longer sequence length than the one used in a given config file, you can either edit the `max_seq_length` value in the config file or pass an additional argument to the finetuning command, for example, `--max_seq_length 4096`, to override the sequence length provided in the config file.
+
+
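Following the pattern of the other commands in this file, such an override might look like:

```bash
# Override the config file's max_seq_length on the command line
litgpt finetune lora \
  --config config_hub/finetune/phi-2/lora.yaml \
  --max_seq_length 4096
```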
+## Training on GPUs without bfloat16 support
+
+If you are training on GPUs without bfloat-16 support, you need to change the `precision` option to `16-true` (16-bit floating point precision) or `16-mixed` (16/32-bit mixed precision) training:
+
+```bash
+litgpt finetune lora \
+  --config config_hub/finetune/phi-2/lora.yaml \
+  --precision 16-true
+```
+or
+
+```bash
+litgpt finetune lora \
+  --config config_hub/finetune/phi-2/lora.yaml \
+  --precision 16-mixed
+```
+
+Note that `16-true` is more compute- and memory-efficient, but it can sometimes lead to training convergence issues. In this case, it's recommended to use `16-mixed`.
+
+
+## Multi-GPU experiments
+
+All of the listed runs are single-GPU experiments; use `--devices 4` to utilize more than one GPU:
+
+
+```bash
+litgpt finetune lora \
+  --config config_hub/finetune/phi-2/lora.yaml \
+  --devices 4
+```
config_hub/finetune/falcon-7b/lora.yaml
ADDED
@@ -0,0 +1,131 @@
+# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
+checkpoint_dir: checkpoints/tiiuae/falcon-7b
+
+# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
+out_dir: out/finetune/lora-falcon-7b
+
+# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
+precision: bf16-true
+
+# If set, quantize the model with this algorithm. See ``tutorials/quantize.md`` for more information. (type: Optional[Literal['nf4', 'nf4-dq', 'fp4', 'fp4-dq', 'int8-training']], default: null)
+quantize:
+
+# How many devices/GPUs to use. (type: Union[int, str], default: 1)
+devices: 1
+
+# How many nodes to use. (type: int, default: 1)
+num_nodes: 1
+
+# The LoRA rank. (type: int, default: 8)
+lora_r: 32
+
+# The LoRA alpha. (type: int, default: 16)
+lora_alpha: 16
+
+# The LoRA dropout value. (type: float, default: 0.05)
+lora_dropout: 0.05
+
+# Whether to apply LoRA to the query weights in attention. (type: bool, default: True)
+lora_query: true
+
+# Whether to apply LoRA to the key weights in attention. (type: bool, default: False)
+lora_key: false
+
+# Whether to apply LoRA to the value weights in attention. (type: bool, default: True)
+lora_value: true
+
+# Whether to apply LoRA to the output projection in the attention block. (type: bool, default: False)
+lora_projection: false
+
+# Whether to apply LoRA to the weights of the MLP in the attention block. (type: bool, default: False)
+lora_mlp: false
+
+# Whether to apply LoRA to the output head in GPT. (type: bool, default: False)
+lora_head: false
+
+# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
+data:
+  class_path: litgpt.data.Alpaca2k
+  init_args:
+    mask_prompt: false
+    prompt_style: alpaca
+    ignore_index: -100
+    seed: 42
+    num_workers: 4
+
+# Training-related arguments. See ``litgpt.args.TrainArgs`` for details.
+train:
+  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
+  save_interval: 200
+
+  # Number of iterations between logging calls (type: int, default: 1)
+  log_interval: 1
+
+  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
+  global_batch_size: 8
+
+  # Number of samples per data-parallel rank (type: int, default: 4)
+  micro_batch_size: 1
+
+  # Number of iterations with learning rate warmup active (type: int, default: 100)
+  lr_warmup_steps: 10
+
+  # Number of epochs to train on (type: Optional[int], default: 5)
+  epochs: 4
+
+  # Total number of tokens to train on (type: Optional[int], default: null)
+  max_tokens:
+
+  # Limits the number of optimizer steps to run. (type: Optional[int], default: null)
+  max_steps:
+
+  # Limits the length of samples. Off by default. (type: Optional[int], default: null)
+  max_seq_length: 512
+
+  # Whether to tie the embedding weights with the language modeling head weights. (type: Optional[bool], default: null)
+  tie_embeddings:
+
+  # (type: Optional[float], default: null)
+  max_norm:
+
+  # (type: float, default: 6e-05)
+  min_lr: 6.0e-05
+
+# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details.
+eval:
+  # Number of optimizer steps between evaluation calls (type: int, default: 100)
+  interval: 100
+
+  # Number of tokens to generate (type: Optional[int], default: 100)
+  max_new_tokens: 100
+
+  # Number of iterations (type: int, default: 100)
+  max_iters: 100
+
+  # Whether to evaluate on the validation set at the beginning of the training
+  initial_validation: false
+
+  # Whether to evaluate on the validation set at the end of the training
+  final_validation: true
+
+# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
+logger_name: csv
+
+# The random seed to use for reproducibility. (type: int, default: 1337)
+seed: 1337
+
+# Optimizer-related arguments
+optimizer:
+  class_path: torch.optim.AdamW
+
+  init_args:
+    # (type: float, default: 0.001)
+    lr: 0.0002
+
+    # (type: float, default: 0.01)
+    weight_decay: 0.0
+
+    # (type: tuple, default: (0.9,0.999))
+    betas:
+      - 0.9
+      - 0.95
config_hub/finetune/falcon-7b/qlora.yaml
ADDED
@@ -0,0 +1,133 @@
# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/tiiuae/falcon-7b

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
out_dir: out/finetune/qlora-falcon-7b

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

# If set, quantize the model with this algorithm. See ``tutorials/quantize.md`` for more information. (type: Optional[Literal['nf4', 'nf4-dq', 'fp4', 'fp4-dq', 'int8-training']], default: null)
quantize: bnb.nf4

# How many devices/GPUs to use. (type: Union[int, str], default: 1)
devices: 1

# How many nodes to use. (type: int, default: 1)
num_nodes: 1

# The LoRA rank. (type: int, default: 8)
lora_r: 32

# The LoRA alpha. (type: int, default: 16)
lora_alpha: 16

# The LoRA dropout value. (type: float, default: 0.05)
lora_dropout: 0.05

# Whether to apply LoRA to the query weights in attention. (type: bool, default: True)
lora_query: true

# Whether to apply LoRA to the key weights in attention. (type: bool, default: False)
lora_key: false

# Whether to apply LoRA to the value weights in attention. (type: bool, default: True)
lora_value: true

# Whether to apply LoRA to the output projection in the attention block. (type: bool, default: False)
lora_projection: false

# Whether to apply LoRA to the weights of the MLP in the attention block. (type: bool, default: False)
lora_mlp: false

# Whether to apply LoRA to the output head in GPT. (type: bool, default: False)
lora_head: false

# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    val_split_fraction: 0.05
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4
    download_dir: data/alpaca2k

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:
  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 200

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
  global_batch_size: 8

  # Number of samples per data-parallel rank (type: int, default: 4)
  micro_batch_size: 1

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 10

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 4

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run (type: Optional[int], default: null)
  max_steps:

  # Limits the length of samples (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights (type: Optional[bool], default: null)
  tie_embeddings:

  # (type: Optional[float], default: null)
  max_norm:

  # (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 100)
  interval: 100

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

  # Whether to evaluate on the validation set at the beginning of the training
  initial_validation: false

  # Whether to evaluate on the validation set at the end of the training
  final_validation: true

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337

# Optimizer-related arguments
optimizer:
  class_path: torch.optim.AdamW

  init_args:
    # (type: float, default: 0.001)
    lr: 0.0002

    # (type: float, default: 0.01)
    weight_decay: 0.0

    # (type: tuple, default: (0.9,0.999))
    betas:
      - 0.9
      - 0.95
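The batch-size fields in these configs interact: `global_batch_size` is the effective batch across all data-parallel ranks, while `micro_batch_size` is what each rank processes per forward/backward pass, with the gap covered by gradient accumulation. The following is a hedged sketch of how these fields typically combine (an illustration using the YAML key names, not LitGPT's own code):

```python
# Sketch (assumption, not LitGPT's implementation): gradient-accumulation
# steps are the per-device share of the global batch divided by the
# micro-batch size. Parameter names mirror the YAML keys above.
def gradient_accumulation_steps(global_batch_size: int,
                                micro_batch_size: int,
                                devices: int = 1) -> int:
    per_device = global_batch_size // devices
    # The config must split evenly, otherwise the effective batch drifts.
    assert per_device % micro_batch_size == 0, "global batch must split evenly"
    return per_device // micro_batch_size

# Values from the falcon-7b qlora config above:
print(gradient_accumulation_steps(global_batch_size=8, micro_batch_size=1))  # 8
```

With `global_batch_size: 8` and `micro_batch_size: 1` on one device, each optimizer step accumulates gradients over 8 micro-batches.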
config_hub/finetune/gemma-2b/full.yaml
ADDED
@@ -0,0 +1,102 @@
# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/google/gemma-2b

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
out_dir: out/finetune/full-gemma-2b

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

# How many devices/GPUs to use. (type: Union[int, str], default: 1)
devices: 4

# How many nodes to use. (type: int, default: 1)
num_nodes: 1

# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    val_split_fraction: 0.03847
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:
  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 800

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
  global_batch_size: 16

  # Number of samples per data-parallel rank (type: int, default: 4)
  micro_batch_size: 1

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 100

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 1

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run. (type: Optional[int], default: null)
  max_steps: 50

  # Limits the length of samples. Off by default (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights. (type: Optional[bool], default: null)
  tie_embeddings:

  # (type: Optional[float], default: null)
  max_norm:

  # (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 100)
  interval: 25

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

  # Whether to evaluate on the validation set at the beginning of the training
  initial_validation: false

  # Whether to evaluate on the validation set at the end of the training
  final_validation: true

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337

# Optimizer-related arguments
optimizer:
  class_path: torch.optim.AdamW

  init_args:
    # (type: float, default: 0.001)
    lr: 0.0002

    # (type: float, default: 0.01)
    weight_decay: 0.0

    # (type: tuple, default: (0.9,0.999))
    betas:
      - 0.9
      - 0.95
config_hub/finetune/gemma-2b/lora.yaml
ADDED
@@ -0,0 +1,132 @@
# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/google/gemma-2b

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
out_dir: out/finetune/lora-gemma-2b

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

# If set, quantize the model with this algorithm. See ``tutorials/quantize.md`` for more information. (type: Optional[Literal['nf4', 'nf4-dq', 'fp4', 'fp4-dq', 'int8-training']], default: null)
quantize:

# How many devices/GPUs to use. (type: Union[int, str], default: 1)
devices: 1

# How many nodes to use. (type: int, default: 1)
num_nodes: 1

# The LoRA rank. (type: int, default: 8)
lora_r: 8

# The LoRA alpha. (type: int, default: 16)
lora_alpha: 16

# The LoRA dropout value. (type: float, default: 0.05)
lora_dropout: 0.1

# Whether to apply LoRA to the query weights in attention. (type: bool, default: True)
lora_query: true

# Whether to apply LoRA to the key weights in attention. (type: bool, default: False)
lora_key: true

# Whether to apply LoRA to the value weights in attention. (type: bool, default: True)
lora_value: true

# Whether to apply LoRA to the output projection in the attention block. (type: bool, default: False)
lora_projection: true

# Whether to apply LoRA to the weights of the MLP in the attention block. (type: bool, default: False)
lora_mlp: true

# Whether to apply LoRA to the output head in GPT. (type: bool, default: False)
lora_head: true

# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    val_split_fraction: 0.03847
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:
  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 800

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
  global_batch_size: 6

  # Number of samples per data-parallel rank (type: int, default: 4)
  micro_batch_size: 2

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 200

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 2

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run. (type: Optional[int], default: null)
  max_steps:

  # Limits the length of samples. Off by default (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights. (type: Optional[bool], default: null)
  tie_embeddings:

  # (type: Optional[float], default: null)
  max_norm:

  # (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 100)
  interval: 25

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

  # Whether to evaluate on the validation set at the beginning of the training
  initial_validation: false

  # Whether to evaluate on the validation set at the end of the training
  final_validation: true

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337

# Optimizer-related arguments
optimizer:
  class_path: torch.optim.AdamW

  init_args:
    # (type: float, default: 0.001)
    lr: 0.0002

    # (type: float, default: 0.01)
    weight_decay: 0.0

    # (type: tuple, default: (0.9,0.999))
    betas:
      - 0.9
      - 0.95
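The `lora_r` and `lora_alpha` fields interact too: in the standard LoRA formulation the low-rank update is scaled by `alpha / r`, so a fixed `lora_alpha: 16` means a smaller rank trains a more strongly scaled adapter. A hedged sketch of that relationship, using the YAML key names for illustration (this is the textbook LoRA scaling, not a claim about LitGPT internals):

```python
# Sketch: the standard LoRA update W + (alpha / r) * B @ A scales the
# low-rank product by alpha / r. Names mirror the YAML keys above.
def lora_scaling(lora_alpha: int, lora_r: int) -> float:
    return lora_alpha / lora_r

# gemma-2b lora.yaml above: lora_alpha 16, lora_r 8
print(lora_scaling(16, 8))   # 2.0
# falcon-7b qlora.yaml earlier: lora_alpha 16, lora_r 32
print(lora_scaling(16, 32))  # 0.5
```

This is why raising `lora_r` without touching `lora_alpha`, as several of these configs do, effectively damps the adapter's contribution as well as widening it.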
config_hub/finetune/gemma-2b/qlora.yaml
ADDED
@@ -0,0 +1,132 @@
# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/google/gemma-2b

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
out_dir: out/finetune/qlora-gemma-2b

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

# If set, quantize the model with this algorithm. See ``tutorials/quantize.md`` for more information. (type: Optional[Literal['nf4', 'nf4-dq', 'fp4', 'fp4-dq', 'int8-training']], default: null)
quantize: bnb.nf4

# How many devices/GPUs to use. (type: Union[int, str], default: 1)
devices: 1

# How many nodes to use. (type: int, default: 1)
num_nodes: 1

# The LoRA rank. (type: int, default: 8)
lora_r: 16

# The LoRA alpha. (type: int, default: 16)
lora_alpha: 16

# The LoRA dropout value. (type: float, default: 0.05)
lora_dropout: 0.1

# Whether to apply LoRA to the query weights in attention. (type: bool, default: True)
lora_query: true

# Whether to apply LoRA to the key weights in attention. (type: bool, default: False)
lora_key: true

# Whether to apply LoRA to the value weights in attention. (type: bool, default: True)
lora_value: true

# Whether to apply LoRA to the output projection in the attention block. (type: bool, default: False)
lora_projection: true

# Whether to apply LoRA to the weights of the MLP in the attention block. (type: bool, default: False)
lora_mlp: true

# Whether to apply LoRA to the output head in GPT. (type: bool, default: False)
lora_head: true

# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    val_split_fraction: 0.03847
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:
  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 800

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
  global_batch_size: 6

  # Number of samples per data-parallel rank (type: int, default: 4)
  micro_batch_size: 2

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 200

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 2

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run. (type: Optional[int], default: null)
  max_steps:

  # Limits the length of samples. Off by default (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights. (type: Optional[bool], default: null)
  tie_embeddings:

  # (type: Optional[float], default: null)
  max_norm:

  # (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 100)
  interval: 25

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

  # Whether to evaluate on the validation set at the beginning of the training
  initial_validation: false

  # Whether to evaluate on the validation set at the end of the training
  final_validation: true

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337

# Optimizer-related arguments
optimizer:
  class_path: torch.optim.AdamW

  init_args:
    # (type: float, default: 0.001)
    lr: 0.0002

    # (type: float, default: 0.01)
    weight_decay: 0.0

    # (type: tuple, default: (0.9,0.999))
    betas:
      - 0.9
      - 0.95
config_hub/finetune/gemma-7b/lora.yaml
ADDED
@@ -0,0 +1,132 @@
# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/google/gemma-7b

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
out_dir: out/finetune/qlora-gemma-7b

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

# If set, quantize the model with this algorithm. See ``tutorials/quantize.md`` for more information. (type: Optional[Literal['nf4', 'nf4-dq', 'fp4', 'fp4-dq', 'int8-training']], default: null)
quantize:

# How many devices/GPUs to use. (type: Union[int, str], default: 1)
devices: 1

# How many nodes to use. (type: int, default: 1)
num_nodes: 1

# The LoRA rank. (type: int, default: 8)
lora_r: 16

# The LoRA alpha. (type: int, default: 16)
lora_alpha: 16

# The LoRA dropout value. (type: float, default: 0.05)
lora_dropout: 0.1

# Whether to apply LoRA to the query weights in attention. (type: bool, default: True)
lora_query: true

# Whether to apply LoRA to the key weights in attention. (type: bool, default: False)
lora_key: true

# Whether to apply LoRA to the value weights in attention. (type: bool, default: True)
lora_value: true

# Whether to apply LoRA to the output projection in the attention block. (type: bool, default: False)
lora_projection: true

# Whether to apply LoRA to the weights of the MLP in the attention block. (type: bool, default: False)
lora_mlp: true

# Whether to apply LoRA to the output head in GPT. (type: bool, default: False)
lora_head: true

# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    val_split_fraction: 0.03847
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:
  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 800

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
  global_batch_size: 6

  # Number of samples per data-parallel rank (type: int, default: 4)
  micro_batch_size: 1

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 200

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 2

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run. (type: Optional[int], default: null)
  max_steps:

  # Limits the length of samples. Off by default (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights. (type: Optional[bool], default: null)
  tie_embeddings:

  # (type: Optional[float], default: null)
  max_norm:

  # (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 100)
  interval: 25

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

  # Whether to evaluate on the validation set at the beginning of the training
  initial_validation: false

  # Whether to evaluate on the validation set at the end of the training
  final_validation: true

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337

# Optimizer-related arguments
optimizer:
  class_path: torch.optim.AdamW

  init_args:
    # (type: float, default: 0.001)
    lr: 0.0002

    # (type: float, default: 0.01)
    weight_decay: 0.0

    # (type: tuple, default: (0.9,0.999))
    betas:
      - 0.9
      - 0.95
config_hub/finetune/gemma-7b/qlora.yaml
ADDED
@@ -0,0 +1,132 @@
# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/google/gemma-7b

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
out_dir: out/finetune/qlora-gemma-7b

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

# If set, quantize the model with this algorithm. See ``tutorials/quantize.md`` for more information. (type: Optional[Literal['nf4', 'nf4-dq', 'fp4', 'fp4-dq', 'int8-training']], default: null)
quantize: bnb.nf4

# How many devices/GPUs to use. (type: Union[int, str], default: 1)
devices: 1

# How many nodes to use. (type: int, default: 1)
num_nodes: 1

# The LoRA rank. (type: int, default: 8)
lora_r: 16

# The LoRA alpha. (type: int, default: 16)
lora_alpha: 16

# The LoRA dropout value. (type: float, default: 0.05)
lora_dropout: 0.1

# Whether to apply LoRA to the query weights in attention. (type: bool, default: True)
lora_query: true

# Whether to apply LoRA to the key weights in attention. (type: bool, default: False)
lora_key: true

# Whether to apply LoRA to the value weights in attention. (type: bool, default: True)
lora_value: true

# Whether to apply LoRA to the output projection in the attention block. (type: bool, default: False)
lora_projection: true

# Whether to apply LoRA to the weights of the MLP in the attention block. (type: bool, default: False)
lora_mlp: true

# Whether to apply LoRA to the output head in GPT. (type: bool, default: False)
lora_head: true

# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    val_split_fraction: 0.03847
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:
  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 800

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
  global_batch_size: 6

  # Number of samples per data-parallel rank (type: int, default: 4)
  micro_batch_size: 1

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 200

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 2

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run. (type: Optional[int], default: null)
  max_steps:

  # Limits the length of samples. Off by default (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights. (type: Optional[bool], default: null)
  tie_embeddings:

  # (type: Optional[float], default: null)
  max_norm:

  # (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 100)
  interval: 25

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

  # Whether to evaluate on the validation set at the beginning of the training
  initial_validation: false

  # Whether to evaluate on the validation set at the end of the training
  final_validation: true

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337

# Optimizer-related arguments
optimizer:
  class_path: torch.optim.AdamW

  init_args:
    # (type: float, default: 0.001)
    lr: 0.0002

    # (type: float, default: 0.01)
    weight_decay: 0.0

    # (type: tuple, default: (0.9,0.999))
    betas:
      - 0.9
      - 0.95
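The `global_batch_size` and `micro_batch_size` fields in the `train` section above imply gradient accumulation: with a global batch of 6, a micro-batch of 1, and a single device, six micro-batches must be accumulated per optimizer step. A minimal sketch of that arithmetic (the helper name is illustrative, not litgpt's API):

```python
def gradient_accumulation_iters(global_batch_size: int,
                                micro_batch_size: int,
                                devices: int) -> int:
    """Micro-batches each device processes before one optimizer step."""
    batch_per_step = micro_batch_size * devices
    # The global batch must divide evenly across devices and micro-batches.
    assert global_batch_size % batch_per_step == 0, "global batch must divide evenly"
    return global_batch_size // batch_per_step

# This qlora.yaml: 6 / (1 * 1) = 6 accumulation iterations
print(gradient_accumulation_iters(6, 1, 1))  # 6
```

Raising `micro_batch_size` (as the smaller-model configs below do) reduces the number of accumulation iterations without changing the effective batch size.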
config_hub/finetune/gemma2-2b/lora.yaml
ADDED
@@ -0,0 +1,132 @@
# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/google/gemma-2-2b

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
out_dir: out/finetune/lora-gemma-2-2b

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

# If set, quantize the model with this algorithm. See ``tutorials/quantize.md`` for more information. (type: Optional[Literal['nf4', 'nf4-dq', 'fp4', 'fp4-dq', 'int8-training']], default: null)
quantize:

# How many devices/GPUs to use. (type: Union[int, str], default: 1)
devices: 1

# How many nodes to use. (type: int, default: 1)
num_nodes: 1

# The LoRA rank. (type: int, default: 8)
lora_r: 8

# The LoRA alpha. (type: int, default: 16)
lora_alpha: 16

# The LoRA dropout value. (type: float, default: 0.05)
lora_dropout: 0.1

# Whether to apply LoRA to the query weights in attention. (type: bool, default: True)
lora_query: true

# Whether to apply LoRA to the key weights in attention. (type: bool, default: False)
lora_key: true

# Whether to apply LoRA to the value weights in attention. (type: bool, default: True)
lora_value: true

# Whether to apply LoRA to the output projection in the attention block. (type: bool, default: False)
lora_projection: true

# Whether to apply LoRA to the weights of the MLP in the attention block. (type: bool, default: False)
lora_mlp: true

# Whether to apply LoRA to the output head in GPT. (type: bool, default: False)
lora_head: true

# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    val_split_fraction: 0.03847
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:
  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 800

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
  global_batch_size: 6

  # Number of samples per data-parallel rank (type: int, default: 4)
  micro_batch_size: 2

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 200

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 2

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run. (type: Optional[int], default: null)
  max_steps:

  # Limits the length of samples. Off by default (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights. (type: Optional[bool], default: null)
  tie_embeddings:

  # (type: Optional[float], default: null)
  max_norm:

  # (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 100)
  interval: 25

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

  # Whether to evaluate on the validation set at the beginning of the training
  initial_validation: false

  # Whether to evaluate on the validation set at the end of the training
  final_validation: true

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337

# Optimizer-related arguments
optimizer:
  class_path: torch.optim.AdamW

  init_args:
    # (type: float, default: 0.001)
    lr: 0.0002

    # (type: float, default: 0.01)
    weight_decay: 0.0

    # (type: tuple, default: (0.9,0.999))
    betas:
      - 0.9
      - 0.95
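In standard LoRA, the low-rank update `B @ A` is scaled by `lora_alpha / lora_r` before being added to the frozen weight, so `lora_r` and `lora_alpha` jointly set the strength of the adapter. A small sketch of how the configs in this hub compare, assuming the usual alpha-over-r scaling:

```python
def lora_scaling(lora_alpha: int, lora_r: int) -> float:
    """Scaling factor applied to the low-rank update in standard LoRA."""
    return lora_alpha / lora_r

# This lora.yaml: lora_r=8, lora_alpha=16 -> update scaled by 2.0
print(lora_scaling(16, 8))   # 2.0
# The qlora.yaml variants: lora_r=16, lora_alpha=16 -> update scaled by 1.0
print(lora_scaling(16, 16))  # 1.0
```

Doubling the rank while keeping alpha fixed therefore halves the per-rank scaling, which is why the qlora configs can raise `lora_r` without also retuning `lora_alpha`.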
config_hub/finetune/gemma2-2b/qlora.yaml
ADDED
@@ -0,0 +1,132 @@
# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/google/gemma-2-2b

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
out_dir: out/finetune/qlora-gemma-2-2b

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

# If set, quantize the model with this algorithm. See ``tutorials/quantize.md`` for more information. (type: Optional[Literal['nf4', 'nf4-dq', 'fp4', 'fp4-dq', 'int8-training']], default: null)
quantize: bnb.nf4

# How many devices/GPUs to use. (type: Union[int, str], default: 1)
devices: 1

# How many nodes to use. (type: int, default: 1)
num_nodes: 1

# The LoRA rank. (type: int, default: 8)
lora_r: 16

# The LoRA alpha. (type: int, default: 16)
lora_alpha: 16

# The LoRA dropout value. (type: float, default: 0.05)
lora_dropout: 0.1

# Whether to apply LoRA to the query weights in attention. (type: bool, default: True)
lora_query: true

# Whether to apply LoRA to the key weights in attention. (type: bool, default: False)
lora_key: true

# Whether to apply LoRA to the value weights in attention. (type: bool, default: True)
lora_value: true

# Whether to apply LoRA to the output projection in the attention block. (type: bool, default: False)
lora_projection: true

# Whether to apply LoRA to the weights of the MLP in the attention block. (type: bool, default: False)
lora_mlp: true

# Whether to apply LoRA to the output head in GPT. (type: bool, default: False)
lora_head: true

# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    val_split_fraction: 0.03847
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:
  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 800

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
  global_batch_size: 6

  # Number of samples per data-parallel rank (type: int, default: 4)
  micro_batch_size: 2

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 200

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 2

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run. (type: Optional[int], default: null)
  max_steps:

  # Limits the length of samples. Off by default (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights. (type: Optional[bool], default: null)
  tie_embeddings:

  # (type: Optional[float], default: null)
  max_norm:

  # (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 100)
  interval: 25

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

  # Whether to evaluate on the validation set at the beginning of the training
  initial_validation: false

  # Whether to evaluate on the validation set at the end of the training
  final_validation: true

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337

# Optimizer-related arguments
optimizer:
  class_path: torch.optim.AdamW

  init_args:
    # (type: float, default: 0.001)
    lr: 0.0002

    # (type: float, default: 0.01)
    weight_decay: 0.0

    # (type: tuple, default: (0.9,0.999))
    betas:
      - 0.9
      - 0.95
config_hub/finetune/gemma2-9b/lora.yaml
ADDED
@@ -0,0 +1,132 @@
# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/google/gemma-2-9b

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
out_dir: out/finetune/lora-gemma-2-9b

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

# If set, quantize the model with this algorithm. See ``tutorials/quantize.md`` for more information. (type: Optional[Literal['nf4', 'nf4-dq', 'fp4', 'fp4-dq', 'int8-training']], default: null)
quantize:

# How many devices/GPUs to use. (type: Union[int, str], default: 1)
devices: 1

# How many nodes to use. (type: int, default: 1)
num_nodes: 1

# The LoRA rank. (type: int, default: 8)
lora_r: 16

# The LoRA alpha. (type: int, default: 16)
lora_alpha: 16

# The LoRA dropout value. (type: float, default: 0.05)
lora_dropout: 0.1

# Whether to apply LoRA to the query weights in attention. (type: bool, default: True)
lora_query: true

# Whether to apply LoRA to the key weights in attention. (type: bool, default: False)
lora_key: true

# Whether to apply LoRA to the value weights in attention. (type: bool, default: True)
lora_value: true

# Whether to apply LoRA to the output projection in the attention block. (type: bool, default: False)
lora_projection: true

# Whether to apply LoRA to the weights of the MLP in the attention block. (type: bool, default: False)
lora_mlp: true

# Whether to apply LoRA to the output head in GPT. (type: bool, default: False)
lora_head: true

# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    val_split_fraction: 0.03847
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:
  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 800

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
  global_batch_size: 6

  # Number of samples per data-parallel rank (type: int, default: 4)
  micro_batch_size: 1

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 200

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 2

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run. (type: Optional[int], default: null)
  max_steps:

  # Limits the length of samples. Off by default (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights. (type: Optional[bool], default: null)
  tie_embeddings:

  # (type: Optional[float], default: null)
  max_norm:

  # (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 100)
  interval: 25

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

  # Whether to evaluate on the validation set at the beginning of the training
  initial_validation: false

  # Whether to evaluate on the validation set at the end of the training
  final_validation: true

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337

# Optimizer-related arguments
optimizer:
  class_path: torch.optim.AdamW

  init_args:
    # (type: float, default: 0.001)
    lr: 0.0002

    # (type: float, default: 0.01)
    weight_decay: 0.0

    # (type: tuple, default: (0.9,0.999))
    betas:
      - 0.9
      - 0.95
config_hub/finetune/gemma2-9b/qlora.yaml
ADDED
@@ -0,0 +1,132 @@
# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/google/gemma-2-9b

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
out_dir: out/finetune/qlora-gemma-2-9b

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

# If set, quantize the model with this algorithm. See ``tutorials/quantize.md`` for more information. (type: Optional[Literal['nf4', 'nf4-dq', 'fp4', 'fp4-dq', 'int8-training']], default: null)
quantize: bnb.nf4

# How many devices/GPUs to use. (type: Union[int, str], default: 1)
devices: 1

# How many nodes to use. (type: int, default: 1)
num_nodes: 1

# The LoRA rank. (type: int, default: 8)
lora_r: 16

# The LoRA alpha. (type: int, default: 16)
lora_alpha: 16

# The LoRA dropout value. (type: float, default: 0.05)
lora_dropout: 0.1

# Whether to apply LoRA to the query weights in attention. (type: bool, default: True)
lora_query: true

# Whether to apply LoRA to the key weights in attention. (type: bool, default: False)
lora_key: true

# Whether to apply LoRA to the value weights in attention. (type: bool, default: True)
lora_value: true

# Whether to apply LoRA to the output projection in the attention block. (type: bool, default: False)
lora_projection: true

# Whether to apply LoRA to the weights of the MLP in the attention block. (type: bool, default: False)
lora_mlp: true

# Whether to apply LoRA to the output head in GPT. (type: bool, default: False)
lora_head: true

# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    val_split_fraction: 0.03847
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:
  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 800

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
  global_batch_size: 6

  # Number of samples per data-parallel rank (type: int, default: 4)
  micro_batch_size: 1

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 200

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 2

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run. (type: Optional[int], default: null)
  max_steps:

  # Limits the length of samples. Off by default (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights. (type: Optional[bool], default: null)
  tie_embeddings:

  # (type: Optional[float], default: null)
  max_norm:

  # (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 100)
  interval: 25

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

  # Whether to evaluate on the validation set at the beginning of the training
  initial_validation: false

  # Whether to evaluate on the validation set at the end of the training
  final_validation: true

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337

# Optimizer-related arguments
optimizer:
  class_path: torch.optim.AdamW

  init_args:
    # (type: float, default: 0.001)
    lr: 0.0002

    # (type: float, default: 0.01)
    weight_decay: 0.0

    # (type: tuple, default: (0.9,0.999))
    betas:
      - 0.9
      - 0.95
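The `lr`, `lr_warmup_steps`, and `min_lr` fields used throughout these configs describe a warmup-then-decay schedule: the learning rate ramps linearly up to `lr` over the warmup steps and then decays toward `min_lr`. A sketch under the assumption of cosine decay (the exact decay shape is not stated in these files):

```python
import math

def learning_rate(step: int, max_steps: int, lr: float = 2e-4,
                  warmup_steps: int = 200, min_lr: float = 6e-5) -> float:
    """Warmup-then-cosine schedule matching the config fields above."""
    if step < warmup_steps:
        return lr * step / warmup_steps            # linear warmup to lr
    progress = (step - warmup_steps) / max(1, max_steps - warmup_steps)
    cosine = 0.5 * (1.0 + math.cos(math.pi * progress))
    return min_lr + (lr - min_lr) * cosine         # decay toward min_lr

print(round(learning_rate(100, 1000), 6))  # halfway through warmup: 0.0001
```

At step `warmup_steps` the rate peaks at `lr`, and by `max_steps` it has settled at `min_lr`.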
config_hub/finetune/llama-2-7b/full.yaml (ADDED, +107 lines)

# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/meta-llama/Llama-2-7b-hf

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/finetune/full)
out_dir: out/finetune/full-llama2-7b

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

# How many devices/GPUs to use (type: Union[int, str], default: 1)
devices: 4

# How many nodes to use. (type: int, default: 1)
num_nodes: 1

# Path to a checkpoint directory to resume from in case training was interrupted, or ``True`` to resume
# from the latest checkpoint in ``out_dir``. An error will be raised if no checkpoint is found. Passing
# ``'auto'`` will resume from the latest checkpoint but not error if no checkpoint exists.
# (type: Union[bool, Literal["auto"], Path], default: False)
resume: false

# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:
  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 200

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 64)
  global_batch_size: 64

  # Number of samples per data-parallel rank (type: int, default: 1)
  micro_batch_size: 4

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 25

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 1

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run. (type: Optional[int], default: null)
  max_steps:

  # Limits the length of samples. Off by default (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights. (type: Optional[bool], default: null)
  tie_embeddings:

  # (type: Optional[float], default: null)
  max_norm:

  # (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 600)
  interval: 25

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

  # Whether to evaluate on the validation set at the beginning of the training
  initial_validation: false

  # Whether to evaluate on the validation set at the end of the training
  final_validation: true

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337

# Optimizer-related arguments
optimizer:
  class_path: torch.optim.AdamW

  init_args:
    # (type: float, default: 0.001)
    lr: 0.0002

    # (type: float, default: 0.01)
    weight_decay: 0.0

    # (type: tuple, default: (0.9,0.999))
    betas:
      - 0.9
      - 0.95
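In the full finetuning config above, `global_batch_size: 64` is reached by gradient accumulation: with `micro_batch_size: 4` on `devices: 4`, each optimizer step accumulates 64 / (4 × 4) = 4 micro-batches per device. A minimal sketch of that arithmetic (the helper name is mine, not a litgpt API):

```python
def gradient_accumulation_steps(global_batch_size: int, micro_batch_size: int, devices: int) -> int:
    """Micro-batches each device processes before one optimizer step."""
    samples_per_iteration = micro_batch_size * devices  # samples across all ranks per forward/backward
    if global_batch_size % samples_per_iteration != 0:
        raise ValueError("global_batch_size must be divisible by micro_batch_size * devices")
    return global_batch_size // samples_per_iteration

# Values from config_hub/finetune/llama-2-7b/full.yaml
print(gradient_accumulation_steps(global_batch_size=64, micro_batch_size=4, devices=4))  # 4
```

Raising `micro_batch_size` trades GPU memory for fewer accumulation steps while keeping the effective batch size, and therefore the optimization behavior, unchanged.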
config_hub/finetune/llama-2-7b/lora.yaml (ADDED, +131 lines)

# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/meta-llama/Llama-2-7b-hf

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
out_dir: out/finetune/lora-llama2-7b

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

# If set, quantize the model with this algorithm. See ``tutorials/quantize.md`` for more information. (type: Optional[Literal['nf4', 'nf4-dq', 'fp4', 'fp4-dq', 'int8-training']], default: null)
quantize:

# How many devices/GPUs to use. (type: Union[int, str], default: 1)
devices: 1

# How many nodes to use. (type: int, default: 1)
num_nodes: 1

# The LoRA rank. (type: int, default: 8)
lora_r: 32

# The LoRA alpha. (type: int, default: 16)
lora_alpha: 16

# The LoRA dropout value. (type: float, default: 0.05)
lora_dropout: 0.05

# Whether to apply LoRA to the query weights in attention. (type: bool, default: True)
lora_query: true

# Whether to apply LoRA to the key weights in attention. (type: bool, default: False)
lora_key: false

# Whether to apply LoRA to the value weights in attention. (type: bool, default: True)
lora_value: true

# Whether to apply LoRA to the output projection in the attention block. (type: bool, default: False)
lora_projection: false

# Whether to apply LoRA to the weights of the MLP in the attention block. (type: bool, default: False)
lora_mlp: false

# Whether to apply LoRA to output head in GPT. (type: bool, default: False)
lora_head: false

# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:
  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 200

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
  global_batch_size: 8

  # Number of samples per data-parallel rank (type: int, default: 4)
  micro_batch_size: 2

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 10

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 4

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run. (type: Optional[int], default: null)
  max_steps:

  # Limits the length of samples. Off by default (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights. (type: Optional[bool], default: null)
  tie_embeddings:

  # (type: Optional[float], default: null)
  max_norm:

  # (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 100)
  interval: 100

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

  # Whether to evaluate on the validation set at the beginning of the training
  initial_validation: false

  # Whether to evaluate on the validation set at the end of the training
  final_validation: true

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337

# Optimizer-related arguments
optimizer:
  class_path: torch.optim.AdamW

  init_args:
    # (type: float, default: 0.001)
    lr: 0.0002

    # (type: float, default: 0.01)
    weight_decay: 0.0

    # (type: tuple, default: (0.9,0.999))
    betas:
      - 0.9
      - 0.95
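This LoRA config adapts only the query and value projections (`lora_query`/`lora_value` are true, everything else false) with rank `lora_r: 32`, scaled by `lora_alpha / lora_r` = 16/32 = 0.5. As a rough back-of-the-envelope, assuming Llama 2 7B's 32 layers and 4096-dimensional hidden size (the exact litgpt parameter count may differ slightly), each adapted projection adds `r * (d_in + d_out)` trainable parameters:

```python
def lora_params(r: int, d_in: int, d_out: int) -> int:
    # LoRA factorizes the weight update as B @ A, with A of shape (r, d_in) and B of shape (d_out, r)
    return r * d_in + d_out * r

# Rough estimate for config_hub/finetune/llama-2-7b/lora.yaml:
# lora_r: 32, adapters on query and value only; Llama 2 7B: 32 layers, d_model = 4096
n_layers, d_model, r = 32, 4096, 32
per_layer = 2 * lora_params(r, d_model, d_model)  # query + value projections
total = n_layers * per_layer
print(total)  # 16777216, i.e. ~16.8M trainable parameters, roughly 0.25% of 7B
```

That small trainable fraction is why this recipe fits on a single GPU (`devices: 1`) while full finetuning of the same model uses four.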
config_hub/finetune/llama-2-7b/qlora.yaml (ADDED, +133 lines)

# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/meta-llama/Llama-2-7b-hf

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
out_dir: out/finetune/qlora-llama2-7b

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

# If set, quantize the model with this algorithm. See ``tutorials/quantize.md`` for more information. (type: Optional[Literal['nf4', 'nf4-dq', 'fp4', 'fp4-dq', 'int8-training']], default: null)
quantize: bnb.nf4

# How many devices/GPUs to use. (type: Union[int, str], default: 1)
devices: 1

# How many nodes to use. (type: int, default: 1)
num_nodes: 1

# The LoRA rank. (type: int, default: 8)
lora_r: 32

# The LoRA alpha. (type: int, default: 16)
lora_alpha: 16

# The LoRA dropout value. (type: float, default: 0.05)
lora_dropout: 0.05

# Whether to apply LoRA to the query weights in attention. (type: bool, default: True)
lora_query: true

# Whether to apply LoRA to the key weights in attention. (type: bool, default: False)
lora_key: false

# Whether to apply LoRA to the value weights in attention. (type: bool, default: True)
lora_value: true

# Whether to apply LoRA to the output projection in the attention block. (type: bool, default: False)
lora_projection: false

# Whether to apply LoRA to the weights of the MLP in the attention block. (type: bool, default: False)
lora_mlp: false

# Whether to apply LoRA to output head in GPT. (type: bool, default: False)
lora_head: false

# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    val_split_fraction: 0.05
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4
    download_dir: data/alpaca2k

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:
  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 200

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
  global_batch_size: 8

  # Number of samples per data-parallel rank (type: int, default: 4)
  micro_batch_size: 2

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 10

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 4

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run (type: Optional[int], default: null)
  max_steps:

  # Limits the length of samples (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights (type: Optional[bool], default: null)
  tie_embeddings:

  # (type: Optional[float], default: null)
  max_norm:

  # (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 100)
  interval: 100

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

  # Whether to evaluate on the validation set at the beginning of the training
  initial_validation: false

  # Whether to evaluate on the validation set at the end of the training
  final_validation: true

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337

# Optimizer-related arguments
optimizer:
  class_path: torch.optim.AdamW

  init_args:
    # (type: float, default: 0.001)
    lr: 0.0002

    # (type: float, default: 0.01)
    weight_decay: 0.0

    # (type: tuple, default: (0.9,0.999))
    betas:
      - 0.9
      - 0.95
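The only substantive difference between this file and the plain `lora.yaml` is `quantize: bnb.nf4`, which turns the recipe into QLoRA: the frozen base weights are stored in 4-bit NormalFloat while the LoRA adapters train in bf16 (`precision: bf16-true`). A crude weight-memory estimate, ignoring quantization constants, activations, and optimizer state (a sketch, not a measured number):

```python
def weight_gb(n_params: float, bits: int) -> float:
    """Approximate memory for model weights alone, in GiB."""
    return n_params * bits / 8 / 1024**3

n = 7e9  # ~7B parameters in Llama 2 7B
print(round(weight_gb(n, 16), 1))  # 13.0  (bf16 base weights)
print(round(weight_gb(n, 4), 1))   # 3.3   (nf4-quantized base weights)
```

Roughly a 4x reduction in base-weight memory, which is what lets this config run on a single consumer GPU.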
config_hub/finetune/llama-3-8b/full.yaml (ADDED, +107 lines)

# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/meta-llama/Meta-Llama-3-8B

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/finetune/full)
out_dir: out/finetune/full-llama-3-8b

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

# How many devices/GPUs to use (type: Union[int, str], default: 1)
devices: 4

# How many nodes to use. (type: int, default: 1)
num_nodes: 1

# Path to a checkpoint directory to resume from in case training was interrupted, or ``True`` to resume
# from the latest checkpoint in ``out_dir``. An error will be raised if no checkpoint is found. Passing
# ``'auto'`` will resume from the latest checkpoint but not error if no checkpoint exists.
# (type: Union[bool, Literal["auto"], Path], default: False)
resume: false

# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:
  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 200

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 64)
  global_batch_size: 64

  # Number of samples per data-parallel rank (type: int, default: 1)
  micro_batch_size: 4

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 25

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 1

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run. (type: Optional[int], default: null)
  max_steps:

  # Limits the length of samples. Off by default (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights. (type: Optional[bool], default: null)
  tie_embeddings:

  # (type: Optional[float], default: null)
  max_norm:

  # (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 600)
  interval: 25

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

  # Whether to evaluate on the validation set at the beginning of the training
  initial_validation: false

  # Whether to evaluate on the validation set at the end of the training
  final_validation: true

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337

# Optimizer-related arguments
optimizer:
  class_path: torch.optim.AdamW

  init_args:
    # (type: float, default: 0.001)
    lr: 0.0002

    # (type: float, default: 0.01)
    weight_decay: 0.1

    # (type: tuple, default: (0.9,0.999))
    betas:
      - 0.9
      - 0.95
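The `devices: 4` in the full-finetuning configs is not only about throughput: updating all ~8B parameters of Llama 3 with AdamW needs memory for weights, gradients, and two optimizer moments per parameter. A rough rule of thumb, assuming bf16 weights and gradients and fp32 AdamW moments and ignoring activations and any sharding or offloading litgpt's strategy may apply (so treat this as an upper-bound sketch):

```python
def full_finetune_state_gb(n_params: float, weight_bytes: int = 2,
                           grad_bytes: int = 2, optim_bytes: int = 8) -> float:
    # bf16 weights (2 B) + bf16 grads (2 B) + AdamW exp_avg and exp_avg_sq in fp32 (4 B each)
    return n_params * (weight_bytes + grad_bytes + optim_bytes) / 1024**3

print(round(full_finetune_state_gb(8e9)))  # 89  (GiB of training state, before activations)
```

Spread across 4 GPUs with FSDP-style sharding, that is on the order of 22 GiB per device, which is why this recipe targets multi-GPU nodes while the LoRA variants fit on one.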
config_hub/finetune/llama-3-8b/lora.yaml (ADDED, +131 lines)

# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/meta-llama/Meta-Llama-3-8B

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
out_dir: out/finetune/lora-llama-3-8b

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

# If set, quantize the model with this algorithm. See ``tutorials/quantize.md`` for more information. (type: Optional[Literal['nf4', 'nf4-dq', 'fp4', 'fp4-dq', 'int8-training']], default: null)
quantize:

# How many devices/GPUs to use. (type: Union[int, str], default: 1)
devices: 1

# How many nodes to use. (type: int, default: 1)
num_nodes: 1

# The LoRA rank. (type: int, default: 8)
lora_r: 32

# The LoRA alpha. (type: int, default: 16)
lora_alpha: 16

# The LoRA dropout value. (type: float, default: 0.05)
lora_dropout: 0.05

# Whether to apply LoRA to the query weights in attention. (type: bool, default: True)
lora_query: true

# Whether to apply LoRA to the key weights in attention. (type: bool, default: False)
lora_key: false

# Whether to apply LoRA to the value weights in attention. (type: bool, default: True)
lora_value: true

# Whether to apply LoRA to the output projection in the attention block. (type: bool, default: False)
lora_projection: false

# Whether to apply LoRA to the weights of the MLP in the attention block. (type: bool, default: False)
lora_mlp: false

# Whether to apply LoRA to output head in GPT. (type: bool, default: False)
lora_head: false

# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:
  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 200

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
  global_batch_size: 8

  # Number of samples per data-parallel rank (type: int, default: 4)
  micro_batch_size: 1

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 10

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 2

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run. (type: Optional[int], default: null)
  max_steps:

  # Limits the length of samples. Off by default (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights. (type: Optional[bool], default: null)
  tie_embeddings:

  # (type: Optional[float], default: null)
  max_norm:

  # (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 100)
  interval: 100

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

  # Whether to evaluate on the validation set at the beginning of the training
  initial_validation: false

  # Whether to evaluate on the validation set at the end of the training
  final_validation: true

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337

# Optimizer-related arguments
optimizer:
  class_path: torch.optim.AdamW

  init_args:
    # (type: float, default: 0.001)
    lr: 0.0002

    # (type: float, default: 0.01)
    weight_decay: 0.0

    # (type: tuple, default: (0.9,0.999))
    betas:
      - 0.9
      - 0.95
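Across these configs the optimizer runs at `lr: 0.0002` with a short linear warmup (`lr_warmup_steps`) and a floor of `min_lr: 6.0e-05`. litgpt's finetuning scripts warm up linearly and then decay toward `min_lr`; the sketch below assumes cosine decay, the common choice, so treat the exact curve shape as an assumption rather than litgpt's verbatim implementation:

```python
import math

def lr_at(step: int, max_steps: int, lr: float, min_lr: float, warmup_steps: int) -> float:
    if step < warmup_steps:
        return lr * (step + 1) / warmup_steps            # linear warmup toward peak lr
    progress = (step - warmup_steps) / max(1, max_steps - warmup_steps)
    cosine = 0.5 * (1 + math.cos(math.pi * progress))    # decays from 1 to 0
    return min_lr + (lr - min_lr) * cosine

# Values from the lora.yaml files above: lr 2e-4, min_lr 6e-5, lr_warmup_steps 10
print(lr_at(0, 1000, 2e-4, 6e-5, 10))     # 2e-05  (first warmup step)
print(lr_at(1000, 1000, 2e-4, 6e-5, 10))  # 6e-05  (bottoms out at min_lr)
```

A nonzero `min_lr` keeps late-training updates from vanishing entirely, which matters for the short 1-4 epoch runs configured here.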
config_hub/finetune/llama-3-8b/qlora.yaml
ADDED
|
@@ -0,0 +1,133 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/meta-llama/Meta-Llama-3-8B

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
out_dir: out/finetune/qlora-llama3-8b

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

# If set, quantize the model with this algorithm. See ``tutorials/quantize.md`` for more information. (type: Optional[Literal['nf4', 'nf4-dq', 'fp4', 'fp4-dq', 'int8-training']], default: null)
quantize: bnb.nf4

# How many devices/GPUs to use. (type: Union[int, str], default: 1)
devices: 1

# How many nodes to use. (type: int, default: 1)
num_nodes: 1

# The LoRA rank. (type: int, default: 8)
lora_r: 32

# The LoRA alpha. (type: int, default: 16)
lora_alpha: 16

# The LoRA dropout value. (type: float, default: 0.05)
lora_dropout: 0.05

# Whether to apply LoRA to the query weights in attention. (type: bool, default: True)
lora_query: true

# Whether to apply LoRA to the key weights in attention. (type: bool, default: False)
lora_key: false

# Whether to apply LoRA to the value weights in attention. (type: bool, default: True)
lora_value: true

# Whether to apply LoRA to the output projection in the attention block. (type: bool, default: False)
lora_projection: false

# Whether to apply LoRA to the weights of the MLP in the attention block. (type: bool, default: False)
lora_mlp: false

# Whether to apply LoRA to the output head in GPT. (type: bool, default: False)
lora_head: false

# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    val_split_fraction: 0.05
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4
    download_dir: data/alpaca2k

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:
  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 200

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
  global_batch_size: 8

  # Number of samples per data-parallel rank (type: int, default: 4)
  micro_batch_size: 2

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 10

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 2

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run (type: Optional[int], default: null)
  max_steps:

  # Limits the length of samples (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights (type: Optional[bool], default: null)
  tie_embeddings:

  # (type: Optional[float], default: null)
  max_norm:

  # (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 100)
  interval: 100

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

  # Whether to evaluate on the validation set at the beginning of the training
  initial_validation: false

  # Whether to evaluate on the validation set at the end of the training
  final_validation: true

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337

# Optimizer-related arguments
optimizer:
  class_path: torch.optim.AdamW

  init_args:
    # (type: float, default: 0.001)
    lr: 0.0002

    # (type: float, default: 0.01)
    weight_decay: 0.0

    # (type: tuple, default: (0.9,0.999))
    betas:
      - 0.9
      - 0.95
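The batch-size fields in the config above (`global_batch_size`, `micro_batch_size`, `devices`) jointly determine how many micro-batches are accumulated before each optimizer step. A minimal sketch of that arithmetic (illustrative only, not litgpt's actual code):

```python
def accumulation_steps(global_batch_size: int, micro_batch_size: int, devices: int) -> int:
    """Micro-batches accumulated per optimizer step on each device."""
    per_step = micro_batch_size * devices
    if global_batch_size % per_step != 0:
        raise ValueError("global_batch_size must be divisible by micro_batch_size * devices")
    return global_batch_size // per_step

# With the QLoRA settings above (global_batch_size=8, micro_batch_size=2, devices=1):
print(accumulation_steps(8, 2, 1))  # -> 4
```

Raising `micro_batch_size` trades GPU memory for fewer accumulation passes without changing the effective batch size.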
config_hub/finetune/llama-3.1-8b/full.yaml
ADDED
# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/meta-llama/Meta-Llama-3.1-8B

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/finetune/full)
out_dir: out/finetune/full-llama-3.1-8b

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

# How many devices/GPUs to use (type: Union[int, str], default: 1)
devices: 4

# How many nodes to use. (type: int, default: 1)
num_nodes: 1

# Path to a checkpoint directory to resume from in case training was interrupted, or ``True`` to resume
# from the latest checkpoint in ``out_dir``. An error will be raised if no checkpoint is found. Passing
# ``'auto'`` will resume from the latest checkpoint but not error if no checkpoint exists.
# (type: Union[bool, Literal["auto"], Path], default: False)
resume: false

# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:
  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 200

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 64)
  global_batch_size: 64

  # Number of samples per data-parallel rank (type: int, default: 1)
  micro_batch_size: 4

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 25

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 1

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run. (type: Optional[int], default: null)
  max_steps:

  # Limits the length of samples. Off by default (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights. (type: Optional[bool], default: null)
  tie_embeddings:

  # (type: Optional[float], default: null)
  max_norm:

  # (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 600)
  interval: 25

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

  # Whether to evaluate on the validation set at the beginning of the training
  initial_validation: false

  # Whether to evaluate on the validation set at the end of the training
  final_validation: true

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337

# Optimizer-related arguments
optimizer:
  class_path: torch.optim.AdamW

  init_args:
    # (type: float, default: 0.001)
    lr: 0.0002

    # (type: float, default: 0.01)
    weight_decay: 0.1

    # (type: tuple, default: (0.9,0.999))
    betas:
      - 0.9
      - 0.95
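The `lr_warmup_steps`, `min_lr`, and optimizer `lr` fields above together describe a warmup-then-decay learning-rate schedule. A hedged sketch of one common shape (linear warmup to the peak, then cosine decay down to `min_lr`; litgpt's exact schedule may differ in details):

```python
import math

def lr_at_step(step: int, max_steps: int, max_lr: float = 2e-4,
               min_lr: float = 6e-5, warmup_steps: int = 25) -> float:
    """Linear warmup to max_lr, then cosine decay down to min_lr."""
    if step < warmup_steps:
        return max_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, max_steps - warmup_steps)
    return min_lr + 0.5 * (max_lr - min_lr) * (1 + math.cos(math.pi * progress))
```

With the values in this config, the rate climbs to 2e-4 over the first 25 steps and never decays below 6e-5.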
config_hub/finetune/llama-3.1-8b/lora.yaml
ADDED
# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/meta-llama/Meta-Llama-3.1-8B

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
out_dir: out/finetune/lora-llama-3.1-8b

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

# If set, quantize the model with this algorithm. See ``tutorials/quantize.md`` for more information. (type: Optional[Literal['nf4', 'nf4-dq', 'fp4', 'fp4-dq', 'int8-training']], default: null)
quantize:

# How many devices/GPUs to use. (type: Union[int, str], default: 1)
devices: 1

# How many nodes to use. (type: int, default: 1)
num_nodes: 1

# The LoRA rank. (type: int, default: 8)
lora_r: 32

# The LoRA alpha. (type: int, default: 16)
lora_alpha: 16

# The LoRA dropout value. (type: float, default: 0.05)
lora_dropout: 0.05

# Whether to apply LoRA to the query weights in attention. (type: bool, default: True)
lora_query: true

# Whether to apply LoRA to the key weights in attention. (type: bool, default: False)
lora_key: false

# Whether to apply LoRA to the value weights in attention. (type: bool, default: True)
lora_value: true

# Whether to apply LoRA to the output projection in the attention block. (type: bool, default: False)
lora_projection: false

# Whether to apply LoRA to the weights of the MLP in the attention block. (type: bool, default: False)
lora_mlp: false

# Whether to apply LoRA to the output head in GPT. (type: bool, default: False)
lora_head: false

# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:
  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 200

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
  global_batch_size: 8

  # Number of samples per data-parallel rank (type: int, default: 4)
  micro_batch_size: 1

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 10

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 2

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run. (type: Optional[int], default: null)
  max_steps:

  # Limits the length of samples. Off by default (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights. (type: Optional[bool], default: null)
  tie_embeddings:

  # (type: Optional[float], default: null)
  max_norm:

  # (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 100)
  interval: 100

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

  # Whether to evaluate on the validation set at the beginning of the training
  initial_validation: false

  # Whether to evaluate on the validation set at the end of the training
  final_validation: true

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337

# Optimizer-related arguments
optimizer:
  class_path: torch.optim.AdamW

  init_args:
    # (type: float, default: 0.001)
    lr: 0.0002

    # (type: float, default: 0.01)
    weight_decay: 0.0

    # (type: tuple, default: (0.9,0.999))
    betas:
      - 0.9
      - 0.95
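The `lora_r` and `lora_alpha` fields above control the low-rank update `y = W x + (alpha / r) * B (A x)`. A minimal pure-Python sketch of a LoRA forward pass with toy dimensions (in real LoRA, `B` is zero-initialized, so the adapted layer initially matches the frozen base layer exactly):

```python
def lora_linear(x, W, A, B, alpha, r):
    """y = W @ x + (alpha / r) * B @ (A @ x), with plain lists as matrices."""
    scaling = alpha / r
    base = [sum(w * xi for w, xi in zip(row, x)) for row in W]   # W @ x
    ax = [sum(a * xi for a, xi in zip(row, x)) for row in A]     # A @ x, length r
    bax = [sum(b * ai for b, ai in zip(row, ax)) for row in B]   # B @ (A @ x)
    return [y + scaling * d for y, d in zip(base, bax)]

W = [[1.0, 2.0], [3.0, 4.0]]   # frozen base weight (2 x 2)
A = [[0.1, 0.2]]               # r x d_in, with r = 1
B_zero = [[0.0], [0.0]]        # d_out x r, zero-initialized
x = [1.0, 1.0]

print(lora_linear(x, W, A, B_zero, alpha=16, r=1))  # -> [3.0, 7.0], identical to W @ x
```

During training, only `A` and `B` receive gradients; `W` stays frozen, which is why the optimizer state here is so much smaller than for full finetuning.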
config_hub/finetune/llama-3.1-8b/qlora.yaml
ADDED
# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/meta-llama/Meta-Llama-3.1-8B

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
out_dir: out/finetune/qlora-llama3.1-8b

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

# If set, quantize the model with this algorithm. See ``tutorials/quantize.md`` for more information. (type: Optional[Literal['nf4', 'nf4-dq', 'fp4', 'fp4-dq', 'int8-training']], default: null)
quantize: bnb.nf4

# How many devices/GPUs to use. (type: Union[int, str], default: 1)
devices: 1

# How many nodes to use. (type: int, default: 1)
num_nodes: 1

# The LoRA rank. (type: int, default: 8)
lora_r: 32

# The LoRA alpha. (type: int, default: 16)
lora_alpha: 16

# The LoRA dropout value. (type: float, default: 0.05)
lora_dropout: 0.05

# Whether to apply LoRA to the query weights in attention. (type: bool, default: True)
lora_query: true

# Whether to apply LoRA to the key weights in attention. (type: bool, default: False)
lora_key: false

# Whether to apply LoRA to the value weights in attention. (type: bool, default: True)
lora_value: true

# Whether to apply LoRA to the output projection in the attention block. (type: bool, default: False)
lora_projection: false

# Whether to apply LoRA to the weights of the MLP in the attention block. (type: bool, default: False)
lora_mlp: false

# Whether to apply LoRA to the output head in GPT. (type: bool, default: False)
lora_head: false

# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    val_split_fraction: 0.05
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4
    download_dir: data/alpaca2k

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:
  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 200

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
  global_batch_size: 8

  # Number of samples per data-parallel rank (type: int, default: 4)
  micro_batch_size: 2

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 10

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 2

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run (type: Optional[int], default: null)
  max_steps:

  # Limits the length of samples (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights (type: Optional[bool], default: null)
  tie_embeddings:

  # (type: Optional[float], default: null)
  max_norm:

  # (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 100)
  interval: 100

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

  # Whether to evaluate on the validation set at the beginning of the training
  initial_validation: false

  # Whether to evaluate on the validation set at the end of the training
  final_validation: true

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337

# Optimizer-related arguments
optimizer:
  class_path: torch.optim.AdamW

  init_args:
    # (type: float, default: 0.001)
    lr: 0.0002

    # (type: float, default: 0.01)
    weight_decay: 0.0

    # (type: tuple, default: (0.9,0.999))
    betas:
      - 0.9
      - 0.95
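`quantize: bnb.nf4` stores the frozen base weights in 4 bits via bitsandbytes, which is what distinguishes these QLoRA configs from the plain LoRA ones. As a rough intuition only, here is a simplified absmax 4-bit round-trip; it uses a uniform grid, not the actual NF4 codebook, which places its 16 levels at quantiles of a normal distribution:

```python
def quantize_4bit(block):
    """Absmax quantization of a block of floats to 15 signed levels (fits in 4 bits)."""
    absmax = max(abs(v) for v in block) or 1.0
    codes = [round(v / absmax * 7) for v in block]   # integers in [-7, 7]
    return codes, absmax

def dequantize_4bit(codes, absmax):
    """Recover approximate floats from the 4-bit codes and the per-block scale."""
    return [c / 7 * absmax for c in codes]

weights = [0.5, -1.0, 0.25, 0.1]
codes, absmax = quantize_4bit(weights)
restored = dequantize_4bit(codes, absmax)   # close to, but not exactly, the originals
```

Only the frozen weights are quantized; the LoRA matrices themselves train in the `precision` set above, so gradients stay in bf16.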
config_hub/finetune/llama-3.2-1B/full.yaml
ADDED
# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/meta-llama/Llama-3.2-1B

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/finetune/full)
out_dir: out/finetune/full-llama-3.2-1B

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

# How many devices/GPUs to use (type: Union[int, str], default: 1)
devices: 1

# How many nodes to use. (type: int, default: 1)
num_nodes: 1

# Path to a checkpoint directory to resume from in case training was interrupted, or ``True`` to resume
# from the latest checkpoint in ``out_dir``. An error will be raised if no checkpoint is found. Passing
# ``'auto'`` will resume from the latest checkpoint but not error if no checkpoint exists.
# (type: Union[bool, Literal["auto"], Path], default: False)
# resume: false

# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:
  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 200

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 64)
  global_batch_size: 64

  # Number of samples per data-parallel rank (type: int, default: 1)
  micro_batch_size: 4

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 25

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 1

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run. (type: Optional[int], default: null)
  max_steps:

  # Limits the length of samples. Off by default (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights. (type: Optional[bool], default: null)
  tie_embeddings:

  # (type: Optional[float], default: null)
  max_norm:

  # (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 600)
  interval: 25

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

  # Whether to evaluate on the validation set at the beginning of the training
  initial_validation: false

  # Whether to evaluate on the validation set at the end of the training
  final_validation: true

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337

# Optimizer-related arguments
optimizer:
  class_path: torch.optim.AdamW

  init_args:
    # (type: float, default: 0.001)
    lr: 0.0002

    # (type: float, default: 0.01)
    weight_decay: 0.1

    # (type: tuple, default: (0.9,0.999))
    betas:
      - 0.9
      - 0.95
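The optimizer section above selects AdamW with `weight_decay: 0.1` and `betas: (0.9, 0.95)`. A single-parameter sketch of the decoupled AdamW update (illustrative only, not torch internals); the defining feature is that decay is applied directly to the weight instead of being folded into the gradient:

```python
def adamw_step(p, grad, m, v, t, lr=2e-4, beta1=0.9, beta2=0.95,
               eps=1e-8, weight_decay=0.1):
    """One decoupled-weight-decay Adam (AdamW) update for a scalar parameter."""
    m = beta1 * m + (1 - beta1) * grad            # first-moment EMA
    v = beta2 * v + (1 - beta2) * grad * grad     # second-moment EMA
    m_hat = m / (1 - beta1 ** t)                  # bias correction for step t
    v_hat = v / (1 - beta2 ** t)
    p = p - lr * weight_decay * p                 # decoupled decay on the weight itself
    p = p - lr * m_hat / (v_hat ** 0.5 + eps)     # Adam step on the gradient moments
    return p, m, v

p, m, v = adamw_step(p=1.0, grad=0.5, m=0.0, v=0.0, t=1)
```

Note the LoRA configs in this hub set `weight_decay: 0.0`, while the full-finetune configs use `0.1`; decay on low-rank adapters is usually unnecessary.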
config_hub/finetune/llama-3.2-1B/lora.yaml
ADDED
| 1 |
+
# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
|
| 2 |
+
checkpoint_dir: checkpoints/meta-llama/Llama-3.2-1B
|
| 3 |
+
|
| 4 |
+
# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
|
| 5 |
+
out_dir: out/finetune/lora-llama-3.2-1B
|
| 6 |
+
|
| 7 |
+
# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
|
| 8 |
+
precision: bf16-true
|
| 9 |
+
|
| 10 |
+
# If set, quantize the model with this algorithm. See ``tutorials/quantize.md`` for more information. (type: Optional[Literal['nf4', 'nf4-dq', 'fp4', 'fp4-dq', 'int8-training']], default: null)
|
| 11 |
+
quantize:
|
| 12 |
+
|
| 13 |
+
# How many devices/GPUs to use. (type: Union[int, str], default: 1)
|
| 14 |
+
devices: 1
|
| 15 |
+
|
| 16 |
+
# How many nodes to use. (type: int, default: 1)
|
| 17 |
+
num_nodes: 1
|
| 18 |
+
|
| 19 |
+
# The LoRA rank. (type: int, default: 8)
|
| 20 |
+
lora_r: 32
|
| 21 |
+
|
| 22 |
+
# The LoRA alpha. (type: int, default: 16)
|
| 23 |
+
lora_alpha: 16
|
| 24 |
+
|
| 25 |
+
# The LoRA dropout value. (type: float, default: 0.05)
|
| 26 |
+
lora_dropout: 0.05
|
| 27 |
+
|
| 28 |
+
# Whether to apply LoRA to the query weights in attention. (type: bool, default: True)
|
| 29 |
+
lora_query: true
|
| 30 |
+
|
| 31 |
+
# Whether to apply LoRA to the key weights in attention. (type: bool, default: False)
|
| 32 |
+
lora_key: false
|
| 33 |
+
|
| 34 |
+
# Whether to apply LoRA to the value weights in attention. (type: bool, default: True)
|
| 35 |
+
lora_value: true
|
| 36 |
+
|
| 37 |
+
# Whether to apply LoRA to the output projection in the attention block. (type: bool, default: False)
|
| 38 |
+
lora_projection: false
|
| 39 |
+
|
| 40 |
+
# Whether to apply LoRA to the weights of the MLP in the attention block. (type: bool, default: False)
|
| 41 |
+
lora_mlp: false
|
| 42 |
+
|
| 43 |
+
# Whether to apply LoRA to output head in GPT. (type: bool, default: False)
|
| 44 |
+
lora_head: false
|
| 45 |
+
|
# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:
  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 200

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
  global_batch_size: 8

  # Number of samples per data-parallel rank (type: int, default: 4)
  micro_batch_size: 1

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 10

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 2

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run. (type: Optional[int], default: null)
  max_steps:

  # Limits the length of samples. Off by default (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights. (type: Optional[bool], default: null)
  tie_embeddings:

  # (type: Optional[float], default: null)
  max_norm:

  # (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 100)
  interval: 100

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

  # Whether to evaluate on the validation set at the beginning of the training
  initial_validation: false

  # Whether to evaluate on the validation set at the end of the training
  final_validation: true

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337

# Optimizer-related arguments
optimizer:
  class_path: torch.optim.AdamW

  init_args:
    # (type: float, default: 0.001)
    lr: 0.0002

    # (type: float, default: 0.01)
    weight_decay: 0.0

    # (type: tuple, default: (0.9,0.999))
    betas:
      - 0.9
      - 0.95
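The `train` block above fixes the gradient-accumulation schedule implicitly: with `global_batch_size: 8` and `micro_batch_size: 1`, each optimizer step accumulates gradients over several forward passes. A minimal sketch of that arithmetic (the helper name is ours, not part of litgpt):

```python
def accumulation_steps(global_batch_size: int, micro_batch_size: int,
                       world_size: int = 1) -> int:
    """How many micro-batches each rank accumulates before one optimizer step."""
    per_step = micro_batch_size * world_size
    if global_batch_size % per_step != 0:
        raise ValueError("global_batch_size must be divisible by "
                         "micro_batch_size * world_size")
    return global_batch_size // per_step

# Values from the config above: 8 samples per optimizer step, 1 per forward pass.
print(accumulation_steps(8, 1))  # → 8
```

Raising `micro_batch_size` (or `devices`) reduces the accumulation count without changing the effective batch size.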
config_hub/finetune/llama-3.2-1B/qlora.yaml
ADDED
@@ -0,0 +1,133 @@
# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/meta-llama/Llama-3.2-1B

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
out_dir: out/finetune/qlora-llama3.2-1b

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

# If set, quantize the model with this algorithm. See ``tutorials/quantize.md`` for more information. (type: Optional[Literal['nf4', 'nf4-dq', 'fp4', 'fp4-dq', 'int8-training']], default: null)
quantize: bnb.nf4

# How many devices/GPUs to use. (type: Union[int, str], default: 1)
devices: 1

# How many nodes to use. (type: int, default: 1)
num_nodes: 1

# The LoRA rank. (type: int, default: 8)
lora_r: 32

# The LoRA alpha. (type: int, default: 16)
lora_alpha: 16

# The LoRA dropout value. (type: float, default: 0.05)
lora_dropout: 0.05

# Whether to apply LoRA to the query weights in attention. (type: bool, default: True)
lora_query: true

# Whether to apply LoRA to the key weights in attention. (type: bool, default: False)
lora_key: false

# Whether to apply LoRA to the value weights in attention. (type: bool, default: True)
lora_value: true

# Whether to apply LoRA to the output projection in the attention block. (type: bool, default: False)
lora_projection: false

# Whether to apply LoRA to the weights of the MLP in the attention block. (type: bool, default: False)
lora_mlp: false

# Whether to apply LoRA to the output head in GPT. (type: bool, default: False)
lora_head: false

# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    val_split_fraction: 0.05
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4
    download_dir: data/alpaca2k

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:
  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 200

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
  global_batch_size: 8

  # Number of samples per data-parallel rank (type: int, default: 4)
  micro_batch_size: 2

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 10

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 2

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run (type: Optional[int], default: null)
  max_steps:

  # Limits the length of samples (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights (type: Optional[bool], default: null)
  tie_embeddings:

  # (type: Optional[float], default: null)
  max_norm:

  # (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 100)
  interval: 100

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

  # Whether to evaluate on the validation set at the beginning of the training
  initial_validation: false

  # Whether to evaluate on the validation set at the end of the training
  final_validation: true

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337

# Optimizer-related arguments
optimizer:
  class_path: torch.optim.AdamW

  init_args:
    # (type: float, default: 0.001)
    lr: 0.0002

    # (type: float, default: 0.01)
    weight_decay: 0.0

    # (type: tuple, default: (0.9,0.999))
    betas:
      - 0.9
      - 0.95
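With `lora_r: 32` and LoRA applied only to the query and value projections, the number of trainable parameters this config adds is easy to bound. For one adapted linear layer of shape (d_out, d_in), LoRA trains an A matrix of shape (r, d_in) and a B matrix of shape (d_out, r). A small sketch (the 2048 width below is a hypothetical example, not read from the model):

```python
def lora_params(d_in: int, d_out: int, r: int) -> int:
    """Extra trainable parameters LoRA adds to one (d_out x d_in) linear layer:
    A of shape (r, d_in) plus B of shape (d_out, r)."""
    return r * (d_in + d_out)

# Hypothetical square attention projection of width 2048 at the config's rank 32:
print(lora_params(2048, 2048, 32))  # → 131072 extra parameters per adapted layer
```

The base weights stay frozen (and, under `quantize: bnb.nf4`, stored in 4-bit NF4), which is what keeps QLoRA's memory footprint small.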
config_hub/finetune/llama-3.2-3B/full.yaml
ADDED
@@ -0,0 +1,107 @@
# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/meta-llama/Llama-3.2-3B

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/finetune/full)
out_dir: out/finetune/full-llama-3.2-3B

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

# How many devices/GPUs to use (type: Union[int, str], default: 1)
devices: 1

# How many nodes to use. (type: int, default: 1)
num_nodes: 1

# Path to a checkpoint directory to resume from in case training was interrupted, or ``True`` to resume
# from the latest checkpoint in ``out_dir``. An error will be raised if no checkpoint is found. Passing
# ``'auto'`` will resume from the latest checkpoint but not error if no checkpoint exists.
# (type: Union[bool, Literal["auto"], Path], default: False)
# resume: false

# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:
  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 200

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 64)
  global_batch_size: 64

  # Number of samples per data-parallel rank (type: int, default: 1)
  micro_batch_size: 4

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 25

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 1

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run. (type: Optional[int], default: null)
  max_steps:

  # Limits the length of samples. Off by default (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights. (type: Optional[bool], default: null)
  tie_embeddings:

  # (type: Optional[float], default: null)
  max_norm:

  # (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 600)
  interval: 25

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

  # Whether to evaluate on the validation set at the beginning of the training
  initial_validation: false

  # Whether to evaluate on the validation set at the end of the training
  final_validation: true

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337

# Optimizer-related arguments
optimizer:
  class_path: torch.optim.AdamW

  init_args:
    # (type: float, default: 0.001)
    lr: 0.0002

    # (type: float, default: 0.01)
    weight_decay: 0.1

    # (type: tuple, default: (0.9,0.999))
    betas:
      - 0.9
      - 0.95
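The learning-rate settings above (`lr: 0.0002`, `min_lr: 6.0e-05`, `lr_warmup_steps: 25`) imply a warmup-then-decay curve. A sketch of one common shape, linear warmup followed by cosine decay; litgpt's exact scheduler may differ, and the `max_steps` value here is hypothetical:

```python
import math

def lr_at(step: int, max_steps: int, lr: float = 2.0e-4,
          min_lr: float = 6.0e-5, warmup: int = 25) -> float:
    # Linear warmup to the peak lr over `warmup` steps...
    if step < warmup:
        return lr * (step + 1) / warmup
    # ...then cosine decay from lr down to min_lr by `max_steps`.
    progress = (step - warmup) / max(1, max_steps - warmup)
    return min_lr + 0.5 * (lr - min_lr) * (1 + math.cos(math.pi * progress))

print(lr_at(24, 1000))    # → 0.0002 (peak reached at the end of warmup)
print(lr_at(1000, 1000))  # → 6e-05 (fully decayed to min_lr)
```

The short warmup makes sense here: with `global_batch_size: 64` a single Alpaca2k epoch only contains a few dozen optimizer steps.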
config_hub/finetune/llama-3.2-3B/lora.yaml
ADDED
@@ -0,0 +1,131 @@
# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/meta-llama/Llama-3.2-3B

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
out_dir: out/finetune/lora-llama-3.2-3B

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

# If set, quantize the model with this algorithm. See ``tutorials/quantize.md`` for more information. (type: Optional[Literal['nf4', 'nf4-dq', 'fp4', 'fp4-dq', 'int8-training']], default: null)
quantize:

# How many devices/GPUs to use. (type: Union[int, str], default: 1)
devices: 1

# How many nodes to use. (type: int, default: 1)
num_nodes: 1

# The LoRA rank. (type: int, default: 8)
lora_r: 32

# The LoRA alpha. (type: int, default: 16)
lora_alpha: 16

# The LoRA dropout value. (type: float, default: 0.05)
lora_dropout: 0.05

# Whether to apply LoRA to the query weights in attention. (type: bool, default: True)
lora_query: true

# Whether to apply LoRA to the key weights in attention. (type: bool, default: False)
lora_key: false

# Whether to apply LoRA to the value weights in attention. (type: bool, default: True)
lora_value: true

# Whether to apply LoRA to the output projection in the attention block. (type: bool, default: False)
lora_projection: false

# Whether to apply LoRA to the weights of the MLP in the attention block. (type: bool, default: False)
lora_mlp: false

# Whether to apply LoRA to the output head in GPT. (type: bool, default: False)
lora_head: false

# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:
  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 200

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
  global_batch_size: 8

  # Number of samples per data-parallel rank (type: int, default: 4)
  micro_batch_size: 1

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 10

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 2

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run. (type: Optional[int], default: null)
  max_steps:

  # Limits the length of samples. Off by default (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights. (type: Optional[bool], default: null)
  tie_embeddings:

  # (type: Optional[float], default: null)
  max_norm:

  # (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 100)
  interval: 100

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

  # Whether to evaluate on the validation set at the beginning of the training
  initial_validation: false

  # Whether to evaluate on the validation set at the end of the training
  final_validation: true

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337

# Optimizer-related arguments
optimizer:
  class_path: torch.optim.AdamW

  init_args:
    # (type: float, default: 0.001)
    lr: 0.0002

    # (type: float, default: 0.01)
    weight_decay: 0.0

    # (type: tuple, default: (0.9,0.999))
    betas:
      - 0.9
      - 0.95
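Note the interaction between `lora_alpha: 16` and `lora_r: 32`: in the standard LoRA formulation the low-rank update BAx is scaled by alpha/r before being added to the frozen projection Wx, so this config applies the update at half strength. A one-line sketch of that factor:

```python
def lora_scaling(alpha: int, r: int) -> float:
    """Factor applied to the low-rank update BAx before adding it to Wx."""
    return alpha / r

# With the config above (lora_alpha: 16, lora_r: 32) the update is halved:
print(lora_scaling(16, 32))  # → 0.5
```

Keeping alpha fixed while raising r is a common convention: it lets the rank change without also changing the effective update magnitude.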
config_hub/finetune/llama-3.2-3B/qlora.yaml
ADDED
@@ -0,0 +1,133 @@
# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/meta-llama/Llama-3.2-3B

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
out_dir: out/finetune/qlora-llama3.2-3b

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

# If set, quantize the model with this algorithm. See ``tutorials/quantize.md`` for more information. (type: Optional[Literal['nf4', 'nf4-dq', 'fp4', 'fp4-dq', 'int8-training']], default: null)
quantize: bnb.nf4

# How many devices/GPUs to use. (type: Union[int, str], default: 1)
devices: 1

# How many nodes to use. (type: int, default: 1)
num_nodes: 1

# The LoRA rank. (type: int, default: 8)
lora_r: 32

# The LoRA alpha. (type: int, default: 16)
lora_alpha: 16

# The LoRA dropout value. (type: float, default: 0.05)
lora_dropout: 0.05

# Whether to apply LoRA to the query weights in attention. (type: bool, default: True)
lora_query: true

# Whether to apply LoRA to the key weights in attention. (type: bool, default: False)
lora_key: false

# Whether to apply LoRA to the value weights in attention. (type: bool, default: True)
lora_value: true

# Whether to apply LoRA to the output projection in the attention block. (type: bool, default: False)
lora_projection: false

# Whether to apply LoRA to the weights of the MLP in the attention block. (type: bool, default: False)
lora_mlp: false

# Whether to apply LoRA to the output head in GPT. (type: bool, default: False)
lora_head: false

# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    val_split_fraction: 0.05
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4
    download_dir: data/alpaca2k

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:
  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 200

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
  global_batch_size: 8

  # Number of samples per data-parallel rank (type: int, default: 4)
  micro_batch_size: 2

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 10

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 2

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run (type: Optional[int], default: null)
  max_steps:

  # Limits the length of samples (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights (type: Optional[bool], default: null)
  tie_embeddings:

  # (type: Optional[float], default: null)
  max_norm:

  # (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 100)
  interval: 100

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

  # Whether to evaluate on the validation set at the beginning of the training
  initial_validation: false

  # Whether to evaluate on the validation set at the end of the training
  final_validation: true

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337

# Optimizer-related arguments
optimizer:
  class_path: torch.optim.AdamW

  init_args:
    # (type: float, default: 0.001)
    lr: 0.0002

    # (type: float, default: 0.01)
    weight_decay: 0.0

    # (type: tuple, default: (0.9,0.999))
    betas:
      - 0.9
      - 0.95
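Because `max_seq_length: 512` truncates every sample, the token budget per optimizer step is bounded by the global batch size times the sequence limit. A tiny sketch of that bound (shorter Alpaca samples consume fewer tokens, so this is an upper bound, not an exact count):

```python
def max_tokens_per_step(global_batch_size: int, max_seq_length: int) -> int:
    """Upper bound on tokens consumed by one optimizer step."""
    return global_batch_size * max_seq_length

# Values from the config above:
print(max_tokens_per_step(8, 512))  # → 4096
```

This is the quantity to compare against a `max_tokens` budget if you set one instead of `epochs`.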
config_hub/finetune/mistral-7b-v0.2/lora.yaml
ADDED
@@ -0,0 +1,131 @@
| 1 |
+
# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
|
| 2 |
+
checkpoint_dir: checkpoints/unsloth/Mistral-7B-v0.2
|
| 3 |
+
|
| 4 |
+
# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
|
| 5 |
+
out_dir: out/finetune/lora-mistral-7b
|
| 6 |
+
|
| 7 |
+
# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
|
| 8 |
+
precision: bf16-true
|
| 9 |
+
|
| 10 |
+
# If set, quantize the model with this algorithm. See ``tutorials/quantize.md`` for more information. (type: Optional[Literal['nf4', 'nf4-dq', 'fp4', 'fp4-dq', 'int8-training']], default: null)
|
| 11 |
+
quantize:
|
| 12 |
+
|
| 13 |
+
# How many devices/GPUs to use. (type: Union[int, str], default: 1)
|
| 14 |
+
devices: 1
|
| 15 |
+
|
| 16 |
+
# How many nodes to use. (type: int, default: 1)
|
| 17 |
+
num_nodes: 1
|
| 18 |
+
|
| 19 |
+
# The LoRA rank. (type: int, default: 8)
|
| 20 |
+
lora_r: 32
|
| 21 |
+
|
| 22 |
+
# The LoRA alpha. (type: int, default: 16)
|
| 23 |
+
lora_alpha: 16
|
| 24 |
+
|
| 25 |
+
# The LoRA dropout value. (type: float, default: 0.05)
|
| 26 |
+
lora_dropout: 0.05
|
| 27 |
+
|
| 28 |
+
# Whether to apply LoRA to the query weights in attention. (type: bool, default: True)
|
| 29 |
+
lora_query: true
|
| 30 |
+
|
| 31 |
+
# Whether to apply LoRA to the key weights in attention. (type: bool, default: False)
|
| 32 |
+
lora_key: false
|
| 33 |
+
|
| 34 |
+
# Whether to apply LoRA to the value weights in attention. (type: bool, default: True)
|
| 35 |
+
lora_value: true
|
| 36 |
+
|
| 37 |
+
# Whether to apply LoRA to the output projection in the attention block. (type: bool, default: False)
|
| 38 |
+
lora_projection: false
|
| 39 |
+
|
| 40 |
+
# Whether to apply LoRA to the weights of the MLP in the attention block. (type: bool, default: False)
|
| 41 |
+
lora_mlp: false
|
| 42 |
+
|
| 43 |
+
# Whether to apply LoRA to output head in GPT. (type: bool, default: False)
|
| 44 |
+
lora_head: false
|
| 45 |
+
|
| 46 |
+
# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
|
| 47 |
+
data:
|
| 48 |
+
class_path: litgpt.data.Alpaca2k
|
| 49 |
+
init_args:
|
| 50 |
+
mask_prompt: false
|
| 51 |
+
prompt_style: alpaca
|
| 52 |
+
ignore_index: -100
|
| 53 |
+
seed: 42
|
| 54 |
+
num_workers: 4
|
| 55 |
+
|
| 56 |
+
# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
|
| 57 |
+
train:
|
| 58 |
+
# Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
|
| 59 |
+
save_interval: 200
|
| 60 |
+
|
| 61 |
+
# Number of iterations between logging calls (type: int, default: 1)
|
| 62 |
+
log_interval: 1
|
| 63 |
+
|
| 64 |
+
# Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
|
| 65 |
+
global_batch_size: 8
|
| 66 |
+
|
| 67 |
+
# Number of samples per data-parallel rank (type: int, default: 4)
|
| 68 |
+
micro_batch_size: 2
|
| 69 |
+
|
| 70 |
+
# Number of iterations with learning rate warmup active (type: int, default: 100)
|
| 71 |
+
lr_warmup_steps: 10
|
| 72 |
+
|
| 73 |
+
# Number of epochs to train on (type: Optional[int], default: 5)
|
| 74 |
+
epochs: 4
|
| 75 |
+
|
| 76 |
+
  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run. (type: Optional[int], default: null)
  max_steps:

  # Limits the length of samples. Off by default (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights. (type: Optional[bool], default: null)
  tie_embeddings:

  # (type: Optional[float], default: null)
  max_norm:

  # (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 100)
  interval: 100

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

  # Whether to evaluate on the validation set at the beginning of the training
  initial_validation: false

  # Whether to evaluate on the validation set at the end of the training
  final_validation: true

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337

# Optimizer-related arguments
optimizer:
  class_path: torch.optim.AdamW
  init_args:
    # (type: float, default: 0.001)
    lr: 0.0002

    # (type: float, default: 0.01)
    weight_decay: 0.0

    # (type: tuple, default: (0.9,0.999))
    betas:
      - 0.9
      - 0.95
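In the `train` section of the config above, `global_batch_size` and `micro_batch_size` together imply a gradient-accumulation factor: litgpt runs several micro-batch forward/backward passes per optimizer step so the effective batch matches `global_batch_size`. A minimal sketch of that arithmetic (the helper name is illustrative, not a litgpt internal):

```python
def accumulation_steps(global_batch_size: int, micro_batch_size: int, devices: int = 1) -> int:
    """Micro-batch forward/backward passes accumulated per optimizer step."""
    effective = micro_batch_size * devices
    if global_batch_size % effective != 0:
        raise ValueError("global_batch_size must be divisible by micro_batch_size * devices")
    return global_batch_size // effective

# With global_batch_size: 8, micro_batch_size: 2, devices: 1 as in the config
print(accumulation_steps(8, 2, devices=1))  # 4
```

Raising `micro_batch_size` (if memory allows) reduces the accumulation factor without changing the effective batch size or the loss landscape the optimizer sees.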
config_hub/finetune/mistral-7b-v0.2/qlora.yaml
ADDED
@@ -0,0 +1,133 @@
# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/unsloth/Mistral-7B-v0.2

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
out_dir: out/finetune/qlora-mistral-7b

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

# If set, quantize the model with this algorithm. See ``tutorials/quantize.md`` for more information. (type: Optional[Literal['nf4', 'nf4-dq', 'fp4', 'fp4-dq', 'int8-training']], default: null)
quantize: bnb.nf4

# How many devices/GPUs to use. (type: Union[int, str], default: 1)
devices: 1

# How many nodes to use. (type: int, default: 1)
num_nodes: 1

# The LoRA rank. (type: int, default: 8)
lora_r: 32

# The LoRA alpha. (type: int, default: 16)
lora_alpha: 16

# The LoRA dropout value. (type: float, default: 0.05)
lora_dropout: 0.05

# Whether to apply LoRA to the query weights in attention. (type: bool, default: True)
lora_query: true

# Whether to apply LoRA to the key weights in attention. (type: bool, default: False)
lora_key: false

# Whether to apply LoRA to the value weights in attention. (type: bool, default: True)
lora_value: true

# Whether to apply LoRA to the output projection in the attention block. (type: bool, default: False)
lora_projection: false

# Whether to apply LoRA to the weights of the MLP in the attention block. (type: bool, default: False)
lora_mlp: false

# Whether to apply LoRA to the output head in GPT. (type: bool, default: False)
lora_head: false

# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    val_split_fraction: 0.05
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4
    download_dir: data/alpaca2k

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:
  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 200

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
  global_batch_size: 8

  # Number of samples per data-parallel rank (type: int, default: 4)
  micro_batch_size: 2

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 10

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 4

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run (type: Optional[int], default: null)
  max_steps:

  # Limits the length of samples (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights (type: Optional[bool], default: null)
  tie_embeddings:

  # (type: Optional[float], default: null)
  max_norm:

  # (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 100)
  interval: 100

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

  # Whether to evaluate on the validation set at the beginning of the training
  initial_validation: false

  # Whether to evaluate on the validation set at the end of the training
  final_validation: true

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337

# Optimizer-related arguments
optimizer:
  class_path: torch.optim.AdamW
  init_args:
    # (type: float, default: 0.001)
    lr: 0.0002

    # (type: float, default: 0.01)
    weight_decay: 0.0

    # (type: tuple, default: (0.9,0.999))
    betas:
      - 0.9
      - 0.95
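The QLoRA config above quantizes the frozen base weights (`quantize: bnb.nf4`) while training only small low-rank adapters on the query and value projections with `lora_r: 32`. For a single `d_in` x `d_out` linear layer, LoRA adds an `r x d_in` matrix A and a `d_out x r` matrix B, so the trainable-parameter cost is easy to estimate (the 4096 x 4096 example dimension is an assumption for illustration, not read from the config):

```python
def lora_param_count(d_in: int, d_out: int, r: int) -> int:
    """Extra trainable parameters LoRA adds to one d_in x d_out linear layer.

    A is r x d_in and B is d_out x r; the frozen base weight is untouched.
    """
    return r * d_in + r * d_out

# Example: an assumed 4096 x 4096 query projection with lora_r: 32
print(lora_param_count(4096, 4096, 32))  # 262144
```

A few hundred thousand trainable parameters per adapted layer, versus billions of frozen quantized weights, is what makes QLoRA feasible on a single GPU.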
config_hub/finetune/mistral-7b/lora.yaml
ADDED
@@ -0,0 +1,131 @@
# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/mistralai/Mistral-7B-v0.1

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
out_dir: out/finetune/lora-mistral-7b

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

# If set, quantize the model with this algorithm. See ``tutorials/quantize.md`` for more information. (type: Optional[Literal['nf4', 'nf4-dq', 'fp4', 'fp4-dq', 'int8-training']], default: null)
quantize:

# How many devices/GPUs to use. (type: Union[int, str], default: 1)
devices: 1

# How many nodes to use. (type: int, default: 1)
num_nodes: 1

# The LoRA rank. (type: int, default: 8)
lora_r: 32

# The LoRA alpha. (type: int, default: 16)
lora_alpha: 16

# The LoRA dropout value. (type: float, default: 0.05)
lora_dropout: 0.05

# Whether to apply LoRA to the query weights in attention. (type: bool, default: True)
lora_query: true

# Whether to apply LoRA to the key weights in attention. (type: bool, default: False)
lora_key: false

# Whether to apply LoRA to the value weights in attention. (type: bool, default: True)
lora_value: true

# Whether to apply LoRA to the output projection in the attention block. (type: bool, default: False)
lora_projection: false

# Whether to apply LoRA to the weights of the MLP in the attention block. (type: bool, default: False)
lora_mlp: false

# Whether to apply LoRA to the output head in GPT. (type: bool, default: False)
lora_head: false

# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:
  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 200

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
  global_batch_size: 8

  # Number of samples per data-parallel rank (type: int, default: 4)
  micro_batch_size: 2

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 10

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 4

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run. (type: Optional[int], default: null)
  max_steps:

  # Limits the length of samples. Off by default (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights. (type: Optional[bool], default: null)
  tie_embeddings:

  # (type: Optional[float], default: null)
  max_norm:

  # (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 100)
  interval: 100

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

  # Whether to evaluate on the validation set at the beginning of the training
  initial_validation: false

  # Whether to evaluate on the validation set at the end of the training
  final_validation: true

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337

# Optimizer-related arguments
optimizer:
  class_path: torch.optim.AdamW
  init_args:
    # (type: float, default: 0.001)
    lr: 0.0002

    # (type: float, default: 0.01)
    weight_decay: 0.0

    # (type: tuple, default: (0.9,0.999))
    betas:
      - 0.9
      - 0.95
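Three knobs in the config above shape the learning-rate schedule: the optimizer's base `lr`, `lr_warmup_steps`, and `min_lr`. A common realization of these knobs is linear warmup followed by cosine decay down to `min_lr`; the sketch below shows that shape (it is a simplified stand-in, not litgpt's exact scheduler code):

```python
import math

def lr_at_step(step: int, max_steps: int, base_lr: float = 2e-4,
               warmup_steps: int = 10, min_lr: float = 6e-5) -> float:
    """Linear warmup to base_lr, then cosine decay to min_lr.

    A sketch of the schedule these config knobs suggest; consult litgpt's
    own scheduler for the exact rule it applies.
    """
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, max_steps - warmup_steps)
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * progress))

print(lr_at_step(10, 1000))   # peak: 0.0002
print(lr_at_step(1000, 1000)) # floor: 6e-05
```

With `lr_warmup_steps: 10` the peak is reached almost immediately, which suits short LoRA runs; full finetuning configs in this hub use much longer warmups.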
config_hub/finetune/mistral-7b/qlora.yaml
ADDED
@@ -0,0 +1,133 @@
# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/mistralai/Mistral-7B-v0.1

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
out_dir: out/finetune/qlora-mistral-7b

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

# If set, quantize the model with this algorithm. See ``tutorials/quantize.md`` for more information. (type: Optional[Literal['nf4', 'nf4-dq', 'fp4', 'fp4-dq', 'int8-training']], default: null)
quantize: bnb.nf4

# How many devices/GPUs to use. (type: Union[int, str], default: 1)
devices: 1

# How many nodes to use. (type: int, default: 1)
num_nodes: 1

# The LoRA rank. (type: int, default: 8)
lora_r: 32

# The LoRA alpha. (type: int, default: 16)
lora_alpha: 16

# The LoRA dropout value. (type: float, default: 0.05)
lora_dropout: 0.05

# Whether to apply LoRA to the query weights in attention. (type: bool, default: True)
lora_query: true

# Whether to apply LoRA to the key weights in attention. (type: bool, default: False)
lora_key: false

# Whether to apply LoRA to the value weights in attention. (type: bool, default: True)
lora_value: true

# Whether to apply LoRA to the output projection in the attention block. (type: bool, default: False)
lora_projection: false

# Whether to apply LoRA to the weights of the MLP in the attention block. (type: bool, default: False)
lora_mlp: false

# Whether to apply LoRA to the output head in GPT. (type: bool, default: False)
lora_head: false

# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    val_split_fraction: 0.05
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4
    download_dir: data/alpaca2k

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:
  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 200

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
  global_batch_size: 8

  # Number of samples per data-parallel rank (type: int, default: 4)
  micro_batch_size: 2

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 10

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 4

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run (type: Optional[int], default: null)
  max_steps:

  # Limits the length of samples (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights (type: Optional[bool], default: null)
  tie_embeddings:

  # (type: Optional[float], default: null)
  max_norm:

  # (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 100)
  interval: 100

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

  # Whether to evaluate on the validation set at the beginning of the training
  initial_validation: false

  # Whether to evaluate on the validation set at the end of the training
  final_validation: true

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337

# Optimizer-related arguments
optimizer:
  class_path: torch.optim.AdamW
  init_args:
    # (type: float, default: 0.001)
    lr: 0.0002

    # (type: float, default: 0.01)
    weight_decay: 0.0

    # (type: tuple, default: (0.9,0.999))
    betas:
      - 0.9
      - 0.95
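The practical payoff of `quantize: bnb.nf4` in the config above is weight memory: NF4 stores each frozen base weight in roughly 4 bits instead of the 16 bits of `bf16-true`. A back-of-the-envelope estimate (weights only; it ignores LoRA adapters, optimizer state, activations, and quantization block constants):

```python
def approx_weight_memory_gb(n_params: float, bits_per_param: float) -> float:
    """Rough memory footprint of the model weights alone, in gigabytes."""
    return n_params * bits_per_param / 8 / 1e9

# A 7B-parameter model such as Mistral-7B
print(approx_weight_memory_gb(7e9, 16))  # 14.0  (bf16)
print(approx_weight_memory_gb(7e9, 4))   # 3.5   (nf4)
```

That roughly 4x reduction is what lets this config fit on a single consumer GPU; the remaining budget goes to activations (bounded by `max_seq_length` and `micro_batch_size`) and the small AdamW state for the LoRA parameters.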
config_hub/finetune/openllama/full_qa.yaml
ADDED
@@ -0,0 +1,101 @@
# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir:

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
out_dir:

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

# How many devices/GPUs to use. (type: Union[int, str], default: 1)
devices: 1

# How many nodes to use. (type: int, default: 1)
num_nodes: 1

# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.JSON
  init_args:
    mask_prompt: false
    val_split_fraction: 0.02
    ignore_index: -100
    seed: 42
    num_workers: 4

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:
  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 800

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
  global_batch_size: 32

  # Number of samples per data-parallel rank (type: int, default: 4)
  micro_batch_size: 4

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 1000

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 1

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run. (type: Optional[int], default: null)
  max_steps:

  # Limits the length of samples. Off by default (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights. (type: Optional[bool], default: null)
  tie_embeddings:

  # (type: Optional[float], default: null)
  max_norm:

  # (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 100)
  interval: 25

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

  # Whether to evaluate on the validation set at the beginning of the training
  initial_validation: false

  # Whether to evaluate on the validation set at the end of the training
  final_validation: true

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337

# Optimizer-related arguments
optimizer:
  class_path: torch.optim.AdamW
  init_args:
    # (type: float, default: 0.001)
    lr: 0.0002

    # (type: float, default: 0.01)
    weight_decay: 0.0

    # (type: tuple, default: (0.9,0.999))
    betas:
      - 0.9
      - 0.95
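Unlike the Alpaca2k configs, this Q&A config uses `litgpt.data.JSON`, which reads instruction-tuning records from a user-supplied JSON file (the path is left blank above and must be filled in). The snippet below writes one record in the Alpaca-style shape such loaders conventionally expect; the exact field names are an assumption here, so check the litgpt data documentation for the schema your version requires:

```python
import json

# One hypothetical Alpaca-style instruction record (field names assumed,
# following the common instruction/input/output convention).
record = {
    "instruction": "Name the capital of France.",
    "input": "",
    "output": "Paris",
}

# litgpt.data.JSON-style files hold a list of such records.
payload = json.dumps([record], indent=2)
print(payload)
```

With `mask_prompt: false`, the loss is computed on the full sequence rather than only on the `output` portion.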
config_hub/finetune/phi-2/full.yaml
ADDED
@@ -0,0 +1,101 @@
# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/microsoft/phi-2

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/finetune/full)
out_dir: out/finetune/full-phi-2

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

# How many devices/GPUs to use (type: Union[int, str], default: 1)
devices: 2

# How many nodes to use. (type: int, default: 1)
num_nodes: 1

# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:
  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 200

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 64)
  global_batch_size: 8

  # Number of samples per data-parallel rank (type: int, default: 1)
  micro_batch_size: 4

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 200

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 1

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run. (type: Optional[int], default: null)
  max_steps: 100

  # Limits the length of samples. Off by default (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights. (type: Optional[bool], default: null)
  tie_embeddings:

  # (type: Optional[float], default: null)
  max_norm:

  # (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 600)
  interval: 25

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

  # Whether to evaluate on the validation set at the beginning of the training
  initial_validation: false

  # Whether to evaluate on the validation set at the end of the training
  final_validation: true

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337

# Optimizer-related arguments
optimizer:
  class_path: torch.optim.AdamW
  init_args:
    # (type: float, default: 0.001)
    lr: 0.0002

    # (type: float, default: 0.01)
    weight_decay: 0.1

    # (type: tuple, default: (0.9,0.999))
    betas:
      - 0.9
      - 0.95
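The full-finetune config above sets both `epochs: 1` and `max_steps: 100`; training ends at whichever limit is hit first. A small sketch of that interaction (a simplification of how litgpt combines these knobs; the 250-step epoch is an assumed example):

```python
def optimizer_steps_to_run(steps_per_epoch, epochs, max_steps=None):
    """Optimizer steps before training stops: whichever limit comes first wins."""
    by_epochs = steps_per_epoch * epochs
    return by_epochs if max_steps is None else min(by_epochs, max_steps)

# With epochs: 1 and max_steps: 100, an assumed 250-step epoch stops early
print(optimizer_steps_to_run(250, 1, max_steps=100))  # 100
```

Setting `max_steps` like this is a cheap way to smoke-test a config before committing to a whole epoch.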
config_hub/finetune/phi-2/lora.yaml
ADDED
@@ -0,0 +1,132 @@
# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/microsoft/phi-2

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
out_dir: out/finetune/lora-phi-2

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

# If set, quantize the model with this algorithm. See ``tutorials/quantize.md`` for more information. (type: Optional[Literal['nf4', 'nf4-dq', 'fp4', 'fp4-dq', 'int8-training']], default: null)
quantize:

# How many devices/GPUs to use. (type: Union[int, str], default: 1)
devices: 1

# How many nodes to use. (type: int, default: 1)
num_nodes: 1

# The LoRA rank. (type: int, default: 8)
lora_r: 8

# The LoRA alpha. (type: int, default: 16)
lora_alpha: 16

# The LoRA dropout value. (type: float, default: 0.05)
lora_dropout: 0.05

# Whether to apply LoRA to the query weights in attention. (type: bool, default: True)
lora_query: true

# Whether to apply LoRA to the key weights in attention. (type: bool, default: False)
lora_key: true

# Whether to apply LoRA to the value weights in attention. (type: bool, default: True)
lora_value: true

# Whether to apply LoRA to the output projection in the attention block. (type: bool, default: False)
lora_projection: true

# Whether to apply LoRA to the weights of the MLP in the attention block. (type: bool, default: False)
lora_mlp: true

# Whether to apply LoRA to the output head in GPT. (type: bool, default: False)
lora_head: true

# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    val_split_fraction: 0.03847
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:
  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 800

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
  global_batch_size: 8

  # Number of samples per data-parallel rank (type: int, default: 4)
  micro_batch_size: 4

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 10

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 1

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run. (type: Optional[int], default: null)
  max_steps:

  # Limits the length of samples. Off by default (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights. (type: Optional[bool], default: null)
  tie_embeddings:

  # (type: Optional[float], default: null)
  max_norm:

  # (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 100)
  interval: 100

  # Number of tokens to generate (type: Optional[int], default: 100)
|
| 101 |
+
max_new_tokens: 100
|
| 102 |
+
|
| 103 |
+
# Number of iterations (type: int, default: 100)
|
| 104 |
+
max_iters: 100
|
| 105 |
+
|
| 106 |
+
# Whether to evaluate on the validation set at the beginning of the training
|
| 107 |
+
initial_validation: false
|
| 108 |
+
|
| 109 |
+
# Whether to evaluate on the validation set at the end the training
|
| 110 |
+
final_validation: true
|
| 111 |
+
|
| 112 |
+
# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
|
| 113 |
+
logger_name: csv
|
| 114 |
+
|
| 115 |
+
# The random seed to use for reproducibility. (type: int, default: 1337)
|
| 116 |
+
seed: 1337
|
| 117 |
+
|
| 118 |
+
# Optimizer-related arguments
|
| 119 |
+
optimizer:
|
| 120 |
+
class_path: torch.optim.AdamW
|
| 121 |
+
|
| 122 |
+
init_args:
|
| 123 |
+
# (type: float, default: 0.001)
|
| 124 |
+
lr: 0.0002
|
| 125 |
+
|
| 126 |
+
# (type: float, default: 0.01)
|
| 127 |
+
weight_decay: 0.0
|
| 128 |
+
|
| 129 |
+
# (type: tuple, default: (0.9,0.999))
|
| 130 |
+
betas:
|
| 131 |
+
- 0.9
|
| 132 |
+
- 0.95
|
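These configs set `global_batch_size: 8` with `micro_batch_size: 4`, and the gap between the two is closed by gradient accumulation. A minimal sketch of the arithmetic, assuming the usual relationship between these three knobs (the helper name is ours, not litgpt's):

```python
def gradient_accumulation_iters(global_batch_size: int, micro_batch_size: int, devices: int = 1) -> int:
    """How many micro-batches each rank accumulates before one optimizer step."""
    batch_size_per_rank = global_batch_size // devices
    if batch_size_per_rank % micro_batch_size != 0:
        raise ValueError("global_batch_size / devices must be divisible by micro_batch_size")
    return batch_size_per_rank // micro_batch_size

# With the values above (global_batch_size=8, micro_batch_size=4, devices=1):
print(gradient_accumulation_iters(8, 4, 1))  # -> 2
```

Raising `micro_batch_size` toward `global_batch_size / devices` trades memory for fewer accumulation iterations without changing the effective batch size.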
config_hub/finetune/phi-2/qlora.yaml
ADDED
@@ -0,0 +1,132 @@
# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/microsoft/phi-2

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
out_dir: out/finetune/qlora-phi-2

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

# If set, quantize the model with this algorithm. See ``tutorials/quantize.md`` for more information. (type: Optional[Literal['nf4', 'nf4-dq', 'fp4', 'fp4-dq', 'int8-training']], default: null)
quantize: bnb.nf4

# How many devices/GPUs to use. (type: Union[int, str], default: 1)
devices: 1

# How many nodes to use. (type: int, default: 1)
num_nodes: 1

# The LoRA rank. (type: int, default: 8)
lora_r: 8

# The LoRA alpha. (type: int, default: 16)
lora_alpha: 16

# The LoRA dropout value. (type: float, default: 0.05)
lora_dropout: 0.05

# Whether to apply LoRA to the query weights in attention. (type: bool, default: True)
lora_query: true

# Whether to apply LoRA to the key weights in attention. (type: bool, default: False)
lora_key: true

# Whether to apply LoRA to the value weights in attention. (type: bool, default: True)
lora_value: true

# Whether to apply LoRA to the output projection in the attention block. (type: bool, default: False)
lora_projection: true

# Whether to apply LoRA to the weights of the MLP in the attention block. (type: bool, default: False)
lora_mlp: true

# Whether to apply LoRA to the output head in GPT. (type: bool, default: False)
lora_head: true

# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    val_split_fraction: 0.03847
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:
  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 800

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
  global_batch_size: 8

  # Number of samples per data-parallel rank (type: int, default: 4)
  micro_batch_size: 4

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 10

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 1

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run. (type: Optional[int], default: null)
  max_steps:

  # Limits the length of samples. Off by default (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights. (type: Optional[bool], default: null)
  tie_embeddings:

  # (type: Optional[float], default: null)
  max_norm:

  # (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 100)
  interval: 100

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

  # Whether to evaluate on the validation set at the beginning of the training
  initial_validation: false

  # Whether to evaluate on the validation set at the end of the training
  final_validation: true

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337

# Optimizer-related arguments
optimizer:
  class_path: torch.optim.AdamW

  init_args:
    # (type: float, default: 0.001)
    lr: 0.0002

    # (type: float, default: 0.01)
    weight_decay: 0.0

    # (type: tuple, default: (0.9,0.999))
    betas:
      - 0.9
      - 0.95
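With `lora_r: 8` and `lora_alpha: 16`, each adapted linear layer of shape (out, in) gains low-rank factors A (r x in) and B (out x r), i.e. r*(in + out) trainable parameters, and the low-rank update is scaled by alpha/r. A rough sketch of that arithmetic (the 2560x2560 shape is illustrative, not phi-2's actual projection size):

```python
def lora_param_count(in_features: int, out_features: int, r: int) -> int:
    # A: (r, in_features) plus B: (out_features, r)
    return r * in_features + out_features * r

def lora_scaling(alpha: int, r: int) -> float:
    # LoRA scales the low-rank update B @ A by alpha / r
    return alpha / r

# A hypothetical 2560x2560 projection with the settings above (r=8, alpha=16)
print(lora_param_count(2560, 2560, 8))  # -> 40960 trainable params vs 6,553,600 frozen
print(lora_scaling(16, 8))              # -> 2.0
```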
config_hub/finetune/phi-3/full.yaml
ADDED
@@ -0,0 +1,98 @@
# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/microsoft/Phi-3-mini-4k-instruct

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/finetune/full)
out_dir: out/finetune/full-phi-3

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

# How many devices/GPUs to use (type: Union[int, str], default: 1)
devices: 1

# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:
  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 200

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 64)
  global_batch_size: 8

  # Number of samples per data-parallel rank (type: int, default: 1)
  micro_batch_size: 4

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 200

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 1

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run. (type: Optional[int], default: null)
  max_steps:

  # Limits the length of samples. Off by default (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights. (type: Optional[bool], default: null)
  tie_embeddings:

  # (type: Optional[float], default: null)
  max_norm:

  # (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 600)
  interval: 25

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

  # Whether to evaluate on the validation set at the beginning of the training
  initial_validation: false

  # Whether to evaluate on the validation set at the end of the training
  final_validation: true

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337

# Optimizer-related arguments
optimizer:
  class_path: torch.optim.AdamW

  init_args:
    # (type: float, default: 0.001)
    lr: 0.0002

    # (type: float, default: 0.01)
    weight_decay: 0.1

    # (type: tuple, default: (0.9,0.999))
    betas:
      - 0.9
      - 0.95
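This config combines `lr: 0.0002`, `min_lr: 6.0e-05`, and `lr_warmup_steps: 200`. A common way to combine these knobs is linear warmup to the peak rate followed by cosine decay toward `min_lr`; the sketch below assumes that schedule (litgpt's exact implementation may differ, and `max_steps=1000` is illustrative):

```python
import math

def get_lr(step: int, max_lr: float, min_lr: float, warmup_steps: int, max_steps: int) -> float:
    """Linear warmup to max_lr, then cosine decay to min_lr."""
    if step < warmup_steps:
        return max_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, max_steps - warmup_steps)
    return min_lr + 0.5 * (max_lr - min_lr) * (1 + math.cos(math.pi * min(progress, 1.0)))

print(get_lr(99, 2e-4, 6e-5, 200, 1000))    # halfway through warmup: 0.0001
print(get_lr(1000, 2e-4, 6e-5, 200, 1000))  # fully decayed: 6e-05
```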
config_hub/finetune/phi-3/lora.yaml
ADDED
@@ -0,0 +1,129 @@
# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/microsoft/Phi-3-mini-4k-instruct

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
out_dir: out/finetune/lora-phi-3

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

# If set, quantize the model with this algorithm. See ``tutorials/quantize.md`` for more information. (type: Optional[Literal['nf4', 'nf4-dq', 'fp4', 'fp4-dq', 'int8-training']], default: null)
quantize:

# How many devices/GPUs to use. (type: Union[int, str], default: 1)
devices: 1

# The LoRA rank. (type: int, default: 8)
lora_r: 8

# The LoRA alpha. (type: int, default: 16)
lora_alpha: 16

# The LoRA dropout value. (type: float, default: 0.05)
lora_dropout: 0.05

# Whether to apply LoRA to the query weights in attention. (type: bool, default: True)
lora_query: true

# Whether to apply LoRA to the key weights in attention. (type: bool, default: False)
lora_key: true

# Whether to apply LoRA to the value weights in attention. (type: bool, default: True)
lora_value: true

# Whether to apply LoRA to the output projection in the attention block. (type: bool, default: False)
lora_projection: true

# Whether to apply LoRA to the weights of the MLP in the attention block. (type: bool, default: False)
lora_mlp: true

# Whether to apply LoRA to the output head in GPT. (type: bool, default: False)
lora_head: true

# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    val_split_fraction: 0.03847
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:
  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 800

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
  global_batch_size: 8

  # Number of samples per data-parallel rank (type: int, default: 4)
  micro_batch_size: 4

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 10

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 1

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run. (type: Optional[int], default: null)
  max_steps:

  # Limits the length of samples. Off by default (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights. (type: Optional[bool], default: null)
  tie_embeddings:

  # (type: Optional[float], default: null)
  max_norm:

  # (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 100)
  interval: 100

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

  # Whether to evaluate on the validation set at the beginning of the training
  initial_validation: false

  # Whether to evaluate on the validation set at the end of the training
  final_validation: true

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337

# Optimizer-related arguments
optimizer:
  class_path: torch.optim.AdamW

  init_args:
    # (type: float, default: 0.001)
    lr: 0.0002

    # (type: float, default: 0.01)
    weight_decay: 0.0

    # (type: tuple, default: (0.9,0.999))
    betas:
      - 0.9
      - 0.95
config_hub/finetune/phi-3/qlora.yaml
ADDED
@@ -0,0 +1,129 @@
# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/microsoft/Phi-3-mini-4k-instruct

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
out_dir: out/finetune/qlora-phi-3

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

# If set, quantize the model with this algorithm. See ``tutorials/quantize.md`` for more information. (type: Optional[Literal['nf4', 'nf4-dq', 'fp4', 'fp4-dq', 'int8-training']], default: null)
quantize: bnb.nf4

# How many devices/GPUs to use. (type: Union[int, str], default: 1)
devices: 1

# The LoRA rank. (type: int, default: 8)
lora_r: 8

# The LoRA alpha. (type: int, default: 16)
lora_alpha: 16

# The LoRA dropout value. (type: float, default: 0.05)
lora_dropout: 0.05

# Whether to apply LoRA to the query weights in attention. (type: bool, default: True)
lora_query: true

# Whether to apply LoRA to the key weights in attention. (type: bool, default: False)
lora_key: true

# Whether to apply LoRA to the value weights in attention. (type: bool, default: True)
lora_value: true

# Whether to apply LoRA to the output projection in the attention block. (type: bool, default: False)
lora_projection: true

# Whether to apply LoRA to the weights of the MLP in the attention block. (type: bool, default: False)
lora_mlp: true

# Whether to apply LoRA to the output head in GPT. (type: bool, default: False)
lora_head: true

# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    val_split_fraction: 0.03847
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:
  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 800

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
  global_batch_size: 8

  # Number of samples per data-parallel rank (type: int, default: 4)
  micro_batch_size: 4

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 10

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 1

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run. (type: Optional[int], default: null)
  max_steps:

  # Limits the length of samples. Off by default (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights. (type: Optional[bool], default: null)
  tie_embeddings:

  # (type: Optional[float], default: null)
  max_norm:

  # (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 100)
  interval: 100

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

  # Whether to evaluate on the validation set at the beginning of the training
  initial_validation: false

  # Whether to evaluate on the validation set at the end of the training
  final_validation: true

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337

# Optimizer-related arguments
optimizer:
  class_path: torch.optim.AdamW

  init_args:
    # (type: float, default: 0.001)
    lr: 0.0002

    # (type: float, default: 0.01)
    weight_decay: 0.0

    # (type: tuple, default: (0.9,0.999))
    betas:
      - 0.9
      - 0.95
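Setting `quantize: bnb.nf4` stores the frozen base weights in 4-bit NF4 instead of 16-bit, which is the main memory win of QLoRA. A back-of-envelope estimate, assuming 2 bytes/param for bf16 and roughly 0.5 bytes/param for NF4 (quantization constants and activations are ignored here; Phi-3-mini is roughly 3.8B parameters):

```python
def approx_weight_memory_gib(num_params: float, bytes_per_param: float) -> float:
    # Memory for the weights alone, in GiB
    return num_params * bytes_per_param / 1024**3

phi3_params = 3.8e9  # approximate parameter count of Phi-3-mini

bf16 = approx_weight_memory_gib(phi3_params, 2.0)  # 16-bit weights
nf4 = approx_weight_memory_gib(phi3_params, 0.5)   # 4-bit NF4 weights
print(f"bf16 ~ {bf16:.1f} GiB, nf4 ~ {nf4:.1f} GiB")
```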
config_hub/finetune/stablelm-base-alpha-3b/full.yaml
ADDED
@@ -0,0 +1,102 @@
# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/stabilityai/stablelm-base-alpha-3b

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
out_dir: out/finetune/full-stablelm-base-alpha-3b

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

# How many devices/GPUs to use. (type: Union[int, str], default: 1)
devices: 2

# How many nodes to use. (type: int, default: 1)
num_nodes: 1

# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    val_split_fraction: 0.03847
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:
  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 800

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
  global_batch_size: 8

  # Number of samples per data-parallel rank (type: int, default: 4)
  micro_batch_size: 1

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 1000

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 1

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run. (type: Optional[int], default: null)
  max_steps:

  # Limits the length of samples. Off by default (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights. (type: Optional[bool], default: null)
  tie_embeddings:

  # (type: Optional[float], default: null)
  max_norm:

  # (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 100)
  interval: 25

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

  # Whether to evaluate on the validation set at the beginning of the training
  initial_validation: false

  # Whether to evaluate on the validation set at the end of the training
  final_validation: true

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337

# Optimizer-related arguments
optimizer:
  class_path: torch.optim.AdamW

  init_args:
    # (type: float, default: 0.001)
    lr: 0.0002

    # (type: float, default: 0.01)
    weight_decay: 0.1

    # (type: tuple, default: (0.9,0.999))
    betas:
      - 0.9
      - 0.95
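The `optimizer:` section uses the `class_path`/`init_args` convention for specifying an arbitrary class in YAML (the style jsonargparse uses). A minimal sketch of how such an entry can be resolved, using a stdlib class as a stand-in since `torch.optim.AdamW` also needs model parameters to construct:

```python
import importlib

def instantiate(class_path: str, init_args: dict):
    """Resolve a dotted class path and construct the class with keyword arguments."""
    module_name, _, class_name = class_path.rpartition(".")
    cls = getattr(importlib.import_module(module_name), class_name)
    return cls(**init_args)

# Stand-in for {"class_path": "torch.optim.AdamW", "init_args": {"lr": 0.0002, ...}}
counter = instantiate("collections.Counter", {"a": 2, "b": 1})
print(counter.most_common(1))  # -> [('a', 2)]
```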
config_hub/finetune/stablelm-base-alpha-3b/lora.yaml
ADDED
@@ -0,0 +1,131 @@
| 1 |
+
# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
|
| 2 |
+
checkpoint_dir: checkpoints/stabilityai/stablelm-base-alpha-3b
|
| 3 |
+
|
| 4 |
+
# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
|
| 5 |
+
out_dir: out/finetune/lora-stablelm-base-alpha-3b
|
| 6 |
+
|
| 7 |
+
# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
|
| 8 |
+
precision: bf16-true
|
| 9 |
+
|
| 10 |
+
# If set, quantize the model with this algorithm. See ``tutorials/quantize.md`` for more information. (type: Optional[Literal['nf4', 'nf4-dq', 'fp4', 'fp4-dq', 'int8-training']], default: null)
|
| 11 |
+
quantize:
|
| 12 |
+
|
| 13 |
+
# How many devices/GPUs to use. (type: Union[int, str], default: 1)
|
| 14 |
+
devices: 1
|
| 15 |
+
|
| 16 |
+
# How many nodes to use. (type: int, default: 1)
|
| 17 |
+
num_nodes: 1
|
| 18 |
+
|
| 19 |
+
# The LoRA rank. (type: int, default: 8)
|
| 20 |
+
lora_r: 32
|
| 21 |
+
|
| 22 |
+
# The LoRA alpha. (type: int, default: 16)
|
| 23 |
+
lora_alpha: 16
|
| 24 |
+
|
| 25 |
+
# The LoRA dropout value. (type: float, default: 0.05)
|
| 26 |
+
lora_dropout: 0.05
|
| 27 |
+
|
| 28 |
+
# Whether to apply LoRA to the query weights in attention. (type: bool, default: True)
|
| 29 |
+
lora_query: true
|
| 30 |
+
|
| 31 |
+
# Whether to apply LoRA to the key weights in attention. (type: bool, default: False)
|
| 32 |
+
lora_key: false
|
| 33 |
+
|
| 34 |
+
# Whether to apply LoRA to the value weights in attention. (type: bool, default: True)
|
| 35 |
+
lora_value: true
|
| 36 |
+
|
| 37 |
+
# Whether to apply LoRA to the output projection in the attention block. (type: bool, default: False)
|
| 38 |
+
lora_projection: false
|
| 39 |
+
|
| 40 |
+
# Whether to apply LoRA to the weights of the MLP in the attention block. (type: bool, default: False)
|
| 41 |
+
lora_mlp: false
|
| 42 |
+
|
| 43 |
+
# Whether to apply LoRA to output head in GPT. (type: bool, default: False)
|
| 44 |
+
lora_head: false
|
| 45 |
+
|
| 46 |
+
# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
|
| 47 |
+
data:
|
| 48 |
+
class_path: litgpt.data.Alpaca2k
|
| 49 |
+
init_args:
|
| 50 |
+
mask_prompt: false
|
| 51 |
+
prompt_style: alpaca
|
| 52 |
+
ignore_index: -100
|
| 53 |
+
seed: 42
|
| 54 |
+
num_workers: 4
|
| 55 |
+
|
| 56 |
+
# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
|
| 57 |
+
train:
|
| 58 |
+
# Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
|
| 59 |
+
save_interval: 200
|
| 60 |
+
|
| 61 |
+
# Number of iterations between logging calls (type: int, default: 1)
|
| 62 |
+
log_interval: 1
|
| 63 |
+
|
| 64 |
+
# Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
|
| 65 |
+
global_batch_size: 8
|
| 66 |
+
|
| 67 |
+
# Number of samples per data-parallel rank (type: int, default: 4)
|
| 68 |
+
micro_batch_size: 1
|
| 69 |
+
|
| 70 |
+
# Number of iterations with learning rate warmup active (type: int, default: 100)
|
| 71 |
+
lr_warmup_steps: 10
|
| 72 |
+
|
| 73 |
+
# Number of epochs to train on (type: Optional[int], default: 5)
|
| 74 |
+
epochs: 4
|
| 75 |
+
|
| 76 |
+
# Total number of tokens to train on (type: Optional[int], default: null)
|
| 77 |
+
max_tokens:
|
| 78 |
+
|
| 79 |
+
# Limits the number of optimizer steps to run. (type: Optional[int], default: null)
|
| 80 |
+
max_steps:
|
| 81 |
+
|
| 82 |
+
# Limits the length of samples. Off by default (type: Optional[int], default: null)
|
| 83 |
+
max_seq_length: 512
|
| 84 |
+
|
| 85 |
+
# Whether to tie the embedding weights with the language modeling head weights. (type: Optional[bool], default: null)
|
| 86 |
+
tie_embeddings:
|
| 87 |
+
|
| 88 |
+
# (type: Optional[float], default: null)
|
| 89 |
+
max_norm:
|
| 90 |
+
|
| 91 |
+
# (type: float, default: 6e-05)
|
| 92 |
+
min_lr: 6.0e-05
|
| 93 |
+
|
| 94 |
+
# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
|
| 95 |
+
eval:
|
| 96 |
+
# Number of optimizer steps between evaluation calls (type: int, default: 100)
|
| 97 |
+
interval: 100
|
| 98 |
+
|
| 99 |
+
# Number of tokens to generate (type: Optional[int], default: 100)
|
| 100 |
+
max_new_tokens: 100
|
| 101 |
+
|
| 102 |
+
# Number of iterations (type: int, default: 100)
|
| 103 |
+
max_iters: 100
|
| 104 |
+
|
| 105 |
+
# Whether to evaluate on the validation set at the beginning of the training
|
| 106 |
+
initial_validation: false
|
| 107 |
+
|
| 108 |
+
# Whether to evaluate on the validation set at the end the training
|
| 109 |
+
final_validation: true
|
| 110 |
+
|
| 111 |
+
# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
|
| 112 |
+
logger_name: csv
|
| 113 |
+
|
| 114 |
+
# The random seed to use for reproducibility. (type: int, default: 1337)
|
| 115 |
+
seed: 1337
|
| 116 |
+
|
| 117 |
+
# Optimizer-related arguments
|
| 118 |
+
optimizer:
|
| 119 |
+
class_path: torch.optim.AdamW
|
| 120 |
+
|
| 121 |
+
init_args:
|
| 122 |
+
# (type: float, default: 0.001)
|
| 123 |
+
lr: 0.0002
|
| 124 |
+
|
| 125 |
+
# (type: float, default: 0.01)
|
| 126 |
+
weight_decay: 0.0
|
| 127 |
+
|
| 128 |
+
# (type: tuple, default: (0.9,0.999))
|
| 129 |
+
betas:
|
| 130 |
+
- 0.9
|
| 131 |
+
- 0.95
|
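The batch-size fields above interact: with `global_batch_size: 8`, `micro_batch_size: 1`, and `devices: 1`, each optimizer step accumulates gradients over several micro-batches. A small sketch of the arithmetic, assuming the usual convention that `global_batch_size` counts samples across all data-parallel ranks:

```python
def gradient_accumulation_iters(global_batch_size: int, micro_batch_size: int, devices: int) -> int:
    """Micro-batches accumulated per optimizer step. Assumes global_batch_size
    is divisible by micro_batch_size * devices, as these configs arrange."""
    per_step = micro_batch_size * devices
    assert global_batch_size % per_step == 0, "global batch must divide evenly"
    return global_batch_size // per_step

# The stablelm-base-alpha-3b LoRA config above: 8 / (1 * 1)
print(gradient_accumulation_iters(8, 1, 1))  # 8
```

Raising `micro_batch_size` (if memory allows) reduces accumulation iterations without changing the effective batch size.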
config_hub/finetune/stablelm-base-alpha-3b/qlora.yaml
ADDED
@@ -0,0 +1,133 @@
+# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
+checkpoint_dir: checkpoints/stabilityai/stablelm-base-alpha-3b
+
+# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
+out_dir: out/finetune/qlora-stablelm-base-alpha-3b
+
+# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
+precision: bf16-true
+
+# If set, quantize the model with this algorithm. See ``tutorials/quantize.md`` for more information. (type: Optional[Literal['nf4', 'nf4-dq', 'fp4', 'fp4-dq', 'int8-training']], default: null)
+quantize: bnb.nf4
+
+# How many devices/GPUs to use. (type: Union[int, str], default: 1)
+devices: 1
+
+# How many nodes to use. (type: int, default: 1)
+num_nodes: 1
+
+# The LoRA rank. (type: int, default: 8)
+lora_r: 32
+
+# The LoRA alpha. (type: int, default: 16)
+lora_alpha: 16
+
+# The LoRA dropout value. (type: float, default: 0.05)
+lora_dropout: 0.05
+
+# Whether to apply LoRA to the query weights in attention. (type: bool, default: True)
+lora_query: true
+
+# Whether to apply LoRA to the key weights in attention. (type: bool, default: False)
+lora_key: false
+
+# Whether to apply LoRA to the value weights in attention. (type: bool, default: True)
+lora_value: true
+
+# Whether to apply LoRA to the output projection in the attention block. (type: bool, default: False)
+lora_projection: false
+
+# Whether to apply LoRA to the weights of the MLP in the attention block. (type: bool, default: False)
+lora_mlp: false
+
+# Whether to apply LoRA to the output head in GPT. (type: bool, default: False)
+lora_head: false
+
+# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
+data:
+  class_path: litgpt.data.Alpaca2k
+  init_args:
+    mask_prompt: false
+    val_split_fraction: 0.05
+    prompt_style: alpaca
+    ignore_index: -100
+    seed: 42
+    num_workers: 4
+    download_dir: data/alpaca2k
+
+# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
+train:
+  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
+  save_interval: 200
+
+  # Number of iterations between logging calls (type: int, default: 1)
+  log_interval: 1
+
+  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
+  global_batch_size: 8
+
+  # Number of samples per data-parallel rank (type: int, default: 4)
+  micro_batch_size: 1
+
+  # Number of iterations with learning rate warmup active (type: int, default: 100)
+  lr_warmup_steps: 10
+
+  # Number of epochs to train on (type: Optional[int], default: 5)
+  epochs: 4
+
+  # Total number of tokens to train on (type: Optional[int], default: null)
+  max_tokens:
+
+  # Limits the number of optimizer steps to run (type: Optional[int], default: null)
+  max_steps:
+
+  # Limits the length of samples (type: Optional[int], default: null)
+  max_seq_length: 512
+
+  # Whether to tie the embedding weights with the language modeling head weights (type: Optional[bool], default: null)
+  tie_embeddings:
+
+  # (type: Optional[float], default: null)
+  max_norm:
+
+  # (type: float, default: 6e-05)
+  min_lr: 6.0e-05
+
+# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
+eval:
+  # Number of optimizer steps between evaluation calls (type: int, default: 100)
+  interval: 100
+
+  # Number of tokens to generate (type: Optional[int], default: 100)
+  max_new_tokens: 100
+
+  # Number of iterations (type: int, default: 100)
+  max_iters: 100
+
+  # Whether to evaluate on the validation set at the beginning of the training
+  initial_validation: false
+
+  # Whether to evaluate on the validation set at the end of the training
+  final_validation: true
+
+# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
+logger_name: csv
+
+# The random seed to use for reproducibility. (type: int, default: 1337)
+seed: 1337
+
+# Optimizer-related arguments
+optimizer:
+  class_path: torch.optim.AdamW
+
+  init_args:
+    # (type: float, default: 0.001)
+    lr: 0.0002
+
+    # (type: float, default: 0.01)
+    weight_decay: 0.0
+
+    # (type: tuple, default: (0.9,0.999))
+    betas:
+      - 0.9
+      - 0.95
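The `lora_r: 32` setting above controls how many parameters are actually trained: a LoRA adapter adds two small matrices per adapted weight. A sketch of the parameter count for a single linear layer (the 4096x4096 shape below is a hypothetical example, not a dimension from these models):

```python
def lora_extra_params(in_features: int, out_features: int, r: int) -> int:
    # LoRA augments a frozen weight W (out x in) with the product B @ A,
    # where A is (r x in) and B is (out x r); only A and B are trained.
    return r * in_features + out_features * r

# Hypothetical 4096x4096 projection at the config's rank of 32:
full = 4096 * 4096
extra = lora_extra_params(4096, 4096, 32)
print(extra, full // extra)  # 262144 64
```

At rank 32 the trainable adapter is 64x smaller than the frozen matrix it augments, which is why LoRA (and QLoRA, which additionally quantizes the frozen weights with `bnb.nf4`) fits on a single GPU.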
config_hub/finetune/tiny-llama/full.yaml
ADDED
@@ -0,0 +1,102 @@
+# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
+checkpoint_dir: checkpoints/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
+
+# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
+out_dir: out/finetune/full-tiny-llama-1.1b
+
+# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
+precision: bf16-true
+
+# How many devices/GPUs to use. (type: Union[int, str], default: 1)
+devices: 1
+
+# How many nodes to use. (type: int, default: 1)
+num_nodes: 1
+
+# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
+data:
+  class_path: litgpt.data.Alpaca2k
+  init_args:
+    mask_prompt: false
+    val_split_fraction: 0.03847
+    prompt_style: alpaca
+    ignore_index: -100
+    seed: 42
+    num_workers: 4
+
+# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
+train:
+  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
+  save_interval: 800
+
+  # Number of iterations between logging calls (type: int, default: 1)
+  log_interval: 1
+
+  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
+  global_batch_size: 32
+
+  # Number of samples per data-parallel rank (type: int, default: 4)
+  micro_batch_size: 4
+
+  # Number of iterations with learning rate warmup active (type: int, default: 100)
+  lr_warmup_steps: 1000
+
+  # Number of epochs to train on (type: Optional[int], default: 5)
+  epochs: 1
+
+  # Total number of tokens to train on (type: Optional[int], default: null)
+  max_tokens:
+
+  # Limits the number of optimizer steps to run. (type: Optional[int], default: null)
+  max_steps:
+
+  # Limits the length of samples. Off by default (type: Optional[int], default: null)
+  max_seq_length: 512
+
+  # Whether to tie the embedding weights with the language modeling head weights. (type: Optional[bool], default: null)
+  tie_embeddings:
+
+  # (type: Optional[float], default: null)
+  max_norm:
+
+  # (type: float, default: 6e-05)
+  min_lr: 6.0e-05
+
+# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
+eval:
+  # Number of optimizer steps between evaluation calls (type: int, default: 100)
+  interval: 25
+
+  # Number of tokens to generate (type: Optional[int], default: 100)
+  max_new_tokens: 100
+
+  # Number of iterations (type: int, default: 100)
+  max_iters: 100
+
+  # Whether to evaluate on the validation set at the beginning of the training
+  initial_validation: false
+
+  # Whether to evaluate on the validation set at the end of the training
+  final_validation: true
+
+# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
+logger_name: csv
+
+# The random seed to use for reproducibility. (type: int, default: 1337)
+seed: 1337
+
+# Optimizer-related arguments
+optimizer:
+  class_path: torch.optim.AdamW
+
+  init_args:
+    # (type: float, default: 0.001)
+    lr: 0.0002
+
+    # (type: float, default: 0.01)
+    weight_decay: 0.0
+
+    # (type: tuple, default: (0.9,0.999))
+    betas:
+      - 0.9
+      - 0.95
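The `lr`, `min_lr`, and `lr_warmup_steps` fields above describe a warmup-then-decay schedule. A minimal sketch of one common realization, linear warmup followed by cosine decay to `min_lr` (litgpt's exact schedule implementation may differ in details):

```python
import math

def lr_at(step: int, max_steps: int, lr: float = 2e-4,
          min_lr: float = 6e-5, warmup: int = 1000) -> float:
    """Linear warmup to `lr` over `warmup` steps, then cosine decay to
    `min_lr` by `max_steps`. Defaults mirror the tiny-llama full config."""
    if step < warmup:
        return lr * step / warmup
    progress = (step - warmup) / max(1, max_steps - warmup)
    return min_lr + 0.5 * (lr - min_lr) * (1 + math.cos(math.pi * progress))

print(lr_at(0, 10_000))       # 0.0 (start of warmup)
```

The peak learning rate is reached exactly at the end of warmup, and the schedule bottoms out at `min_lr` rather than zero.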
config_hub/finetune/tiny-llama/full_qa.yaml
ADDED
@@ -0,0 +1,101 @@
+# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
+checkpoint_dir: checkpoints/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
+
+# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
+out_dir: out/finetune/full-tiny-llama-1.1b
+
+# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
+precision: bf16-true
+
+# How many devices/GPUs to use. (type: Union[int, str], default: 1)
+devices: 1
+
+# How many nodes to use. (type: int, default: 1)
+num_nodes: 1
+
+# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
+data:
+  class_path: litgpt.data.JSON
+  init_args:
+    mask_prompt: false
+    val_split_fraction: 0.02
+    ignore_index: -100
+    seed: 42
+    num_workers: 4
+
+# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
+train:
+  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
+  save_interval: 800
+
+  # Number of iterations between logging calls (type: int, default: 1)
+  log_interval: 1
+
+  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
+  global_batch_size: 32
+
+  # Number of samples per data-parallel rank (type: int, default: 4)
+  micro_batch_size: 4
+
+  # Number of iterations with learning rate warmup active (type: int, default: 100)
+  lr_warmup_steps: 1000
+
+  # Number of epochs to train on (type: Optional[int], default: 5)
+  epochs: 1
+
+  # Total number of tokens to train on (type: Optional[int], default: null)
+  max_tokens:
+
+  # Limits the number of optimizer steps to run. (type: Optional[int], default: null)
+  max_steps:
+
+  # Limits the length of samples. Off by default (type: Optional[int], default: null)
+  max_seq_length: 512
+
+  # Whether to tie the embedding weights with the language modeling head weights. (type: Optional[bool], default: null)
+  tie_embeddings:
+
+  # (type: Optional[float], default: null)
+  max_norm:
+
+  # (type: float, default: 6e-05)
+  min_lr: 6.0e-05
+
+# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
+eval:
+  # Number of optimizer steps between evaluation calls (type: int, default: 100)
+  interval: 25
+
+  # Number of tokens to generate (type: Optional[int], default: 100)
+  max_new_tokens: 100
+
+  # Number of iterations (type: int, default: 100)
+  max_iters: 100
+
+  # Whether to evaluate on the validation set at the beginning of the training
+  initial_validation: false
+
+  # Whether to evaluate on the validation set at the end of the training
+  final_validation: true
+
+# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
+logger_name: csv
+
+# The random seed to use for reproducibility. (type: int, default: 1337)
+seed: 1337
+
+# Optimizer-related arguments
+optimizer:
+  class_path: torch.optim.AdamW
+
+  init_args:
+    # (type: float, default: 0.001)
+    lr: 0.0002
+
+    # (type: float, default: 0.01)
+    weight_decay: 0.0
+
+    # (type: tuple, default: (0.9,0.999))
+    betas:
+      - 0.9
+      - 0.95
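The `full_qa` config above swaps the Alpaca2k dataset for `litgpt.data.JSON`, which reads a user-supplied JSON file. A sketch of preparing such a file, assuming the Alpaca-style `instruction`/`input`/`output` record shape that this loader conventionally consumes (verify the expected field names against your litgpt version; the record content here is made up):

```python
import json
import os
import tempfile

# One hypothetical QA record in the Alpaca-style shape.
records = [
    {"instruction": "Name the capital of France.", "input": "", "output": "Paris."},
]

# Write the dataset to a JSON file that the config's data loader could point at.
path = os.path.join(tempfile.mkdtemp(), "qa.json")
with open(path, "w") as f:
    json.dump(records, f)

with open(path) as f:
    print(json.load(f)[0]["output"])  # Paris.
```

With `val_split_fraction: 0.02`, 2% of these records would be held out for the validation passes configured under `eval`.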
config_hub/finetune/tiny-llama/lora.yaml
ADDED
@@ -0,0 +1,132 @@
+# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
+checkpoint_dir: checkpoints/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
+
+# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
+out_dir: out/finetune/lora-tiny-llama-1.1b
+
+# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
+precision: bf16-true
+
+# If set, quantize the model with this algorithm. See ``tutorials/quantize.md`` for more information. (type: Optional[Literal['nf4', 'nf4-dq', 'fp4', 'fp4-dq', 'int8-training']], default: null)
+quantize:
+
+# How many devices/GPUs to use. (type: Union[int, str], default: 1)
+devices: 1
+
+# How many nodes to use. (type: int, default: 1)
+num_nodes: 1
+
+# The LoRA rank. (type: int, default: 8)
+lora_r: 32
+
+# The LoRA alpha. (type: int, default: 16)
+lora_alpha: 16
+
+# The LoRA dropout value. (type: float, default: 0.05)
+lora_dropout: 0.05
+
+# Whether to apply LoRA to the query weights in attention. (type: bool, default: True)
+lora_query: true
+
+# Whether to apply LoRA to the key weights in attention. (type: bool, default: False)
+lora_key: true
+
+# Whether to apply LoRA to the value weights in attention. (type: bool, default: True)
+lora_value: true
+
+# Whether to apply LoRA to the output projection in the attention block. (type: bool, default: False)
+lora_projection: true
+
+# Whether to apply LoRA to the weights of the MLP in the attention block. (type: bool, default: False)
+lora_mlp: true
+
+# Whether to apply LoRA to the output head in GPT. (type: bool, default: False)
+lora_head: true
+
+# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
+data:
+  class_path: litgpt.data.Alpaca2k
+  init_args:
+    mask_prompt: false
+    val_split_fraction: 0.03847
+    prompt_style: alpaca
+    ignore_index: -100
+    seed: 42
+    num_workers: 4
+
+# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
+train:
+  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
+  save_interval: 800
+
+  # Number of iterations between logging calls (type: int, default: 1)
+  log_interval: 1
+
+  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
+  global_batch_size: 8
+
+  # Number of samples per data-parallel rank (type: int, default: 4)
+  micro_batch_size: 8
+
+  # Number of iterations with learning rate warmup active (type: int, default: 100)
+  lr_warmup_steps: 10
+
+  # Number of epochs to train on (type: Optional[int], default: 5)
+  epochs: 3
+
+  # Total number of tokens to train on (type: Optional[int], default: null)
+  max_tokens:
+
+  # Limits the number of optimizer steps to run. (type: Optional[int], default: null)
+  max_steps:
+
+  # Limits the length of samples. Off by default (type: Optional[int], default: null)
+  max_seq_length: 512
+
+  # Whether to tie the embedding weights with the language modeling head weights. (type: Optional[bool], default: null)
+  tie_embeddings:
+
+  # (type: Optional[float], default: null)
+  max_norm:
+
+  # (type: float, default: 6e-05)
+  min_lr: 6.0e-05
+
+# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
+eval:
+  # Number of optimizer steps between evaluation calls (type: int, default: 100)
+  interval: 100
+
+  # Number of tokens to generate (type: Optional[int], default: 100)
+  max_new_tokens: 100
+
+  # Number of iterations (type: int, default: 100)
+  max_iters: 100
+
+  # Whether to evaluate on the validation set at the beginning of the training
+  initial_validation: false
+
+  # Whether to evaluate on the validation set at the end of the training
+  final_validation: true
+
+# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
+logger_name: csv
+
+# The random seed to use for reproducibility. (type: int, default: 1337)
+seed: 1337
+
+# Optimizer-related arguments
+optimizer:
+  class_path: torch.optim.AdamW
+
+  init_args:
+    # (type: float, default: 0.001)
+    lr: 0.0002
+
+    # (type: float, default: 0.01)
+    weight_decay: 0.0
+
+    # (type: tuple, default: (0.9,0.999))
+    betas:
+      - 0.9
+      - 0.95