# SGL-new
This repository is a cleaned, submission-oriented copy of the SGL codebase for TextVQA large-only experiments:
1. `InternVL2-2B` large-only
2. `InternVL2-8B` large-only
3. `InternVL2-26B` large-only
4. `2B vision + 1B mlp1 + 1B language model` hybrid checkpoint large-only
5. `2B vision + 8B mlp1 + 8B language model` hybrid checkpoint large-only
6. `2B vision + 26B mlp1 + 26B language model` hybrid checkpoint large-only
The repository does **not** include checkpoints or datasets. The intended workflow is:
1. create an environment
2. place checkpoints under `checkpoints/`
3. prepare TextVQA data under `data/`
4. optionally build the hybrid checkpoint
5. run one of the experiment launch scripts
## 1. Repository Structure
Main experiment scripts:
- `textvqa2B-largeonly.sh`
- `textvqa8B-largeonly.sh`
- `textvqa26B-largeonly.sh`
- `textvqaHybrid-2Bvision-1Bllm-largeonly.sh`
- `textvqaHybrid-2Bvision-8Bllm-largeonly.sh`
- `textvqaHybrid-2Bvision-26Bllm-largeonly.sh`
- `run_textvqa_three_largeonly.sh`
- `run_textvqa_five_largeonly.sh`
- `train_textvqaHybrid-2Bvision-26Bllm-mlp.sh`
Core evaluation code (also the native single-model helper):
- `eval/vqa/run_single_model_native.py`
Native full-evaluation helper:
- `eval/vqa/run_full_textvqa_native.sh`
Utility scripts:
- `tools/prepare_textvqa_for_sgl.py`
- `tools/build_hybrid_checkpoint.py`
- `build_hybrid_checkpoint_2bvision_1bllm.sh`
- `tools/hybrid_single_infer.py`
- `tools/train_hybrid_textvqa_mlp.py`
- `build_hybrid_checkpoint_2bvision_26bllm.sh`
Environment helper:
- `setup_sgl_2b_env.sh`
## 2. Environment Setup
This repo expects Python 3.10 and a CUDA-enabled PyTorch installation.
### Option A: manual setup
```bash
conda create -y -n sgl-new python=3.10
conda activate sgl-new
pip install --upgrade pip
# Install torch/torchvision matching your CUDA version.
# Example for CUDA 12.1:
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121
pip install -r requirements.txt
```
### Option B: helper script
```bash
bash setup_sgl_2b_env.sh sgl-new
conda activate sgl-new
# Then install torch/torchvision matching your CUDA version.
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121
```
### Notes
- `flash-attn` is optional. The code runs without it, but inference may be slower.
- The large-only launchers call Python directly and can optionally shard a model across GPUs with `device_map`.
- If the `transformers` or `torch` versions are changed substantially, verify that `InternVL` remote-code loading still works.
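A quick optional sanity check that the environment sees the GPU after installation:
```bash
# Print the installed torch version and whether CUDA is visible.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```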
## 3. Checkpoint Layout
Create a directory:
```bash
mkdir -p checkpoints
```
Place checkpoints under `checkpoints/` with these names:
- `checkpoints/models--OpenGVLab--InternVL2-1B`
- `checkpoints/models--OpenGVLab--InternVL2-2B`
- `checkpoints/models--OpenGVLab--InternVL2-8B`
- `checkpoints/models--OpenGVLab--InternVL2-26B`
The hybrid checkpoints will be created at:
- `checkpoints/InternVL2-1B_2Bvision_hybrid`
- `checkpoints/InternVL2-8B_2Bvision_hybrid`
- `checkpoints/InternVL2-26B_2Bvision_hybrid`
If you want to use a different checkpoint layout, override `CHECKPOINT_ROOT` or `CHECKPOINT` when launching.
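If you do not have the checkpoints yet, one possible way to fetch the public models is `huggingface-cli` (assumes `huggingface_hub` is installed; note that `--local-dir` produces a flat layout, so verify it matches what the launch scripts expect):
```bash
# Example for the 2B model; repeat for the other sizes you need.
huggingface-cli download OpenGVLab/InternVL2-2B \
  --local-dir checkpoints/models--OpenGVLab--InternVL2-2B
```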
## 4. TextVQA Data Preparation
This repo expects SGL-style TextVQA files under:
- `data/textvqa/textvqa_train.jsonl`
- `data/textvqa/textvqa_val.jsonl`
- `data/textvqa/textvqa_val_questions.json`
- `data/textvqa/textvqa_val_annotations.json`
The repo does **not** ship the dataset.
### 4.1 Download the official TextVQA data
Prepare:
- `TextVQA_0.5.1_train.json`
- `TextVQA_0.5.1_val.json`
- `TextVQA_0.5.1_test.json`
- training/validation images
- test images
Place them under:
```text
data/textvqa_official/
├── TextVQA_0.5.1_train.json
├── TextVQA_0.5.1_val.json
├── TextVQA_0.5.1_test.json
├── train_images/
└── test_images/
```
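At the time of writing, the official files are hosted by the TextVQA project; a download sketch (verify the URLs at textvqa.org before relying on them):
```bash
mkdir -p data/textvqa_official && cd data/textvqa_official
# Question/annotation JSON files.
wget https://dl.fbaipublicfiles.com/textvqa/data/TextVQA_0.5.1_train.json
wget https://dl.fbaipublicfiles.com/textvqa/data/TextVQA_0.5.1_val.json
wget https://dl.fbaipublicfiles.com/textvqa/data/TextVQA_0.5.1_test.json
# Images: train_val_images.zip unpacks to train_images/.
wget https://dl.fbaipublicfiles.com/textvqa/images/train_val_images.zip && unzip -q train_val_images.zip
wget https://dl.fbaipublicfiles.com/textvqa/images/test_images.zip && unzip -q test_images.zip
cd ../..
```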
### 4.2 Convert official data to SGL format
From the repo root:
```bash
python tools/prepare_textvqa_for_sgl.py \
--official-root data/textvqa_official \
--output-root data/textvqa
```
This script:
- creates `data/textvqa/*.jsonl`
- creates `textvqa_val_questions.json`
- creates `textvqa_val_annotations.json`
- symlinks `train_images` and `test_images` into `data/textvqa/`
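A quick optional check that the conversion produced valid files:
```bash
# Each .jsonl line should parse as a standalone JSON record.
head -n 1 data/textvqa/textvqa_val.jsonl | python -m json.tool
wc -l data/textvqa/textvqa_train.jsonl data/textvqa/textvqa_val.jsonl
```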
## 5. Building Hybrid Checkpoints
### 5.1 2B vision + 1B LLM hybrid
This hybrid combines:
- `vision_model` from `InternVL2-2B`
- `mlp1` from `InternVL2-1B`
- `language_model` from `InternVL2-1B`
Use the convenience wrapper:
```bash
bash build_hybrid_checkpoint_2bvision_1bllm.sh
```
Equivalent manual command:
```bash
python tools/build_hybrid_checkpoint.py \
--base-checkpoint checkpoints/models--OpenGVLab--InternVL2-1B \
--vision-checkpoint checkpoints/models--OpenGVLab--InternVL2-2B \
--output-dir checkpoints/InternVL2-1B_2Bvision_hybrid
```
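After the build, the output directory should look like a regular InternVL2 checkpoint (config files plus weight shards); a quick check, with the exact file names depending on how the builder saves weights:
```bash
ls checkpoints/InternVL2-1B_2Bvision_hybrid
```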
### 5.2 2B vision + 8B LLM hybrid
This hybrid combines:
- `vision_model` from `InternVL2-2B`
- `mlp1` from `InternVL2-8B`
- `language_model` from `InternVL2-8B`
In this repo, the reproducible builder is:
- `tools/build_hybrid_checkpoint.py`
Run:
```bash
python tools/build_hybrid_checkpoint.py \
--base-checkpoint checkpoints/models--OpenGVLab--InternVL2-8B \
--vision-checkpoint checkpoints/models--OpenGVLab--InternVL2-2B \
--output-dir checkpoints/InternVL2-8B_2Bvision_hybrid
```
This script starts from the 8B checkpoint, replaces its `vision_model` weights with the 2B `vision_model`, and saves a new merged checkpoint.
### 5.3 2B vision + 26B LLM hybrid
This hybrid combines:
- `vision_model` from `InternVL2-2B`
- `mlp1` from `InternVL2-26B`
- `language_model` from `InternVL2-26B`
Use the convenience wrapper:
```bash
bash build_hybrid_checkpoint_2bvision_26bllm.sh
```
Equivalent manual command:
```bash
python tools/build_hybrid_checkpoint.py \
--base-checkpoint checkpoints/models--OpenGVLab--InternVL2-26B \
--vision-checkpoint checkpoints/models--OpenGVLab--InternVL2-2B \
--output-dir checkpoints/InternVL2-26B_2Bvision_hybrid
```
## 6. How the Experiments Map to Code
### 6.1 InternVL2-2B large-only
Launcher:
- `textvqa2B-largeonly.sh`
Core code path:
- `eval/vqa/run_single_model_native.py --mode textvqa_eval`
Default checkpoint:
- `checkpoints/models--OpenGVLab--InternVL2-2B`
Run:
```bash
bash textvqa2B-largeonly.sh
```
Optional overrides:
```bash
CHECKPOINT_ROOT=/path/to/checkpoints \
OUT_DIR=/path/to/output \
GPUS_PER_MODEL=1 \
bash textvqa2B-largeonly.sh
```
### 6.2 InternVL2-8B large-only
Launcher:
- `textvqa8B-largeonly.sh`
Core code path:
- `eval/vqa/run_single_model_native.py --mode textvqa_eval`
Default checkpoint:
- `checkpoints/models--OpenGVLab--InternVL2-8B`
Run:
```bash
bash textvqa8B-largeonly.sh
```
Optional overrides:
```bash
CHECKPOINT_ROOT=/path/to/checkpoints \
OUT_DIR=/path/to/output \
GPUS_PER_MODEL=1 \
bash textvqa8B-largeonly.sh
```
### 6.3 InternVL2-26B large-only
Launcher:
- `textvqa26B-largeonly.sh`
Core code path:
- `eval/vqa/run_single_model_native.py --mode textvqa_eval`
Default checkpoint:
- `checkpoints/models--OpenGVLab--InternVL2-26B`
Run:
```bash
bash textvqa26B-largeonly.sh
```
Optional overrides:
```bash
CUDA_VISIBLE_DEVICES=0,1 \
CHECKPOINT_ROOT=/path/to/checkpoints \
OUT_DIR=/path/to/output \
GPUS_PER_MODEL=2 \
bash textvqa26B-largeonly.sh
```
### 6.4 2B vision + 1B mlp1 + 1B language model large-only
Launcher:
- `textvqaHybrid-2Bvision-1Bllm-largeonly.sh`
Core code path:
- `eval/vqa/run_single_model_native.py --mode textvqa_eval`
Hybrid builder:
- `build_hybrid_checkpoint_2bvision_1bllm.sh`
- `tools/build_hybrid_checkpoint.py`
Default checkpoint:
- `checkpoints/InternVL2-1B_2Bvision_hybrid`
Run:
```bash
bash textvqaHybrid-2Bvision-1Bllm-largeonly.sh
```
Optional overrides:
```bash
CHECKPOINT_ROOT=/path/to/checkpoints \
OUT_DIR=/path/to/output \
GPUS_PER_MODEL=1 \
bash textvqaHybrid-2Bvision-1Bllm-largeonly.sh
```
### 6.5 2B vision + 8B mlp1 + 8B language model large-only
Launcher:
- `textvqaHybrid-2Bvision-8Bllm-largeonly.sh`
Core code path:
- `eval/vqa/run_single_model_native.py --mode textvqa_eval`
Hybrid builder:
- `tools/build_hybrid_checkpoint.py`
Default checkpoint:
- `checkpoints/InternVL2-8B_2Bvision_hybrid`
Run:
```bash
bash textvqaHybrid-2Bvision-8Bllm-largeonly.sh
```
Optional overrides:
```bash
CHECKPOINT_ROOT=/path/to/checkpoints \
OUT_DIR=/path/to/output \
GPUS_PER_MODEL=1 \
bash textvqaHybrid-2Bvision-8Bllm-largeonly.sh
```
### 6.6 2B vision + 26B mlp1 + 26B language model large-only
Launcher:
- `textvqaHybrid-2Bvision-26Bllm-largeonly.sh`
Core code path:
- `eval/vqa/run_single_model_native.py --mode textvqa_eval`
Hybrid builder:
- `build_hybrid_checkpoint_2bvision_26bllm.sh`
- `tools/build_hybrid_checkpoint.py`
Default checkpoint:
- `checkpoints/InternVL2-26B_2Bvision_hybrid`
Run:
```bash
bash textvqaHybrid-2Bvision-26Bllm-largeonly.sh
```
Optional overrides:
```bash
CUDA_VISIBLE_DEVICES=0,1 \
CHECKPOINT_ROOT=/path/to/checkpoints \
OUT_DIR=/path/to/output \
GPUS_PER_MODEL=2 \
bash textvqaHybrid-2Bvision-26Bllm-largeonly.sh
```
### 6.7 Optional CoT-style reasoning
The native and hybrid inference entry points support optional reasoning modes:
- `--reasoning-mode none`: default single-pass decoding
- `--reasoning-mode prompt`: adds an internal "think step by step" instruction in one pass
- `--reasoning-mode two_pass`: first generates explicit reasoning, then compresses it into the final short answer
If you do not set `REASONING_MODE` or `--reasoning-mode`, the code stays on the original normal inference path.
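For the native single-model entry point, the flag can be passed directly; a sketch combining the Section 9 arguments with a reasoning mode:
```bash
python eval/vqa/run_single_model_native.py \
  --checkpoint checkpoints/models--OpenGVLab--InternVL2-2B \
  --mode single \
  --image-path /path/to/image.jpg \
  --prompt "What is written on the sign?" \
  --reasoning-mode prompt
```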
For the hybrid TextVQA launchers, use environment variables:
```bash
REASONING_MODE=two_pass \
REASONING_MAX_NEW_TOKENS=64 \
SAVE_REASONING=1 \
bash textvqaHybrid-2Bvision-8Bllm-largeonly.sh
```
For the shared-vision launcher:
```bash
REASONING_MODE=two_pass \
REASONING_MAX_NEW_TOKENS=64 \
SAVE_REASONING=1 \
bash textvqaSharedVision-2Bguide-8Btext.sh
```
To let the small guide model produce a short text hint for the large decoder:
```bash
GUIDE_TEXT_MODE=short_rationale \
GUIDE_TEXT_MAX_NEW_TOKENS=12 \
bash textvqaSharedVision-2Bguide-8Btext.sh
```
To force a short chain of thought (CoT) on the guide branch, so that its generation changes the visual-token attention scores:
```bash
GUIDE_REASONING_MODE=short_cot \
GUIDE_REASONING_MAX_NEW_TOKENS=1024 \
bash textvqaSharedVision-2Bguide-8Btext.sh
```
Both options can be enabled together.
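For example, combining the two:
```bash
GUIDE_TEXT_MODE=short_rationale \
GUIDE_TEXT_MAX_NEW_TOKENS=12 \
GUIDE_REASONING_MODE=short_cot \
GUIDE_REASONING_MAX_NEW_TOKENS=1024 \
bash textvqaSharedVision-2Bguide-8Btext.sh
```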
For single-image hybrid debugging:
```bash
python tools/hybrid_single_infer.py \
--vision-checkpoint checkpoints/models--OpenGVLab--InternVL2-2B \
--language-checkpoint checkpoints/models--OpenGVLab--InternVL2-8B \
--image-path /path/to/image.jpg \
--prompt "What is the brand name on the sign?" \
--reasoning-mode two_pass \
--reasoning-max-new-tokens 64 \
--answer-format-prompt "Answer the question using a single word or phrase."
```
## 7. Running Sequential Launchers
Use:
```bash
bash run_textvqa_three_largeonly.sh
```
Default output root:
- `outputs/textvqa_three_largeonly`
This script runs three experiments, each with its own output subdirectory and launcher log:
1. 2B large-only
2. 8B large-only
3. hybrid 2B-vision + 8B-LLM large-only
To run five experiments (the three above plus the two 26B runs), use:
```bash
bash run_textvqa_five_largeonly.sh
```
This script adds:
1. 26B
2. hybrid 2B-vision + 26B-LLM
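To additionally keep one combined transcript of a sequential run (optional, plain shell; the launcher still writes its own per-experiment logs):
```bash
bash run_textvqa_five_largeonly.sh 2>&1 | tee run_textvqa_five_largeonly.log
```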
## 8. Minimal Hybrid Fine-Tuning On TextVQA
For a lightweight experiment, this repo also includes a minimal script that:
1. builds `2B vision + 26B mlp1 + 26B language_model`
2. freezes everything except `mlp1`
3. trains on the TextVQA `.jsonl` training data
4. runs validation inference immediately after training
Launcher:
- `train_textvqaHybrid-2Bvision-26Bllm-mlp.sh`
Core code:
- `tools/train_hybrid_textvqa_mlp.py`
Default demo dataset:
- `/home/yf/snap/SGL_yf/data/textvqa_demo_backup/textvqa_train.jsonl`
- `/home/yf/snap/SGL_yf/data/textvqa_demo_backup/textvqa_val.jsonl`
Run:
```bash
bash train_textvqaHybrid-2Bvision-26Bllm-mlp.sh
```
Important assumptions:
- `UPSTREAM_SGL_ROOT` defaults to `/home/yf/snap/SGL` because this script reuses the upstream `internvl` package (see the override sketch after this list).
- The default launcher expects local checkpoints at:
- `/root/model_ckpts/models--OpenGVLab--InternVL2-2B`
- `/root/model_ckpts/models--OpenGVLab--InternVL2-26B`
- The minimal implementation currently supports `batch_size=1`.
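If the launcher reads `UPSTREAM_SGL_ROOT` from the environment (check the script header before relying on this), the default can be overridden at launch:
```bash
# Assumes the launcher honors an environment override for UPSTREAM_SGL_ROOT.
UPSTREAM_SGL_ROOT=/path/to/upstream/SGL \
bash train_textvqaHybrid-2Bvision-26Bllm-mlp.sh
```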
## 9. Native Single-Model Inference Utilities
These are not required for the main large-only experiments, but they are included because they are useful for debugging and single-sample inspection.
### Single sample or single question
Code:
- `eval/vqa/run_single_model_native.py`
Example:
```bash
python eval/vqa/run_single_model_native.py \
--checkpoint checkpoints/models--OpenGVLab--InternVL2-2B \
--mode single \
--image-path /path/to/image.jpg \
--prompt "What is written on the sign?" \
--max-new-tokens 32 \
--dynamic
```
### Full TextVQA native evaluation for 2B and 8B
Code:
- `eval/vqa/run_full_textvqa_native.sh`
Example:
```bash
bash eval/vqa/run_full_textvqa_native.sh outputs/native_eval
```
## 10. Hybrid Single-Sample Debugging Utility
Code:
- `tools/hybrid_single_infer.py`
Example:
```bash
python tools/hybrid_single_infer.py \
--vision-checkpoint checkpoints/models--OpenGVLab--InternVL2-2B \
--language-checkpoint checkpoints/models--OpenGVLab--InternVL2-8B \
--image-path /path/to/image.jpg \
--prompt "What is written on the sign?" \
--dynamic
```
This script does **not** require a saved hybrid checkpoint. It builds the hybrid model in memory for single-sample inspection.
## 11. Output Files
The large-only evaluation script writes its outputs under the launcher-provided output directory, typically one JSON results file per run.
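To locate result files after a run (optional):
```bash
# List JSON result files under the default output root.
find outputs/ -name '*.json' | sort
```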
## 12. Minimal Reproduction Checklist
For someone receiving this repository, the minimal steps are:
1. create a Python environment
2. install `torch`, `torchvision`, and `requirements.txt`
3. download `InternVL2-2B`, `InternVL2-8B`, and optionally `InternVL2-26B` into `checkpoints/`
4. download official TextVQA into `data/textvqa_official/`
5. run `python tools/prepare_textvqa_for_sgl.py`
6. optionally run `python tools/build_hybrid_checkpoint.py` (only needed for the hybrid experiments)
7. run one of:
- `bash textvqa2B-largeonly.sh`
- `bash textvqa8B-largeonly.sh`
- `bash textvqa26B-largeonly.sh`
- `bash textvqaHybrid-2Bvision-8Bllm-largeonly.sh`
- `bash textvqaHybrid-2Bvision-26Bllm-largeonly.sh`
## 13. Important Assumptions
- The code assumes CUDA is available for model inference.
- The code assumes TextVQA data is prepared under `data/textvqa/`.
- The code assumes checkpoints are available under `checkpoints/` unless overridden.
- All large-only experiments use the same evaluation implementation:
`eval/vqa/run_single_model_native.py --mode textvqa_eval`
- `InternVL2-26B` and the `2B vision + 26B LLM` hybrid usually require multiple visible GPUs.