# SGL-new

This repository is a cleaned, submission-oriented copy of the SGL codebase for the TextVQA large-only experiments:

| Experiment | Checkpoint type |
| --- | --- |
| InternVL2-2B | large-only |
| InternVL2-8B | large-only |
| InternVL2-26B | large-only |
| 2B vision + 1B mlp1 + 1B language model | hybrid checkpoint, large-only |
| 2B vision + 8B mlp1 + 8B language model | hybrid checkpoint, large-only |
| 2B vision + 26B mlp1 + 26B language model | hybrid checkpoint, large-only |
The repository does not include checkpoints or datasets. The intended workflow is:

- create an environment
- place checkpoints under `checkpoints/`
- prepare TextVQA data under `data/`
- optionally build the hybrid checkpoint
- run one of the experiment launch scripts
## 1. Repository Structure

Main experiment scripts:

- `textvqa2B-largeonly.sh`
- `textvqa8B-largeonly.sh`
- `textvqa26B-largeonly.sh`
- `textvqaHybrid-2Bvision-1Bllm-largeonly.sh`
- `textvqaHybrid-2Bvision-8Bllm-largeonly.sh`
- `textvqaHybrid-2Bvision-26Bllm-largeonly.sh`
- `run_textvqa_three_largeonly.sh`
- `run_textvqa_five_largeonly.sh`
- `train_textvqaHybrid-2Bvision-26Bllm-mlp.sh`
Core evaluation code:

- `eval/vqa/run_single_model_native.py`

Native single-model helpers:

- `eval/vqa/run_single_model_native.py`
- `eval/vqa/run_full_textvqa_native.sh`

Utility scripts:

- `tools/prepare_textvqa_for_sgl.py`
- `tools/build_hybrid_checkpoint.py`
- `build_hybrid_checkpoint_2bvision_1bllm.sh`
- `tools/hybrid_single_infer.py`
- `tools/train_hybrid_textvqa_mlp.py`
- `build_hybrid_checkpoint_2bvision_26bllm.sh`

Environment helper:

- `setup_sgl_2b_env.sh`
## 2. Environment Setup

This repo expects Python 3.10 and a CUDA-enabled PyTorch installation.

Option A: manual setup

```bash
conda create -y -n sgl-new python=3.10
conda activate sgl-new
pip install --upgrade pip
# Install torch/torchvision matching your CUDA version.
# Example for CUDA 12.1:
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121
pip install -r requirements.txt
```
Option B: helper script

```bash
bash setup_sgl_2b_env.sh sgl-new
conda activate sgl-new
# Then install torch/torchvision matching your CUDA version.
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121
```
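After either option, a quick sanity check can confirm that the CUDA build of PyTorch is active (optional; plain PyTorch, nothing repo-specific):

```python
# Optional environment sanity check: confirm PyTorch sees the GPU(s).
import torch

print("torch:", torch.__version__)
print("cuda available:", torch.cuda.is_available())
print("visible gpus:", torch.cuda.device_count())
```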
Notes:

- `flash-attn` is optional. The code can run without it, but may be slower.
- The large-only launchers now call Python directly and optionally shard a model with `device_map`.
- If the `transformers` or `torch` versions are changed substantially, verify that `InternVL` remote-code loading still works.
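For orientation, the `device_map` sharding mentioned above typically looks like the following. This is a hedged sketch of assumed usage, not the repo's exact loading code (which lives in the eval scripts):

```python
# Sketch: shard a large InternVL2 checkpoint across all visible GPUs with
# device_map="auto". Assumed usage; the launchers wire this up themselves.
import torch
from transformers import AutoModel, AutoTokenizer

path = "checkpoints/models--OpenGVLab--InternVL2-26B"
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,   # InternVL2 ships custom modeling code
    device_map="auto",        # split layers across the visible GPUs
).eval()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)
```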
## 3. Checkpoint Layout

Create a directory:

```bash
mkdir -p checkpoints
```

Place checkpoints under `checkpoints/` with these names:

- `checkpoints/models--OpenGVLab--InternVL2-1B`
- `checkpoints/models--OpenGVLab--InternVL2-2B`
- `checkpoints/models--OpenGVLab--InternVL2-8B`
- `checkpoints/models--OpenGVLab--InternVL2-26B`
The hybrid checkpoints will be created at:

- `checkpoints/InternVL2-1B_2Bvision_hybrid`
- `checkpoints/InternVL2-8B_2Bvision_hybrid`
- `checkpoints/InternVL2-26B_2Bvision_hybrid`

If you want to use a different checkpoint layout, override `CHECKPOINT_ROOT` or `CHECKPOINT` when launching.
## 4. TextVQA Data Preparation

This repo expects SGL-style TextVQA files under:

- `data/textvqa/textvqa_train.jsonl`
- `data/textvqa/textvqa_val.jsonl`
- `data/textvqa/textvqa_val_questions.json`
- `data/textvqa/textvqa_val_annotations.json`

The repo does not ship the dataset.
### 4.1 Download the official TextVQA data

Prepare:

- `TextVQA_0.5.1_train.json`
- `TextVQA_0.5.1_val.json`
- `TextVQA_0.5.1_test.json`
- training/validation images
- test images

Place them under:

```
data/textvqa_official/
├── TextVQA_0.5.1_train.json
├── TextVQA_0.5.1_val.json
├── TextVQA_0.5.1_test.json
├── train_images/
└── test_images/
```
### 4.2 Convert official data to SGL format

From the repo root:

```bash
python tools/prepare_textvqa_for_sgl.py \
  --official-root data/textvqa_official \
  --output-root data/textvqa
```
This script:

- creates `data/textvqa/*.jsonl`
- creates `textvqa_val_questions.json`
- creates `textvqa_val_annotations.json`
- symlinks `train_images` and `test_images` into `data/textvqa/`
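Conceptually, the conversion flattens the official JSON into one record per line. The sketch below is a hypothetical reimplementation assuming the standard `TextVQA_0.5.1` layout (`{"data": [...]}`); the field names in the repo's actual SGL schema may differ, so treat `tools/prepare_textvqa_for_sgl.py` as authoritative:

```python
# Hypothetical sketch of the official-to-SGL conversion, assuming the standard
# TextVQA_0.5.1 layout: {"data": [{"question_id", "image_id", "question",
# "answers", ...}]}. The real logic is in tools/prepare_textvqa_for_sgl.py.
import json
from pathlib import Path

def convert_split(official_json: Path, out_jsonl: Path, image_dir: str) -> None:
    records = json.loads(official_json.read_text())["data"]
    with out_jsonl.open("w") as f:
        for r in records:
            row = {
                "question_id": r["question_id"],
                "image": f"{image_dir}/{r['image_id']}.jpg",
                "question": r["question"],
                "answers": r.get("answers", []),  # the test split has no answers
            }
            f.write(json.dumps(row) + "\n")

convert_split(Path("data/textvqa_official/TextVQA_0.5.1_val.json"),
              Path("data/textvqa/textvqa_val.jsonl"), "train_images")
```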
## 5. Building Hybrid Checkpoints

### 5.1 2B vision + 1B LLM hybrid

The hybrid experiment means:

- `vision_model` from `InternVL2-2B`
- `mlp1` from `InternVL2-1B`
- `language_model` from `InternVL2-1B`

Use the convenience wrapper:

```bash
bash build_hybrid_checkpoint_2bvision_1bllm.sh
```
Equivalent manual command:

```bash
python tools/build_hybrid_checkpoint.py \
  --base-checkpoint checkpoints/models--OpenGVLab--InternVL2-1B \
  --vision-checkpoint checkpoints/models--OpenGVLab--InternVL2-2B \
  --output-dir checkpoints/InternVL2-1B_2Bvision_hybrid
```
### 5.2 2B vision + 8B LLM hybrid

The hybrid experiment means:

- `vision_model` from `InternVL2-2B`
- `mlp1` from `InternVL2-8B`
- `language_model` from `InternVL2-8B`

In this repo, the reproducible builder is `tools/build_hybrid_checkpoint.py`. Run:

```bash
python tools/build_hybrid_checkpoint.py \
  --base-checkpoint checkpoints/models--OpenGVLab--InternVL2-8B \
  --vision-checkpoint checkpoints/models--OpenGVLab--InternVL2-2B \
  --output-dir checkpoints/InternVL2-8B_2Bvision_hybrid
```
This script starts from the 8B checkpoint, replaces its `vision_model` weights with the 2B `vision_model`, and saves a new merged checkpoint.
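Conceptually, the merge is a state-dict swap. The following is a minimal sketch of that idea under stated assumptions; `tools/build_hybrid_checkpoint.py` is the authoritative implementation and also handles config and tokenizer files:

```python
# Minimal sketch of the vision-tower swap (assumed behavior, not the repo's
# exact code): copy all vision_model.* tensors from the donor into the base.
import torch
from transformers import AutoModel

base = AutoModel.from_pretrained(
    "checkpoints/models--OpenGVLab--InternVL2-8B",
    trust_remote_code=True, torch_dtype=torch.bfloat16)
donor = AutoModel.from_pretrained(
    "checkpoints/models--OpenGVLab--InternVL2-2B",
    trust_remote_code=True, torch_dtype=torch.bfloat16)

state = base.state_dict()
for name, tensor in donor.state_dict().items():
    if name.startswith("vision_model."):
        state[name] = tensor  # graft the 2B vision tower onto the 8B base
base.load_state_dict(state)
base.save_pretrained("checkpoints/InternVL2-8B_2Bvision_hybrid")
```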
### 5.3 2B vision + 26B LLM hybrid

Use the convenience wrapper:

```bash
bash build_hybrid_checkpoint_2bvision_26bllm.sh
```

Equivalent manual command:

```bash
python tools/build_hybrid_checkpoint.py \
  --base-checkpoint checkpoints/models--OpenGVLab--InternVL2-26B \
  --vision-checkpoint checkpoints/models--OpenGVLab--InternVL2-2B \
  --output-dir checkpoints/InternVL2-26B_2Bvision_hybrid
```
## 6. How the Experiments Map to Code

### 6.1 InternVL2-2B large-only

- Launcher: `textvqa2B-largeonly.sh`
- Core code path: `eval/vqa/run_single_model_native.py --mode textvqa_eval`
- Default checkpoint: `checkpoints/models--OpenGVLab--InternVL2-2B`

Run:

```bash
bash textvqa2B-largeonly.sh
```

Optional overrides:

```bash
CHECKPOINT_ROOT=/path/to/checkpoints \
OUT_DIR=/path/to/output \
GPUS_PER_MODEL=1 \
bash textvqa2B-largeonly.sh
```
### 6.2 InternVL2-8B large-only

- Launcher: `textvqa8B-largeonly.sh`
- Core code path: `eval/vqa/run_single_model_native.py --mode textvqa_eval`
- Default checkpoint: `checkpoints/models--OpenGVLab--InternVL2-8B`

Run:

```bash
bash textvqa8B-largeonly.sh
```

Optional overrides:

```bash
CHECKPOINT_ROOT=/path/to/checkpoints \
OUT_DIR=/path/to/output \
GPUS_PER_MODEL=1 \
bash textvqa8B-largeonly.sh
```
### 6.3 InternVL2-26B large-only

- Launcher: `textvqa26B-largeonly.sh`
- Core code path: `eval/vqa/run_single_model_native.py --mode textvqa_eval`
- Default checkpoint: `checkpoints/models--OpenGVLab--InternVL2-26B`

Run:

```bash
bash textvqa26B-largeonly.sh
```

Optional overrides:

```bash
CUDA_VISIBLE_DEVICES=0,1 \
CHECKPOINT_ROOT=/path/to/checkpoints \
OUT_DIR=/path/to/output \
GPUS_PER_MODEL=2 \
bash textvqa26B-largeonly.sh
```
### 6.4 2B vision + 1B mlp1 + 1B language model large-only

- Launcher: `textvqaHybrid-2Bvision-1Bllm-largeonly.sh`
- Core code path: `eval/vqa/run_single_model_native.py --mode textvqa_eval`
- Hybrid builder: `build_hybrid_checkpoint_2bvision_1bllm.sh` (wraps `tools/build_hybrid_checkpoint.py`)
- Default checkpoint: `checkpoints/InternVL2-1B_2Bvision_hybrid`

Run:

```bash
bash textvqaHybrid-2Bvision-1Bllm-largeonly.sh
```

Optional overrides:

```bash
CHECKPOINT_ROOT=/path/to/checkpoints \
OUT_DIR=/path/to/output \
GPUS_PER_MODEL=1 \
bash textvqaHybrid-2Bvision-1Bllm-largeonly.sh
```
### 6.5 2B vision + 8B mlp1 + 8B language model large-only

- Launcher: `textvqaHybrid-2Bvision-8Bllm-largeonly.sh`
- Core code path: `eval/vqa/run_single_model_native.py --mode textvqa_eval`
- Hybrid builder: `tools/build_hybrid_checkpoint.py`
- Default checkpoint: `checkpoints/InternVL2-8B_2Bvision_hybrid`

Run:

```bash
bash textvqaHybrid-2Bvision-8Bllm-largeonly.sh
```

Optional overrides:

```bash
CHECKPOINT_ROOT=/path/to/checkpoints \
OUT_DIR=/path/to/output \
GPUS_PER_MODEL=1 \
bash textvqaHybrid-2Bvision-8Bllm-largeonly.sh
```
### 6.6 2B vision + 26B mlp1 + 26B language model large-only

- Launcher: `textvqaHybrid-2Bvision-26Bllm-largeonly.sh`
- Core code path: `eval/vqa/run_single_model_native.py --mode textvqa_eval`
- Hybrid builder: `build_hybrid_checkpoint_2bvision_26bllm.sh` (wraps `tools/build_hybrid_checkpoint.py`)
- Default checkpoint: `checkpoints/InternVL2-26B_2Bvision_hybrid`

Run:

```bash
bash textvqaHybrid-2Bvision-26Bllm-largeonly.sh
```

Optional overrides:

```bash
CUDA_VISIBLE_DEVICES=0,1 \
CHECKPOINT_ROOT=/path/to/checkpoints \
OUT_DIR=/path/to/output \
GPUS_PER_MODEL=2 \
bash textvqaHybrid-2Bvision-26Bllm-largeonly.sh
```
### 6.7 Optional CoT-style reasoning

The native and hybrid inference entry points now support optional reasoning modes:

- `--reasoning-mode none`: default single-pass decoding
- `--reasoning-mode prompt`: adds an internal "think step by step" instruction in one pass
- `--reasoning-mode two_pass`: first generates explicit reasoning, then compresses it into the final short answer

If you do not set `REASONING_MODE` or `--reasoning-mode`, the code stays on the original single-pass inference path.
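To make the two-pass mode concrete, here is an illustrative sketch assuming the InternVL-style `model.chat(tokenizer, pixel_values, question, generation_config)` API. `two_pass_answer` is a hypothetical helper; the repo's actual flag handling lives in `eval/vqa/run_single_model_native.py` and `tools/hybrid_single_infer.py`:

```python
# Illustrative two-pass decoding (hypothetical helper, assumed chat API).
def two_pass_answer(model, tokenizer, pixel_values, question,
                    reasoning_max_new_tokens=64):
    # Pass 1: elicit explicit step-by-step reasoning.
    rationale = model.chat(
        tokenizer, pixel_values,
        question + "\nThink step by step.",
        generation_config=dict(max_new_tokens=reasoning_max_new_tokens))
    # Pass 2: compress the rationale into a short final answer.
    return model.chat(
        tokenizer, pixel_values,
        f"{question}\nReasoning: {rationale}\n"
        "Answer the question using a single word or phrase.",
        generation_config=dict(max_new_tokens=8))
```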
For the hybrid TextVQA launchers, use environment variables:

```bash
REASONING_MODE=two_pass \
REASONING_MAX_NEW_TOKENS=64 \
SAVE_REASONING=1 \
bash textvqaHybrid-2Bvision-8Bllm-largeonly.sh
```
For the shared-vision launcher:

```bash
REASONING_MODE=two_pass \
REASONING_MAX_NEW_TOKENS=64 \
SAVE_REASONING=1 \
bash textvqaSharedVision-2Bguide-8Btext.sh
```

To let the small guide model produce a short text hint for the large decoder:

```bash
GUIDE_TEXT_MODE=short_rationale \
GUIDE_TEXT_MAX_NEW_TOKENS=12 \
bash textvqaSharedVision-2Bguide-8Btext.sh
```

To force a short CoT on the guide branch so that its generation changes the visual-token attention scores:

```bash
GUIDE_REASONING_MODE=short_cot \
GUIDE_REASONING_MAX_NEW_TOKENS=1024 \
bash textvqaSharedVision-2Bguide-8Btext.sh
```
Both options can be enabled together.
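For intuition only, the guide branch's influence on the visual tokens can be pictured as attention-based ranking. The sketch below is a purely conceptual, hypothetical helper under that assumption; the repo's actual mechanism lives in the shared-vision code path and may differ:

```python
# Conceptual sketch only: rank visual tokens by the attention mass the guide
# branch assigns them, and keep the top-k for the large decoder.
import torch

def select_visual_tokens(guide_attn: torch.Tensor, k: int) -> torch.Tensor:
    # guide_attn: [num_generated_tokens, num_visual_tokens] attention weights
    scores = guide_attn.mean(dim=0)              # mean attention per visual token
    keep = scores.topk(k).indices.sort().values  # top-k, in original order
    return keep
```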
For single-image hybrid debugging:

```bash
python tools/hybrid_single_infer.py \
  --vision-checkpoint checkpoints/models--OpenGVLab--InternVL2-2B \
  --language-checkpoint checkpoints/models--OpenGVLab--InternVL2-8B \
  --image-path /path/to/image.jpg \
  --prompt "What is the brand name on the sign?" \
  --reasoning-mode two_pass \
  --reasoning-max-new-tokens 64 \
  --answer-format-prompt "Answer the question using a single word or phrase."
```
## 7. Running Sequential Launchers

Use:

```bash
bash run_textvqa_three_largeonly.sh
```

Default output root: `outputs/textvqa_three_largeonly`

This script runs:

- 2B
- 8B
- hybrid 2B-vision + 8B-LLM

each with its own output subdirectory and launcher log.

To run all five experiments, use:

```bash
bash run_textvqa_five_largeonly.sh
```

This script adds:

- 26B
- hybrid 2B-vision + 26B-LLM
## 8. Minimal Hybrid Fine-Tuning On TextVQA

For a lightweight experiment, this repo also includes a minimal script that:

- builds `2B vision + 26B mlp1 + 26B language_model`
- freezes everything except `mlp1` (see the sketch after this list)
- trains on the TextVQA jsonl
- runs validation inference immediately after training
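The freezing step reduces to toggling `requires_grad`. Below is a minimal sketch assuming an InternVL-style module layout (`vision_model` / `mlp1` / `language_model`); the actual training loop is in `tools/train_hybrid_textvqa_mlp.py`:

```python
# Sketch: train only the mlp1 projector, assuming InternVL-style submodules.
# Not the repo's exact code; see tools/train_hybrid_textvqa_mlp.py.
import torch

def freeze_all_but_mlp1(model: torch.nn.Module) -> list:
    for p in model.parameters():
        p.requires_grad = False   # freeze vision tower and language model
    for p in model.mlp1.parameters():
        p.requires_grad = True    # leave only the projector trainable
    return [p for p in model.parameters() if p.requires_grad]

# Usage: optimize the projector parameters only.
# trainable = freeze_all_but_mlp1(model)
# optimizer = torch.optim.AdamW(trainable, lr=1e-5)
```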
Launcher: `train_textvqaHybrid-2Bvision-26Bllm-mlp.sh`

Core code: `tools/train_hybrid_textvqa_mlp.py`

Default demo dataset:

- `/home/yf/snap/SGL_yf/data/textvqa_demo_backup/textvqa_train.jsonl`
- `/home/yf/snap/SGL_yf/data/textvqa_demo_backup/textvqa_val.jsonl`

Run:

```bash
bash train_textvqaHybrid-2Bvision-26Bllm-mlp.sh
```
Important assumptions:

- `UPSTREAM_SGL_ROOT` defaults to `/home/yf/snap/SGL` because this script reuses the upstream `internvl` package.
- The default launcher expects local checkpoints at:
  - `/root/model_ckpts/models--OpenGVLab--InternVL2-2B`
  - `/root/model_ckpts/models--OpenGVLab--InternVL2-26B`
- The minimal implementation currently supports `batch_size=1`.
## 9. Native Single-Model Inference Utilities

These are not required for the main large-only experiments, but they are included because they are useful for debugging and single-sample inspection.

**Single sample or single question**

Code: `eval/vqa/run_single_model_native.py`

Example:

```bash
python eval/vqa/run_single_model_native.py \
  --checkpoint checkpoints/models--OpenGVLab--InternVL2-2B \
  --mode single \
  --image-path /path/to/image.jpg \
  --prompt "What is written on the sign?" \
  --max-new-tokens 32 \
  --dynamic
```
**Full TextVQA native evaluation for 2B and 8B**

Code: `eval/vqa/run_full_textvqa_native.sh`

Example:

```bash
bash eval/vqa/run_full_textvqa_native.sh outputs/native_eval
```
## 10. Hybrid Single-Sample Debugging Utility

Code: `tools/hybrid_single_infer.py`

Example:

```bash
python tools/hybrid_single_infer.py \
  --vision-checkpoint checkpoints/models--OpenGVLab--InternVL2-2B \
  --language-checkpoint checkpoints/models--OpenGVLab--InternVL2-8B \
  --image-path /path/to/image.jpg \
  --prompt "What is written on the sign?" \
  --dynamic
```

This script does not require a saved hybrid checkpoint. It builds the hybrid model in memory for single-sample inspection.
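A minimal sketch of that in-memory assembly idea, under assumed InternVL-style module names (the script itself additionally handles prompts, image preprocessing, and device placement). Unlike the checkpoint builder in section 5, nothing is written to disk:

```python
# Sketch: build the hybrid in memory by grafting the donor vision tower onto
# the base model. Assumed approach; tools/hybrid_single_infer.py is canonical.
import torch
from transformers import AutoModel

base = AutoModel.from_pretrained(
    "checkpoints/models--OpenGVLab--InternVL2-8B",
    trust_remote_code=True, torch_dtype=torch.bfloat16).eval()
donor = AutoModel.from_pretrained(
    "checkpoints/models--OpenGVLab--InternVL2-2B",
    trust_remote_code=True, torch_dtype=torch.bfloat16).eval()

base.vision_model = donor.vision_model  # graft; no checkpoint is saved
```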
## 11. Output Files

The large-only evaluation scripts write their outputs under the launcher-provided output directory; a typical run produces one JSON results file there.
## 12. Minimal Reproduction Checklist

For someone receiving this repository, the minimal steps are:

- create a Python environment
- install `torch`, `torchvision`, and `requirements.txt`
- download `InternVL2-2B`, `InternVL2-8B`, and optionally `InternVL2-26B` into `checkpoints/`
- download the official TextVQA data into `data/textvqa_official/`
- run `python tools/prepare_textvqa_for_sgl.py`
- run `python tools/build_hybrid_checkpoint.py`
- run one of:
  - `bash textvqa2B-largeonly.sh`
  - `bash textvqa8B-largeonly.sh`
  - `bash textvqa26B-largeonly.sh`
  - `bash textvqaHybrid-2Bvision-8Bllm-largeonly.sh`
  - `bash textvqaHybrid-2Bvision-26Bllm-largeonly.sh`
## 13. Important Assumptions

- The code assumes CUDA is available for model inference.
- The code assumes TextVQA data is prepared under `data/textvqa/`.
- The code assumes checkpoints are available under `checkpoints/` unless overridden.
- All large-only experiments use the same evaluation implementation: `eval/vqa/run_single_model_native.py --mode textvqa_eval`.
- `InternVL2-26B` and the `2B vision + 26B LLM` hybrid usually require multiple visible GPUs.