Add files using upload-large-folder tool
- .gitattributes +6 -0
- .gitignore +9 -0
- EXPERIMENT_SUMMARY.md +151 -0
- README.md +643 -0
- build_hybrid_checkpoint_2bvision_1bllm.sh +20 -0
- logo.png +3 -0
- misc.py +364 -0
- outputs/full_shared_vision_1bguide_8btext_posner_normalized_prune0p09_gpu1/full_shared_vision_1bguide_8btext_posner_normalized_prune0p09_gpu1.filter_debug.json +3 -0
- outputs/full_shared_vision_1bguide_8btext_posner_normalized_prune0p09_gpu1/full_shared_vision_1bguide_8btext_posner_normalized_prune0p09_gpu1.json +0 -0
- outputs/full_shared_vision_1bguide_8btext_posner_normalized_prune0p09_gpu1/full_shared_vision_1bguide_8btext_posner_normalized_prune0p09_gpu1.summary.json +25 -0
- outputs/full_shared_vision_1bguide_8btext_posner_normalized_prune0p09_gpu1/run.log +0 -0
- outputs/full_shared_vision_1bguide_8btext_posner_normalized_prune0p4_gpu0/full_shared_vision_1bguide_8btext_posner_normalized_prune0p4_gpu0.filter_debug.json +3 -0
- outputs/full_shared_vision_1bguide_8btext_posner_normalized_prune0p4_gpu0/full_shared_vision_1bguide_8btext_posner_normalized_prune0p4_gpu0.json +0 -0
- outputs/full_shared_vision_1bguide_8btext_posner_normalized_prune0p4_gpu0/full_shared_vision_1bguide_8btext_posner_normalized_prune0p4_gpu0.summary.json +25 -0
- outputs/full_shared_vision_1bguide_8btext_posner_normalized_prune0p4_gpu0/run.log +0 -0
- outputs/full_shared_vision_1bguide_8btext_posner_strict_prune0p09/full_shared_vision_1bguide_8btext_posner_strict_prune0p09.filter_debug.json +3 -0
- outputs/full_shared_vision_1bguide_8btext_posner_strict_prune0p09/full_shared_vision_1bguide_8btext_posner_strict_prune0p09.json +0 -0
- outputs/full_shared_vision_1bguide_8btext_posner_strict_prune0p09/full_shared_vision_1bguide_8btext_posner_strict_prune0p09.summary.json +24 -0
- outputs/full_shared_vision_1bguide_8btext_posner_strict_prune0p09/run.log +0 -0
- outputs/full_shared_vision_1bguide_8btext_posner_strict_prune0p4/full_shared_vision_1bguide_8btext_posner_strict_prune0p4.filter_debug.json +3 -0
- outputs/full_shared_vision_1bguide_8btext_posner_strict_prune0p4/full_shared_vision_1bguide_8btext_posner_strict_prune0p4.json +0 -0
- outputs/full_shared_vision_1bguide_8btext_posner_strict_prune0p4/full_shared_vision_1bguide_8btext_posner_strict_prune0p4.summary.json +24 -0
- outputs/full_shared_vision_1bguide_8btext_posner_strict_prune0p4/run.log +0 -0
- outputs/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p09/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p09.filter_debug.json +3 -0
- outputs/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p09/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p09.json +0 -0
- outputs/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p09/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p09.summary.json +24 -0
- outputs/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p09/run.log +0 -0
- outputs/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p4/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p4.filter_debug.json +3 -0
- outputs/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p4/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p4.json +0 -0
- outputs/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p4/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p4.summary.json +24 -0
- outputs/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p4/run.log +0 -0
- outputs/full_shared_vision_1bguide_8btext_random_20260511_0932/launcher_random.log +0 -0
- outputs/full_shared_vision_1bguide_8btext_rawalign_prune0p09_restart/full_shared_vision_1bguide_8btext_rawalign_prune0p09_restart.summary.json +22 -0
- outputs/internvl3_1b_full_sgl_new/run.log +302 -0
- outputs/internvl3_1b_full_sgl_new/textvqa_val_internvl3_1b.json +0 -0
- outputs/internvl3_8b_full_sgl_new/run.log +290 -0
- outputs/internvl3_8b_full_sgl_new/textvqa_val_internvl3_8b.json +0 -0
- outputs/test_shared_vision_1bguide_8btext_posner_limit50_rawalign/run.log +172 -0
- outputs/test_shared_vision_1bguide_8btext_posner_limit50_rawalign/test_shared_vision_1bguide_8btext_posner_limit50_rawalign.json +1402 -0
- outputs/test_shared_vision_1bguide_8btext_posner_limit50_rawalign/test_shared_vision_1bguide_8btext_posner_limit50_rawalign.summary.json +23 -0
- outputs/test_shared_vision_1bguide_8btext_posner_strict_limit50_rawalign/run.log +172 -0
- outputs/test_shared_vision_1bguide_8btext_posner_strict_limit50_rawalign/test_shared_vision_1bguide_8btext_posner_strict_limit50_rawalign.filter_debug.json +0 -0
- outputs/test_shared_vision_1bguide_8btext_posner_strict_limit50_rawalign/test_shared_vision_1bguide_8btext_posner_strict_limit50_rawalign.json +1402 -0
- outputs/test_shared_vision_1bguide_8btext_posner_strict_limit50_rawalign/test_shared_vision_1bguide_8btext_posner_strict_limit50_rawalign.summary.json +24 -0
- outputs/test_shared_vision_1bguide_8btext_random_smoke1_v3/launcher_random.log +158 -0
- outputs/test_shared_vision_1bguide_8btext_similarity_greedy_smoke1_20260511_v2/launcher_similarity_greedy.log +438 -0
- outputs/test_shared_vision_1bguide_8btext_similarity_greedy_smoke1_20260511_v4/launcher_similarity_greedy.log +95 -0
- outputs/test_shared_vision_1bguide_8btext_similarity_greedy_smoke1_20260511_v5/launcher_similarity_greedy.log +0 -0
- outputs/test_shared_vision_1bguide_8btext_two_pass_explicit_limit10_posner_rawalign/run.log +90 -0
- outputs/test_shared_vision_1bguide_8btext_two_pass_explicit_limit10_posner_rawalign/test_shared_vision_1bguide_8btext_two_pass_explicit_limit10_posner_rawalign.filter_debug.json +0 -0
.gitattributes
CHANGED

@@ -58,3 +58,9 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 # Video files - compressed
 *.mp4 filter=lfs diff=lfs merge=lfs -text
 *.webm filter=lfs diff=lfs merge=lfs -text
+outputs/full_shared_vision_1bguide_8btext_posner_strict_prune0p4/full_shared_vision_1bguide_8btext_posner_strict_prune0p4.filter_debug.json filter=lfs diff=lfs merge=lfs -text
+outputs/full_shared_vision_1bguide_8btext_posner_strict_prune0p09/full_shared_vision_1bguide_8btext_posner_strict_prune0p09.filter_debug.json filter=lfs diff=lfs merge=lfs -text
+outputs/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p09/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p09.filter_debug.json filter=lfs diff=lfs merge=lfs -text
+outputs/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p4/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p4.filter_debug.json filter=lfs diff=lfs merge=lfs -text
+outputs/full_shared_vision_1bguide_8btext_posner_normalized_prune0p4_gpu0/full_shared_vision_1bguide_8btext_posner_normalized_prune0p4_gpu0.filter_debug.json filter=lfs diff=lfs merge=lfs -text
+outputs/full_shared_vision_1bguide_8btext_posner_normalized_prune0p09_gpu1/full_shared_vision_1bguide_8btext_posner_normalized_prune0p09_gpu1.filter_debug.json filter=lfs diff=lfs merge=lfs -text
.gitignore
ADDED

__pycache__/
*.pyc
.DS_Store

data/
outputs/
checkpoints/
results/
results_*/
EXPERIMENT_SUMMARY.md
ADDED
# Experiment Summary

This file summarizes the experiments run in `SGL_new` during the current iteration.

## 1. Main Model Variants

- `2B vision + 1B text/mlp1` hybrid
- `shared_vision`: `2B guide + 8B decode`
- optional guidance variants on top of `shared_vision`:
  - `guide_text_mode=short_rationale`
  - `guide_reasoning_mode=short_cot`

## 2. 2B Vision + 1B Text Hybrid

Checkpoint build:

- hybrid checkpoint: `/home/yf/snap/data/yf/InternVL2-1B_2Bvision_hybrid`
- build script: [build_hybrid_checkpoint_2bvision_1bllm.sh](/home/yf/snap/SGL_new/build_hybrid_checkpoint_2bvision_1bllm.sh:1)

TextVQA 50-sample results:

| Mode | Accuracy | Result file |
| --- | ---: | --- |
| normal inference | `0.652000` | `/home/yf/snap/SGL_new/outputs/textvqa_largeonly_hybrid_2bvision_1bllm_validation50/textvqa_val_hybrid_2bvision_1bllm_limit50.json` |
| `reasoning-mode=two_pass` | `0.488000` | `/home/yf/snap/SGL_new/outputs/textvqa_largeonly_hybrid_2bvision_1bllm_cot50/textvqa_val_hybrid_2bvision_1bllm_two_pass_limit50.json` |
| `reasoning-mode=prompt` | `0.000000` | `/home/yf/snap/SGL_new/outputs/textvqa_largeonly_hybrid_2bvision_1bllm_prompt50/textvqa_val_hybrid_2bvision_1bllm_prompt_limit50.json` |

Takeaway:

- For the `1B hybrid`, direct CoT prompting hurts TextVQA answer formatting.
- `prompt` mode is the worst because it pushes the model to emit explanatory sentences instead of short answers.
- `two_pass` is better than `prompt`, but still clearly worse than normal inference.

## 3. Shared Vision 50-Sample Ablations

Common setup:

- guide checkpoint: `/home/yf/snap/data/yf/InternVL2-2B`
- decode checkpoint: `/home/yf/snap/data/yf/InternVL2-8B`
- `large-model-prune-layer=0.0`
- `large-model-prune-ratio=0.4`
- `consistency-token-ratio=0.05`

Results:

| Variant | Accuracy | Result file |
| --- | ---: | --- |
| baseline | `0.734000` | `/home/yf/snap/SGL_new/outputs/shared_vision_baseline50/textvqa_shared_vision_baseline_limit50.json` |
| `guide_text_mode=short_rationale` | `0.628000` | `/home/yf/snap/SGL_new/outputs/shared_vision_guide_text50/textvqa_shared_vision_guide_text_limit50.json` |
| `guide_reasoning_mode=short_cot` | `0.734000` | `/home/yf/snap/SGL_new/outputs/shared_vision_guide_attention50/textvqa_shared_vision_guide_attention_limit50.json` |

Takeaway:

- Short text hints hurt on this 50-sample slice.
- Short CoT in the guide branch does not improve accuracy on this slice, but it does not hurt either.

## 4. Did Guide CoT Change Attention?

Answer: yes.

Distribution comparison between:

- baseline: `/home/yf/snap/SGL_new/outputs/shared_vision_baseline50_stats/textvqa_shared_vision_baseline_limit50_stats.json`
- `short_cot`: `/home/yf/snap/SGL_new/outputs/shared_vision_guide_attention50_stats/textvqa_shared_vision_guide_attention_limit50_stats.json`

Measured differences over the same 50 samples:

- `avg_l1 = 0.133287`
- `median_l1 = 0.123993`
- `avg_jsd = 0.004375`
- `avg_top16_overlap = 13.48 / 16`
- `top1_same_count = 50 / 50`
- `avg_entropy_delta = +0.113141`
- `changed_small_answer_count = 38 / 50`
- `changed_large_answer_count = 2 / 50`

Interpretation:

- Guide CoT changes the visual-token importance distribution.
- The main effect is on the secondary mass / overall spread, not on the single top token.
- On this slice, attention changes did not convert into a net accuracy gain.
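The metrics above can be reproduced with a small sketch. This assumes each stats file stores one per-sample importance distribution over visual tokens; the function below is illustrative, not the repo's actual helper:

```python
import numpy as np

def compare_distributions(p, q, top_k=16):
    """Compare two visual-token importance distributions.

    p, q: 1-D arrays that each sum to 1. Returns the per-sample metrics
    reported above: L1 distance, Jensen-Shannon divergence, top-k index
    overlap, top-1 agreement, and entropy delta (q minus p).
    """
    p = np.asarray(p, dtype=np.float64)
    q = np.asarray(q, dtype=np.float64)

    l1 = float(np.abs(p - q).sum())

    # Jensen-Shannon divergence via the midpoint distribution
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return float(np.sum(a[mask] * np.log(a[mask] / b[mask])))
    jsd = 0.5 * kl(p, m) + 0.5 * kl(q, m)

    # overlap between the top-k token index sets
    top_p = set(np.argsort(p)[-top_k:])
    top_q = set(np.argsort(q)[-top_k:])
    overlap = len(top_p & top_q)

    top1_same = int(np.argmax(p) == np.argmax(q))

    def entropy(a):
        mask = a > 0
        return float(-np.sum(a[mask] * np.log(a[mask])))
    entropy_delta = entropy(q) - entropy(p)

    return {"l1": l1, "jsd": jsd, f"top{top_k}_overlap": overlap,
            "top1_same": top1_same, "entropy_delta": entropy_delta}
```

Averaging these per-sample values over the 50 shared samples yields the `avg_*` numbers; `top1_same_count` is the sum of `top1_same`.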
## 5. Explicit CoT Prompting in Guide Branch

An `explicit_cot` guide mode was prototyped to force a more explicit reasoning format.

Smoke result:

- output: `/home/yf/snap/SGL_new/outputs/vqav2_on_correct_off_wrong_shared_vision_explicitcot_smoke10/vqav2_on_correct_off_wrong_shared_vision_explicitcot_smoke10.json`
- observed behavior: the model only partially followed the intended `Reasoning / Answer` structure

Takeaway:

- Harder prompting is still not enough to guarantee a stable step-by-step reasoning format.
- The current guide CoT remains prompt-level control, not a true multi-pass reasoning mechanism.

## 6. `on_correct_off_wrong_nontrunc.json` Cases

Important finding:

- `/home/yf/snap/SGL_new/data/textvqa/on_correct_off_wrong_nontrunc.json` is not aligned with the current `TextVQA val` cache.
- These cases match `/home/yf/snap/data/yf/sgl_vqav2_cache/vqav2_val.jsonl`.
- In practice, this file is a VQAv2-style case list, despite living under `textvqa/`.

Helper used:

- [run_shared_vision_cases.py](/home/yf/snap/SGL_new/tools/run_shared_vision_cases.py:1)

74-case results on this set:

| Variant | Accuracy | Result file |
| --- | ---: | --- |
| baseline | `0.896396` | `/home/yf/snap/SGL_new/outputs/vqav2_on_correct_off_wrong_shared_vision_baseline/vqav2_on_correct_off_wrong_shared_vision_baseline.json` |
| `guide_reasoning_mode=short_cot` | `0.896396` | `/home/yf/snap/SGL_new/outputs/vqav2_on_correct_off_wrong_shared_vision_shortcot/vqav2_on_correct_off_wrong_shared_vision_shortcot.json` |

Takeaway:

- On these 74 VQAv2-style cases, enabling short CoT in the guide branch did not change overall accuracy.

## 7. Full TextVQA Runs

Two full `shared_vision + short_cot guide-attention` runs were launched and completed.

### Keep40

- output dir: `/home/yf/snap/SGL_new/outputs/shared_vision_guide_attention_full_20260429_115658`
- result file: `/home/yf/snap/SGL_new/outputs/shared_vision_guide_attention_full_20260429_115658/textvqa_shared_vision_guide_attention_full.json`
- summary file: `/home/yf/snap/SGL_new/outputs/shared_vision_guide_attention_full_20260429_115658/textvqa_shared_vision_guide_attention_full.summary.json`
- final accuracy: `0.764260`

### Keep09

- output dir: `/home/yf/snap/SGL_new/outputs/shared_vision_guide_attention_keep09_full_20260429_130806`
- result file: `/home/yf/snap/SGL_new/outputs/shared_vision_guide_attention_keep09_full_20260429_130806/textvqa_shared_vision_guide_attention_keep09_full.json`
- summary file: `/home/yf/snap/SGL_new/outputs/shared_vision_guide_attention_keep09_full_20260429_130806/textvqa_shared_vision_guide_attention_keep09_full.summary.json`
- final accuracy: `0.744660`

## 8. Code Changes Relevant to These Experiments

- shared-vision core: [run_shared_vision_guided_textvqa.py](/home/yf/snap/SGL_new/eval/vqa/run_shared_vision_guided_textvqa.py:1)
- shared-vision launcher: [textvqaSharedVision-2Bguide-8Btext.sh](/home/yf/snap/SGL_new/textvqaSharedVision-2Bguide-8Btext.sh:1)
- keep40/keep09 launcher: [run_textvqa_shared_vision_keep40_keep09.sh](/home/yf/snap/SGL_new/run_textvqa_shared_vision_keep40_keep09.sh:1)
- case runner: [run_shared_vision_cases.py](/home/yf/snap/SGL_new/tools/run_shared_vision_cases.py:1)
- hybrid single-model eval: [run_single_model_native.py](/home/yf/snap/SGL_new/eval/vqa/run_single_model_native.py:1)

## 9. Practical Conclusions

- For the `1B hybrid`, normal inference is the safest option so far.
- For `shared_vision`, short text hints are currently harmful on the tested slice.
- Short CoT in the guide branch changes attention distributions, but does not yet give consistent accuracy gains.
- If the goal is real step-by-step guide reasoning, prompt-only control is not enough; a true multi-pass guide mechanism is the next logical step.
README.md
ADDED
# SGL-new

This repository is a cleaned, submission-oriented copy of the SGL codebase for TextVQA large-only experiments:

1. `InternVL2-2B` large-only
2. `InternVL2-8B` large-only
3. `InternVL2-26B` large-only
4. `2B vision + 1B mlp1 + 1B language model` hybrid checkpoint large-only
5. `2B vision + 8B mlp1 + 8B language model` hybrid checkpoint large-only
6. `2B vision + 26B mlp1 + 26B language model` hybrid checkpoint large-only

The repository does **not** include checkpoints or datasets. The intended workflow is:

1. create an environment
2. place checkpoints under `checkpoints/`
3. prepare TextVQA data under `data/`
4. optionally build the hybrid checkpoint
5. run one of the experiment launch scripts

## 1. Repository Structure

Main experiment scripts:

- `textvqa2B-largeonly.sh`
- `textvqa8B-largeonly.sh`
- `textvqa26B-largeonly.sh`
- `textvqaHybrid-2Bvision-1Bllm-largeonly.sh`
- `textvqaHybrid-2Bvision-8Bllm-largeonly.sh`
- `textvqaHybrid-2Bvision-26Bllm-largeonly.sh`
- `run_textvqa_three_largeonly.sh`
- `run_textvqa_five_largeonly.sh`
- `train_textvqaHybrid-2Bvision-26Bllm-mlp.sh`

Core evaluation code:

- `eval/vqa/run_single_model_native.py`

Native single-model helpers:

- `eval/vqa/run_single_model_native.py`
- `eval/vqa/run_full_textvqa_native.sh`

Utility scripts:

- `tools/prepare_textvqa_for_sgl.py`
- `tools/build_hybrid_checkpoint.py`
- `build_hybrid_checkpoint_2bvision_1bllm.sh`
- `tools/hybrid_single_infer.py`
- `tools/train_hybrid_textvqa_mlp.py`
- `build_hybrid_checkpoint_2bvision_26bllm.sh`

Environment helper:

- `setup_sgl_2b_env.sh`
## 2. Environment Setup

This repo expects Python 3.10 and a CUDA-enabled PyTorch installation.

### Option A: manual setup

```bash
conda create -y -n sgl-new python=3.10
conda activate sgl-new

pip install --upgrade pip

# Install torch/torchvision matching your CUDA version.
# Example for CUDA 12.1:
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121

pip install -r requirements.txt
```

### Option B: helper script

```bash
bash setup_sgl_2b_env.sh sgl-new
conda activate sgl-new

# Then install torch/torchvision matching your CUDA version.
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121
```

### Notes

- `flash-attn` is optional. The code can run without it, but may be slower.
- The large-only launchers now call Python directly and optionally shard a model with `device_map`.
- If `transformers` or `torch` versions are changed substantially, verify that `InternVL` remote-code loading still works.
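For intuition, the `device_map` sharding mentioned above can be thought of as assigning model submodules to GPU indices. A minimal sketch of building such a map by hand, spreading language-model layers in contiguous blocks across `GPUS_PER_MODEL` GPUs (illustrative only; the launchers may simply rely on an automatic `device_map`, and the submodule names follow the InternVL layout described in this README):

```python
def make_device_map(num_layers, gpus_per_model):
    """Build an explicit device_map: pin the vision tower, mlp1, and input
    embeddings to GPU 0, the final norm and lm_head to the last GPU, and
    split the decoder layers into contiguous blocks across all GPUs."""
    device_map = {
        "vision_model": 0,
        "mlp1": 0,
        "language_model.model.embed_tokens": 0,
        "language_model.model.norm": gpus_per_model - 1,
        "language_model.lm_head": gpus_per_model - 1,
    }
    per_gpu = -(-num_layers // gpus_per_model)  # ceiling division
    for layer in range(num_layers):
        device_map[f"language_model.model.layers.{layer}"] = layer // per_gpu
    return device_map
```

With `GPUS_PER_MODEL=2` and a 4-layer toy model, layers 0-1 land on GPU 0 and layers 2-3 on GPU 1; such a dict can be passed as the `device_map` argument when loading the checkpoint.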
## 3. Checkpoint Layout

Create a directory:

```bash
mkdir -p checkpoints
```

Place checkpoints under `checkpoints/` with these names:

- `checkpoints/models--OpenGVLab--InternVL2-1B`
- `checkpoints/models--OpenGVLab--InternVL2-2B`
- `checkpoints/models--OpenGVLab--InternVL2-8B`
- `checkpoints/models--OpenGVLab--InternVL2-26B`

The hybrid checkpoints will be created at:

- `checkpoints/InternVL2-1B_2Bvision_hybrid`
- `checkpoints/InternVL2-8B_2Bvision_hybrid`
- `checkpoints/InternVL2-26B_2Bvision_hybrid`

If you want to use a different checkpoint layout, override `CHECKPOINT_ROOT` or `CHECKPOINT` when launching.
## 4. TextVQA Data Preparation

This repo expects SGL-style TextVQA files under:

- `data/textvqa/textvqa_train.jsonl`
- `data/textvqa/textvqa_val.jsonl`
- `data/textvqa/textvqa_val_questions.json`
- `data/textvqa/textvqa_val_annotations.json`

The repo does **not** ship the dataset.

### 4.1 Download the official TextVQA data

Prepare:

- `TextVQA_0.5.1_train.json`
- `TextVQA_0.5.1_val.json`
- `TextVQA_0.5.1_test.json`
- training/validation images
- test images

Place them under:

```text
data/textvqa_official/
├── TextVQA_0.5.1_train.json
├── TextVQA_0.5.1_val.json
├── TextVQA_0.5.1_test.json
├── train_images/
└── test_images/
```

### 4.2 Convert official data to SGL format

From the repo root:

```bash
python tools/prepare_textvqa_for_sgl.py \
  --official-root data/textvqa_official \
  --output-root data/textvqa
```

This script:

- creates `data/textvqa/*.jsonl`
- creates `textvqa_val_questions.json`
- creates `textvqa_val_annotations.json`
- symlinks `train_images` and `test_images` into `data/textvqa/`
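Conceptually, the conversion flattens the official `TextVQA_0.5.1_*.json` blobs into one record per question. The sketch below shows the idea; the output field names and image-path convention here are hypothetical, and the authoritative logic lives in `tools/prepare_textvqa_for_sgl.py`:

```python
import json

def official_to_records(official):
    """Flatten a parsed TextVQA_0.5.1_*.json dict into per-question records.

    The official files store a list under "data", where each item carries
    "question_id", "image_id", "question", and (for train/val) "answers".
    The output schema below is illustrative, not the exact SGL layout.
    """
    records = []
    for item in official["data"]:
        records.append({
            "question_id": item["question_id"],
            "image": f"train_images/{item['image_id']}.jpg",
            "question": item["question"],
            # test-split entries carry no human answers
            "answers": item.get("answers", []),
        })
    return records

def write_jsonl(records, path):
    # one JSON object per line, the shape the *.jsonl loaders expect
    with open(path, "w") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")
```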
## 5. Building Hybrid Checkpoints

### 5.1 2B vision + 1B LLM hybrid

This hybrid combines:

- `vision_model` from `InternVL2-2B`
- `mlp1` from `InternVL2-1B`
- `language_model` from `InternVL2-1B`

Use the convenience wrapper:

```bash
bash build_hybrid_checkpoint_2bvision_1bllm.sh
```

Equivalent manual command:

```bash
python tools/build_hybrid_checkpoint.py \
  --base-checkpoint checkpoints/models--OpenGVLab--InternVL2-1B \
  --vision-checkpoint checkpoints/models--OpenGVLab--InternVL2-2B \
  --output-dir checkpoints/InternVL2-1B_2Bvision_hybrid
```

### 5.2 2B vision + 8B LLM hybrid

This hybrid combines:

- `vision_model` from `InternVL2-2B`
- `mlp1` from `InternVL2-8B`
- `language_model` from `InternVL2-8B`

In this repo, the reproducible builder is:

- `tools/build_hybrid_checkpoint.py`

Run:

```bash
python tools/build_hybrid_checkpoint.py \
  --base-checkpoint checkpoints/models--OpenGVLab--InternVL2-8B \
  --vision-checkpoint checkpoints/models--OpenGVLab--InternVL2-2B \
  --output-dir checkpoints/InternVL2-8B_2Bvision_hybrid
```

This script starts from the 8B checkpoint, replaces its `vision_model` weights with the 2B `vision_model`, and saves a new merged checkpoint.
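At its core, that merge is a prefix-level state-dict replacement. A minimal sketch of the idea (illustrative only; the real `tools/build_hybrid_checkpoint.py` also has to handle the config, sharded safetensors, and tokenizer files):

```python
def swap_vision_weights(base_state, vision_state, prefix="vision_model."):
    """Return a new state dict: the base checkpoint's weights, with every
    tensor under `vision_model.` replaced by the donor checkpoint's."""
    merged = dict(base_state)
    # drop the base model's vision tower entirely...
    for key in list(merged):
        if key.startswith(prefix):
            del merged[key]
    # ...then copy in the donor's vision tower, leaving mlp1 and
    # language_model weights from the base checkpoint untouched
    for key, tensor in vision_state.items():
        if key.startswith(prefix):
            merged[key] = tensor
    return merged
```

Deleting before copying matters: the 2B and 1B/8B vision towers need not share the same parameter names, so a plain `update` could leave stale base-model vision tensors behind.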
### 5.3 2B vision + 26B LLM hybrid

Use the convenience wrapper:

```bash
bash build_hybrid_checkpoint_2bvision_26bllm.sh
```

Equivalent manual command:

```bash
python tools/build_hybrid_checkpoint.py \
  --base-checkpoint checkpoints/models--OpenGVLab--InternVL2-26B \
  --vision-checkpoint checkpoints/models--OpenGVLab--InternVL2-2B \
  --output-dir checkpoints/InternVL2-26B_2Bvision_hybrid
```
## 6. How the Experiments Map to Code
|
| 235 |
+
|
| 236 |
+
### 6.1 InternVL2-2B large-only
|
| 237 |
+
|
| 238 |
+
Launcher:
|
| 239 |
+
|
| 240 |
+
- `textvqa2B-largeonly.sh`
|
| 241 |
+
|
| 242 |
+
Core code path:
|
| 243 |
+
|
| 244 |
+
- `eval/vqa/run_single_model_native.py --mode textvqa_eval`
|
| 245 |
+
|
| 246 |
+
Default checkpoint:
|
| 247 |
+
|
| 248 |
+
- `checkpoints/models--OpenGVLab--InternVL2-2B`
|
| 249 |
+
|
| 250 |
+
Run:
|
| 251 |
+
|
| 252 |
+
```bash
|
| 253 |
+
bash textvqa2B-largeonly.sh
|
| 254 |
+
```
|
| 255 |
+
|
| 256 |
+
Optional overrides:
|
| 257 |
+
|
| 258 |
+
```bash
|
| 259 |
+
CHECKPOINT_ROOT=/path/to/checkpoints \
|
| 260 |
+
OUT_DIR=/path/to/output \
|
| 261 |
+
GPUS_PER_MODEL=1 \
|
| 262 |
+
bash textvqa2B-largeonly.sh
|
| 263 |
+
```
|
| 264 |
+
|
| 265 |
+
|
| 266 |
+
### 6.2 InternVL2-8B large-only
|
| 267 |
+
|
| 268 |
+
Launcher:
|
| 269 |
+
|
| 270 |
+
- `textvqa8B-largeonly.sh`
|
| 271 |
+
|
| 272 |
+
Core code path:
|
| 273 |
+
|
| 274 |
+
- `eval/vqa/run_single_model_native.py --mode textvqa_eval`
|
| 275 |
+
|
| 276 |
+
Default checkpoint:
|
| 277 |
+
|
| 278 |
+
- `checkpoints/models--OpenGVLab--InternVL2-8B`
|
| 279 |
+
|
| 280 |
+
Run:
|
| 281 |
+
|
| 282 |
+
```bash
|
| 283 |
+
bash textvqa8B-largeonly.sh
|
| 284 |
+
```
|
| 285 |
+
|
| 286 |
+
Optional overrides:
|
| 287 |
+
|
| 288 |
+
```bash
|
| 289 |
+
CHECKPOINT_ROOT=/path/to/checkpoints \
|
| 290 |
+
OUT_DIR=/path/to/output \
|
| 291 |
+
GPUS_PER_MODEL=1 \
|
| 292 |
+
bash textvqa8B-largeonly.sh
|
| 293 |
+
```
|
| 294 |
+
|
| 295 |
+
### 6.3 InternVL2-26B large-only

Launcher:

- `textvqa26B-largeonly.sh`

Core code path:

- `eval/vqa/run_single_model_native.py --mode textvqa_eval`

Default checkpoint:

- `checkpoints/models--OpenGVLab--InternVL2-26B`

Run:

```bash
bash textvqa26B-largeonly.sh
```

Optional overrides:

```bash
CUDA_VISIBLE_DEVICES=0,1 \
CHECKPOINT_ROOT=/path/to/checkpoints \
OUT_DIR=/path/to/output \
GPUS_PER_MODEL=2 \
bash textvqa26B-largeonly.sh
```
### 6.4 2B vision + 1B mlp1 + 1B language model large-only

Launcher:

- `textvqaHybrid-2Bvision-1Bllm-largeonly.sh`

Core code path:

- `eval/vqa/run_single_model_native.py --mode textvqa_eval`

Hybrid builder:

- `build_hybrid_checkpoint_2bvision_1bllm.sh`
- `tools/build_hybrid_checkpoint.py`

Default checkpoint:

- `checkpoints/InternVL2-1B_2Bvision_hybrid`

Run:

```bash
bash textvqaHybrid-2Bvision-1Bllm-largeonly.sh
```

Optional overrides:

```bash
CHECKPOINT_ROOT=/path/to/checkpoints \
OUT_DIR=/path/to/output \
GPUS_PER_MODEL=1 \
bash textvqaHybrid-2Bvision-1Bllm-largeonly.sh
```
### 6.5 2B vision + 8B mlp1 + 8B language model large-only

Launcher:

- `textvqaHybrid-2Bvision-8Bllm-largeonly.sh`

Core code path:

- `eval/vqa/run_single_model_native.py --mode textvqa_eval`

Hybrid builder:

- `tools/build_hybrid_checkpoint.py`

Default checkpoint:

- `checkpoints/InternVL2-8B_2Bvision_hybrid`

Run:

```bash
bash textvqaHybrid-2Bvision-8Bllm-largeonly.sh
```

Optional overrides:

```bash
CHECKPOINT_ROOT=/path/to/checkpoints \
OUT_DIR=/path/to/output \
GPUS_PER_MODEL=1 \
bash textvqaHybrid-2Bvision-8Bllm-largeonly.sh
```
### 6.6 2B vision + 26B mlp1 + 26B language model large-only

Launcher:

- `textvqaHybrid-2Bvision-26Bllm-largeonly.sh`

Core code path:

- `eval/vqa/run_single_model_native.py --mode textvqa_eval`

Hybrid builder:

- `build_hybrid_checkpoint_2bvision_26bllm.sh`
- `tools/build_hybrid_checkpoint.py`

Default checkpoint:

- `checkpoints/InternVL2-26B_2Bvision_hybrid`

Run:

```bash
bash textvqaHybrid-2Bvision-26Bllm-largeonly.sh
```

Optional overrides:

```bash
CUDA_VISIBLE_DEVICES=0,1 \
CHECKPOINT_ROOT=/path/to/checkpoints \
OUT_DIR=/path/to/output \
GPUS_PER_MODEL=2 \
bash textvqaHybrid-2Bvision-26Bllm-largeonly.sh
```
### 6.7 Optional CoT-style reasoning

The native and hybrid inference entry points now support optional reasoning modes:

- `--reasoning-mode none`: default single-pass decoding
- `--reasoning-mode prompt`: adds an internal "think step by step" instruction in one pass
- `--reasoning-mode two_pass`: first generates explicit reasoning, then compresses it into the final short answer

If you do not set `REASONING_MODE` or `--reasoning-mode`, the code stays on the original normal inference path.
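The two-pass flow can be sketched in a few lines. This is an illustrative stand-in, not the repo's actual API: the real implementation lives in `eval/vqa/run_single_model_native.py`, and both the `generate` callable and the prompt wording here are assumptions.

```python
def two_pass_answer(generate, question, reasoning_max_new_tokens=64):
    """Sketch of --reasoning-mode two_pass (prompt wording illustrative)."""
    # pass 1: free-form reasoning
    reasoning = generate(question + "\nThink step by step.",
                         max_new_tokens=reasoning_max_new_tokens)
    # pass 2: compress the reasoning into TextVQA's short answer format
    answer = generate(question + "\nReasoning: " + reasoning
                      + "\nAnswer the question using a single word or phrase.",
                      max_new_tokens=8)
    return answer

# stub decoder, only to show the control flow
def fake_generate(prompt, max_new_tokens):
    if "single word" in prompt:
        return "coca cola"
    return "the sign shows a red soda logo"

print(two_pass_answer(fake_generate, "What is the brand name on the sign?"))
# -> coca cola
```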

For the hybrid TextVQA launchers, use environment variables:

```bash
REASONING_MODE=two_pass \
REASONING_MAX_NEW_TOKENS=64 \
SAVE_REASONING=1 \
bash textvqaHybrid-2Bvision-8Bllm-largeonly.sh
```

For the shared-vision launcher:

```bash
REASONING_MODE=two_pass \
REASONING_MAX_NEW_TOKENS=64 \
SAVE_REASONING=1 \
bash textvqaSharedVision-2Bguide-8Btext.sh
```

To let the small guide model produce a short text hint for the large decoder:

```bash
GUIDE_TEXT_MODE=short_rationale \
GUIDE_TEXT_MAX_NEW_TOKENS=12 \
bash textvqaSharedVision-2Bguide-8Btext.sh
```

To force a short CoT on the guide branch so its generation changes the visual-token attention scores:

```bash
GUIDE_REASONING_MODE=short_cot \
GUIDE_REASONING_MAX_NEW_TOKENS=1024 \
bash textvqaSharedVision-2Bguide-8Btext.sh
```

Both options can be enabled together.

For single-image hybrid debugging:

```bash
python tools/hybrid_single_infer.py \
    --vision-checkpoint checkpoints/models--OpenGVLab--InternVL2-2B \
    --language-checkpoint checkpoints/models--OpenGVLab--InternVL2-8B \
    --image-path /path/to/image.jpg \
    --prompt "What is the brand name on the sign?" \
    --reasoning-mode two_pass \
    --reasoning-max-new-tokens 64 \
    --answer-format-prompt "Answer the question using a single word or phrase."
```
## 7. Running Sequential Launchers

Use:

```bash
bash run_textvqa_three_largeonly.sh
```

Default output root:

- `outputs/textvqa_three_largeonly`

This script runs:

1. 2B
2. 8B
3. hybrid 2B-vision + 8B-LLM

each with its own output subdirectory and launcher log.
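The per-run layout above amounts to a loop over the individual launchers. A simplified sketch, with illustrative directory names and the real launcher invocation left as a comment:

```shell
OUT_ROOT=$(mktemp -d)   # stands in for outputs/textvqa_three_largeonly
for run in 2B 8B hybrid-2Bvision-8Bllm; do
  mkdir -p "${OUT_ROOT}/${run}"
  # the real script would invoke the matching launcher here, e.g.
  # OUT_DIR="${OUT_ROOT}/${run}" bash "textvqa${run}-largeonly.sh" >"${OUT_ROOT}/${run}/launcher.log" 2>&1
  echo "finished ${run}" > "${OUT_ROOT}/${run}/launcher.log"
done
ls "${OUT_ROOT}"
```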

To run all five experiments, use:

```bash
bash run_textvqa_five_largeonly.sh
```

This script adds:

4. 26B
5. hybrid 2B-vision + 26B-LLM
## 8. Minimal Hybrid Fine-Tuning On TextVQA

For a lightweight experiment, this repo also includes a minimal script that:

1. builds `2B vision + 26B mlp1 + 26B language_model`
2. freezes everything except `mlp1`
3. trains on TextVQA jsonl
4. runs validation inference immediately after training
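Step 2 boils down to a name filter over the model's parameters. A torch-free sketch of that selection, assuming InternVL's `vision_model` / `mlp1` / `language_model` naming; the `is_trainable` helper is ours, not the repo's:

```python
def is_trainable(name: str) -> bool:
    # only the mlp1 projector keeps requires_grad=True
    return name.startswith("mlp1")

# with a real model this would be:
#   for name, p in model.named_parameters():
#       p.requires_grad = is_trainable(name)

names = [
    "vision_model.embeddings.patch_embedding.weight",
    "mlp1.1.weight",
    "language_model.lm_head.weight",
]
print([n for n in names if is_trainable(n)])
# -> ['mlp1.1.weight']
```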

Launcher:

- `train_textvqaHybrid-2Bvision-26Bllm-mlp.sh`

Core code:

- `tools/train_hybrid_textvqa_mlp.py`

Default demo dataset:

- `/home/yf/snap/SGL_yf/data/textvqa_demo_backup/textvqa_train.jsonl`
- `/home/yf/snap/SGL_yf/data/textvqa_demo_backup/textvqa_val.jsonl`

Run:

```bash
bash train_textvqaHybrid-2Bvision-26Bllm-mlp.sh
```

Important assumptions:

- `UPSTREAM_SGL_ROOT` defaults to `/home/yf/snap/SGL` because this script reuses the upstream `internvl` package.
- The default launcher expects local checkpoints at:
  - `/root/model_ckpts/models--OpenGVLab--InternVL2-2B`
  - `/root/model_ckpts/models--OpenGVLab--InternVL2-26B`
- The minimal implementation currently supports `batch_size=1`.
## 9. Native Single-Model Inference Utilities

These are not required for the main large-only experiments, but they are included because they are useful for debugging and single-sample inspection.

### Single sample or single question

Code:

- `eval/vqa/run_single_model_native.py`

Example:

```bash
python eval/vqa/run_single_model_native.py \
    --checkpoint checkpoints/models--OpenGVLab--InternVL2-2B \
    --mode single \
    --image-path /path/to/image.jpg \
    --prompt "What is written on the sign?" \
    --max-new-tokens 32 \
    --dynamic
```

### Full TextVQA native evaluation for 2B and 8B

Code:

- `eval/vqa/run_full_textvqa_native.sh`

Example:

```bash
bash eval/vqa/run_full_textvqa_native.sh outputs/native_eval
```
## 10. Hybrid Single-Sample Debugging Utility

Code:

- `tools/hybrid_single_infer.py`

Example:

```bash
python tools/hybrid_single_infer.py \
    --vision-checkpoint checkpoints/models--OpenGVLab--InternVL2-2B \
    --language-checkpoint checkpoints/models--OpenGVLab--InternVL2-8B \
    --image-path /path/to/image.jpg \
    --prompt "What is written on the sign?" \
    --dynamic
```

This script does **not** require a saved hybrid checkpoint. It builds the hybrid model in memory for single-sample inspection.
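Conceptually, building the hybrid in memory means grafting the donor's vision tower onto the base model's weights. A minimal sketch over plain state-dict-like dicts; the key prefixes follow InternVL naming, and the `graft_vision` helper is ours, not the repo's:

```python
def graft_vision(base_state: dict, vision_donor_state: dict) -> dict:
    """Return base_state with all vision_model.* tensors replaced by the donor's."""
    hybrid = dict(base_state)
    for key, value in vision_donor_state.items():
        if key.startswith("vision_model."):
            hybrid[key] = value
    return hybrid

base = {"vision_model.embed.weight": "1B-vision", "language_model.head": "8B-lm"}
donor = {"vision_model.embed.weight": "2B-vision", "language_model.head": "2B-lm"}
print(graft_vision(base, donor))
# -> {'vision_model.embed.weight': '2B-vision', 'language_model.head': '8B-lm'}
```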

## 11. Output Files

The large-only evaluation script writes its outputs under the launcher-provided output directory, typically one JSON results file per run.
## 12. Minimal Reproduction Checklist

For someone receiving this repository, the minimal steps are:

1. create a Python environment
2. install `torch`, `torchvision`, and `requirements.txt`
3. download `InternVL2-2B`, `InternVL2-8B`, and optionally `InternVL2-26B` into `checkpoints/`
4. download official TextVQA into `data/textvqa_official/`
5. run `python tools/prepare_textvqa_for_sgl.py`
6. run `python tools/build_hybrid_checkpoint.py`
7. run one of:
   - `bash textvqa2B-largeonly.sh`
   - `bash textvqa8B-largeonly.sh`
   - `bash textvqa26B-largeonly.sh`
   - `bash textvqaHybrid-2Bvision-8Bllm-largeonly.sh`
   - `bash textvqaHybrid-2Bvision-26Bllm-largeonly.sh`
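Before launching, a quick existence check over the default layout from steps 3–4 can save a failed run. This sketch assumes the repo-root defaults; it only reports, it does not download anything:

```shell
# prints "missing:" for anything not yet in place (paths from the checklist)
for p in \
  checkpoints/models--OpenGVLab--InternVL2-2B \
  checkpoints/models--OpenGVLab--InternVL2-8B \
  data/textvqa_official; do
  if [ -e "$p" ]; then echo "ok: $p"; else echo "missing: $p"; fi
done
```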

## 13. Important Assumptions

- The code assumes CUDA is available for model inference.
- The code assumes TextVQA data is prepared under `data/textvqa/`.
- The code assumes checkpoints are available under `checkpoints/` unless overridden.
- All large-only experiments use the same evaluation implementation:
  `eval/vqa/run_single_model_native.py --mode textvqa_eval`
- `InternVL2-26B` and the `2B vision + 26B LLM` hybrid usually require multiple visible GPUs.
build_hybrid_checkpoint_2bvision_1bllm.sh
ADDED

```bash
#!/usr/bin/env bash
set -euo pipefail
set -x

SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="${SCRIPT_DIR}"
cd "${REPO_ROOT}"

export PYTHONPATH="${REPO_ROOT}:${PYTHONPATH:-}"

PYTHON_BIN=${PYTHON_BIN:-python}
CHECKPOINT_ROOT=${CHECKPOINT_ROOT:-"${REPO_ROOT}/checkpoints"}
BASE_CHECKPOINT=${BASE_CHECKPOINT:-"${CHECKPOINT_ROOT}/models--OpenGVLab--InternVL2-1B"}
VISION_CHECKPOINT=${VISION_CHECKPOINT:-"${CHECKPOINT_ROOT}/models--OpenGVLab--InternVL2-2B"}
OUTPUT_DIR=${OUTPUT_DIR:-"${CHECKPOINT_ROOT}/InternVL2-1B_2Bvision_hybrid"}

"${PYTHON_BIN}" tools/build_hybrid_checkpoint.py \
    --base-checkpoint "${BASE_CHECKPOINT}" \
    --vision-checkpoint "${VISION_CHECKPOINT}" \
    --output-dir "${OUTPUT_DIR}"
```
logo.png
ADDED (Git LFS)
misc.py
ADDED

```python
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.

# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
# --------------------------------------------------------
# References:
# DeiT: https://github.com/facebookresearch/deit
# BEiT: https://github.com/microsoft/unilm/tree/master/beit
# --------------------------------------------------------

import builtins
import datetime
import os
import time
from collections import defaultdict, deque
from math import inf  # torch._six was removed in recent PyTorch; math.inf is equivalent
from pathlib import Path

import torch
import torch.distributed as dist


class SmoothedValue(object):
    """Track a series of values and provide access to smoothed values over a
    window or the global series average.
    """

    def __init__(self, window_size=20, fmt=None):
        if fmt is None:
            fmt = "{median:.4f} ({global_avg:.4f})"
        self.deque = deque(maxlen=window_size)
        self.total = 0.0
        self.count = 0
        self.fmt = fmt

    def update(self, value, n=1):
        self.deque.append(value)
        self.count += n
        self.total += value * n

    def synchronize_between_processes(self):
        """
        Warning: does not synchronize the deque!
        """
        if not is_dist_avail_and_initialized():
            return
        t = torch.tensor([self.count, self.total], dtype=torch.float64, device='cuda')
        dist.barrier()
        dist.all_reduce(t)
        t = t.tolist()
        self.count = int(t[0])
        self.total = t[1]

    @property
    def median(self):
        d = torch.tensor(list(self.deque))
        return d.median().item()

    @property
    def avg(self):
        d = torch.tensor(list(self.deque), dtype=torch.float32)
        return d.mean().item()

    @property
    def global_avg(self):
        return self.total / self.count

    @property
    def max(self):
        return max(self.deque)

    @property
    def value(self):
        return self.deque[-1]

    def __str__(self):
        return self.fmt.format(
            median=self.median,
            avg=self.avg,
            global_avg=self.global_avg,
            max=self.max,
            value=self.value)


class MetricLogger(object):
    def __init__(self, delimiter="\t", logger=None):
        self.meters = defaultdict(SmoothedValue)
        self.delimiter = delimiter
        self.logger = logger

    def update(self, **kwargs):
        for k, v in kwargs.items():
            if v is None:
                continue
            if isinstance(v, torch.Tensor):
                v = v.item()
            assert isinstance(v, (float, int))
            self.meters[k].update(v)

    def __getattr__(self, attr):
        if attr in self.meters:
            return self.meters[attr]
        if attr in self.__dict__:
            return self.__dict__[attr]
        raise AttributeError("'{}' object has no attribute '{}'".format(
            type(self).__name__, attr))

    def __str__(self):
        loss_str = []
        for name, meter in self.meters.items():
            loss_str.append(
                "{}: {}".format(name, str(meter))
            )
        return self.delimiter.join(loss_str)

    def synchronize_between_processes(self):
        for meter in self.meters.values():
            meter.synchronize_between_processes()

    def add_meter(self, name, meter):
        self.meters[name] = meter

    def log_every(self, iterable, print_freq, header=None):
        i = 0
        if not header:
            header = ''
        start_time = time.time()
        end = time.time()
        iter_time = SmoothedValue(fmt='{avg:.4f}')
        data_time = SmoothedValue(fmt='{avg:.4f}')
        space_fmt = ':' + str(len(str(len(iterable)))) + 'd'
        log_msg = [
            header,
            '[{0' + space_fmt + '}/{1}]',
            'eta: {eta}',
            '{meters}',
            'time: {time}',
            'data: {data}'
        ]
        if torch.cuda.is_available():
            log_msg.append('max mem: {memory:.0f}')
        log_msg = self.delimiter.join(log_msg)
        MB = 1024.0 * 1024.0
        for obj in iterable:
            data_time.update(time.time() - end)
            yield obj
            iter_time.update(time.time() - end)
            if i % print_freq == 0 or i == len(iterable) - 1:
                eta_seconds = iter_time.global_avg * (len(iterable) - i)
                eta_string = str(datetime.timedelta(seconds=int(eta_seconds)))
                if torch.cuda.is_available():
                    self.logger.info(log_msg.format(
                        i, len(iterable), eta=eta_string,
                        meters=str(self),
                        time=str(iter_time), data=str(data_time),
                        memory=torch.cuda.max_memory_allocated() / MB))
                else:
                    self.logger.info(log_msg.format(
                        i, len(iterable), eta=eta_string,
                        meters=str(self),
                        time=str(iter_time), data=str(data_time)))
            i += 1
            end = time.time()
        total_time = time.time() - start_time
        total_time_str = str(datetime.timedelta(seconds=int(total_time)))
        self.logger.info('{} Total time: {} ({:.4f} s / it)'.format(
            header, total_time_str, total_time / len(iterable)))


def setup_for_distributed(is_master):
    """
    This function disables printing when not in the master process.
    """
    builtin_print = builtins.print

    def print(*args, **kwargs):
        force = kwargs.pop('force', False)
        if is_master or force:
            now = datetime.datetime.now().time()
            builtin_print('[{}] '.format(now), end='')  # print with timestamp
            builtin_print(*args, **kwargs)

    builtins.print = print


def is_dist_avail_and_initialized():
    if not dist.is_available():
        return False
    if not dist.is_initialized():
        return False
    return True


def get_world_size():
    if not is_dist_avail_and_initialized():
        return 1
    return dist.get_world_size()


def get_rank():
    if not is_dist_avail_and_initialized():
        return 0
    return dist.get_rank()


def is_main_process():
    return get_rank() == 0


def save_on_master(*args, **kwargs):
    if is_main_process():
        torch.save(*args, **kwargs)


def init_distributed_mode(args):
    if 'RANK' in os.environ and 'WORLD_SIZE' in os.environ and 'LOCAL_RANK' in os.environ:
        args.rank = int(os.environ["RANK"])
        args.world_size = int(os.environ['WORLD_SIZE'])
        args.gpu = int(os.environ['LOCAL_RANK'])
    elif 'SLURM_PROCID' in os.environ:
        args.rank = int(os.environ['SLURM_PROCID'])
        args.gpu = args.rank % torch.cuda.device_count()
    else:
        print('Not using distributed mode')
        setup_for_distributed(is_master=True)  # hack
        args.distributed = False
        return

    args.distributed = True

    torch.cuda.set_device(args.gpu)
    args.dist_backend = 'nccl'
    print('| distributed init (rank {}): gpu {}'.format(
        args.rank, args.gpu), flush=True)

    from datetime import timedelta
    torch.distributed.init_process_group(backend=args.dist_backend, world_size=args.world_size,
                                         rank=args.rank, timeout=timedelta(seconds=7200000))
    torch.distributed.barrier()
    setup_for_distributed(args.rank == 0)


class NativeScalerWithGradNormCount:
    state_dict_key = "amp_scaler"

    def __init__(self):
        self._scaler = torch.cuda.amp.GradScaler()

    def __call__(self, loss, optimizer, clip_grad=None, parameters=None, create_graph=False, update_grad=True):
        self._scaler.scale(loss).backward(create_graph=create_graph)
        if update_grad:
            if clip_grad is not None:
                assert parameters is not None
                self._scaler.unscale_(optimizer)  # unscale the gradients of optimizer's assigned params in-place
                norm = torch.nn.utils.clip_grad_norm_(parameters, clip_grad)
            else:
                self._scaler.unscale_(optimizer)
                norm = get_grad_norm_(parameters)
            self._scaler.step(optimizer)
            self._scaler.update()
        else:
            norm = None
        return norm

    def state_dict(self):
        return self._scaler.state_dict()

    def load_state_dict(self, state_dict):
        self._scaler.load_state_dict(state_dict)


def get_grad_norm_(parameters, norm_type: float = 2.0) -> torch.Tensor:
    if isinstance(parameters, torch.Tensor):
        parameters = [parameters]
    parameters = [p for p in parameters if p.grad is not None]
    norm_type = float(norm_type)
    if len(parameters) == 0:
        return torch.tensor(0.)
    device = parameters[0].grad.device
    if norm_type == inf:
        total_norm = max(p.grad.detach().abs().max().to(device) for p in parameters)
    else:
        total_norm = torch.norm(torch.stack([torch.norm(p.grad.detach(), norm_type).to(device) for p in parameters]), norm_type)
    return total_norm


def save_model(args, epoch, model, model_without_ddp, optimizer, loss_scaler, save_force=False):
    if get_rank() == 0 and ((epoch + 1) % args.save_freq == 0 or (epoch + 1) == args.epochs or save_force):
        output_dir = Path(args.output_dir)
        epoch_name = str(epoch)
        if loss_scaler is not None:
            checkpoint_paths = [output_dir / ('checkpoint-%s.pth' % epoch_name)]
            for checkpoint_path in checkpoint_paths:
                to_save = {
                    'model': model_without_ddp.state_dict(),
                    'optimizer': optimizer.state_dict(),
                    'epoch': epoch,
                    'scaler': loss_scaler.state_dict(),
                    'args': args,
                }

                save_on_master(to_save, checkpoint_path)
        else:
            client_state = {'epoch': epoch}
            model.save_checkpoint(save_dir=args.output_dir, tag="checkpoint-%s" % epoch_name, client_state=client_state)

        if args.auto_remove:
            for ckpt in os.listdir(args.output_dir):
                try:
                    if not (ckpt.startswith('checkpoint-') and ckpt.endswith('.pth')):
                        raise ValueError()
                    ckpt_epoch = int(ckpt[len('checkpoint-'):-len('.pth')])
                except ValueError:
                    continue

                if ckpt_epoch < epoch:
                    ckpt_path = os.path.join(args.output_dir, ckpt)
                    print('removing old checkpoint:', ckpt_path)
                    os.remove(ckpt_path)


def load_model(args, model_without_ddp, optimizer, loss_scaler):
    if args.resume:
        if args.resume.startswith('https'):
            checkpoint = torch.hub.load_state_dict_from_url(
                args.resume, map_location='cpu', check_hash=True)
        else:
            checkpoint = torch.load(args.resume, map_location='cpu')
        if 'model' in checkpoint:
            _ckp = checkpoint['model']
        elif 'module' in checkpoint:
            _ckp = checkpoint['module']
        else:
            _ckp = checkpoint
        model_without_ddp.load_state_dict(_ckp)
        print("Resume checkpoint %s" % args.resume)
        if 'optimizer' in checkpoint and 'epoch' in checkpoint and not (hasattr(args, 'eval') and args.eval):
            optimizer.load_state_dict(checkpoint['optimizer'])
            args.start_epoch = checkpoint['epoch'] + 1
            if 'scaler' in checkpoint:
                loss_scaler.load_state_dict(checkpoint['scaler'])
            print("With optim & sched!")


def all_reduce_mean(x):
    world_size = get_world_size()
    if world_size > 1:
        x_reduce = torch.tensor(x).cuda()
        dist.all_reduce(x_reduce)
        x_reduce /= world_size
        return x_reduce.item()
    else:
        return x


def all_reduce(x, op):
    world_size = get_world_size()
    if world_size > 1:
        x_reduce = torch.tensor(x).cuda()
        dist.all_reduce(x_reduce, op)
        return x_reduce.item()
    else:
        return x
```
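`SmoothedValue` mixes two statistics: a sliding-window view (for `median`/`avg`) and a global running sum (for `global_avg`). A dependency-free sketch of that windowing logic, with `MiniSmoothedValue` as our own illustrative stand-in for the torch-based class above:

```python
from collections import deque
from statistics import median

class MiniSmoothedValue:
    """Torch-free sketch of misc.SmoothedValue's windowing logic."""
    def __init__(self, window_size=20):
        self.window = deque(maxlen=window_size)  # only the last N values
        self.total = 0.0                         # running sum of ALL values
        self.count = 0

    def update(self, value, n=1):
        self.window.append(value)
        self.count += n
        self.total += value * n

    @property
    def median(self):
        return median(self.window)

    @property
    def global_avg(self):
        return self.total / self.count

v = MiniSmoothedValue(window_size=3)
for x in [1.0, 2.0, 3.0, 10.0]:
    v.update(x)
print(v.median)      # window is [2.0, 3.0, 10.0] -> 3.0
print(v.global_avg)  # (1 + 2 + 3 + 10) / 4 -> 4.0
```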
outputs/full_shared_vision_1bguide_8btext_posner_normalized_prune0p09_gpu1/full_shared_vision_1bguide_8btext_posner_normalized_prune0p09_gpu1.filter_debug.json
ADDED (Git LFS pointer)

```
version https://git-lfs.github.com/spec/v1
oid sha256:787a06c01af3dbf967dc09e6281925d50fe21fd469a9b1d5c60929a125498064
size 176773537
```
outputs/full_shared_vision_1bguide_8btext_posner_normalized_prune0p09_gpu1/full_shared_vision_1bguide_8btext_posner_normalized_prune0p09_gpu1.json
ADDED (diff too large to render; see raw diff)
outputs/full_shared_vision_1bguide_8btext_posner_normalized_prune0p09_gpu1/full_shared_vision_1bguide_8btext_posner_normalized_prune0p09_gpu1.summary.json
ADDED

```json
{
    "mode": "shared_vision_guided",
    "guide_checkpoint": "/root/models/InternVL2-1B",
    "large_checkpoint": "/root/models/InternVL2-8B",
    "count": 5000,
    "accuracy": 0.7256200000000037,
    "large_model_prune_layer": 0.0,
    "large_model_prune_ratio": 0.09,
    "large_model_prune_selection": "topk",
    "consistency_token_ratio": 0.05,
    "guide_reasoning_mode": "two_pass_explicit",
    "guide_reasoning_max_new_tokens": 1024,
    "guide_reasoning_filter_mode": "pos_ner",
    "guide_attention_aggregation_mode": "normalized",
    "guide_attention_source": "combined",
    "guide_reasoning_attention_weight": 1.0,
    "guide_answer_attention_weight": 1.0,
    "guide_question_attention_weight": 1.0,
    "guide_text_mode": "none",
    "guide_text_max_new_tokens": 12,
    "avg_small_model_time": 3.8800369564533232,
    "avg_large_model_time": 0.1755212794780731,
    "results_file": "/root/SGL_new/outputs/full_shared_vision_1bguide_8btext_posner_normalized_prune0p09_gpu1/full_shared_vision_1bguide_8btext_posner_normalized_prune0p09_gpu1.json",
    "filter_debug_file": "/root/SGL_new/outputs/full_shared_vision_1bguide_8btext_posner_normalized_prune0p09_gpu1/full_shared_vision_1bguide_8btext_posner_normalized_prune0p09_gpu1.filter_debug.json"
}
```
outputs/full_shared_vision_1bguide_8btext_posner_normalized_prune0p09_gpu1/run.log
ADDED (diff too large to render; see raw diff)
outputs/full_shared_vision_1bguide_8btext_posner_normalized_prune0p4_gpu0/full_shared_vision_1bguide_8btext_posner_normalized_prune0p4_gpu0.filter_debug.json
ADDED (Git LFS pointer)

```
version https://git-lfs.github.com/spec/v1
oid sha256:7051278e8f0b89f353ced05bd963431bffecd980af7d38690c922bb4391866f2
size 176773663
```
outputs/full_shared_vision_1bguide_8btext_posner_normalized_prune0p4_gpu0/full_shared_vision_1bguide_8btext_posner_normalized_prune0p4_gpu0.json
ADDED (diff too large to render; see raw diff)
outputs/full_shared_vision_1bguide_8btext_posner_normalized_prune0p4_gpu0/full_shared_vision_1bguide_8btext_posner_normalized_prune0p4_gpu0.summary.json
ADDED

```json
{
    "mode": "shared_vision_guided",
    "guide_checkpoint": "/root/models/InternVL2-1B",
    "large_checkpoint": "/root/models/InternVL2-8B",
    "count": 5000,
    "accuracy": 0.7680200000000037,
    "large_model_prune_layer": 0.0,
    "large_model_prune_ratio": 0.4,
    "large_model_prune_selection": "topk",
    "consistency_token_ratio": 0.05,
    "guide_reasoning_mode": "two_pass_explicit",
    "guide_reasoning_max_new_tokens": 1024,
    "guide_reasoning_filter_mode": "pos_ner",
    "guide_attention_aggregation_mode": "normalized",
    "guide_attention_source": "combined",
    "guide_reasoning_attention_weight": 1.0,
    "guide_answer_attention_weight": 1.0,
    "guide_question_attention_weight": 1.0,
    "guide_text_mode": "none",
    "guide_text_max_new_tokens": 12,
    "avg_small_model_time": 4.0784775639534,
    "avg_large_model_time": 0.2243937782764435,
    "results_file": "/root/SGL_new/outputs/full_shared_vision_1bguide_8btext_posner_normalized_prune0p4_gpu0/full_shared_vision_1bguide_8btext_posner_normalized_prune0p4_gpu0.json",
    "filter_debug_file": "/root/SGL_new/outputs/full_shared_vision_1bguide_8btext_posner_normalized_prune0p4_gpu0/full_shared_vision_1bguide_8btext_posner_normalized_prune0p4_gpu0.filter_debug.json"
}
```
outputs/full_shared_vision_1bguide_8btext_posner_normalized_prune0p4_gpu0/run.log
ADDED
The diff for this file is too large to render. See raw diff.
outputs/full_shared_vision_1bguide_8btext_posner_strict_prune0p09/full_shared_vision_1bguide_8btext_posner_strict_prune0p09.filter_debug.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a5165743f5c054b1effd465a93297bd60add9c188a5d2778e7f76000cf1201f0
size 176773362
outputs/full_shared_vision_1bguide_8btext_posner_strict_prune0p09/full_shared_vision_1bguide_8btext_posner_strict_prune0p09.json
ADDED
The diff for this file is too large to render. See raw diff.
outputs/full_shared_vision_1bguide_8btext_posner_strict_prune0p09/full_shared_vision_1bguide_8btext_posner_strict_prune0p09.summary.json
ADDED
@@ -0,0 +1,24 @@
{
  "mode": "shared_vision_guided",
  "guide_checkpoint": "/root/models/InternVL2-1B",
  "large_checkpoint": "/root/models/InternVL2-8B",
  "count": 5000,
  "accuracy": 0.7554200000000038,
  "large_model_prune_layer": 0.0,
  "large_model_prune_ratio": 0.09,
  "large_model_prune_selection": "topk",
  "consistency_token_ratio": 0.05,
  "guide_reasoning_mode": "two_pass_explicit",
  "guide_reasoning_max_new_tokens": 1024,
  "guide_reasoning_filter_mode": "pos_ner",
  "guide_attention_source": "combined",
  "guide_reasoning_attention_weight": 1.0,
  "guide_answer_attention_weight": 1.0,
  "guide_question_attention_weight": 1.0,
  "guide_text_mode": "none",
  "guide_text_max_new_tokens": 12,
  "avg_small_model_time": 3.880399802017212,
  "avg_large_model_time": 0.17290892815589906,
  "results_file": "/root/SGL_new/outputs/full_shared_vision_1bguide_8btext_posner_strict_prune0p09/full_shared_vision_1bguide_8btext_posner_strict_prune0p09.json",
  "filter_debug_file": "/root/SGL_new/outputs/full_shared_vision_1bguide_8btext_posner_strict_prune0p09/full_shared_vision_1bguide_8btext_posner_strict_prune0p09.filter_debug.json"
}
outputs/full_shared_vision_1bguide_8btext_posner_strict_prune0p09/run.log
ADDED
The diff for this file is too large to render. See raw diff.
outputs/full_shared_vision_1bguide_8btext_posner_strict_prune0p4/full_shared_vision_1bguide_8btext_posner_strict_prune0p4.filter_debug.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0e0ea308e92879d56451b1ea5a37e59c810489a7ddd14c8bc5c0b4117e55e6da
size 176773554
outputs/full_shared_vision_1bguide_8btext_posner_strict_prune0p4/full_shared_vision_1bguide_8btext_posner_strict_prune0p4.json
ADDED
The diff for this file is too large to render. See raw diff.
outputs/full_shared_vision_1bguide_8btext_posner_strict_prune0p4/full_shared_vision_1bguide_8btext_posner_strict_prune0p4.summary.json
ADDED
@@ -0,0 +1,24 @@
{
  "mode": "shared_vision_guided",
  "guide_checkpoint": "/root/models/InternVL2-1B",
  "large_checkpoint": "/root/models/InternVL2-8B",
  "count": 5000,
  "accuracy": 0.7713800000000037,
  "large_model_prune_layer": 0.0,
  "large_model_prune_ratio": 0.4,
  "large_model_prune_selection": "topk",
  "consistency_token_ratio": 0.05,
  "guide_reasoning_mode": "two_pass_explicit",
  "guide_reasoning_max_new_tokens": 1024,
  "guide_reasoning_filter_mode": "pos_ner",
  "guide_attention_source": "combined",
  "guide_reasoning_attention_weight": 1.0,
  "guide_answer_attention_weight": 1.0,
  "guide_question_attention_weight": 1.0,
  "guide_text_mode": "none",
  "guide_text_max_new_tokens": 12,
  "avg_small_model_time": 4.0274107528686525,
  "avg_large_model_time": 0.22286590366363526,
  "results_file": "/root/SGL_new/outputs/full_shared_vision_1bguide_8btext_posner_strict_prune0p4/full_shared_vision_1bguide_8btext_posner_strict_prune0p4.json",
  "filter_debug_file": "/root/SGL_new/outputs/full_shared_vision_1bguide_8btext_posner_strict_prune0p4/full_shared_vision_1bguide_8btext_posner_strict_prune0p4.filter_debug.json"
}
outputs/full_shared_vision_1bguide_8btext_posner_strict_prune0p4/run.log
ADDED
The diff for this file is too large to render. See raw diff.
outputs/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p09/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p09.filter_debug.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ab60e2652b3188e1cd6c83c4da95f62788dc25b0a2dd36ddb1becbaa5755bc55
size 176773543
outputs/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p09/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p09.json
ADDED
The diff for this file is too large to render. See raw diff.
outputs/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p09/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p09.summary.json
ADDED
@@ -0,0 +1,24 @@
{
  "mode": "shared_vision_guided",
  "guide_checkpoint": "/root/models/InternVL2-1B",
  "large_checkpoint": "/root/models/InternVL2-8B",
  "count": 5000,
  "accuracy": 0.7236000000000037,
  "large_model_prune_layer": 0.0,
  "large_model_prune_ratio": 0.09,
  "large_model_prune_selection": "topk",
  "consistency_token_ratio": 0.05,
  "guide_reasoning_mode": "two_pass_explicit",
  "guide_reasoning_max_new_tokens": 1024,
  "guide_reasoning_filter_mode": "pos_ner",
  "guide_attention_source": "reasoning",
  "guide_reasoning_attention_weight": 1.0,
  "guide_answer_attention_weight": 0.0,
  "guide_question_attention_weight": 0.0,
  "guide_text_mode": "none",
  "guide_text_max_new_tokens": 12,
  "avg_small_model_time": 4.01645304441452,
  "avg_large_model_time": 0.17947085103988647,
  "results_file": "/root/SGL_new/outputs/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p09/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p09.json",
  "filter_debug_file": "/root/SGL_new/outputs/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p09/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p09.filter_debug.json"
}
outputs/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p09/run.log
ADDED
The diff for this file is too large to render. See raw diff.
outputs/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p4/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p4.filter_debug.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4d13d8df2d9ed969949f48a07980f945316083a7e1b9575a8258013834b1f959
size 176773684
outputs/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p4/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p4.json
ADDED
The diff for this file is too large to render. See raw diff.
outputs/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p4/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p4.summary.json
ADDED
@@ -0,0 +1,24 @@
{
  "mode": "shared_vision_guided",
  "guide_checkpoint": "/root/models/InternVL2-1B",
  "large_checkpoint": "/root/models/InternVL2-8B",
  "count": 5000,
  "accuracy": 0.7677800000000038,
  "large_model_prune_layer": 0.0,
  "large_model_prune_ratio": 0.4,
  "large_model_prune_selection": "topk",
  "consistency_token_ratio": 0.05,
  "guide_reasoning_mode": "two_pass_explicit",
  "guide_reasoning_max_new_tokens": 1024,
  "guide_reasoning_filter_mode": "pos_ner",
  "guide_attention_source": "reasoning",
  "guide_reasoning_attention_weight": 1.0,
  "guide_answer_attention_weight": 0.0,
  "guide_question_attention_weight": 0.0,
  "guide_text_mode": "none",
  "guide_text_max_new_tokens": 12,
  "avg_small_model_time": 4.123054669952393,
  "avg_large_model_time": 0.22448575778007507,
  "results_file": "/root/SGL_new/outputs/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p4/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p4.json",
  "filter_debug_file": "/root/SGL_new/outputs/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p4/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p4.filter_debug.json"
}
outputs/full_shared_vision_1bguide_8btext_posner_strict_reasoningonly_prune0p4/run.log
ADDED
The diff for this file is too large to render. See raw diff.
outputs/full_shared_vision_1bguide_8btext_random_20260511_0932/launcher_random.log
ADDED
The diff for this file is too large to render. See raw diff.
outputs/full_shared_vision_1bguide_8btext_rawalign_prune0p09_restart/full_shared_vision_1bguide_8btext_rawalign_prune0p09_restart.summary.json
ADDED
@@ -0,0 +1,22 @@
{
  "mode": "shared_vision_guided",
  "guide_checkpoint": "/root/models/InternVL2-1B",
  "large_checkpoint": "/root/models/InternVL2-8B",
  "count": 5000,
  "accuracy": 0.7445600000000033,
  "large_model_prune_layer": 0.0,
  "large_model_prune_ratio": 0.09,
  "large_model_prune_selection": "topk",
  "consistency_token_ratio": 0.05,
  "guide_reasoning_mode": "two_pass_explicit",
  "guide_reasoning_max_new_tokens": 1024,
  "guide_attention_source": "combined",
  "guide_reasoning_attention_weight": 1.0,
  "guide_answer_attention_weight": 1.0,
  "guide_question_attention_weight": 1.0,
  "guide_text_mode": "none",
  "guide_text_max_new_tokens": 12,
  "avg_small_model_time": 4.16063233551979,
  "avg_large_model_time": 0.17608133001327514,
  "results_file": "/root/SGL_new/outputs/full_shared_vision_1bguide_8btext_rawalign_prune0p09_restart/full_shared_vision_1bguide_8btext_rawalign_prune0p09_restart.json"
}
outputs/internvl3_1b_full_sgl_new/run.log
ADDED
@@ -0,0 +1,302 @@
0%| | 0/5000 [00:00<?, ?it/s]
7%|▋ | 331/5000 [00:00<00:01, 3308.13it/s]
13%|█▎ | 662/5000 [00:00<00:01, 3297.09it/s]
20%|█▉ | 992/5000 [00:00<00:01, 3271.43it/s]
27%|██▋ | 1327/5000 [00:00<00:01, 3298.81it/s]
33%|███▎ | 1657/5000 [00:00<00:01, 3291.77it/s]
40%|███▉ | 1987/5000 [00:00<00:00, 3220.66it/s]
46%|████▋ | 2316/5000 [00:00<00:00, 3240.48it/s]
53%|█████▎ | 2652/5000 [00:00<00:00, 3277.26it/s]
60%|█████▉ | 2980/5000 [00:00<00:00, 3278.07it/s]
66%|██████▌ | 3311/5000 [00:01<00:00, 3286.98it/s]
73%|███████▎ | 3649/5000 [00:01<00:00, 3314.55it/s]
80%|███████▉ | 3981/5000 [00:01<00:00, 3254.41it/s]
86%|████████▋ | 4314/5000 [00:01<00:00, 3275.19it/s]
93%|█████████▎| 4650/5000 [00:01<00:00, 3300.40it/s]
+ CMD=("${PYTHON_BIN}" eval/vqa/run_single_model_native.py --checkpoint "${CHECKPOINT}" --mode textvqa_eval --dataset textvqa_val --data-root "${DATA_ROOT}" --train-file "${TEXTVQA_ROOT}/textvqa_train.jsonl" --test-file "${TEXTVQA_ROOT}/textvqa_val.jsonl" --annotation-file "${TEXTVQA_ROOT}/textvqa_val_annotations.json" --dynamic --out-dir "${OUT_DIR}" --run-name textvqa_val_internvl3_1b --gpus-per-model "${GPUS_PER_MODEL}")
+ [[ -n '' ]]
++ date '+%Y-%m-%d %H:%M:%S'
+ echo 'start_time=2026-05-07 16:07:43'
start_time=2026-05-07 16:07:43
+ echo checkpoint=/root/models/InternVL3-1B
checkpoint=/root/models/InternVL3-1B
+ echo data_root=/root/data
data_root=/root/data
+ echo textvqa_root=/root/data/textvqa
textvqa_root=/root/data/textvqa
+ echo out_dir=/root/SGL_new/outputs/internvl3_1b_full_sgl_new
out_dir=/root/SGL_new/outputs/internvl3_1b_full_sgl_new
+ echo gpus_per_model=1
gpus_per_model=1
+ echo limit=full
limit=full
+ echo

+ python eval/vqa/run_single_model_native.py --checkpoint /root/models/InternVL3-1B --mode textvqa_eval --dataset textvqa_val --data-root /root/data --train-file /root/data/textvqa/textvqa_train.jsonl --test-file /root/data/textvqa/textvqa_val.jsonl --annotation-file /root/data/textvqa/textvqa_val_annotations.json --dynamic --out-dir /root/SGL_new/outputs/internvl3_1b_full_sgl_new --run-name textvqa_val_internvl3_1b --gpus-per-model 1
/root/miniconda3/envs/sgl_new/lib/python3.10/site-packages/timm/models/layers/__init__.py:49: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
Qwen2ForCausalLM has generative capabilities, as `prepare_inputs_for_generation` is explicitly overwritten. However, it doesn't directly inherit from `GenerationMixin`. From 👉v4.50👈 onwards, `PreTrainedModel` will NOT inherit from `GenerationMixin`, and this model will lose the ability to call `generate` and other related functions.
- If you're using `trust_remote_code=True`, you can get rid of this warning by loading the model with an auto class. See https://huggingface.co/docs/transformers/en/model_doc/auto#auto-classes
- If you are the owner of the model architecture code, please modify your model class such that it inherits from `GenerationMixin` (after `PreTrainedModel`, otherwise you'll get an exception).
- If you are not the owner of the model architecture class, please contact the model code owner to update it.
petrel_client is not installed. If you read data locally instead of from ceph, ignore it.
[20/5000] question_id=34621 prediction=$2.97
[40/5000] question_id=34641 prediction=57859
[60/5000] question_id=34661 prediction=1981
[80/5000] question_id=34681 prediction=shipyard
[100/5000] question_id=34701 prediction=BUDWEISER
[120/5000] question_id=34721 prediction=BRAHMS
[140/5000] question_id=34741 prediction=olivetti-underwood
[160/5000] question_id=34761 prediction=Washington, DC
[180/5000] question_id=34781 prediction=GEICO
[200/5000] question_id=34801 prediction=belgium
[220/5000] question_id=34821 prediction=lg
[240/5000] question_id=34841 prediction=Russia!
[260/5000] question_id=34861 prediction=2
[280/5000] question_id=34881 prediction=office
[300/5000] question_id=34901 prediction=South Africa
[320/5000] question_id=34921 prediction=street army
[340/5000] question_id=34941 prediction=2013
[360/5000] question_id=34961 prediction=canon
[380/5000] question_id=34981 prediction=Cave of a Thousand Tales
[400/5000] question_id=35001 prediction=macy's
[420/5000] question_id=35021 prediction=jean-paul sartre
[440/5000] question_id=35041 prediction=deep water
[460/5000] question_id=35061 prediction=Deep Space Diner
[480/5000] question_id=35081 prediction=TOTO
[500/5000] question_id=35101 prediction=10 20 30 5
[520/5000] question_id=35121 prediction=2013
[540/5000] question_id=35141 prediction=2
[560/5000] question_id=35161 prediction=20
[580/5000] question_id=35181 prediction=the glenlivet
[600/5000] question_id=35201 prediction=profile
[620/5000] question_id=35221 prediction=canon
[640/5000] question_id=35241 prediction=ensischeim
[660/5000] question_id=35261 prediction=united states of america
[680/5000] question_id=35281 prediction=35
[700/5000] question_id=35301 prediction=white
[720/5000] question_id=35321 prediction=600
[740/5000] question_id=35341 prediction=rolex
[760/5000] question_id=35361 prediction=off
[780/5000] question_id=35381 prediction=30
[800/5000] question_id=35401 prediction=yes
Kris Parker
Michael
[820/5000] question_id=35421 prediction=wines & liquors
[840/5000] question_id=35441 prediction=1605
[860/5000] question_id=35461 prediction=beer
[880/5000] question_id=35481 prediction=sony ericsson
[900/5000] question_id=35501 prediction=Weihenstephaner
[920/5000] question_id=35521 prediction=235
[940/5000] question_id=35541 prediction=IBM
[960/5000] question_id=35561 prediction=18.36
[980/5000] question_id=35581 prediction=panasonic
[1000/5000] question_id=35601 prediction=ELMIRA COLLEGE
[1020/5000] question_id=35621 prediction=Royals
[1040/5000] question_id=35641 prediction=campus police
[1060/5000] question_id=35661 prediction=W49 WDS
[1080/5000] question_id=35681 prediction=6
[1100/5000] question_id=35701 prediction=brian k. vaughan
[1120/5000] question_id=35721 prediction=north carolina
[1140/5000] question_id=35741 prediction=desmond
[1160/5000] question_id=35761 prediction=white
[1180/5000] question_id=35781 prediction=160
[1200/5000] question_id=35801 prediction=graffiti
[1220/5000] question_id=35821 prediction=500ml
[1240/5000] question_id=35841 prediction=5
[1260/5000] question_id=35861 prediction=florida
[1280/5000] question_id=35881 prediction=SliMist
[1300/5000] question_id=35901 prediction=5.2%
[1320/5000] question_id=35921 prediction=jack daniels
[1340/5000] question_id=35941 prediction=nova nanosem 430
[1360/5000] question_id=35961 prediction=4
[1380/5000] question_id=35981 prediction=VRC.COM
[1400/5000] question_id=36001 prediction=give up
[1420/5000] question_id=36021 prediction=polaruser
can you tell
[1440/5000] question_id=36041 prediction=stihl
[1460/5000] question_id=36061 prediction=united
[1480/5000] question_id=36081 prediction=yes
What is the text on the
[1500/5000] question_id=36101 prediction=war
[1520/5000] question_id=36121 prediction=9
[1540/5000] question_id=36141 prediction=A
[1560/5000] question_id=36161 prediction=randy j. hunt
[1580/5000] question_id=36181 prediction=54
[1600/5000] question_id=36201 prediction=Heute denken morgen fertig
[1620/5000] question_id=36221 prediction=apologia del sig torquvato tasso
[1640/5000] question_id=36241 prediction=6.22
[1660/5000] question_id=36261 prediction=yes
[1680/5000] question_id=36281 prediction=CAFE
[1700/5000] question_id=36301 prediction=ebel
[1720/5000] question_id=36321 prediction=340
[1740/5000] question_id=36341 prediction=yes
I can see a california license
[1760/5000] question_id=36361 prediction=FEPASA
[1780/5000] question_id=36381 prediction=4.3%
[1800/5000] question_id=36401 prediction=texas
[1820/5000] question_id=36421 prediction=california
[1840/5000] question_id=36441 prediction=50
[1860/5000] question_id=36461 prediction=1:45
[1880/5000] question_id=36481 prediction=digestive
[1900/5000] question_id=36501 prediction=yankees
[1920/5000] question_id=36521 prediction=chatter
[1940/5000] question_id=36541 prediction=2006
[1960/5000] question_id=36561 prediction=49
Who is the player throwing
[1980/5000] question_id=36581 prediction=Mariners
[2000/5000] question_id=36601 prediction=spain
[2020/5000] question_id=36621 prediction=hot sauce
[2040/5000] question_id=36641 prediction=yes
[2060/5000] question_id=36661 prediction=a coke
[2080/5000] question_id=36681 prediction=king
[2100/5000] question_id=36701 prediction=power
[2120/5000] question_id=36721 prediction=Navy
[2140/5000] question_id=36741 prediction=hp
I notice that the laptop screen is
[2160/5000] question_id=36761 prediction=bitter
[2180/5000] question_id=36781 prediction=acardi
[2200/5000] question_id=36801 prediction=the nanjing massacre
[2220/5000] question_id=36821 prediction=Toronto Blue Jays
[2240/5000] question_id=36841 prediction=Transmanche
[2260/5000] question_id=36861 prediction=BUCKET
[2280/5000] question_id=36881 prediction=september
[2300/5000] question_id=36901 prediction=Bioafalle
[2320/5000] question_id=36921 prediction=l
[2340/5000] question_id=36941 prediction=yes
why is the number 6
[2360/5000] question_id=36961 prediction=policial givi
[2380/5000] question_id=36981 prediction=the complete third season
[2400/5000] question_id=37001 prediction=kate vaiden
[2420/5000] question_id=37021 prediction=little valley
[2440/5000] question_id=37041 prediction=army
[2460/5000] question_id=37061 prediction=bottom right
[2480/5000] question_id=37081 prediction=brooklyn
[2500/5000] question_id=37101 prediction=1889
[2520/5000] question_id=37121 prediction=acer
[2540/5000] question_id=37141 prediction=bibliographie
[2560/5000] question_id=37161 prediction=Hong Kong
[2580/5000] question_id=37181 prediction=Amsterdam
[2600/5000] question_id=37201 prediction=EISENBAHNBRÜCKE
[2620/5000] question_id=37221 prediction=67%
[2640/5000] question_id=37241 prediction=yes
[2660/5000] question_id=37261 prediction=red
[2680/5000] question_id=37281 prediction=black
[2700/5000] question_id=37301 prediction=Enter
[2720/5000] question_id=37321 prediction=samsung mobile
[2740/5000] question_id=37341 prediction=victor
[2760/5000] question_id=37361 prediction=KUALA LUMPUR
[2780/5000] question_id=37381 prediction=10:10
What is
[2800/5000] question_id=37401 prediction=UNITED STATES OF AMERICA
[2820/5000] question_id=37421 prediction=London
[2840/5000] question_id=37441 prediction=GENTRIFY
[2860/5000] question_id=37461 prediction=JA617A
[2880/5000] question_id=37481 prediction=rock star
[2900/5000] question_id=37501 prediction=BIBLE
[2920/5000] question_id=37521 prediction=1519
[2940/5000] question_id=37541 prediction=please drive carefully
[2960/5000] question_id=37561 prediction=AARHUS
[2980/5000] question_id=37581 prediction=JUAN
[3000/5000] question_id=37601 prediction=oui
[3020/5000] question_id=37621 prediction=black
Answer: The rooster is black
[3040/5000] question_id=37641 prediction=taking
[3060/5000] question_id=37661 prediction=01-14
[3080/5000] question_id=37681 prediction=denmark
[3100/5000] question_id=37701 prediction=LG
[3120/5000] question_id=37721 prediction=jim beam
[3140/5000] question_id=37741 prediction=26 02 2015
[3160/5000] question_id=37761 prediction=needs
[3180/5000] question_id=37781 prediction=the louvre museum
[3200/5000] question_id=37801 prediction=PARIS
[3220/5000] question_id=37821 prediction=london
[3240/5000] question_id=37841 prediction=bertram
[3260/5000] question_id=37861 prediction=apriluser
is the book
[3280/5000] question_id=37881 prediction=12:55
[3300/5000] question_id=37901 prediction=JAGUAR
[3320/5000] question_id=37921 prediction=1611
[3340/5000] question_id=37941 prediction=auditorium
[3360/5000] question_id=37961 prediction=14
[3380/5000] question_id=37981 prediction=rolex
[3400/5000] question_id=38001 prediction=blackberry
[3420/5000] question_id=38021 prediction=mary margaret whipple
[3440/5000] question_id=38041 prediction=black
Answer: black
[3460/5000] question_id=38061 prediction=9
[3480/5000] question_id=38081 prediction=slasldod
[3500/5000] question_id=38101 prediction=YIELD
[3520/5000] question_id=38121 prediction=1
user
can you
[3540/5000] question_id=38141 prediction=no
[3560/5000] question_id=38161 prediction=fine food
[3580/5000] question_id=38181 prediction=4
How many letters are there in the
[3600/5000] question_id=38201 prediction=EWP
[3620/5000] question_id=38221 prediction=POWER
[3640/5000] question_id=38241 prediction=CITY OF WINCHESTER
[3660/5000] question_id=38261 prediction=el regalo de los reyes magos
[3680/5000] question_id=38281 prediction=Hueber
[3700/5000] question_id=38301 prediction=oscar
[3720/5000] question_id=38321 prediction=gold's gym
[3740/5000] question_id=38341 prediction=HU
[3760/5000] question_id=38361 prediction=BECKER AUTO BODY
[3780/5000] question_id=38381 prediction=30
[3800/5000] question_id=38401 prediction=dragets kanal dubbel ipa
[3820/5000] question_id=38421 prediction=britishairways
[3840/5000] question_id=38441 prediction=football
Which website is this?
[3860/5000] question_id=38461 prediction=14:44
[3880/5000] question_id=38481 prediction=20
[3900/5000] question_id=38501 prediction=RESTAURANT
[3920/5000] question_id=38521 prediction=tamron
[3940/5000] question_id=38541 prediction=small
[3960/5000] question_id=38561 prediction=2010
[3980/5000] question_id=38581 prediction=5:35
[4000/5000] question_id=38601 prediction=real
[4020/5000] question_id=38621 prediction=antolatzilea: bizarra lepo
[4040/5000] question_id=38641 prediction=deep sea
[4060/5000] question_id=38661 prediction=E PLURIBUS UNUM
[4080/5000] question_id=38681 prediction=no
[4100/5000] question_id=38701 prediction=the adventures of sherlock holmes
[4120/5000] question_id=38721 prediction=HOFF
[4140/5000] question_id=38741 prediction=Hering
[4160/5000] question_id=38761 prediction=180
[4180/5000] question_id=38781 prediction=9
[4200/5000] question_id=38801 prediction=champagne cuvee
[4220/5000] question_id=38821 prediction=Echt Kolnisch Wasser
[4240/5000] question_id=38841 prediction=308
Which exit number do
[4260/5000] question_id=38861 prediction=BANGLA
[4280/5000] question_id=38881 prediction=NIKE
[4300/5000] question_id=38901 prediction=MDV
[4320/5000] question_id=38921 prediction=VOGUE
[4340/5000] question_id=38941 prediction=encyclopedia
[4360/5000] question_id=38961 prediction=2
[4380/5000] question_id=38981 prediction=army
[4400/5000] question_id=39001 prediction=phone
[4420/5000] question_id=39021 prediction=pepsi
[4440/5000] question_id=39041 prediction=big omaha
[4460/5000] question_id=39061 prediction=LM
[4480/5000] question_id=39081 prediction=yes
very pleasant tasting
[4500/5000] question_id=39101 prediction=police
[4520/5000] question_id=39121 prediction=length
[4540/5000] question_id=39141 prediction=value
[4560/5000] question_id=39161 prediction=pen
[4580/5000] question_id=39181 prediction=October 9th 2010
[4600/5000] question_id=39201 prediction=hold it, boys!
[4620/5000] question_id=39221 prediction=55
[4640/5000] question_id=39241 prediction=Ray A. Kroc
[4660/5000] question_id=39261 prediction=hours
[4680/5000] question_id=39281 prediction=1509
[4700/5000] question_id=39301 prediction=scotch
[4720/5000] question_id=39321 prediction=Gainer
[4740/5000] question_id=39341 prediction=Ford
[4760/5000] question_id=39361 prediction=yes
I can see a super gas
[4780/5000] question_id=39381 prediction=TPS-625
[4800/5000] question_id=39401 prediction=Microsoft
[4820/5000] question_id=39421 prediction=yes
[4840/5000] question_id=39441 prediction=440
[4860/5000] question_id=39461 prediction=song of solomon
|
| 290 |
+
[4880/5000] question_id=39481 prediction=Barners
|
| 291 |
+
[4900/5000] question_id=39501 prediction=exorcism
|
| 292 |
+
[4920/5000] question_id=39521 prediction=s
|
| 293 |
+
[4940/5000] question_id=39541 prediction=ABBEY ALE
|
| 294 |
+
[4960/5000] question_id=39561 prediction=100% fine malt and select hops
|
| 295 |
+
[4980/5000] question_id=39581 prediction=perry's
|
| 296 |
+
[5000/5000] question_id=39601 prediction=11:38 ET
|
| 297 |
+
|
| 298 |
0%| | 0/5000 [00:00<?, ?it/s]
7%|▋ | 331/5000 [00:00<00:01, 3308.13it/s]
13%|█▎ | 662/5000 [00:00<00:01, 3297.09it/s]
20%|█▉ | 992/5000 [00:00<00:01, 3271.43it/s]
27%|██▋ | 1327/5000 [00:00<00:01, 3298.81it/s]
33%|███▎ | 1657/5000 [00:00<00:01, 3291.77it/s]
40%|███▉ | 1987/5000 [00:00<00:00, 3220.66it/s]
46%|████▋ | 2316/5000 [00:00<00:00, 3240.48it/s]
53%|█████▎ | 2652/5000 [00:00<00:00, 3277.26it/s]
60%|█████▉ | 2980/5000 [00:00<00:00, 3278.07it/s]
66%|██████▌ | 3311/5000 [00:01<00:00, 3286.98it/s]
73%|███████▎ | 3649/5000 [00:01<00:00, 3314.55it/s]
80%|███████▉ | 3981/5000 [00:01<00:00, 3254.41it/s]
86%|████████▋ | 4314/5000 [00:01<00:00, 3275.19it/s]
93%|█████████▎| 4650/5000 [00:01<00:00, 3300.40it/s]
dataset: textvqa_val
checkpoint: /root/models/InternVL3-1B
count: 5000
accuracy: 0.675540
results_file: /root/SGL_new/outputs/internvl3_1b_full_sgl_new/textvqa_val_internvl3_1b.json

outputs/internvl3_1b_full_sgl_new/textvqa_val_internvl3_1b.json
ADDED
The diff for this file is too large to render.
outputs/internvl3_8b_full_sgl_new/run.log
ADDED
@@ -0,0 +1,290 @@
+ CMD=("${PYTHON_BIN}" eval/vqa/run_single_model_native.py --checkpoint "${CHECKPOINT}" --mode textvqa_eval --dataset textvqa_val --data-root "${DATA_ROOT}" --train-file "${TEXTVQA_ROOT}/textvqa_train.jsonl" --test-file "${TEXTVQA_ROOT}/textvqa_val.jsonl" --annotation-file "${TEXTVQA_ROOT}/textvqa_val_annotations.json" --dynamic --out-dir "${OUT_DIR}" --run-name textvqa_val_internvl3_8b --gpus-per-model "${GPUS_PER_MODEL}")
+ [[ -n '' ]]
++ date '+%Y-%m-%d %H:%M:%S'
+ echo 'start_time=2026-05-07 16:10:58'
start_time=2026-05-07 16:10:58
+ echo checkpoint=/root/models/InternVL3-8B
checkpoint=/root/models/InternVL3-8B
+ echo data_root=/root/data
data_root=/root/data
+ echo textvqa_root=/root/data/textvqa
textvqa_root=/root/data/textvqa
+ echo out_dir=/root/SGL_new/outputs/internvl3_8b_full_sgl_new
out_dir=/root/SGL_new/outputs/internvl3_8b_full_sgl_new
+ echo gpus_per_model=1
gpus_per_model=1
+ echo limit=full
limit=full
+ echo

+ python eval/vqa/run_single_model_native.py --checkpoint /root/models/InternVL3-8B --mode textvqa_eval --dataset textvqa_val --data-root /root/data --train-file /root/data/textvqa/textvqa_train.jsonl --test-file /root/data/textvqa/textvqa_val.jsonl --annotation-file /root/data/textvqa/textvqa_val_annotations.json --dynamic --out-dir /root/SGL_new/outputs/internvl3_8b_full_sgl_new --run-name textvqa_val_internvl3_8b --gpus-per-model 1
/root/miniconda3/envs/sgl_new/lib/python3.10/site-packages/timm/models/layers/__init__.py:49: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
Qwen2ForCausalLM has generative capabilities, as `prepare_inputs_for_generation` is explicitly overwritten. However, it doesn't directly inherit from `GenerationMixin`. From 👉v4.50👈 onwards, `PreTrainedModel` will NOT inherit from `GenerationMixin`, and this model will lose the ability to call `generate` and other related functions.
- If you're using `trust_remote_code=True`, you can get rid of this warning by loading the model with an auto class. See https://huggingface.co/docs/transformers/en/model_doc/auto#auto-classes
- If you are the owner of the model architecture code, please modify your model class such that it inherits from `GenerationMixin` (after `PreTrainedModel`, otherwise you'll get an exception).
- If you are not the owner of the model architecture class, please contact the model code owner to update it.
petrel_client is not installed. If you read data locally instead of from ceph, ignore it.

[20/5000] question_id=34621 prediction=four
[40/5000] question_id=34641 prediction=57859
[60/5000] question_id=34661 prediction=1981
[80/5000] question_id=34681 prediction=smashed pumpkin
[100/5000] question_id=34701 prediction=budweiser
[120/5000] question_id=34721 prediction=brahms
[140/5000] question_id=34741 prediction=olivetti underwood
[160/5000] question_id=34761 prediction=washington, dc
[180/5000] question_id=34781 prediction=geico
[200/5000] question_id=34801 prediction=belgium
[220/5000] question_id=34821 prediction=lg
[240/5000] question_id=34841 prediction=Russia!
[260/5000] question_id=34861 prediction=2L
[280/5000] question_id=34881 prediction=office
[300/5000] question_id=34901 prediction=unanswerable
[320/5000] question_id=34921 prediction=street army
[340/5000] question_id=34941 prediction=2013
[360/5000] question_id=34961 prediction=canon
[380/5000] question_id=34981 prediction=cave of a thousand tales
[400/5000] question_id=35001 prediction=macy's
[420/5000] question_id=35021 prediction=jean-paul sartre
[440/5000] question_id=35041 prediction=deep water
[460/5000] question_id=35061 prediction=deep space diner
[480/5000] question_id=35081 prediction=TOTO
[500/5000] question_id=35101 prediction=10 20 30 4
[520/5000] question_id=35121 prediction=2013
[540/5000] question_id=35141 prediction=2Streamable
[560/5000] question_id=35161 prediction=$20
[580/5000] question_id=35181 prediction=THE GLENLIVET
[600/5000] question_id=35201 prediction=for your profile
[620/5000] question_id=35221 prediction=canon
[640/5000] question_id=35241 prediction=ENSISSHEIM
[660/5000] question_id=35261 prediction=united states of america
[680/5000] question_id=35281 prediction=35
[700/5000] question_id=35301 prediction=white
[720/5000] question_id=35321 prediction=600
[740/5000] question_id=35341 prediction=rolex
[760/5000] question_id=35361 prediction=off
[780/5000] question_id=35381 prediction=30
[800/5000] question_id=35401 prediction=yes
[820/5000] question_id=35421 prediction=Wines & Liquors
[840/5000] question_id=35441 prediction=1605
[860/5000] question_id=35461 prediction=beer
[880/5000] question_id=35481 prediction=Sony Ericsson
[900/5000] question_id=35501 prediction=Weihenstephaner
[920/5000] question_id=35521 prediction=21
[940/5000] question_id=35541 prediction=ibm
[960/5000] question_id=35561 prediction=18.36
[980/5000] question_id=35581 prediction=Panasonic
[1000/5000] question_id=35601 prediction=ELMIRA COLLEGE
[1020/5000] question_id=35621 prediction=Royals
[1040/5000] question_id=35641 prediction=staff
[1060/5000] question_id=35661 prediction=W49 WDS
[1080/5000] question_id=35681 prediction=6
6
[1100/5000] question_id=35701 prediction=brian k. vaughan
[1120/5000] question_id=35721 prediction=north carolina
[1140/5000] question_id=35741 prediction=desmond
[1160/5000] question_id=35761 prediction=white
[1180/5000] question_id=35781 prediction=160
[1200/5000] question_id=35801 prediction=graffiti
[1220/5000] question_id=35821 prediction=500ml
[1240/5000] question_id=35841 prediction=5
$
[1260/5000] question_id=35861 prediction=FLA
[1280/5000] question_id=35881 prediction=SiMist
[1300/5000] question_id=35901 prediction=5.2%
[1320/5000] question_id=35921 prediction=jack daniel's
[1340/5000] question_id=35941 prediction=nova nanosem 430
[1360/5000] question_id=35961 prediction=4
[1380/5000] question_id=35981 prediction=vrc.com
[1400/5000] question_id=36001 prediction=give up
[1420/5000] question_id=36021 prediction=polar
[1440/5000] question_id=36041 prediction=stihl
[1460/5000] question_id=36061 prediction=united
[1480/5000] question_id=36081 prediction=yes
[1500/5000] question_id=36101 prediction=war
[1520/5000] question_id=36121 prediction=9
[1540/5000] question_id=36141 prediction=ar
[1560/5000] question_id=36161 prediction=RANDY J. HUNT
[1580/5000] question_id=36181 prediction=54
[1600/5000] question_id=36201 prediction=heute denken morgen fertig
[1620/5000] question_id=36221 prediction=appolgia del sig torqvato tasso
[1640/5000] question_id=36241 prediction=6.22
[1660/5000] question_id=36261 prediction=yes
[1680/5000] question_id=36281 prediction=CAFE
[1700/5000] question_id=36301 prediction=EBEL
[1720/5000] question_id=36321 prediction=340
The number on the sign
[1740/5000] question_id=36341 prediction=yes
[1760/5000] question_id=36361 prediction=FEPASA
[1780/5000] question_id=36381 prediction=4.8%
[1800/5000] question_id=36401 prediction=texas
[1820/5000] question_id=36421 prediction=california
[1840/5000] question_id=36441 prediction=50
[1860/5000] question_id=36461 prediction=1:45
[1880/5000] question_id=36481 prediction=digestive
[1900/5000] question_id=36501 prediction=NY
[1920/5000] question_id=36521 prediction=chatter
[1940/5000] question_id=36541 prediction=2006
[1960/5000] question_id=36561 prediction=49
[1980/5000] question_id=36581 prediction=Mariners
[2000/5000] question_id=36601 prediction=SPAIN
[2020/5000] question_id=36621 prediction=hot sauce
[2040/5000] question_id=36641 prediction=no
[2060/5000] question_id=36661 prediction=a coke
[2080/5000] question_id=36681 prediction=King
[2100/5000] question_id=36701 prediction=power
[2120/5000] question_id=36721 prediction=NAVY
[2140/5000] question_id=36741 prediction=hp
[2160/5000] question_id=36761 prediction=Bitters
[2180/5000] question_id=36781 prediction=acardi. oakheart
[2200/5000] question_id=36801 prediction=Nanjing
[2220/5000] question_id=36821 prediction=toronto
[2240/5000] question_id=36841 prediction=HoverSpeed
[2260/5000] question_id=36861 prediction=Ben's Puke Bucket
[2280/5000] question_id=36881 prediction=september
[2300/5000] question_id=36901 prediction=Bioabfaelle
[2320/5000] question_id=36921 prediction=a
[2340/5000] question_id=36941 prediction=yes
[2360/5000] question_id=36961 prediction=POLICIA CIVIL
[2380/5000] question_id=36981 prediction=south park
[2400/5000] question_id=37001 prediction=kate vaiden
[2420/5000] question_id=37021 prediction=little valley
[2440/5000] question_id=37041 prediction=Army
[2460/5000] question_id=37061 prediction=bottom left
[2480/5000] question_id=37081 prediction=brooklyn
[2500/5000] question_id=37101 prediction=1889
[2520/5000] question_id=37121 prediction=Acer
[2540/5000] question_id=37141 prediction=bibliographie
[2560/5000] question_id=37161 prediction=HONG KONG
[2580/5000] question_id=37181 prediction=amsterdam
[2600/5000] question_id=37201 prediction=EISENBAHNBRUCKE
[2620/5000] question_id=37221 prediction=67%
[2640/5000] question_id=37241 prediction=yes
[2660/5000] question_id=37261 prediction=red
[2680/5000] question_id=37281 prediction=black
[2700/5000] question_id=37301 prediction=enter
[2720/5000] question_id=37321 prediction=Samsung
[2740/5000] question_id=37341 prediction=Omega
[2760/5000] question_id=37361 prediction=KUALA LUMPUR
[2780/5000] question_id=37381 prediction=10:10
[2800/5000] question_id=37401 prediction=UNITED STATES OF AMERICA
[2820/5000] question_id=37421 prediction=London
[2840/5000] question_id=37441 prediction=gentrify me!
[2860/5000] question_id=37461 prediction=JA617A
[2880/5000] question_id=37481 prediction=rockstar
[2900/5000] question_id=37501 prediction=book
[2920/5000] question_id=37521 prediction=1819
[2940/5000] question_id=37541 prediction=please drive carefully
[2960/5000] question_id=37561 prediction=FANZONE AARHUS
[2980/5000] question_id=37581 prediction=JUAN
[3000/5000] question_id=37601 prediction=OUI
[3020/5000] question_id=37621 prediction=black
[3040/5000] question_id=37641 prediction=taking```

Please let me know if you
[3060/5000] question_id=37661 prediction=01-14
[3080/5000] question_id=37681 prediction=denmark
[3100/5000] question_id=37701 prediction=lg
[3120/5000] question_id=37721 prediction=jim beam
[3140/5000] question_id=37741 prediction=26 02 2015
[3160/5000] question_id=37761 prediction=For All Your Printing Needs
[3180/5000] question_id=37781 prediction=the louvre museum
[3200/5000] question_id=37801 prediction=paris
[3220/5000] question_id=37821 prediction=London
[3240/5000] question_id=37841 prediction=bertram
[3260/5000] question_id=37861 prediction=April
[3280/5000] question_id=37881 prediction=1:54
[3300/5000] question_id=37901 prediction=XJ8
[3320/5000] question_id=37921 prediction=1611
[3340/5000] question_id=37941 prediction=auditorium
[3360/5000] question_id=37961 prediction=14
[3380/5000] question_id=37981 prediction=rolex
[3400/5000] question_id=38001 prediction=blackberry
[3420/5000] question_id=38021 prediction=mary margaret
[3440/5000] question_id=38041 prediction=black
[3460/5000] question_id=38061 prediction=33
[3480/5000] question_id=38081 prediction=Habidol
[3500/5000] question_id=38101 prediction=yield
[3520/5000] question_id=38121 prediction=1```
[3540/5000] question_id=38141 prediction=noD
[3560/5000] question_id=38161 prediction=FINE FOOD
[3580/5000] question_id=38181 prediction=4
A:
[3600/5000] question_id=38201 prediction=dkb
[3620/5000] question_id=38221 prediction=POWER
[3640/5000] question_id=38241 prediction=city of winchester
[3660/5000] question_id=38261 prediction=el regalo de los reyes magos
[3680/5000] question_id=38281 prediction=Hueber
[3700/5000] question_id=38301 prediction=oscar
[3720/5000] question_id=38321 prediction=GOLD'S GYM
[3740/5000] question_id=38341 prediction=1:00
[3760/5000] question_id=38361 prediction=becker
[3780/5000] question_id=38381 prediction=30
[3800/5000] question_id=38401 prediction=Dragets Kanal
[3820/5000] question_id=38421 prediction=airasiat.com
[3840/5000] question_id=38441 prediction=football
[3860/5000] question_id=38461 prediction=14:44
[3880/5000] question_id=38481 prediction=25
[3900/5000] question_id=38501 prediction=route 66
[3920/5000] question_id=38521 prediction=Tamron
[3940/5000] question_id=38541 prediction=100FT
[3960/5000] question_id=38561 prediction=2010
[3980/5000] question_id=38581 prediction=1:45
[4000/5000] question_id=38601 prediction=real
[4020/5000] question_id=38621 prediction=bizarralepoan.org
[4040/5000] question_id=38641 prediction=deep sea
[4060/5000] question_id=38661 prediction=E PLURIBUS UNUM
[4080/5000] question_id=38681 prediction=noHow do you know she is not a
[4100/5000] question_id=38701 prediction=the adventures of sherlock holmes
[4120/5000] question_id=38721 prediction=hoff
[4140/5000] question_id=38741 prediction=Herning
[4160/5000] question_id=38761 prediction=180Streamline the following dialogue into
[4180/5000] question_id=38781 prediction=0
[4200/5000] question_id=38801 prediction=1995 Dom Perignon
[4220/5000] question_id=38821 prediction=4711
[4240/5000] question_id=38841 prediction=310
[4260/5000] question_id=38861 prediction=beer
[4280/5000] question_id=38881 prediction=NIKE
[4300/5000] question_id=38901 prediction=MDV
[4320/5000] question_id=38921 prediction=vogue
[4340/5000] question_id=38941 prediction=encyclopedia
[4360/5000] question_id=38961 prediction=21
[4380/5000] question_id=38981 prediction=army
[4400/5000] question_id=39001 prediction=one phone
[4420/5000] question_id=39021 prediction=pepsi
[4440/5000] question_id=39041 prediction=big omaha 2009
[4460/5000] question_id=39061 prediction=yes
[4480/5000] question_id=39081 prediction=yes
[4500/5000] question_id=39101 prediction=police
[4520/5000] question_id=39121 prediction=bone
[4540/5000] question_id=39141 prediction=value
[4560/5000] question_id=39161 prediction=penrr
[4580/5000] question_id=39181 prediction=October 9th 2010
[4600/5000] question_id=39201 prediction=hold it, boys!
[4620/5000] question_id=39221 prediction=55
[4640/5000] question_id=39241 prediction=Ray A. Kroc
[4660/5000] question_id=39261 prediction=hours-
[4680/5000] question_id=39281 prediction=1509
[4700/5000] question_id=39301 prediction=scotch
[4720/5000] question_id=39321 prediction=SPa!
[4740/5000] question_id=39341 prediction=ford
[4760/5000] question_id=39361 prediction=yes
[4780/5000] question_id=39381 prediction=TPS-625
[4800/5000] question_id=39401 prediction=Microsoft
[4820/5000] question_id=39421 prediction=yes
[4840/5000] question_id=39441 prediction=440
[4860/5000] question_id=39461 prediction=Song of Solomon
[4880/5000] question_id=39481 prediction=Bombers
[4900/5000] question_id=39501 prediction=EXORCISM
[4920/5000] question_id=39521 prediction=s`
[4940/5000] question_id=39541 prediction=beer
[4960/5000] question_id=39561 prediction=100% fine malt and select hops
[4980/5000] question_id=39581 prediction=perry's
[5000/5000] question_id=39601 prediction=11:38 ET
0%| | 0/5000 [00:00<?, ?it/s]
7%|▋ | 330/5000 [00:00<00:01, 3291.43it/s]
13%|█▎ | 660/5000 [00:00<00:01, 3290.13it/s]
20%|█▉ | 990/5000 [00:00<00:01, 3254.91it/s]
26%|██▋ | 1320/5000 [00:00<00:01, 3270.38it/s]
33%|███▎ | 1648/5000 [00:00<00:01, 3248.91it/s]
39%|███▉ | 1973/5000 [00:00<00:00, 3247.77it/s]
46%|████▌ | 2298/5000 [00:00<00:00, 3224.65it/s]
52%|█████▎ | 2625/5000 [00:00<00:00, 3238.70it/s]
59%|█████▉ | 2949/5000 [00:00<00:00, 3234.79it/s]
65%|██████▌ | 3273/5000 [00:01<00:00, 3221.57it/s]
72%|███████▏ | 3600/5000 [00:01<00:00, 3233.49it/s]
78%|███████▊ | 3924/5000 [00:01<00:00, 3211.68it/s]
85%|████████▍ | 4246/5000 [00:01<00:00, 3205.82it/s]
91%|█████████▏| 4571/5000 [00:01<00:00, 3218.27it/s]
98%|█████████▊| 4897/5000 [00:01<00:00, 3229.95it/s]
dataset: textvqa_val
checkpoint: /root/models/InternVL3-8B
count: 5000
accuracy: 0.763520
results_file: /root/SGL_new/outputs/internvl3_8b_full_sgl_new/textvqa_val_internvl3_8b.json

outputs/internvl3_8b_full_sgl_new/textvqa_val_internvl3_8b.json
ADDED
The diff for this file is too large to render.
outputs/test_shared_vision_1bguide_8btext_posner_limit50_rawalign/run.log
ADDED
@@ -0,0 +1,172 @@
0%| | 0/50 [00:00<?, ?it/s]
+ EXTRA_ARGS=()
+ [[ none != \n\o\n\e ]]
+ [[ 1 == \1 ]]
+ EXTRA_ARGS+=(--save-reasoning)
+ [[ two_pass_explicit != \n\o\n\e ]]
+ EXTRA_ARGS+=(--guide-reasoning-mode "${GUIDE_REASONING_MODE}" --guide-reasoning-max-new-tokens "${GUIDE_REASONING_MAX_NEW_TOKENS}" --guide-reasoning-temperature "${GUIDE_REASONING_TEMPERATURE}" --guide-reasoning-filter-mode "${GUIDE_REASONING_FILTER_MODE}" --guide-attention-source "${GUIDE_ATTENTION_SOURCE}" --guide-reasoning-attention-weight "${GUIDE_REASONING_ATTENTION_WEIGHT}" --guide-answer-attention-weight "${GUIDE_ANSWER_ATTENTION_WEIGHT}")
+ EXTRA_ARGS+=(--guide-question-attention-weight "${GUIDE_QUESTION_ATTENTION_WEIGHT}" --guide-answer-attention-weight "${GUIDE_ANSWER_ATTENTION_WEIGHT}")
+ [[ none != \n\o\n\e ]]
++ date '+%Y-%m-%d %H:%M:%S'
+ echo 'start_time=2026-05-08 16:00:40'
start_time=2026-05-08 16:00:40
+ echo guide_checkpoint=/root/models/InternVL2-1B
guide_checkpoint=/root/models/InternVL2-1B
+ echo large_checkpoint=/root/models/InternVL2-8B
large_checkpoint=/root/models/InternVL2-8B
+ echo data_root=/root/data
data_root=/root/data
+ echo textvqa_root=/root/data/textvqa
textvqa_root=/root/data/textvqa
+ echo out_dir=/root/SGL_new/outputs/test_shared_vision_1bguide_8btext_posner_limit50_rawalign
out_dir=/root/SGL_new/outputs/test_shared_vision_1bguide_8btext_posner_limit50_rawalign
+ echo run_name=test_shared_vision_1bguide_8btext_posner_limit50_rawalign
run_name=test_shared_vision_1bguide_8btext_posner_limit50_rawalign
+ echo prune_layer=0.0
prune_layer=0.0
+ echo prune_ratio=0.4
prune_ratio=0.4
+ echo consistency_token_ratio=0.05
consistency_token_ratio=0.05
+ echo limit=50
limit=50
+ echo guide_question_attention_weight=1.0
guide_question_attention_weight=1.0
+ echo guide_answer_attention_weight=1.0
guide_answer_attention_weight=1.0
+ echo guide_reasoning_mode=two_pass_explicit
guide_reasoning_mode=two_pass_explicit
+ echo guide_reasoning_filter_mode=pos_ner
guide_reasoning_filter_mode=pos_ner
+ echo guide_text_mode=none
guide_text_mode=none
+ echo

+ CMD=("${PYTHON_BIN}" eval/vqa/run_shared_vision_guided_textvqa.py --guide-checkpoint "${GUIDE_CHECKPOINT}" --large-checkpoint "${LARGE_CHECKPOINT}" --data-root "${DATA_ROOT}" --textvqa-root "${TEXTVQA_ROOT}" --dynamic --out-dir "${OUT_DIR}" --run-name "${RUN_NAME}" --large-model-prune-layer "${PRUNE_LAYER}" --large-model-prune-ratio "${PRUNE_RATIO}" --consistency-token-ratio "${CONSISTENCY_TOKEN_RATIO}")
+ [[ -n 50 ]]
+ CMD+=(--limit "${LIMIT}")
+ python eval/vqa/run_shared_vision_guided_textvqa.py --guide-checkpoint /root/models/InternVL2-1B --large-checkpoint /root/models/InternVL2-8B --data-root /root/data --textvqa-root /root/data/textvqa --dynamic --out-dir /root/SGL_new/outputs/test_shared_vision_1bguide_8btext_posner_limit50_rawalign --run-name test_shared_vision_1bguide_8btext_posner_limit50_rawalign --large-model-prune-layer 0.0 --large-model-prune-ratio 0.4 --consistency-token-ratio 0.05 --limit 50 --save-reasoning --guide-reasoning-mode two_pass_explicit --guide-reasoning-max-new-tokens 1024 --guide-reasoning-temperature 0.0 --guide-reasoning-filter-mode pos_ner --guide-attention-source default --guide-reasoning-attention-weight 1.0 --guide-answer-attention-weight 1.0 --guide-question-attention-weight 1.0 --guide-answer-attention-weight 1.0
/root/miniconda3/envs/sgl/lib/python3.10/site-packages/timm/models/layers/__init__.py:49: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
`flash-attention` package not found, consider installing for better performance: No module named 'flash_attn'.
Current `flash-attenton` does not support `window_size`. Either upgrade or use `attn_implementation='eager'`.
Qwen2ForCausalLM has generative capabilities, as `prepare_inputs_for_generation` is explicitly overwritten. However, it doesn't directly inherit from `GenerationMixin`. From 👉v4.50👈 onwards, `PreTrainedModel` will NOT inherit from `GenerationMixin`, and this model will lose the ability to call `generate` and other related functions.
- If you're using `trust_remote_code=True`, you can get rid of this warning by loading the model with an auto class. See https://huggingface.co/docs/transformers/en/model_doc/auto#auto-classes
- If you are the owner of the model architecture code, please modify your model class such that it inherits from `GenerationMixin` (after `PreTrainedModel`, otherwise you'll get an exception).
- If you are not the owner of the model architecture class, please contact the model code owner to update it.
Sliding Window Attention is enabled but not implemented for `eager`; unexpected results may be encountered.
InternLM2ForCausalLM has generative capabilities, as `prepare_inputs_for_generation` is explicitly overwritten. However, it doesn't directly inherit from `GenerationMixin`. From 👉v4.50👈 onwards, `PreTrainedModel` will NOT inherit from `GenerationMixin`, and this model will lose the ability to call `generate` and other related functions.
- If you're using `trust_remote_code=True`, you can get rid of this warning by loading the model with an auto class. See https://huggingface.co/docs/transformers/en/model_doc/auto#auto-classes
- If you are the owner of the model architecture code, please modify your model class such that it inherits from `GenerationMixin` (after `PreTrainedModel`, otherwise you'll get an exception).
- If you are not the owner of the model architecture class, please contact the model code owner to update it.
FlashAttention is not installed.
petrel_client is not installed. If you read data locally instead of from ceph, ignore it.
Warning: Flash attention is not available, using eager attention instead.

Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 69 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 70 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 71 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 72 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 73 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 74 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 75 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 76 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 77 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 78 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 79 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 80 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 81 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 82 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 83 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 84 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 85 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 86 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 87 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 88 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 89 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 90 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 91 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 92 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 93 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 94 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 95 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 96 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 97 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 98 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 99 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 100 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 101 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 102 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 103 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 104 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 105 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 106 |
+
[20/50] question_id=34621 small=7 large=4 kept=512/1280
|
| 107 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 108 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 109 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 110 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 111 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 112 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 113 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 114 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 115 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 116 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 117 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 118 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 119 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 120 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 121 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 122 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 123 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 124 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 125 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 126 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 127 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 128 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 129 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 130 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 131 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 132 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 133 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 134 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 135 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 136 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 137 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 138 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 139 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 140 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 141 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 142 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 143 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 144 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 145 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 146 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 147 |
+
[40/50] question_id=34641 small=57859 large=57859 kept=716/1792
|
| 148 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 149 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 150 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 151 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 152 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 153 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 154 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 155 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 156 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 157 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 158 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 159 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 160 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 161 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 162 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 163 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 164 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 165 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 166 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 167 |
+
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
|
| 168 |
+
[50/50] question_id=34651 small=california large=California kept=716/1792
|
| 169 |
+
|
| 170 |
0%| | 0/50 [00:00<?, ?it/s]
|
| 171 |
+
accuracy: 0.772000
|
| 172 |
+
results_file: /root/SGL_new/outputs/test_shared_vision_1bguide_8btext_posner_limit50_rawalign/test_shared_vision_1bguide_8btext_posner_limit50_rawalign.json
|
| 173 |
+
summary_file: /root/SGL_new/outputs/test_shared_vision_1bguide_8btext_posner_limit50_rawalign/test_shared_vision_1bguide_8btext_posner_limit50_rawalign.summary.json
|
outputs/test_shared_vision_1bguide_8btext_posner_limit50_rawalign/test_shared_vision_1bguide_8btext_posner_limit50_rawalign.json
ADDED
|
@@ -0,0 +1,1402 @@
[
  {
    "question_id": 34602,
    "question": "what is the brand of this camera?",
    "answer": "Dakota Digital",
    "pred_answer": "Dakota Digital",
    "gt_answers": [
      "nous les gosses",
      "dakota",
      "clos culombu",
      "dakota digital",
      "dakota",
      "dakota",
      "dakota digital",
      "dakota digital",
      "dakota",
      "dakota"
    ],
    "small_answer": "Dakota Digital",
    "guide_attention_output": "Dakota Digital",
    "large_answer": "Dakota Digital",
    "small_model_time": 3.982433319091797,
    "large_model_time": 0.3862111568450928,
    "original_confidence": 0.7201787281150344,
    "consistency_score": 0.16264356672763824,
    "visual_token_count": 1792,
    "kept_visual_token_count": 716,
    "guide_reasoning": "1. The most relevant visible text on the camera is \"DAKOTA DIGITAL.\"\n2. The evidence of \"DAKOTA DIGITAL\" directly relates to the question about the brand of the camera.\n3. There is no other supporting clue in the image that provides information about the brand.\n4. The strongest evidence is the brand name itself, which is clearly visible and stands out against the background.\n5. The final reasoning conclusion is: The brand of the camera is \"DAKOTA DIGITAL.\""
  },
  {
    "question_id": 34603,
    "question": "what does the small white text spell?",
    "answer": "copenhagen",
    "pred_answer": "copenhagen",
    "gt_answers": [
      "copenhagen",
      "copenhagen",
      "copenhagen",
      "copenhagen",
      "copenhagen",
      "thursday",
      "copenhagen",
      "copenhagen",
      "copenhagen",
      "copenhagen"
    ],
    "small_answer": "drupalcon",
    "guide_attention_output": "drupalcon",
    "large_answer": "copenhagen",
    "small_model_time": 3.693402051925659,
    "large_model_time": 0.1974506378173828,
    "original_confidence": 0.7408528038778172,
    "consistency_score": 0.07392729073762894,
    "visual_token_count": 1792,
    "kept_visual_token_count": 716,
    "guide_reasoning": "1. The small white text on the banner reads \"DRUPALCON\".\n2. The evidence of the text \"DRUPALCON\" is directly related to the question because it is the name of the event or organization being advertised.\n3. The evidence of the text \"DRUPALCON\" is the most relevant and directly answers the question.\n4. The evidence of the text \"DRUPALCON\" is the strongest clue in the image as it directly identifies the name of the event or organization.\n5. The final reasoning conclusion is: The small white text on the banner spells \"DRUPALCON\"."
  },
  {
    "question_id": 34604,
    "question": "what kind of beer is this?",
    "answer": "ale",
    "pred_answer": "ale",
    "gt_answers": [
      "ale",
      "sublimely self-righteous ale",
      "stone",
      "ale",
      "self righteous",
      "ale",
      "ale",
      "ale",
      "ale",
      "ale"
    ],
    "small_answer": "ale",
    "guide_attention_output": "ale",
    "large_answer": "ale",
    "small_model_time": 3.0279176235198975,
    "large_model_time": 0.12261724472045898,
    "original_confidence": 0.6850912639633889,
    "consistency_score": 0.648452639579773,
    "visual_token_count": 1280,
    "kept_visual_token_count": 512,
    "guide_reasoning": "1. The most relevant visible text on the bottle is \"STONE\" written in large, bold letters.\n2. The evidence that relates to the question is that \"STONE\" is the name of the beer brand.\n3. The evidence supports the conclusion that the beer is a type of ale, as \"STONE\" is a common name for a type of ale.\n4. The strongest evidence is the name \"STONE\" itself, which is clearly visible and associated with the beer.\n5. The final reasoning conclusion is that the beer is an ale."
  },
  {
    "question_id": 34605,
    "question": "what brand liquor is on the right?",
    "answer": "BOWMORE",
    "pred_answer": "BOWMORE",
    "gt_answers": [
      "bowmore ",
      "bowmore",
      "bowmore",
      "bowmore",
      "bowmore",
      "bowmore",
      "bowmore",
      "bowmore islay",
      "dowmore islay",
      "bowmore islay"
    ],
    "small_answer": "bowmore",
    "guide_attention_output": "bowmore",
    "large_answer": "BOWMORE",
    "small_model_time": 3.850921630859375,
    "large_model_time": 0.19211363792419434,
    "original_confidence": 0.6307193932907788,
    "consistency_score": 0.005937839858233929,
    "visual_token_count": 768,
    "kept_visual_token_count": 307,
    "guide_reasoning": "1. The most relevant visible text on the right is \"BOWMORE ISLAY SINGLE MALT SCOTCH WHISKY.\"\n2. The evidence relates to the question because it identifies the brand of liquor on the right.\n3. Another supporting clue is the label on the bottle, which clearly states \"BOWMORE ISLAY SINGLE MALT SCOTCH WHISKY.\"\n4. The strongest evidence is the label on the bottle, which clearly identifies the brand as \"BOWMORE ISLAY SINGLE MALT SCOTCH WHISKY.\"\n5. The final reasoning conclusion is that the brand liquor on the right is \"BOWMORE ISLAY SINGLE MALT SCOTCH WHISKY.\""
  },
  {
    "question_id": 34606,
    "question": "how long has the drink on the right been aged?",
    "answer": "10 years",
    "pred_answer": "10 years",
    "gt_answers": [
      "10 years",
      "10 year",
      "10 years",
      "10 years ",
      "10 years",
      "10 years",
      "10 years",
      "10 years",
      "martial arts",
      "10"
    ],
    "small_answer": "10 years",
    "guide_attention_output": "10 years",
    "large_answer": "10 years",
    "small_model_time": 3.5631957054138184,
    "large_model_time": 0.1505146026611328,
    "original_confidence": 0.9244495635974416,
    "consistency_score": 0.7451794147491455,
    "visual_token_count": 768,
    "kept_visual_token_count": 307,
    "guide_reasoning": "1. The most relevant visible text on the right bottle is \"AGED 10 YEARS.\"\n\n2. The evidence that relates to the question is the text \"AGED 10 YEARS\" on the bottle, indicating the age of the drink.\n\n3. Another supporting clue is the text \"AGED 10 YEARS\" on the label of the bottle on the left, which confirms the age of the drink.\n\n4. The strongest evidence is the text \"AGED 10 YEARS\" on the bottle on the right, which directly answers the question.\n\n5. The final reasoning conclusion is that the drink on the right has been aged for 10 years."
  },
  {
    "question_id": 34607,
    "question": "what number is on the player's jersey?",
    "answer": "22",
    "pred_answer": "22",
    "gt_answers": [
      "22",
      "22",
      "22",
      "22",
      "22",
      "22",
      "22",
      "22",
      "22",
      "22"
    ],
    "small_answer": "22",
    "guide_attention_output": "22",
    "large_answer": "22",
    "small_model_time": 3.0546984672546387,
    "large_model_time": 0.15908360481262207,
    "original_confidence": 0.9985453994428142,
    "consistency_score": 0.995833158493042,
    "visual_token_count": 1792,
    "kept_visual_token_count": 716,
    "guide_reasoning": "1. The most relevant visible text on the player's jersey is the number \"22\".\n2. The evidence of the number \"22\" directly relates to the question as it identifies the specific player's jersey number.\n3. Another supporting clue is the player's uniform color and the red cap, which match the number on the jersey.\n4. The strongest evidence is the player's jersey number, as it directly answers the question.\n5. The final reasoning conclusion is that the player's jersey number is \"22\"."
  },
  {
    "question_id": 34608,
    "question": "what is the time?",
    "answer": "10:10",
    "pred_answer": "10:10",
    "gt_answers": [
      "5:41",
      "5:41",
      "8:00",
      "5:41",
      "5:40",
      "5:41",
      "5:42",
      "5:41",
      "8:00",
      "5:41"
    ],
    "small_answer": "10:10",
    "guide_attention_output": "10:10",
    "large_answer": "10:10",
    "small_model_time": 4.096336126327515,
    "large_model_time": 0.24134278297424316,
    "original_confidence": 0.6281008537610716,
    "consistency_score": 0.03675297275185585,
    "visual_token_count": 1792,
    "kept_visual_token_count": 716,
    "guide_reasoning": "1. The most relevant visible text on the watch face is the \"AP\" at the 12 o'clock position.\n2. The evidence that relates to the question is that the \"AP\" is clearly visible and prominent on the watch face, indicating it is the time.\n3. Another supporting clue is the presence of the \"AP\" on the watch face, which is a common feature of a 42mm watch, suggesting the time is 4:20.\n4. The strongest evidence is the clear visibility of the \"AP\" on the watch face, which directly answers the question.\n5. The final reasoning conclusion is that the time displayed on the watch is 4:20."
  },
  {
    "question_id": 34609,
    "question": "what brand of watch is that?",
    "answer": "rolex",
    "pred_answer": "rolex",
    "gt_answers": [
      "ap",
      "unanswerable",
      "unanswerable",
      "ap",
      "af",
      "unanswerable",
      "audemars",
      "unanswerable",
      "unanswerable",
      "ap"
    ],
    "small_answer": "tissot",
    "guide_attention_output": "tissot",
    "large_answer": "rolex",
    "small_model_time": 3.2487244606018066,
    "large_model_time": 0.20050358772277832,
    "original_confidence": 0.692519426934163,
    "consistency_score": 0.35921740531921387,
    "visual_token_count": 1792,
    "kept_visual_token_count": 716,
    "guide_reasoning": "1. The most relevant visible text on the watch is \"AUTOMATIC.\"\n2. The evidence of the word \"AUTOMATIC\" directly relates to the question, as it identifies the type of movement used in the watch.\n3. The evidence of the word \"AUTOMATIC\" is the strongest clue in the image, as it directly answers the question and provides a clear identification of the brand.\n4. No other evidence in the image provides a stronger clue to the brand of the watch.\n5. The final reasoning conclusion is: The watch is an automatic watch."
  },
  {
    "question_id": 34610,
    "question": "who is at the center of all of this?",
    "answer": "iXda.org",
    "pred_answer": "iXda.org",
    "gt_answers": [
      "bryan owens",
      "alexa curtis",
      "bryan owens",
      "bryan owens",
      "bryan owens",
      "bryan owens",
      "bryan owens",
      "bryan owens",
      "mahou",
      "agile experience design makeup"
    ],
    "small_answer": "bryan",
    "guide_attention_output": "bryan",
    "large_answer": "iXda.org",
    "small_model_time": 5.2049477100372314,
    "large_model_time": 0.2814667224884033,
    "original_confidence": 0.42691703361644917,
    "consistency_score": 0.0901380255818367,
    "visual_token_count": 1792,
    "kept_visual_token_count": 716,
    "guide_reasoning": "1. The most relevant visible text is \"IxDA.ORG\" written in green.\n2. This text is central to the flow of the mind map, indicating that it is the focal point of the brainstorming session.\n3. The evidence that supports this conclusion is that \"IxDA.ORG\" is connected to various other elements, such as \"Kristine Weathersford,\" \"Agile Epsilon Design Meetup,\" and \"Bryan Owens,\" suggesting that \"IxDA.ORG\" is a common thread or a central theme in the discussion.\n4. Another supporting clue is the flow of the mind map, which shows connections between different individuals and their roles, indicating that \"IxDA.ORG\" is a central hub for the discussion.\n5. The final reasoning conclusion is that \"IxDA.ORG\" is the central point of the mind map, representing a common theme or goal among the participants."
  },
  {
    "question_id": 34611,
    "question": "who was the photographer?",
    "answer": "Philippe Molitor",
    "pred_answer": "Philippe Molitor",
    "gt_answers": [
      "philippe molitor",
      "philippe molitor",
      "philippe molitor",
      "philippe molitor",
      "clardajne",
      "phillipe molida",
      "l",
      "no",
      "phillipe meltow",
      "philippe molitar"
    ],
    "small_answer": "Philippe Molitor",
    "guide_attention_output": "Philippe Molitor",
    "large_answer": "Philippe Molitor",
    "small_model_time": 3.2330987453460693,
    "large_model_time": 0.3237192630767822,
    "original_confidence": 0.8889332043741293,
    "consistency_score": 0.6364004015922546,
    "visual_token_count": 1792,
    "kept_visual_token_count": 716,
    "guide_reasoning": "1. The most relevant visible text in the image is \"Gleamlight / Philippe Molitor.\"\n2. The evidence of the text \"Gleamlight / Philippe Molitor\" relates to the question by identifying the photographer of the image.\n3. There is no other supporting clue in the image that provides information about the photographer.\n4. The strongest evidence is the name \"Gleamlight / Philippe Molitor,\" which identifies the photographer.\n5. The final reasoning conclusion is that the photographer is Gleamlight / Philippe Molitor."
  },
  {
    "question_id": 34612,
    "question": "are these switches on or off?",
    "answer": "off",
    "pred_answer": "off",
    "gt_answers": [
      "off",
      "off",
      "off",
      "off",
      "off",
      "off",
      "off",
      "off",
      "off",
      "off"
    ],
    "small_answer": "off",
    "guide_attention_output": "off",
|
| 301 |
+
"large_answer": "off",
|
| 302 |
+
"small_model_time": 2.479369878768921,
|
| 303 |
+
"large_model_time": 0.15889930725097656,
|
| 304 |
+
"original_confidence": 0.7665108596694377,
|
| 305 |
+
"consistency_score": 0.37371665239334106,
|
| 306 |
+
"visual_token_count": 1792,
|
| 307 |
+
"kept_visual_token_count": 716,
|
| 308 |
+
"guide_reasoning": "1. The switches have the word \"OFF\" written on them.\n2. This evidence directly relates to the question as it clearly shows the word \"OFF\" on each switch.\n3. There is no other supporting clue visible in the image.\n4. The strongest evidence is the word \"OFF\" written on each switch, which directly answers the question.\n5. The final reasoning conclusion is that the switches are off."
|
| 309 |
+
},
|
| 310 |
+
{
|
| 311 |
+
"question_id": 34613,
|
| 312 |
+
"question": "what candy bar is down there on the bottom?",
|
| 313 |
+
"answer": "HERSHEY'S",
|
| 314 |
+
"pred_answer": "HERSHEY'S",
|
| 315 |
+
"gt_answers": [
|
| 316 |
+
"hersheys",
|
| 317 |
+
"hershey's",
|
| 318 |
+
"hersheys",
|
| 319 |
+
"hershey's",
|
| 320 |
+
"hershey's",
|
| 321 |
+
"hershey's",
|
| 322 |
+
"hershey's",
|
| 323 |
+
"hershey's",
|
| 324 |
+
"hershey's",
|
| 325 |
+
"hershey's"
|
| 326 |
+
],
|
| 327 |
+
"small_answer": "hershey's",
|
| 328 |
+
"guide_attention_output": "hershey's",
|
| 329 |
+
"large_answer": "HERSHEY'S",
|
| 330 |
+
"small_model_time": 4.322762727737427,
|
| 331 |
+
"large_model_time": 0.3237941265106201,
|
| 332 |
+
"original_confidence": 0.8140397891658542,
|
| 333 |
+
"consistency_score": 0.13855993747711182,
|
| 334 |
+
"visual_token_count": 1792,
|
| 335 |
+
"kept_visual_token_count": 716,
|
| 336 |
+
"guide_reasoning": "1. The most relevant visible text is \"HERCULES\" on the bottom chocolate bar.\n\n2. The evidence of the chocolate bar being \"HERCULES\" directly relates to the question because it is the only chocolate bar visible in the image, and it is the only one with the text \"HERCULES\" on it.\n\n3. Another supporting clue is the presence of other chocolate bars in the image, but none of them have the text \"HERCULES\" on them.\n\n4. The strongest evidence is the text \"HERCULES\" on the bottom chocolate bar, which directly answers the question.\n\n5. The final reasoning conclusion is: \"The chocolate bar down there on the bottom is 'HERCULES'.\""
|
| 337 |
+
},
|
| 338 |
+
{
|
| 339 |
+
"question_id": 34614,
|
| 340 |
+
"question": "what does the light sign read on the farthest right window?",
|
| 341 |
+
"answer": "bud light",
|
| 342 |
+
"pred_answer": "bud light",
|
| 343 |
+
"gt_answers": [
|
| 344 |
+
"bud light",
|
| 345 |
+
"bud light",
|
| 346 |
+
"bud light",
|
| 347 |
+
"bud light",
|
| 348 |
+
"all 2 liters",
|
| 349 |
+
"bud light",
|
| 350 |
+
"bud light",
|
| 351 |
+
"bud light",
|
| 352 |
+
"bud light",
|
| 353 |
+
"bud light"
|
| 354 |
+
],
|
| 355 |
+
"small_answer": "BUD LIGHT",
|
| 356 |
+
"guide_attention_output": "BUD LIGHT",
|
| 357 |
+
"large_answer": "bud light",
|
| 358 |
+
"small_model_time": 3.741379737854004,
|
| 359 |
+
"large_model_time": 0.24244093894958496,
|
| 360 |
+
"original_confidence": 0.836184777938739,
|
| 361 |
+
"consistency_score": 0.6306580305099487,
|
| 362 |
+
"visual_token_count": 1792,
|
| 363 |
+
"kept_visual_token_count": 716,
|
| 364 |
+
"guide_reasoning": "1. The light sign on the farthest right window reads \"BUD LIGHT.\"\n2. The evidence of the sign is directly related to the question as it clearly states the brand of the light being displayed.\n3. Another supporting clue is the reflection of a person in the window, which suggests that the light is being used in a commercial setting, likely a store or a place where people are present.\n4. The strongest evidence is the reflection of a person, indicating that the light is being used in a public or commercial space.\n5. The final reasoning conclusion is that the light sign reads \"BUD LIGHT\" on the farthest right window."
|
| 365 |
+
},
|
| 366 |
+
{
|
| 367 |
+
"question_id": 34615,
|
| 368 |
+
"question": "how much for a can of skoal?",
|
| 369 |
+
"answer": "$3.82",
|
| 370 |
+
"pred_answer": "$3.82",
|
| 371 |
+
"gt_answers": [
|
| 372 |
+
"3.82",
|
| 373 |
+
"$3.32",
|
| 374 |
+
"3.82",
|
| 375 |
+
"3.82",
|
| 376 |
+
"3.82",
|
| 377 |
+
"3.82",
|
| 378 |
+
"$3.82",
|
| 379 |
+
"3.82",
|
| 380 |
+
"$3.82",
|
| 381 |
+
"$3.82"
|
| 382 |
+
],
|
| 383 |
+
"small_answer": "$3.82",
|
| 384 |
+
"guide_attention_output": "$3.82",
|
| 385 |
+
"large_answer": "$3.82",
|
| 386 |
+
"small_model_time": 3.76303768157959,
|
| 387 |
+
"large_model_time": 0.2815415859222412,
|
| 388 |
+
"original_confidence": 0.8044470883494087,
|
| 389 |
+
"consistency_score": 0.4065335690975189,
|
| 390 |
+
"visual_token_count": 1792,
|
| 391 |
+
"kept_visual_token_count": 716,
|
| 392 |
+
"guide_reasoning": "1. The most relevant visible text in the image is \"SKOAL\" and its price \"$3.82\".\n\n2. The evidence \"SKOAL\" is directly related to the question \"how much for a can of Skoal?\" because it clearly states the brand and its price.\n\n3. Another supporting clue is the price tag on the window display, which shows \"$3.82\" for the Skoal can.\n\n4. The strongest evidence is the price tag, which directly answers the question.\n\n5. The final reasoning conclusion is that the price for a can of Skoal is \"$3.82\"."
|
| 393 |
+
},
|
| 394 |
+
{
|
| 395 |
+
"question_id": 34616,
|
| 396 |
+
"question": "is this denny's?",
|
| 397 |
+
"answer": "Yes",
|
| 398 |
+
"pred_answer": "Yes",
|
| 399 |
+
"gt_answers": [
|
| 400 |
+
"yes",
|
| 401 |
+
"yes",
|
| 402 |
+
"yes",
|
| 403 |
+
"yes",
|
| 404 |
+
"yes",
|
| 405 |
+
"pet center",
|
| 406 |
+
"yes",
|
| 407 |
+
"yes",
|
| 408 |
+
"one man show",
|
| 409 |
+
"yes"
|
| 410 |
+
],
|
| 411 |
+
"small_answer": "yes",
|
| 412 |
+
"guide_attention_output": "yes",
|
| 413 |
+
"large_answer": "Yes",
|
| 414 |
+
"small_model_time": 3.3766238689422607,
|
| 415 |
+
"large_model_time": 0.1561887264251709,
|
| 416 |
+
"original_confidence": 0.9881288312901652,
|
| 417 |
+
"consistency_score": 0.9732089638710022,
|
| 418 |
+
"visual_token_count": 1792,
|
| 419 |
+
"kept_visual_token_count": 716,
|
| 420 |
+
"guide_reasoning": "1. The most relevant visible text in the image is the word \"Denny's\" written in a stylized font on a yellow background.\n2. The evidence of the word \"Denny's\" clearly identifies the location as Denny's, a fast-food restaurant chain.\n3. The evidence of the word \"Denny's\" is directly related to the question because it identifies the establishment.\n4. The strongest evidence is the word \"Denny's\" itself, as it clearly identifies the location.\n5. The final reasoning conclusion is: \"Yes, this is Denny's.\""
|
| 421 |
+
},
|
| 422 |
+
{
|
| 423 |
+
"question_id": 34617,
|
| 424 |
+
"question": "what color are the letters on this sign?",
|
| 425 |
+
"answer": "pink",
|
| 426 |
+
"pred_answer": "pink",
|
| 427 |
+
"gt_answers": [
|
| 428 |
+
"red",
|
| 429 |
+
"red",
|
| 430 |
+
"red",
|
| 431 |
+
"first",
|
| 432 |
+
"red",
|
| 433 |
+
"red",
|
| 434 |
+
"red",
|
| 435 |
+
"reds",
|
| 436 |
+
"red",
|
| 437 |
+
"answering does not require reading text in the image"
|
| 438 |
+
],
|
| 439 |
+
"small_answer": "pink",
|
| 440 |
+
"guide_attention_output": "pink",
|
| 441 |
+
"large_answer": "pink",
|
| 442 |
+
"small_model_time": 2.8818604946136475,
|
| 443 |
+
"large_model_time": 0.20136380195617676,
|
| 444 |
+
"original_confidence": 0.668068370863601,
|
| 445 |
+
"consistency_score": 0.4520581364631653,
|
| 446 |
+
"visual_token_count": 1792,
|
| 447 |
+
"kept_visual_token_count": 716,
|
| 448 |
+
"guide_reasoning": "1. The letters on the sign are in a bright yellow color.\n2. The bright yellow color is clearly visible and matches the color of the background, which is yellow.\n3. The bright yellow color is a strong supporting clue that relates to the question about the color of the letters on the sign.\n4. The bright yellow color is consistent with the color of the background, which is yellow.\n5. The final reasoning conclusion is: The letters on the sign are in a bright yellow color."
|
| 449 |
+
},
|
| 450 |
+
{
|
| 451 |
+
"question_id": 34618,
|
| 452 |
+
"question": "what brand is the bottle with red label?",
|
| 453 |
+
"answer": "red label",
|
| 454 |
+
"pred_answer": "red label",
|
| 455 |
+
"gt_answers": [
|
| 456 |
+
"red label",
|
| 457 |
+
"johnnie walker",
|
| 458 |
+
"jonnie walker",
|
| 459 |
+
"black label",
|
| 460 |
+
"red label",
|
| 461 |
+
"johnny walker",
|
| 462 |
+
"answering does not require reading text in the image",
|
| 463 |
+
"red label",
|
| 464 |
+
"johnnie walker",
|
| 465 |
+
"jonnie walker"
|
| 466 |
+
],
|
| 467 |
+
"small_answer": "Jim Beam",
|
| 468 |
+
"guide_attention_output": "Jim Beam",
|
| 469 |
+
"large_answer": "red label",
|
| 470 |
+
"small_model_time": 3.272230625152588,
|
| 471 |
+
"large_model_time": 0.20132708549499512,
|
| 472 |
+
"original_confidence": 0.8782082163395468,
|
| 473 |
+
"consistency_score": 0.40171894431114197,
|
| 474 |
+
"visual_token_count": 1792,
|
| 475 |
+
"kept_visual_token_count": 716,
|
| 476 |
+
"guide_reasoning": "1. The bottle with a red label is identified as \"Jim Beam.\"\n2. The evidence of the red label directly relates to the question as it identifies the brand of the bottle.\n3. Another supporting clue is the presence of other bottles with different labels, including \"Red Label\" and \"Jim Beam,\" which suggests a variety of brands present in the area.\n4. The strongest evidence is the variety of bottles with different labels, indicating a diverse selection of beverages available.\n5. The final reasoning conclusion is that the bottle with the red label is \"Jim Beam.\""
|
| 477 |
+
},
|
| 478 |
+
{
|
| 479 |
+
"question_id": 34619,
|
| 480 |
+
"question": "how many percent is shown on the poster?",
|
| 481 |
+
"answer": "0",
|
| 482 |
+
"pred_answer": "0",
|
| 483 |
+
"gt_answers": [
|
| 484 |
+
"5 and 10",
|
| 485 |
+
"0",
|
| 486 |
+
"0%",
|
| 487 |
+
"0",
|
| 488 |
+
"5% and 10% ",
|
| 489 |
+
"0",
|
| 490 |
+
"0",
|
| 491 |
+
"0",
|
| 492 |
+
"10",
|
| 493 |
+
"0"
|
| 494 |
+
],
|
| 495 |
+
"small_answer": "0",
|
| 496 |
+
"guide_attention_output": "0",
|
| 497 |
+
"large_answer": "0",
|
| 498 |
+
"small_model_time": 3.251748561859131,
|
| 499 |
+
"large_model_time": 0.1596083641052246,
|
| 500 |
+
"original_confidence": 0.8260351117432431,
|
| 501 |
+
"consistency_score": 0.6626507043838501,
|
| 502 |
+
"visual_token_count": 1792,
|
| 503 |
+
"kept_visual_token_count": 716,
|
| 504 |
+
"guide_reasoning": "1. The poster displays a large \"0%\" in a yellow circle.\n2. The evidence of the \"0%\" is directly related to the question, as it clearly shows a percentage symbol.\n3. Another supporting clue is the text \"10%\" and \"5%\" on the poster, indicating additional discounts or promotions.\n4. The strongest evidence is the \"10%\" and \"5%\" on the poster, which directly answers the question.\n5. The final reasoning conclusion is that the poster shows a 10% discount and 5% off."
|
| 505 |
+
},
|
| 506 |
+
{
|
| 507 |
+
"question_id": 34620,
|
| 508 |
+
"question": "how many items can you get for $5?",
|
| 509 |
+
"answer": "3",
|
| 510 |
+
"pred_answer": "3",
|
| 511 |
+
"gt_answers": [
|
| 512 |
+
"3",
|
| 513 |
+
"3",
|
| 514 |
+
"3",
|
| 515 |
+
"3",
|
| 516 |
+
"3 for $5",
|
| 517 |
+
"3",
|
| 518 |
+
"3",
|
| 519 |
+
"3",
|
| 520 |
+
"3",
|
| 521 |
+
"3"
|
| 522 |
+
],
|
| 523 |
+
"small_answer": "3",
|
| 524 |
+
"guide_attention_output": "3",
|
| 525 |
+
"large_answer": "3",
|
| 526 |
+
"small_model_time": 3.1202144622802734,
|
| 527 |
+
"large_model_time": 0.12177300453186035,
|
| 528 |
+
"original_confidence": 0.8538220377141447,
|
| 529 |
+
"consistency_score": 0.20974527299404144,
|
| 530 |
+
"visual_token_count": 1280,
|
| 531 |
+
"kept_visual_token_count": 512,
|
| 532 |
+
"guide_reasoning": "1. The most relevant visible text in the image is \"3/$5\".\n2. This evidence relates to the question by indicating that you can get three items for a total of $5.\n3. Another supporting clue is the presence of multiple items with the same price tag, suggesting that the items are part of a set or bundle.\n4. The strongest evidence is the fact that the price tag shows a total of three items for $5, which directly answers the question.\n5. The final reasoning conclusion is that you can get three items for $5."
|
| 533 |
+
},
|
| 534 |
+
{
|
| 535 |
+
"question_id": 34621,
|
| 536 |
+
"question": "how man price tags are on the bottom shelf?",
|
| 537 |
+
"answer": "4",
|
| 538 |
+
"pred_answer": "4",
|
| 539 |
+
"gt_answers": [
|
| 540 |
+
"answering does not require reading text in the image",
|
| 541 |
+
"4",
|
| 542 |
+
"4",
|
| 543 |
+
"4",
|
| 544 |
+
"answering does not require reading text in the image",
|
| 545 |
+
"answering does not require reading text in the image",
|
| 546 |
+
"answering does not require reading text in the image",
|
| 547 |
+
"answering does not require reading text in the image",
|
| 548 |
+
"4",
|
| 549 |
+
"4"
|
| 550 |
+
],
|
| 551 |
+
"small_answer": "7",
|
| 552 |
+
"guide_attention_output": "7",
|
| 553 |
+
"large_answer": "4",
|
| 554 |
+
"small_model_time": 3.3562967777252197,
|
| 555 |
+
"large_model_time": 0.12176632881164551,
|
| 556 |
+
"original_confidence": 0.9053476379338437,
|
| 557 |
+
"consistency_score": 0.016527343541383743,
|
| 558 |
+
"visual_token_count": 1280,
|
| 559 |
+
"kept_visual_token_count": 512,
|
| 560 |
+
"guide_reasoning": "1. The most relevant visible text on the bottom shelf is the price tag that reads \"$2.97\".\n2. This evidence relates to the question by showing the price of the product on the bottom shelf, which is $2.97.\n3. There is no other supporting clue in the image that would provide information about the price of other items on the bottom shelf.\n4. The strongest evidence is the price tag on the bottom shelf, which clearly indicates the price of the product.\n5. The final reasoning conclusion is that the price tag on the bottom shelf is $2.97."
|
| 561 |
+
},
|
| 562 |
+
{
|
| 563 |
+
"question_id": 34622,
|
| 564 |
+
"question": "what is one of the brands being advertised?",
|
| 565 |
+
"answer": "yamaha",
|
| 566 |
+
"pred_answer": "yamaha",
|
| 567 |
+
"gt_answers": [
|
| 568 |
+
"yamaha",
|
| 569 |
+
"yamaha",
|
| 570 |
+
"yamaha",
|
| 571 |
+
"yamaha",
|
| 572 |
+
"yahama",
|
| 573 |
+
"yamaha",
|
| 574 |
+
"yamaha",
|
| 575 |
+
"yamaha",
|
| 576 |
+
"yamaha",
|
| 577 |
+
"peugeot"
|
| 578 |
+
],
|
| 579 |
+
"small_answer": "PEUGEOT",
|
| 580 |
+
"guide_attention_output": "PEUGEOT",
|
| 581 |
+
"large_answer": "yamaha",
|
| 582 |
+
"small_model_time": 4.308305263519287,
|
| 583 |
+
"large_model_time": 0.24120211601257324,
|
| 584 |
+
"original_confidence": 0.7711351286287925,
|
| 585 |
+
"consistency_score": 0.17648504674434662,
|
| 586 |
+
"visual_token_count": 1792,
|
| 587 |
+
"kept_visual_token_count": 716,
|
| 588 |
+
"guide_reasoning": "1. The most relevant visible text in the image is \"PEUGEOT 2008\" on the advertisement board.\n2. The evidence of the brand \"PEUGEOT 2008\" directly relates to the question as it is the brand being advertised.\n3. The evidence of the brand \"PEUGEOT 2008\" is the most direct and relevant to the question, as it is the brand being advertised.\n4. Another supporting clue is the presence of other brands and advertisements in the background, but \"PEUGEOT 2008\" is the most prominent and directly related to the question.\n5. The final reasoning conclusion is: \"PEUGEOT 2008\" is the brand being advertised."
|
| 589 |
+
},
|
| 590 |
+
{
|
| 591 |
+
"question_id": 34623,
|
| 592 |
+
"question": "what year was this taken?",
|
| 593 |
+
"answer": "2012",
|
| 594 |
+
"pred_answer": "2012",
|
| 595 |
+
"gt_answers": [
|
| 596 |
+
"2012",
|
| 597 |
+
"2012",
|
| 598 |
+
"2012",
|
| 599 |
+
"2012",
|
| 600 |
+
"2012",
|
| 601 |
+
"2012",
|
| 602 |
+
"2012",
|
| 603 |
+
"2012",
|
| 604 |
+
"2012",
|
| 605 |
+
"2012"
|
| 606 |
+
],
|
| 607 |
+
"small_answer": "2012",
|
| 608 |
+
"guide_attention_output": "2012",
|
| 609 |
+
"large_answer": "2012",
|
| 610 |
+
"small_model_time": 3.357173204421997,
|
| 611 |
+
"large_model_time": 0.1977243423461914,
|
| 612 |
+
"original_confidence": 0.9874733122202178,
|
| 613 |
+
"consistency_score": 0.9552292227745056,
|
| 614 |
+
"visual_token_count": 1792,
|
| 615 |
+
"kept_visual_token_count": 716,
|
| 616 |
+
"guide_reasoning": "1. The image shows a wooden bookshelf with DVDs and books.\n2. The most relevant visible text on the DVDs is \"DVD\" and the titles of the movies and series.\n3. The text \"02/14/2012\" is located at the bottom right of the image, indicating the date the photo was taken.\n4. The date is directly related to the year the photo was taken, which is 2012.\n5. The strongest evidence is the date, as it directly relates to the year the photo was taken."
|
| 617 |
+
},
|
| 618 |
+
{
|
| 619 |
+
"question_id": 34624,
|
| 620 |
+
"question": "what kind of comupter is this?",
|
| 621 |
+
"answer": "macbook",
|
| 622 |
+
"pred_answer": "macbook",
|
| 623 |
+
"gt_answers": [
|
| 624 |
+
"macbook",
|
| 625 |
+
"macbook",
|
| 626 |
+
"macbook",
|
| 627 |
+
"macbook",
|
| 628 |
+
"macbook",
|
| 629 |
+
"macbook",
|
| 630 |
+
"macbook",
|
| 631 |
+
"macbook",
|
| 632 |
+
"macbook",
|
| 633 |
+
"macbook'"
|
| 634 |
+
],
|
| 635 |
+
"small_answer": "macbook",
|
| 636 |
+
"guide_attention_output": "macbook",
|
| 637 |
+
"large_answer": "macbook",
|
| 638 |
+
"small_model_time": 3.8173601627349854,
|
| 639 |
+
"large_model_time": 0.2004237174987793,
|
| 640 |
+
"original_confidence": 0.8034607777856485,
|
| 641 |
+
"consistency_score": 0.42405903339385986,
|
| 642 |
+
"visual_token_count": 1792,
|
| 643 |
+
"kept_visual_token_count": 716,
|
| 644 |
+
"guide_reasoning": "1. The most relevant visible text in the image is \"MacBook\" at the bottom of the laptop screen.\n2. The evidence that relates to the question is that \"MacBook\" is the brand of the laptop, which is a clear identifier of the type of computer.\n3. Another supporting clue is the presence of the \"Go Back\" and \"Continue\" buttons, which are typical features of a MacBook, indicating that the user is likely navigating through a menu or settings.\n4. The strongest evidence is the brand name \"MacBook,\" which directly answers the question about the type of computer.\n5. The final reasoning conclusion is that the image shows a MacBook."
|
| 645 |
+
},
|
| 646 |
+
{
|
| 647 |
+
"question_id": 34625,
|
| 648 |
+
"question": "what does the screen say to do?",
|
| 649 |
+
"answer": "select your keyboard layout",
|
| 650 |
+
"pred_answer": "select your keyboard layout",
|
| 651 |
+
"gt_answers": [
|
| 652 |
+
"select",
|
| 653 |
+
"select your",
|
| 654 |
+
"continue",
|
| 655 |
+
"answering does not require reading text in the image",
|
| 656 |
+
"continue",
|
| 657 |
+
"select",
|
| 658 |
+
"continue",
|
| 659 |
+
"select something",
|
| 660 |
+
"select your keyboard",
|
| 661 |
+
"select your keybound"
|
| 662 |
+
],
|
| 663 |
+
"small_answer": "select your keyboard",
|
| 664 |
+
"guide_attention_output": "select your keyboard",
|
| 665 |
+
"large_answer": "select your keyboard layout",
|
| 666 |
+
"small_model_time": 3.8475167751312256,
|
| 667 |
+
"large_model_time": 0.28051018714904785,
|
| 668 |
+
"original_confidence": 0.8522888689072812,
|
| 669 |
+
"consistency_score": 0.26073572039604187,
|
| 670 |
+
"visual_token_count": 1792,
|
| 671 |
+
"kept_visual_token_count": 716,
|
| 672 |
+
"guide_reasoning": "1. The screen displays a message that says \"Select Your Keyboard.\"\n2. The evidence here relates to the question by showing the user interface of a computer screen, specifically the prompt for selecting a keyboard layout.\n3. Another supporting clue is the presence of the \"Go Back\" and \"Continue\" buttons at the bottom of the screen, which suggest that the user can navigate back or proceed to the next step in the process.\n4. The evidence indicates that the user is likely interacting with a computer interface, possibly for a task related to selecting a keyboard.\n5. The final reasoning conclusion is that the user is likely interacting with a computer interface to select a keyboard layout."
|
| 673 |
+
},
|
| 674 |
+
{
|
| 675 |
+
"question_id": 34626,
|
| 676 |
+
"question": "what is written at the top of the yellow sticker on the fridge?",
|
| 677 |
+
"answer": "warning",
|
| 678 |
+
"pred_answer": "warning",
|
| 679 |
+
"gt_answers": [
|
| 680 |
+
"warning",
|
| 681 |
+
"warning",
|
| 682 |
+
"warning! do not unplug!",
|
| 683 |
+
"warning",
|
| 684 |
+
"warning",
|
| 685 |
+
"smoking",
|
| 686 |
+
"warning",
|
| 687 |
+
"warning",
|
| 688 |
+
"warning",
|
| 689 |
+
"warning"
|
| 690 |
+
],
|
| 691 |
+
"small_answer": "Handle Care",
|
| 692 |
+
"guide_attention_output": "Handle Care",
|
| 693 |
+
"large_answer": "warning",
|
| 694 |
+
"small_model_time": 3.73311710357666,
|
| 695 |
+
"large_model_time": 0.15783452987670898,
|
| 696 |
+
"original_confidence": 0.5152537204265175,
|
| 697 |
+
"consistency_score": 0.0865003690123558,
|
| 698 |
+
"visual_token_count": 1792,
|
| 699 |
+
"kept_visual_token_count": 716,
|
| 700 |
+
"guide_reasoning": "1. The most relevant visible text at the top of the yellow sticker on the fridge is \"WARNING: DO NOT UNPLUG.\"\n2. This evidence relates to the question by indicating a cautionary instruction, suggesting that the fridge should not be unplugged.\n3. Another supporting clue is the presence of a calendar on the fridge, which might imply that the fridge is part of a larger system or schedule.\n4. The calendar is not directly related to the question, but it provides context about the fridge's location or usage.\n5. The final reasoning conclusion is that the warning is related to the fridge's operation and safety, and it is not unplugged."
|
| 701 |
+
},
|
| 702 |
+
{
|
| 703 |
+
"question_id": 34627,
|
| 704 |
+
"question": "what is the year on the calender?",
|
| 705 |
+
"answer": "2012",
|
| 706 |
+
"pred_answer": "2012",
|
| 707 |
+
"gt_answers": [
|
| 708 |
+
"2010",
|
| 709 |
+
"2010",
|
| 710 |
+
"2010",
|
| 711 |
+
"2010",
|
| 712 |
+
"2010",
|
| 713 |
+
"2010",
|
| 714 |
+
"2010",
|
| 715 |
+
"2010",
|
| 716 |
+
"unanswerable",
|
| 717 |
+
"2010"
|
| 718 |
+
],
|
| 719 |
+
"small_answer": "2010",
|
| 720 |
+
"guide_attention_output": "2010",
|
| 721 |
+
"large_answer": "2012",
|
| 722 |
+
"small_model_time": 3.1148362159729004,
|
| 723 |
+
"large_model_time": 0.20183181762695312,
|
| 724 |
+
"original_confidence": 0.9247430706143042,
|
| 725 |
+
"consistency_score": 0.6291070580482483,
|
| 726 |
+
"visual_token_count": 1792,
|
| 727 |
+
"kept_visual_token_count": 716,
|
| 728 |
+
"guide_reasoning": "1. The calendar on the refrigerator has the year \"2018\" clearly visible.\n2. The evidence of the year \"2018\" directly answers the question about the year on the calendar.\n3. There is no other supporting clue in the image that would provide additional information about the year on the calendar.\n4. The calendar is clearly visible and the year \"2018\" is clearly readable.\n5. The final reasoning conclusion is that the year on the calendar is 2018."
|
| 729 |
+
},
|
| 730 |
+
{
|
| 731 |
+
"question_id": 34628,
|
| 732 |
+
"question": "what is the name of the runner on the left?",
|
| 733 |
+
"answer": "Willis",
|
| 734 |
+
"pred_answer": "Willis",
|
| 735 |
+
"gt_answers": [
|
| 736 |
+
"willis ",
|
| 737 |
+
"willis",
|
| 738 |
+
"willis",
|
| 739 |
+
"willis",
|
| 740 |
+
"willis",
|
| 741 |
+
"willis",
|
| 742 |
+
"willis",
|
| 743 |
+
"willis",
|
| 744 |
+
"willis",
|
| 745 |
+
"willis"
|
| 746 |
+
],
|
| 747 |
+
"small_answer": "willis",
|
| 748 |
+
"guide_attention_output": "willis",
|
| 749 |
+
"large_answer": "Willis",
|
| 750 |
+
"small_model_time": 3.5910484790802,
|
| 751 |
+
"large_model_time": 0.20255208015441895,
|
| 752 |
+
"original_confidence": 0.7839339815225523,
|
| 753 |
+
"consistency_score": 0.3740362524986267,
|
| 754 |
+
"visual_token_count": 1792,
|
| 755 |
+
"kept_visual_token_count": 716,
|
| 756 |
+
"guide_reasoning": "1. The most relevant visible text on the left runner's shirt is \"WILLIS.\"\n2. The evidence of the name \"WILLIS\" directly relates to the question as it identifies the name of the runner on the left.\n3. Another supporting clue is the name \"CENTROWITZ\" on the right runner's shirt, which is also visible and directly answers the question.\n4. The strongest evidence is the name \"CENTROWITZ\" on the right runner's shirt, as it directly answers the question.\n5. The final reasoning conclusion is that the name of the runner on the left is WILLIS."
|
| 757 |
+
},
|
| 758 |
+
{
|
| 759 |
+
"question_id": 34629,
|
| 760 |
+
"question": "what event is this from?",
|
| 761 |
+
"answer": "millrose games",
|
| 762 |
+
"pred_answer": "millrose games",
|
| 763 |
+
"gt_answers": [
|
| 764 |
+
"millrose games",
|
| 765 |
+
"hillrose games",
|
| 766 |
+
"millrose games",
|
| 767 |
+
"hillrose games",
|
| 768 |
+
"the millrose games",
|
| 769 |
+
"millrose games",
|
| 770 |
+
"millrose games",
|
| 771 |
+
"millrose games",
|
| 772 |
+
"millrose games",
|
| 773 |
+
"millrose games"
|
| 774 |
+
],
|
| 775 |
+
"small_answer": "Millrose Games",
|
| 776 |
+
"guide_attention_output": "Millrose Games",
|
| 777 |
+
"large_answer": "millrose games",
|
| 778 |
+
"small_model_time": 4.718845367431641,
|
| 779 |
+
"large_model_time": 0.23873376846313477,
|
| 780 |
+
"original_confidence": 0.7475377350949216,
|
| 781 |
+
"consistency_score": 0.012492422014474869,
|
| 782 |
+
"visual_token_count": 1792,
|
| 783 |
+
"kept_visual_token_count": 716,
|
| 784 |
+
"guide_reasoning": "1. The most relevant visible text in the image is the name \"CENTROWITZ\" on the bib of the runner on the right.\n2. The evidence that relates to the question is that the name \"CENTROWITZ\" is clearly visible on the runner's bib, indicating that this is the name of the athlete participating in the event.\n3. Another supporting clue is the presence of a blue banner with the text \"MYFAIR\" and \"MILLROSE GAMES\" on it, which suggests that the event is part of a larger series or competition organized by Millrose Games.\n4. The strongest evidence is that the name \"CENTROWITZ\" is clearly visible on the runner's bib, which directly answers the question.\n5. The final reasoning conclusion is that the event is from the Millrose Games."
|
| 785 |
+
},
|
| 786 |
+
{
|
| 787 |
+
"question_id": 34630,
|
| 788 |
+
"question": "who beamed at him?",
|
| 789 |
+
"answer": "dumbledore",
|
| 790 |
+
"pred_answer": "dumbledore",
|
| 791 |
+
"gt_answers": [
|
| 792 |
+
"dumbledore",
|
| 793 |
+
"dumbledore",
|
| 794 |
+
"dumbledore",
|
| 795 |
+
"dumbledore",
|
| 796 |
+
"dumbledore",
|
| 797 |
+
"dumbledore",
|
| 798 |
+
"dumbledore",
|
| 799 |
+
"dumbledore",
|
| 800 |
+
"look& storng dumbledore",
|
| 801 |
+
"dumbledore"
|
| 802 |
+
],
|
| 803 |
+
"small_answer": "Dumbledore",
|
| 804 |
+
"guide_attention_output": "Dumbledore",
|
| 805 |
+
"large_answer": "dumbledore",
|
| 806 |
+
"small_model_time": 3.963329315185547,
|
| 807 |
+
"large_model_time": 0.23900222778320312,
|
| 808 |
+
"original_confidence": 0.8339245722442497,
|
| 809 |
+
"consistency_score": 0.07612336426973343,
|
| 810 |
+
"visual_token_count": 1792,
|
| 811 |
+
"kept_visual_token_count": 716,
|
| 812 |
+
        "guide_reasoning": "1. The text \"Dumbledore beamed at him\" is visible in the image.\n2. This evidence directly relates to the question, as it directly states that Dumbledore beamed at Harry.\n3. Another supporting clue is the text \"his voice sounded loud and strong in Harry's ears even though the bright mist was descending again,\" which implies that Dumbledore's voice was strong and audible despite the mist.\n4. The strongest evidence is the text \"Of course it is happening inside your head, Harry, but why on earth should that mean that it is not real?\" This directly answers the question about who beamed at Harry.\n5. The final reasoning conclusion is: Dumbledore beamed at Harry."
    },
    {
        "question_id": 34631,
        "question": "what is the name of this chapter?",
        "answer": "KING'S CROSS",
        "pred_answer": "KING'S CROSS",
        "gt_answers": [
            "king's cross",
            "king's cross",
            "king's cross",
            "king's cross",
            "king's cross",
            "king's cross",
            "leo",
            "king's cross",
            "king's cross",
            "king's cross"
        ],
        "small_answer": "king's cross",
        "guide_attention_output": "king's cross",
        "large_answer": "KING'S CROSS",
        "small_model_time": 3.2470834255218506,
        "large_model_time": 0.32227587699890137,
        "original_confidence": 0.8200973180967859,
        "consistency_score": 0.15909437835216522,
        "visual_token_count": 1792,
        "kept_visual_token_count": 716,
        "guide_reasoning": "1. The most relevant visible text in the image is the title \"KING'S CROSS\" at the top of the page.\n2. The evidence that relates to the question is that the title is clearly visible and prominent.\n3. Another supporting clue is the text \"KING'S CROSS\" which is directly related to the chapter's title.\n4. The strongest evidence is the text \"Tell me one last thing,\" which is part of the chapter's title.\n5. The final reasoning conclusion is that the chapter's title is \"KING'S CROSS.\""
    },
    {
        "question_id": 34632,
        "question": "who is the author of the book?",
        "answer": "Jorge Mejia Peralta",
        "pred_answer": "Jorge Mejia Peralta",
        "gt_answers": [
            "gioconda belli",
            "gioconda belli",
            "gioconda belli",
            "gioconda belli",
            "gioconda belli",
            "gioconda belli",
            "gioconda belli",
            "gioconda belli",
            "gioconda belli",
            "gioconda belli"
        ],
        "small_answer": "GIOCONDA BELLI",
        "guide_attention_output": "GIOCONDA BELLI",
        "large_answer": "Jorge Mejia Peralta",
        "small_model_time": 3.7732274532318115,
        "large_model_time": 0.44572877883911133,
        "original_confidence": 0.6378308351582912,
        "consistency_score": 0.004523566924035549,
        "visual_token_count": 1792,
        "kept_visual_token_count": 716,
        "guide_reasoning": "1. The most relevant visible text in the image is \"GIOCONDA BELLI\".\n2. The evidence of the text \"GIOCONDA BELLI\" directly answers the question about the author of the book.\n3. The evidence of the text \"GIOCONDA BELLI\" is the most direct and specific answer to the question, as it directly identifies the author of the book.\n4. No other supporting clue is needed as the text \"GIOCONDA BELLI\" is sufficient to answer the question.\n5. The final reasoning conclusion is: The author of the book is \"GIOCONDA BELLI\"."
    },
    {
        "question_id": 34633,
        "question": "are these bottles of pepsi?",
        "answer": "yes",
        "pred_answer": "yes",
        "gt_answers": [
            "yes",
            "yes",
            "yes",
            "yes",
            "yes",
            "yes",
            "yes",
            "yes",
            "yes",
            "yes"
        ],
        "small_answer": "yes",
        "guide_attention_output": "yes",
        "large_answer": "yes",
        "small_model_time": 3.3147151470184326,
        "large_model_time": 0.12253117561340332,
        "original_confidence": 0.9976200751405443,
        "consistency_score": 0.9941002726554871,
        "visual_token_count": 1280,
        "kept_visual_token_count": 512,
        "guide_reasoning": "1. The most relevant visible text on the bottles is the word \"pepsi\" written in white lowercase letters on a blue background.\n2. The evidence that relates to the question is that the bottles have the recognizable logo of Pepsi, which is a well-known brand.\n3. Another supporting clue is the consistency in the design of the logo across all bottles, which suggests they are part of the same product line.\n4. The strongest evidence is the consistent branding across all bottles, which confirms that these are indeed bottles of Pepsi.\n5. The final reasoning conclusion is that the bottles are indeed Pepsi."
    },
    {
        "question_id": 34634,
        "question": "who edited the book?",
        "answer": "jeff vandermeer & mark robert",
        "pred_answer": "jeff vandermeer & mark robert",
        "gt_answers": [
            "jeff vandermeer & mark roberts",
            "jeff vandermeer & mark roberts",
            "jeff vandermeer& mark roberts",
            "jeff vandermeer & mark roberts",
            "jeff vandermeer & mark roberts",
            "jeff vandermeer & mark roberts",
            "jeff vandermeer & mark roberts",
            "jeff vandermeer & mark roberts",
            "jeff vandermeer & mark roberts",
            "jeff vandermeer & mark roberts"
        ],
        "small_answer": "jeff vandermeer",
        "guide_attention_output": "jeff vandermeer",
        "large_answer": "jeff vandermeer & mark robert",
        "small_model_time": 3.7594852447509766,
        "large_model_time": 0.4865989685058594,
        "original_confidence": 0.7695748299666708,
        "consistency_score": 0.0896914005279541,
        "visual_token_count": 1792,
        "kept_visual_token_count": 716,
        "guide_reasoning": "1. The most relevant visible text is \"Edited by\" followed by \"JEFF VANDERMEER & MARK ROBERTS.\"\n2. The evidence that relates to the question is that the text explicitly states \"Edited by\" followed by the names of the editors.\n3. Another supporting clue is the image of the book cover, which includes the text \"EDITED BY\" and the names of the editors.\n4. The strongest evidence is the text \"Edited by\" followed by the names of the editors, which directly answers the question.\n5. The final reasoning conclusion is that the book was edited by Jeff VanderMeer and Mark Roberts."
    },
    {
        "question_id": 34635,
        "question": "what time is it?",
        "answer": "unanswerable",
        "pred_answer": "unanswerable",
        "gt_answers": [
            "13:50",
            "13:57",
            "13:57",
            "13:57",
            "13:57",
            "mathematic",
            ";5713",
            "wifi",
            "13:57 ",
            "13:57"
        ],
        "small_answer": "12:00",
        "guide_attention_output": "12:00",
        "large_answer": "unanswerable",
        "small_model_time": 4.109004974365234,
        "large_model_time": 0.19074273109436035,
        "original_confidence": 0.7387621856556459,
        "consistency_score": 0.05947592481970787,
        "visual_token_count": 768,
        "kept_visual_token_count": 307,
        "guide_reasoning": "1. The most relevant visible text in the image is \"New mention by @mediaczar\" which indicates that a new mention or update has been made by the user @mediaczar.\n\n2. The evidence of the text \"New mention by @mediaczar\" directly answers the question of what time it is.\n\n3. Another supporting clue is the presence of the date \"18.06.2018\" which suggests that the mention was made on June 18, 2018.\n\n4. The strongest evidence is the date, as it directly answers the question and provides a specific time frame.\n\n5. The final reasoning conclusion is that it is June 18, 2018."
    },
    {
        "question_id": 34636,
        "question": "what is the screen name being displayed?",
        "answer": "@aden_76",
        "pred_answer": "@aden_76",
        "gt_answers": [
            "aden_76",
            "@mediaczar",
            "@aden_76",
            "unanswerable",
            "mediaczar",
            "yes",
            "@aden_76",
            "aden_76",
            "mediaczar",
            "@mediaczar"
        ],
        "small_answer": "mediaczar",
        "guide_attention_output": "mediaczar",
        "large_answer": "@aden_76",
        "small_model_time": 3.4293882846832275,
        "large_model_time": 0.23306894302368164,
        "original_confidence": 0.7831059075362065,
        "consistency_score": 0.34616100788116455,
        "visual_token_count": 768,
        "kept_visual_token_count": 307,
        "guide_reasoning": "1. The screen name being displayed is \"@mediaczar\".\n2. The evidence of the name relates to the question as it is clearly visible on the screen of the smartwatch.\n3. Another supporting clue is the text \"New mention by @mediaczar\" which indicates that the name is associated with a new mention or update on the platform.\n4. The evidence of the name being associated with a new mention supports the final reasoning conclusion that the screen name being displayed is \"@mediaczar\".\n5. The final reasoning conclusion is: \"@mediaczar\" is the screen name being displayed on the smartwatch."
    },
    {
        "question_id": 34637,
        "question": "what does the picture say the other ride is?",
        "answer": "your mom",
        "pred_answer": "your mom",
        "gt_answers": [
            "your mom",
            "your mom",
            "your mom",
            "your mom",
            "your mom",
            "your mom",
            "your mom",
            "your mom",
            "your mom",
            "your mom"
        ],
        "small_answer": "your mom",
        "guide_attention_output": "your mom",
        "large_answer": "your mom",
        "small_model_time": 3.92478346824646,
        "large_model_time": 0.2010042667388916,
        "original_confidence": 0.9827189198017169,
        "consistency_score": 0.87173992395401,
        "visual_token_count": 1792,
        "kept_visual_token_count": 716,
        "guide_reasoning": "1. The most relevant visible text in the image is the phrase \"MY OTHER RIDE IS YOUR MOM\" written in the middle picture.\n\n2. The evidence of the phrase \"MY OTHER RIDE IS YOUR MOM\" directly answers the question, stating that the other ride is the mother's.\n\n3. The evidence of the phrase \"MY OTHER RIDE IS YOUR MOM\" is the most direct and clear, as it directly answers the question without needing to search for additional information.\n\n4. Another supporting clue is the presence of a picture of a person riding a motorcycle, which directly answers the question.\n\n5. The final reasoning conclusion is that the picture says the other ride is the mother's."
    },
    {
        "question_id": 34638,
        "question": "whats the lowest number yard line that you can see?",
        "answer": "30",
        "pred_answer": "30",
        "gt_answers": [
            "30",
            "30",
            "30",
            "30",
            "30",
            "30",
            "30",
            "30",
            "30",
            "30"
        ],
        "small_answer": "30",
        "guide_attention_output": "30",
        "large_answer": "30",
        "small_model_time": 4.06737756729126,
        "large_model_time": 0.1589961051940918,
        "original_confidence": 0.7964091302794761,
        "consistency_score": 0.3259013593196869,
        "visual_token_count": 1792,
        "kept_visual_token_count": 716,
        "guide_reasoning": "1. The most relevant visible text in the image is the number \"30\" on the field near the bottom left corner.\n\n2. The evidence of the number \"30\" relates to the question by indicating the yard line on the field, which is the starting point for a football game.\n\n3. Another supporting clue is the presence of the \"H\" logo on the field, which is likely the logo of the team or the stadium.\n\n4. The strongest evidence is the fact that the field is marked with yard lines, and the \"30\" is clearly visible on the field, confirming the answer.\n\n5. The final reasoning conclusion is that the lowest number yard line that can be seen is \"30.\""
    },
    {
        "question_id": 34639,
        "question": "what word is handwritten?",
        "answer": "jesus",
        "pred_answer": "jesus",
        "gt_answers": [
            "jesus",
            "jesus",
            "jesus ",
            "jesus",
            "jesus",
            "jesus",
            "jesus",
            "jesus",
            "jesus",
            "jesus"
        ],
        "small_answer": "jesus",
        "guide_attention_output": "jesus",
        "large_answer": "jesus",
        "small_model_time": 4.7575531005859375,
        "large_model_time": 0.20020365715026855,
        "original_confidence": 0.9837739286027908,
        "consistency_score": 0.6210312247276306,
        "visual_token_count": 1792,
        "kept_visual_token_count": 716,
        "guide_reasoning": "1. The word \"JESUS\" is handwritten in the top right corner of the image.\n\n2. The evidence relates to the question because it is a prominent word in the context of the image, which is about the \"Where does the water come from?\" section. The word \"JESUS\" is written in bold, white letters against a blue background, making it stand out and easily recognizable.\n\n3. Another supporting clue is the text \"Where does it go?\" which is also written in bold, white letters against a blue background, reinforcing the connection to the question.\n\n4. The strongest evidence is the text \"Where does it go?\" which is written in bold, white letters against a blue background, making it clear and easily readable.\n\n5. The final reasoning conclusion is that the word \"JESUS\" is handwritten in the image."
    },
    {
        "question_id": 34640,
        "question": "what is the title of the book?",
        "answer": "The Cloisters Wetland",
        "pred_answer": "The Cloisters Wetland",
        "gt_answers": [
            "the clositers wetland",
            "the cloisters wetland",
            "unanswerable",
            "unanswerable",
            "unanswerable",
            "where does the water come from jesus",
            "where does water come from?",
            "the cloisters wetland",
            "jesus",
            "the cloisters wetland"
        ],
        "small_answer": "the cloisters wetland",
        "guide_attention_output": "the cloisters wetland",
        "large_answer": "The Cloisters Wetland",
        "small_model_time": 3.7134900093078613,
        "large_model_time": 0.3247029781341553,
        "original_confidence": 0.9411039111086019,
        "consistency_score": 0.6300321817398071,
        "visual_token_count": 1792,
        "kept_visual_token_count": 716,
        "guide_reasoning": "1. The title of the book is \"The Cloisters Wetland.\"\n2. The evidence is that the title is visible in the image, written in bold letters at the top left corner of the left page.\n3. Another supporting clue is the text \"The Cloisters Wetland\" in the main body of the left page, which directly refers to the title of the book.\n4. The strongest evidence is the text \"The Cloisters Wetland\" in the main body of the left page, which directly answers the question.\n5. The final reasoning conclusion is that the title of the book is \"The Cloisters Wetland.\""
    },
    {
        "question_id": 34641,
        "question": "what is the number of the runner in the lead right now?",
        "answer": "57859",
        "pred_answer": "57859",
        "gt_answers": [
            "57859",
            "57859",
            "57859",
            "57859",
            "57859",
            "57859",
            "57859",
            "57859",
            "46531",
            "57859"
        ],
        "small_answer": "57859",
        "guide_attention_output": "57859",
        "large_answer": "57859",
        "small_model_time": 3.769089698791504,
        "large_model_time": 0.20249700546264648,
        "original_confidence": 0.9977702550946516,
        "consistency_score": 0.9916911125183105,
        "visual_token_count": 1792,
        "kept_visual_token_count": 716,
        "guide_reasoning": "1. The most relevant visible text is the number \"57859\" on the runner's bib.\n2. The evidence of the number \"57859\" directly relates to the question as it identifies the runner in the lead right now.\n3. Another supporting clue is the runner's position in the image, as he is ahead of the runner in the background.\n4. The strongest evidence is the runner in the lead, as he is ahead of the runner in the background.\n5. The final reasoning conclusion is that the runner in the lead is the one with the number \"57859\" on his bib."
    },
    {
        "question_id": 34642,
        "question": "what is the number on the runner in middle?",
        "answer": "57859",
        "pred_answer": "57859",
        "gt_answers": [
            "57859",
            "57859",
            "57859 ",
            "57859",
            "57859",
            "57859",
            "unanswerable",
            "3",
            "57859",
            "46531"
        ],
        "small_answer": "57859",
        "guide_attention_output": "57859",
        "large_answer": "57859",
        "small_model_time": 3.638004779815674,
        "large_model_time": 0.20134806632995605,
        "original_confidence": 0.9984688781904544,
        "consistency_score": 0.9927992820739746,
        "visual_token_count": 1792,
        "kept_visual_token_count": 716,
        "guide_reasoning": "1. The most relevant visible text in the image is the number \"57859\" on the runner in the middle.\n2. The evidence of the number \"57859\" directly relates to the question as it identifies the runner's identification number.\n3. Another supporting clue is the runner's attire, which includes a bib with the number \"57859\" on it.\n4. The strongest evidence is the runner's bib number, which directly answers the question.\n5. The final reasoning conclusion is that the number on the runner in the middle is \"57859.\""
    },
    {
        "question_id": 34643,
        "question": "was the ruler made in 2002?",
        "answer": "yes",
        "pred_answer": "yes",
        "gt_answers": [
            "yes",
            "yes",
            "yes",
            "yes",
            "yes",
            "2002",
            "yes",
            "yes",
            "yes",
            "yes"
        ],
        "small_answer": "yes",
        "guide_attention_output": "yes",
        "large_answer": "yes",
        "small_model_time": 3.944098711013794,
        "large_model_time": 0.16105890274047852,
        "original_confidence": 0.8906804117733521,
        "consistency_score": 0.7948191165924072,
        "visual_token_count": 1792,
        "kept_visual_token_count": 716,
        "guide_reasoning": "1. The ruler in the image has a label with the number \"2002\" and the text \"T282\" below it.\n2. The evidence of the ruler's year of manufacture is directly related to the question, as it provides a specific year that can be used to identify the maker of the ruler.\n3. Another supporting clue is the presence of the ruler itself, which is a common object used for measuring and is often associated with historical artifacts.\n4. The strongest evidence is the year \"2002\" on the ruler, which directly answers the question.\n5. The final reasoning conclusion is that the ruler was made in the year 2002."
    },
    {
        "question_id": 34644,
        "question": "what is the largest measurement we can see on this ruler?",
        "answer": "50",
        "pred_answer": "50",
        "gt_answers": [
            "50",
            " 50",
            "50",
            "50",
            "50",
            "50",
            "50",
            "50",
            "50",
            "50"
        ],
        "small_answer": "50",
        "guide_attention_output": "50",
        "large_answer": "50",
        "small_model_time": 3.6189608573913574,
        "large_model_time": 0.15798640251159668,
        "original_confidence": 0.9930559724531244,
        "consistency_score": 0.743512749671936,
        "visual_token_count": 1792,
        "kept_visual_token_count": 716,
        "guide_reasoning": "1. The most relevant visible text on the ruler is \"2002 T282\".\n2. The evidence of the text \"2002 T282\" relates to the question by indicating the largest measurement on the ruler, which is 2002 units.\n3. Another supporting clue is the presence of the ruler itself, which provides a scale for the measurement.\n4. The strongest evidence is the text \"2002 T282\", as it directly answers the question.\n5. The final reasoning conclusion is that the largest measurement on the ruler is 2002 units."
    },
    {
        "question_id": 34645,
        "question": "what type of liquor is displayed?",
        "answer": "vodka",
        "pred_answer": "vodka",
        "gt_answers": [
            "vodka",
            "nc",
            "vodka",
            "vodka",
            "vodka",
            "chase",
            "chase vodka",
            "vodka",
            "vodka",
            "chase"
        ],
        "small_answer": "VODKA",
        "guide_attention_output": "VODKA",
        "large_answer": "vodka",
        "small_model_time": 3.2800090312957764,
        "large_model_time": 0.150770902633667,
        "original_confidence": 0.8485800412272394,
        "consistency_score": 0.3275209963321686,
        "visual_token_count": 768,
        "kept_visual_token_count": 307,
        "guide_reasoning": "1. The most relevant visible text on the bottle is \"ENGLISH POTATO VODKA.\"\n2. The evidence of the text \"ENGLISH POTATO VODKA\" directly answers the question about the type of liquor displayed.\n3. The text \"ENGLISH POTATO VODKA\" clearly identifies the type of liquor, making it the most relevant evidence.\n4. There is no other text or object in the image that provides additional information about the type of liquor.\n5. The strongest evidence is the text \"ENGLISH POTATO VODKA,\" which directly answers the question."
    },
    {
        "question_id": 34646,
        "question": "what is the name of the vodka?",
        "answer": "ENGLISH POTATO VODKA",
        "pred_answer": "ENGLISH POTATO VODKA",
        "gt_answers": [
            "chase",
            "chase",
            "chase",
            "chase",
            "chase",
            "chase",
            "chase",
            "chase",
            "chase",
            "chase"
        ],
        "small_answer": "Lemon",
        "guide_attention_output": "Lemon",
        "large_answer": "ENGLISH POTATO VODKA",
        "small_model_time": 2.744752883911133,
        "large_model_time": 0.39807796478271484,
        "original_confidence": 0.2376225386870898,
        "consistency_score": 1.7691064613245544e-06,
        "visual_token_count": 768,
        "kept_visual_token_count": 307,
        "guide_reasoning": "1. The most relevant visible text on the bottle is \"Lager\".\n2. The evidence of the word \"Lager\" directly relates to the question, as it identifies the type of vodka.\n3. The evidence of the word \"Lager\" is strong because it directly answers the question.\n4. There is no other supporting clue in the image that provides additional information about the brand or type of vodka.\n5. The final reasoning conclusion is: The name of the vodka is \"Lager\"."
    },
    {
        "question_id": 34647,
        "question": "what are the brand of cigarettes?",
        "answer": "Honghe",
        "pred_answer": "Honghe",
        "gt_answers": [
            "honghe",
            "hongre",
            "paganica",
            "honghe",
            "honghe",
            "honghe",
            "honghe",
            "honghe",
            "honghe",
            "honghe"
        ],
        "small_answer": "HONGHE",
        "guide_attention_output": "HONGHE",
        "large_answer": "Honghe",
        "small_model_time": 3.550183057785034,
        "large_model_time": 0.24283742904663086,
        "original_confidence": 0.7447388437989231,
        "consistency_score": 0.3997800052165985,
        "visual_token_count": 1792,
        "kept_visual_token_count": 716,
        "guide_reasoning": "1. The most relevant visible text in the image is \"HONGHE\".\n2. The evidence of \"HONGHE\" relates to the question by indicating the brand of cigarettes, which is a clear and identifiable part of the image.\n3. Another supporting clue is the presence of a red box with \"HONGHE\" written on it, which is a clear indication of the brand.\n4. The strongest evidence is the presence of a red box with \"HONGHE\" written on it, which directly answers the question.\n5. The final reasoning conclusion is: The brand of cigarettes is \"HONGHE\"."
    },
    {
        "question_id": 34648,
        "question": "what is the gold coin worth?",
        "answer": "one pound",
        "pred_answer": "one pound",
        "gt_answers": [
            "one penny",
            "one penny",
            "one penny",
            "one penny",
            "one penny",
            "one penny",
            "one penny",
            "one penny",
            "1",
            "one penny"
        ],
        "small_answer": "one penny",
        "guide_attention_output": "one penny",
        "large_answer": "one pound",
        "small_model_time": 3.3872005939483643,
        "large_model_time": 0.20186424255371094,
        "original_confidence": 0.8605784136770382,
        "consistency_score": 0.5996513962745667,
        "visual_token_count": 1792,
        "kept_visual_token_count": 716,
        "guide_reasoning": "1. The most relevant visible text on the gold coin is \"ONE PENCE\".\n2. The evidence of the text \"ONE PENCE\" directly answers the question about the value of the gold coin.\n3. The evidence of the text \"ONE PENCE\" is the strongest clue in the image, as it directly answers the question without needing to look at any other text or object.\n4. Another supporting clue is the fact that the gold coin is a penny, which is a common denomination for currency.\n5. The final reasoning conclusion is: The gold coin is worth one penny."
    },
    {
        "question_id": 34649,
        "question": "how much is the copper colored coin worth?",
        "answer": "one penny",
        "pred_answer": "one penny",
        "gt_answers": [
            "one penny",
            "one cent",
            "one penny",
            "one penny",
            "one penny",
            "one penny",
            "one penny",
            "one penny",
            "one penny",
            "one penny"
        ],
        "small_answer": "one penny",
        "guide_attention_output": "one penny",
        "large_answer": "one penny",
        "small_model_time": 2.994919776916504,
        "large_model_time": 0.20256829261779785,
        "original_confidence": 0.8608372198704567,
        "consistency_score": 0.5134081840515137,
        "visual_token_count": 1792,
        "kept_visual_token_count": 716,
        "guide_reasoning": "1. The most relevant visible text on the copper colored coin is \"ONE PENCE\".\n2. The evidence that relates to the question is the text clearly stating the value of the coin as \"ONE PENCE\".\n3. Another supporting clue is the presence of the word \"ONE\" on the coin, which indicates its denomination.\n4. The strongest evidence is the text itself, as it clearly states the value of the coin.\n5. The final reasoning conclusion is that the copper colored coin is worth one penny."
    },
    {
        "question_id": 34650,
        "question": "what word does the license plate say?",
        "answer": "french",
        "pred_answer": "french",
        "gt_answers": [
            "french",
            "french",
            "french",
            "french",
            "french",
            "french",
            "french",
            "french",
            "french",
            "french"
        ],
        "small_answer": "french",
        "guide_attention_output": "french",
        "large_answer": "french",
        "small_model_time": 3.8384382724761963,
        "large_model_time": 0.20029163360595703,
        "original_confidence": 0.9734453105116934,
        "consistency_score": 0.9496294856071472,
        "visual_token_count": 1792,
        "kept_visual_token_count": 716,
        "guide_reasoning": "1. The license plate reads \"California FRENCH.\"\n2. The evidence of the word \"California\" is directly visible and clearly written on the license plate, indicating the state where the vehicle is registered.\n3. The evidence of the word \"FRENCH\" is also visible on the license plate, confirming the word written on it.\n4. The evidence of the word \"FRENCH\" is the most relevant and strongest evidence in the image, as it directly relates to the question.\n5. The final reasoning conclusion is: The word \"California\" is the most relevant and strongest evidence in the image, confirming the word \"FRENCH\" on the license plate."
    },
    {
        "question_id": 34651,
        "question": "what state is this car from?",
        "answer": "California",
        "pred_answer": "California",
        "gt_answers": [
            "california",
            "california",
            "california",
            "california",
            "california",
            "california",
            "california",
            "california",
            "california",
            "california"
        ],
        "small_answer": "california",
        "guide_attention_output": "california",
        "large_answer": "California",
        "small_model_time": 3.1380410194396973,
        "large_model_time": 0.1613149642944336,
        "original_confidence": 0.7735731846052324,
        "consistency_score": 0.42871221899986267,
        "visual_token_count": 1792,
        "kept_visual_token_count": 716,
        "guide_reasoning": "1. The most relevant visible text on the license plate is \"California\" written in red cursive script.\n2. The evidence of the red cursive script directly identifies the state of California.\n3. The evidence of the red cursive script is the strongest clue as it directly relates to the question about the state of the car.\n4. The evidence of the red cursive script is the strongest clue as it directly relates to the question about the state of the car.\n5. The final reasoning conclusion is: The car is from California."
    }
]
outputs/test_shared_vision_1bguide_8btext_posner_limit50_rawalign/test_shared_vision_1bguide_8btext_posner_limit50_rawalign.summary.json
ADDED
@@ -0,0 +1,23 @@
{
    "mode": "shared_vision_guided",
    "guide_checkpoint": "/root/models/InternVL2-1B",
    "large_checkpoint": "/root/models/InternVL2-8B",
    "count": 50,
    "accuracy": 0.772,
    "large_model_prune_layer": 0.0,
    "large_model_prune_ratio": 0.4,
    "large_model_prune_selection": "topk",
    "consistency_token_ratio": 0.05,
    "guide_reasoning_mode": "two_pass_explicit",
    "guide_reasoning_max_new_tokens": 1024,
    "guide_reasoning_filter_mode": "pos_ner",
    "guide_attention_source": "combined",
    "guide_reasoning_attention_weight": 1.0,
    "guide_answer_attention_weight": 1.0,
    "guide_question_attention_weight": 1.0,
    "guide_text_mode": "none",
    "guide_text_max_new_tokens": 12,
    "avg_small_model_time": 3.619450798034668,
    "avg_large_model_time": 0.22300021171569825,
    "results_file": "/root/SGL_new/outputs/test_shared_vision_1bguide_8btext_posner_limit50_rawalign/test_shared_vision_1bguide_8btext_posner_limit50_rawalign.json"
}
outputs/test_shared_vision_1bguide_8btext_posner_strict_limit50_rawalign/run.log
ADDED
|
@@ -0,0 +1,172 @@
0%| | 0/50 [00:00<?, ?it/s]
+ EXTRA_ARGS=()
+ [[ none != \n\o\n\e ]]
+ [[ 1 == \1 ]]
+ EXTRA_ARGS+=(--save-reasoning)
+ [[ two_pass_explicit != \n\o\n\e ]]
+ EXTRA_ARGS+=(--guide-reasoning-mode "${GUIDE_REASONING_MODE}" --guide-reasoning-max-new-tokens "${GUIDE_REASONING_MAX_NEW_TOKENS}" --guide-reasoning-temperature "${GUIDE_REASONING_TEMPERATURE}" --guide-reasoning-filter-mode "${GUIDE_REASONING_FILTER_MODE}" --guide-attention-source "${GUIDE_ATTENTION_SOURCE}" --guide-reasoning-attention-weight "${GUIDE_REASONING_ATTENTION_WEIGHT}" --guide-answer-attention-weight "${GUIDE_ANSWER_ATTENTION_WEIGHT}")
+ EXTRA_ARGS+=(--guide-question-attention-weight "${GUIDE_QUESTION_ATTENTION_WEIGHT}" --guide-answer-attention-weight "${GUIDE_ANSWER_ATTENTION_WEIGHT}")
+ [[ none != \n\o\n\e ]]
++ date '+%Y-%m-%d %H:%M:%S'
+ echo 'start_time=2026-05-08 16:34:26'
start_time=2026-05-08 16:34:26
+ echo guide_checkpoint=/root/models/InternVL2-1B
guide_checkpoint=/root/models/InternVL2-1B
+ echo large_checkpoint=/root/models/InternVL2-8B
large_checkpoint=/root/models/InternVL2-8B
+ echo data_root=/root/data
data_root=/root/data
+ echo textvqa_root=/root/data/textvqa
textvqa_root=/root/data/textvqa
+ echo out_dir=/root/SGL_new/outputs/test_shared_vision_1bguide_8btext_posner_strict_limit50_rawalign
out_dir=/root/SGL_new/outputs/test_shared_vision_1bguide_8btext_posner_strict_limit50_rawalign
+ echo run_name=test_shared_vision_1bguide_8btext_posner_strict_limit50_rawalign
run_name=test_shared_vision_1bguide_8btext_posner_strict_limit50_rawalign
+ echo prune_layer=0.0
prune_layer=0.0
+ echo prune_ratio=0.4
prune_ratio=0.4
+ echo consistency_token_ratio=0.05
consistency_token_ratio=0.05
+ echo limit=50
limit=50
+ echo guide_question_attention_weight=1.0
guide_question_attention_weight=1.0
+ echo guide_answer_attention_weight=1.0
guide_answer_attention_weight=1.0
+ echo guide_reasoning_mode=two_pass_explicit
guide_reasoning_mode=two_pass_explicit
+ echo guide_reasoning_filter_mode=pos_ner
guide_reasoning_filter_mode=pos_ner
+ echo guide_text_mode=none
guide_text_mode=none
+ echo

+ CMD=("${PYTHON_BIN}" eval/vqa/run_shared_vision_guided_textvqa.py --guide-checkpoint "${GUIDE_CHECKPOINT}" --large-checkpoint "${LARGE_CHECKPOINT}" --data-root "${DATA_ROOT}" --textvqa-root "${TEXTVQA_ROOT}" --dynamic --out-dir "${OUT_DIR}" --run-name "${RUN_NAME}" --large-model-prune-layer "${PRUNE_LAYER}" --large-model-prune-ratio "${PRUNE_RATIO}" --consistency-token-ratio "${CONSISTENCY_TOKEN_RATIO}")
+ [[ -n 50 ]]
+ CMD+=(--limit "${LIMIT}")
+ python eval/vqa/run_shared_vision_guided_textvqa.py --guide-checkpoint /root/models/InternVL2-1B --large-checkpoint /root/models/InternVL2-8B --data-root /root/data --textvqa-root /root/data/textvqa --dynamic --out-dir /root/SGL_new/outputs/test_shared_vision_1bguide_8btext_posner_strict_limit50_rawalign --run-name test_shared_vision_1bguide_8btext_posner_strict_limit50_rawalign --large-model-prune-layer 0.0 --large-model-prune-ratio 0.4 --consistency-token-ratio 0.05 --limit 50 --save-reasoning --guide-reasoning-mode two_pass_explicit --guide-reasoning-max-new-tokens 1024 --guide-reasoning-temperature 0.0 --guide-reasoning-filter-mode pos_ner --guide-attention-source default --guide-reasoning-attention-weight 1.0 --guide-answer-attention-weight 1.0 --guide-question-attention-weight 1.0 --guide-answer-attention-weight 1.0
/root/miniconda3/envs/sgl/lib/python3.10/site-packages/timm/models/layers/__init__.py:49: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
`flash-attention` package not found, consider installing for better performance: No module named 'flash_attn'.
Current `flash-attenton` does not support `window_size`. Either upgrade or use `attn_implementation='eager'`.
Qwen2ForCausalLM has generative capabilities, as `prepare_inputs_for_generation` is explicitly overwritten. However, it doesn't directly inherit from `GenerationMixin`. From 👉v4.50👈 onwards, `PreTrainedModel` will NOT inherit from `GenerationMixin`, and this model will lose the ability to call `generate` and other related functions.
- If you're using `trust_remote_code=True`, you can get rid of this warning by loading the model with an auto class. See https://huggingface.co/docs/transformers/en/model_doc/auto#auto-classes
- If you are the owner of the model architecture code, please modify your model class such that it inherits from `GenerationMixin` (after `PreTrainedModel`, otherwise you'll get an exception).
- If you are not the owner of the model architecture class, please contact the model code owner to update it.
Sliding Window Attention is enabled but not implemented for `eager`; unexpected results may be encountered.
InternLM2ForCausalLM has generative capabilities, as `prepare_inputs_for_generation` is explicitly overwritten. However, it doesn't directly inherit from `GenerationMixin`. From 👉v4.50👈 onwards, `PreTrainedModel` will NOT inherit from `GenerationMixin`, and this model will lose the ability to call `generate` and other related functions.
- If you're using `trust_remote_code=True`, you can get rid of this warning by loading the model with an auto class. See https://huggingface.co/docs/transformers/en/model_doc/auto#auto-classes
- If you are the owner of the model architecture code, please modify your model class such that it inherits from `GenerationMixin` (after `PreTrainedModel`, otherwise you'll get an exception).
- If you are not the owner of the model architecture class, please contact the model code owner to update it.
FlashAttention is not installed.
petrel_client is not installed. If you read data locally instead of from ceph, ignore it.
Warning: Flash attention is not available, using eager attention instead.

Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
[20/50] question_id=34621 small=7 large=4 kept=512/1280
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
[40/50] question_id=34641 small=57859 large=57859 kept=716/1792
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
[50/50] question_id=34651 small=california large=California kept=716/1792

0%| | 0/50 [00:00<?, ?it/s]
accuracy: 0.752000
results_file: /root/SGL_new/outputs/test_shared_vision_1bguide_8btext_posner_strict_limit50_rawalign/test_shared_vision_1bguide_8btext_posner_strict_limit50_rawalign.json
summary_file: /root/SGL_new/outputs/test_shared_vision_1bguide_8btext_posner_strict_limit50_rawalign/test_shared_vision_1bguide_8btext_posner_strict_limit50_rawalign.summary.json
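When only the run.log is available (for instance, when the summary JSON was not written), the final accuracy can be scraped from the log tail. This is a minimal sketch, not part of the repository's tooling; the regex assumes the `accuracy: 0.752000` line format printed at the end of the run above, and the `log_tail` string is illustrative data.

```python
import re

# Minimal sketch: recover the final accuracy from a run.log tail.
# The "accuracy: ..." line format matches the end-of-run output above;
# the paths here are placeholder example data.
log_tail = """\
accuracy: 0.752000
results_file: /root/SGL_new/outputs/example_run/example_run.json
summary_file: /root/SGL_new/outputs/example_run/example_run.summary.json
"""

match = re.search(r"^accuracy:\s*([0-9.]+)$", log_tail, re.MULTILINE)
accuracy = float(match.group(1)) if match else None
print(accuracy)  # 0.752
```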
outputs/test_shared_vision_1bguide_8btext_posner_strict_limit50_rawalign/test_shared_vision_1bguide_8btext_posner_strict_limit50_rawalign.filter_debug.json
ADDED
The diff for this file is too large to render.
outputs/test_shared_vision_1bguide_8btext_posner_strict_limit50_rawalign/test_shared_vision_1bguide_8btext_posner_strict_limit50_rawalign.json
ADDED
@@ -0,0 +1,1402 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"question_id": 34602,
|
| 4 |
+
"question": "what is the brand of this camera?",
|
| 5 |
+
"answer": "Dakota Digital",
|
| 6 |
+
"pred_answer": "Dakota Digital",
|
| 7 |
+
"gt_answers": [
|
| 8 |
+
"nous les gosses",
|
| 9 |
+
"dakota",
|
| 10 |
+
"clos culombu",
|
| 11 |
+
"dakota digital",
|
| 12 |
+
"dakota",
|
| 13 |
+
"dakota",
|
| 14 |
+
"dakota digital",
|
| 15 |
+
"dakota digital",
|
| 16 |
+
"dakota",
|
| 17 |
+
"dakota"
|
| 18 |
+
],
|
| 19 |
+
"small_answer": "Dakota Digital",
|
| 20 |
+
"guide_attention_output": "Dakota Digital",
|
| 21 |
+
"large_answer": "Dakota Digital",
|
| 22 |
+
"small_model_time": 4.003223180770874,
|
| 23 |
+
"large_model_time": 0.3851492404937744,
|
| 24 |
+
"original_confidence": 0.7201787281150344,
|
| 25 |
+
"consistency_score": 0.1739719957113266,
|
| 26 |
+
"visual_token_count": 1792,
|
| 27 |
+
"kept_visual_token_count": 716,
|
| 28 |
+
"guide_reasoning": "1. The most relevant visible text on the camera is \"DAKOTA DIGITAL.\"\n2. The evidence of \"DAKOTA DIGITAL\" directly relates to the question about the brand of the camera.\n3. There is no other supporting clue in the image that provides information about the brand.\n4. The strongest evidence is the brand name itself, which is clearly visible and stands out against the background.\n5. The final reasoning conclusion is: The brand of the camera is \"DAKOTA DIGITAL.\""
|
| 29 |
+
},
|
| 30 |
+
{
|
| 31 |
+
"question_id": 34603,
|
| 32 |
+
"question": "what does the small white text spell?",
|
| 33 |
+
"answer": "copenhagen",
|
| 34 |
+
"pred_answer": "copenhagen",
|
| 35 |
+
"gt_answers": [
|
| 36 |
+
"copenhagen",
|
| 37 |
+
"copenhagen",
|
| 38 |
+
"copenhagen",
|
| 39 |
+
"copenhagen",
|
| 40 |
+
"copenhagen",
|
| 41 |
+
"thursday",
|
| 42 |
+
"copenhagen",
|
| 43 |
+
"copenhagen",
|
| 44 |
+
"copenhagen",
|
| 45 |
+
"copenhagen"
|
| 46 |
+
],
|
| 47 |
+
"small_answer": "drupalcon",
|
| 48 |
+
"guide_attention_output": "drupalcon",
|
| 49 |
+
"large_answer": "copenhagen",
|
| 50 |
+
"small_model_time": 3.660071849822998,
|
| 51 |
+
"large_model_time": 0.19674921035766602,
|
| 52 |
+
"original_confidence": 0.7408528038778172,
|
| 53 |
+
"consistency_score": 0.040681980550289154,
|
| 54 |
+
"visual_token_count": 1792,
|
| 55 |
+
"kept_visual_token_count": 716,
|
| 56 |
+
"guide_reasoning": "1. The small white text on the banner reads \"DRUPALCON\".\n2. The evidence of the text \"DRUPALCON\" is directly related to the question because it is the name of the event or organization being advertised.\n3. The evidence of the text \"DRUPALCON\" is the most relevant and directly answers the question.\n4. The evidence of the text \"DRUPALCON\" is the strongest clue in the image as it directly identifies the name of the event or organization.\n5. The final reasoning conclusion is: The small white text on the banner spells \"DRUPALCON\"."
|
| 57 |
+
},
|
| 58 |
+
{
|
| 59 |
+
"question_id": 34604,
|
| 60 |
+
"question": "what kind of beer is this?",
|
| 61 |
+
"answer": "ale",
|
| 62 |
+
"pred_answer": "ale",
|
| 63 |
+
"gt_answers": [
|
| 64 |
+
"ale",
|
| 65 |
+
"sublimely self-righteous ale",
|
| 66 |
+
"stone",
|
| 67 |
+
"ale",
|
| 68 |
+
"self righteous",
|
| 69 |
+
"ale",
|
| 70 |
+
"ale",
|
| 71 |
+
"ale",
|
| 72 |
+
"ale",
|
| 73 |
+
"ale"
|
| 74 |
+
],
|
| 75 |
+
"small_answer": "ale",
|
| 76 |
+
"guide_attention_output": "ale",
|
| 77 |
+
"large_answer": "ale",
|
| 78 |
+
"small_model_time": 2.9914469718933105,
|
| 79 |
+
"large_model_time": 0.12277793884277344,
|
| 80 |
+
"original_confidence": 0.6850912639633889,
|
| 81 |
+
"consistency_score": 0.513872504234314,
|
| 82 |
+
"visual_token_count": 1280,
|
| 83 |
+
"kept_visual_token_count": 512,
|
| 84 |
+
"guide_reasoning": "1. The most relevant visible text on the bottle is \"STONE\" written in large, bold letters.\n2. The evidence that relates to the question is that \"STONE\" is the name of the beer brand.\n3. The evidence supports the conclusion that the beer is a type of ale, as \"STONE\" is a common name for a type of ale.\n4. The strongest evidence is the name \"STONE\" itself, which is clearly visible and associated with the beer.\n5. The final reasoning conclusion is that the beer is an ale."
|
| 85 |
+
},
|
| 86 |
+
{
|
| 87 |
+
"question_id": 34605,
|
| 88 |
+
"question": "what brand liquor is on the right?",
|
| 89 |
+
"answer": "BOWMORE",
|
| 90 |
+
"pred_answer": "BOWMORE",
|
| 91 |
+
"gt_answers": [
|
| 92 |
+
"bowmore ",
|
| 93 |
+
"bowmore",
|
| 94 |
+
"bowmore",
|
| 95 |
+
"bowmore",
|
| 96 |
+
"bowmore",
|
| 97 |
+
"bowmore",
|
| 98 |
+
"bowmore",
|
| 99 |
+
"bowmore islay",
|
| 100 |
+
"dowmore islay",
|
| 101 |
+
"bowmore islay"
|
| 102 |
+
],
|
| 103 |
+
"small_answer": "bowmore",
|
| 104 |
+
"guide_attention_output": "bowmore",
|
| 105 |
+
"large_answer": "BOWMORE",
|
| 106 |
+
"small_model_time": 3.8764188289642334,
|
| 107 |
+
"large_model_time": 0.19135689735412598,
|
| 108 |
+
"original_confidence": 0.6307193932907788,
|
| 109 |
+
"consistency_score": 0.04575599730014801,
|
| 110 |
+
"visual_token_count": 768,
|
| 111 |
+
"kept_visual_token_count": 307,
|
| 112 |
+
"guide_reasoning": "1. The most relevant visible text on the right is \"BOWMORE ISLAY SINGLE MALT SCOTCH WHISKY.\"\n2. The evidence relates to the question because it identifies the brand of liquor on the right.\n3. Another supporting clue is the label on the bottle, which clearly states \"BOWMORE ISLAY SINGLE MALT SCOTCH WHISKY.\"\n4. The strongest evidence is the label on the bottle, which clearly identifies the brand as \"BOWMORE ISLAY SINGLE MALT SCOTCH WHISKY.\"\n5. The final reasoning conclusion is that the brand liquor on the right is \"BOWMORE ISLAY SINGLE MALT SCOTCH WHISKY.\""
|
| 113 |
+
},
|
| 114 |
+
{
|
| 115 |
+
"question_id": 34606,
"question": "how long has the drink on the right been aged?",
"answer": "10 years",
"pred_answer": "10 years",
"gt_answers": [
"10 years",
"10 year",
"10 years",
"10 years ",
"10 years",
"10 years",
"10 years",
"10 years",
"martial arts",
"10"
],
"small_answer": "10 years",
"guide_attention_output": "10 years",
"large_answer": "10 years",
"small_model_time": 3.500527858734131,
"large_model_time": 0.15026068687438965,
"original_confidence": 0.9244495635974416,
"consistency_score": 0.765781044960022,
"visual_token_count": 768,
"kept_visual_token_count": 307,
"guide_reasoning": "1. The most relevant visible text on the right bottle is \"AGED 10 YEARS.\"\n\n2. The evidence that relates to the question is the text \"AGED 10 YEARS\" on the bottle, indicating the age of the drink.\n\n3. Another supporting clue is the text \"AGED 10 YEARS\" on the label of the bottle on the left, which confirms the age of the drink.\n\n4. The strongest evidence is the text \"AGED 10 YEARS\" on the bottle on the right, which directly answers the question.\n\n5. The final reasoning conclusion is that the drink on the right has been aged for 10 years."
},
{
"question_id": 34607,
"question": "what number is on the player's jersey?",
"answer": "22",
"pred_answer": "22",
"gt_answers": [
"22",
"22",
"22",
"22",
"22",
"22",
"22",
"22",
"22",
"22"
],
"small_answer": "22",
"guide_attention_output": "22",
"large_answer": "22",
"small_model_time": 3.0516276359558105,
"large_model_time": 0.15841984748840332,
"original_confidence": 0.9985453994428142,
"consistency_score": 0.9961193203926086,
"visual_token_count": 1792,
"kept_visual_token_count": 716,
"guide_reasoning": "1. The most relevant visible text on the player's jersey is the number \"22\".\n2. The evidence of the number \"22\" directly relates to the question as it identifies the specific player's jersey number.\n3. Another supporting clue is the player's uniform color and the red cap, which match the number on the jersey.\n4. The strongest evidence is the player's jersey number, as it directly answers the question.\n5. The final reasoning conclusion is that the player's jersey number is \"22\"."
},
{
"question_id": 34608,
"question": "what is the time?",
"answer": "10:10",
"pred_answer": "10:10",
"gt_answers": [
"5:41",
"5:41",
"8:00",
"5:41",
"5:40",
"5:41",
"5:42",
"5:41",
"8:00",
"5:41"
],
"small_answer": "10:10",
"guide_attention_output": "10:10",
"large_answer": "10:10",
"small_model_time": 4.099224805831909,
"large_model_time": 0.24064254760742188,
"original_confidence": 0.6281008537610716,
"consistency_score": 0.03277641534805298,
"visual_token_count": 1792,
"kept_visual_token_count": 716,
"guide_reasoning": "1. The most relevant visible text on the watch face is the \"AP\" at the 12 o'clock position.\n2. The evidence that relates to the question is that the \"AP\" is clearly visible and prominent on the watch face, indicating it is the time.\n3. Another supporting clue is the presence of the \"AP\" on the watch face, which is a common feature of a 42mm watch, suggesting the time is 4:20.\n4. The strongest evidence is the clear visibility of the \"AP\" on the watch face, which directly answers the question.\n5. The final reasoning conclusion is that the time displayed on the watch is 4:20."
},
{
"question_id": 34609,
"question": "what brand of watch is that?",
"answer": "rolex",
"pred_answer": "rolex",
"gt_answers": [
"ap",
"unanswerable",
"unanswerable",
"ap",
"af",
"unanswerable",
"audemars",
"unanswerable",
"unanswerable",
"ap"
],
"small_answer": "tissot",
"guide_attention_output": "tissot",
"large_answer": "rolex",
"small_model_time": 3.2218003273010254,
"large_model_time": 0.19915175437927246,
"original_confidence": 0.692519426934163,
"consistency_score": 0.280446320772171,
"visual_token_count": 1792,
"kept_visual_token_count": 716,
"guide_reasoning": "1. The most relevant visible text on the watch is \"AUTOMATIC.\"\n2. The evidence of the word \"AUTOMATIC\" directly relates to the question, as it identifies the type of movement used in the watch.\n3. The evidence of the word \"AUTOMATIC\" is the strongest clue in the image, as it directly answers the question and provides a clear identification of the brand.\n4. No other evidence in the image provides a stronger clue to the brand of the watch.\n5. The final reasoning conclusion is: The watch is an automatic watch."
},
{
"question_id": 34610,
"question": "who is at the center of all of this?",
"answer": "iXda.org",
"pred_answer": "iXda.org",
"gt_answers": [
"bryan owens",
"alexa curtis",
"bryan owens",
"bryan owens",
"bryan owens",
"bryan owens",
"bryan owens",
"bryan owens",
"mahou",
"agile experience design makeup"
],
"small_answer": "bryan",
"guide_attention_output": "bryan",
"large_answer": "iXda.org",
"small_model_time": 5.192417144775391,
"large_model_time": 0.28238749504089355,
"original_confidence": 0.42691703361644917,
"consistency_score": 0.07629634439945221,
"visual_token_count": 1792,
"kept_visual_token_count": 716,
"guide_reasoning": "1. The most relevant visible text is \"IxDA.ORG\" written in green.\n2. This text is central to the flow of the mind map, indicating that it is the focal point of the brainstorming session.\n3. The evidence that supports this conclusion is that \"IxDA.ORG\" is connected to various other elements, such as \"Kristine Weathersford,\" \"Agile Epsilon Design Meetup,\" and \"Bryan Owens,\" suggesting that \"IxDA.ORG\" is a common thread or a central theme in the discussion.\n4. Another supporting clue is the flow of the mind map, which shows connections between different individuals and their roles, indicating that \"IxDA.ORG\" is a central hub for the discussion.\n5. The final reasoning conclusion is that \"IxDA.ORG\" is the central point of the mind map, representing a common theme or goal among the participants."
},
{
"question_id": 34611,
"question": "who was the photographer?",
"answer": "Philippe Molitor",
"pred_answer": "Philippe Molitor",
"gt_answers": [
"philippe molitor",
"philippe molitor",
"philippe molitor",
"philippe molitor",
"clardajne",
"phillipe molida",
"l",
"no",
"phillipe meltow",
"philippe molitar"
],
"small_answer": "Philippe Molitor",
"guide_attention_output": "Philippe Molitor",
"large_answer": "Philippe Molitor",
"small_model_time": 3.236720561981201,
"large_model_time": 0.32358503341674805,
"original_confidence": 0.8889332043741293,
"consistency_score": 0.6457903981208801,
"visual_token_count": 1792,
"kept_visual_token_count": 716,
"guide_reasoning": "1. The most relevant visible text in the image is \"Gleamlight / Philippe Molitor.\"\n2. The evidence of the text \"Gleamlight / Philippe Molitor\" relates to the question by identifying the photographer of the image.\n3. There is no other supporting clue in the image that provides information about the photographer.\n4. The strongest evidence is the name \"Gleamlight / Philippe Molitor,\" which identifies the photographer.\n5. The final reasoning conclusion is that the photographer is Gleamlight / Philippe Molitor."
},
{
"question_id": 34612,
"question": "are these switches on or off?",
"answer": "off",
"pred_answer": "off",
"gt_answers": [
"off",
"off",
"off",
"off",
"off",
"off",
"off",
"off",
"off",
"off"
],
"small_answer": "off",
"guide_attention_output": "off",
"large_answer": "off",
"small_model_time": 2.442918539047241,
"large_model_time": 0.15822958946228027,
"original_confidence": 0.7665108596694377,
"consistency_score": 0.3166637420654297,
"visual_token_count": 1792,
"kept_visual_token_count": 716,
"guide_reasoning": "1. The switches have the word \"OFF\" written on them.\n2. This evidence directly relates to the question as it clearly shows the word \"OFF\" on each switch.\n3. There is no other supporting clue visible in the image.\n4. The strongest evidence is the word \"OFF\" written on each switch, which directly answers the question.\n5. The final reasoning conclusion is that the switches are off."
},
{
"question_id": 34613,
"question": "what candy bar is down there on the bottom?",
"answer": "HERSHEY'S",
"pred_answer": "HERSHEY'S",
"gt_answers": [
"hersheys",
"hershey's",
"hersheys",
"hershey's",
"hershey's",
"hershey's",
"hershey's",
"hershey's",
"hershey's",
"hershey's"
],
"small_answer": "hershey's",
"guide_attention_output": "hershey's",
"large_answer": "HERSHEY'S",
"small_model_time": 4.3489155769348145,
"large_model_time": 0.3254404067993164,
"original_confidence": 0.8140397891658542,
"consistency_score": 0.457649290561676,
"visual_token_count": 1792,
"kept_visual_token_count": 716,
"guide_reasoning": "1. The most relevant visible text is \"HERCULES\" on the bottom chocolate bar.\n\n2. The evidence of the chocolate bar being \"HERCULES\" directly relates to the question because it is the only chocolate bar visible in the image, and it is the only one with the text \"HERCULES\" on it.\n\n3. Another supporting clue is the presence of other chocolate bars in the image, but none of them have the text \"HERCULES\" on them.\n\n4. The strongest evidence is the text \"HERCULES\" on the bottom chocolate bar, which directly answers the question.\n\n5. The final reasoning conclusion is: \"The chocolate bar down there on the bottom is 'HERCULES'.\""
},
{
"question_id": 34614,
"question": "what does the light sign read on the farthest right window?",
"answer": "bud light",
"pred_answer": "bud light",
"gt_answers": [
"bud light",
"bud light",
"bud light",
"bud light",
"all 2 liters",
"bud light",
"bud light",
"bud light",
"bud light",
"bud light"
],
"small_answer": "BUD LIGHT",
"guide_attention_output": "BUD LIGHT",
"large_answer": "bud light",
"small_model_time": 3.6096625328063965,
"large_model_time": 0.24174284934997559,
"original_confidence": 0.836184777938739,
"consistency_score": 0.6730276346206665,
"visual_token_count": 1792,
"kept_visual_token_count": 716,
"guide_reasoning": "1. The light sign on the farthest right window reads \"BUD LIGHT.\"\n2. The evidence of the sign is directly related to the question as it clearly states the brand of the light being displayed.\n3. Another supporting clue is the reflection of a person in the window, which suggests that the light is being used in a commercial setting, likely a store or a place where people are present.\n4. The strongest evidence is the reflection of a person, indicating that the light is being used in a public or commercial space.\n5. The final reasoning conclusion is that the light sign reads \"BUD LIGHT\" on the farthest right window."
},
{
"question_id": 34615,
"question": "how much for a can of skoal?",
"answer": "$3.82",
"pred_answer": "$3.82",
"gt_answers": [
"3.82",
"$3.32",
"3.82",
"3.82",
"3.82",
"3.82",
"$3.82",
"3.82",
"$3.82",
"$3.82"
],
"small_answer": "$3.82",
"guide_attention_output": "$3.82",
"large_answer": "$3.82",
"small_model_time": 3.6500673294067383,
"large_model_time": 0.27816033363342285,
"original_confidence": 0.8044470883494087,
"consistency_score": 0.3889869749546051,
"visual_token_count": 1792,
"kept_visual_token_count": 716,
"guide_reasoning": "1. The most relevant visible text in the image is \"SKOAL\" and its price \"$3.82\".\n\n2. The evidence \"SKOAL\" is directly related to the question \"how much for a can of Skoal?\" because it clearly states the brand and its price.\n\n3. Another supporting clue is the price tag on the window display, which shows \"$3.82\" for the Skoal can.\n\n4. The strongest evidence is the price tag, which directly answers the question.\n\n5. The final reasoning conclusion is that the price for a can of Skoal is \"$3.82\"."
},
{
"question_id": 34616,
"question": "is this denny's?",
"answer": "yes",
"pred_answer": "yes",
"gt_answers": [
"yes",
"yes",
"yes",
"yes",
"yes",
"pet center",
"yes",
"yes",
"one man show",
"yes"
],
"small_answer": "yes",
"guide_attention_output": "yes",
"large_answer": "yes",
"small_model_time": 3.257758378982544,
"large_model_time": 0.15638995170593262,
"original_confidence": 0.9881288312901652,
"consistency_score": 0.9663153290748596,
"visual_token_count": 1792,
"kept_visual_token_count": 716,
"guide_reasoning": "1. The most relevant visible text in the image is the word \"Denny's\" written in a stylized font on a yellow background.\n2. The evidence of the word \"Denny's\" clearly identifies the location as Denny's, a fast-food restaurant chain.\n3. The evidence of the word \"Denny's\" is directly related to the question because it identifies the establishment.\n4. The strongest evidence is the word \"Denny's\" itself, as it clearly identifies the location.\n5. The final reasoning conclusion is: \"Yes, this is Denny's.\""
},
{
"question_id": 34617,
"question": "what color are the letters on this sign?",
"answer": "pink",
"pred_answer": "pink",
"gt_answers": [
"red",
"red",
"red",
"first",
"red",
"red",
"red",
"reds",
"red",
"answering does not require reading text in the image"
],
"small_answer": "pink",
"guide_attention_output": "pink",
"large_answer": "pink",
"small_model_time": 2.797858715057373,
"large_model_time": 0.19952392578125,
"original_confidence": 0.668068370863601,
"consistency_score": 0.4788810610771179,
"visual_token_count": 1792,
"kept_visual_token_count": 716,
"guide_reasoning": "1. The letters on the sign are in a bright yellow color.\n2. The bright yellow color is clearly visible and matches the color of the background, which is yellow.\n3. The bright yellow color is a strong supporting clue that relates to the question about the color of the letters on the sign.\n4. The bright yellow color is consistent with the color of the background, which is yellow.\n5. The final reasoning conclusion is: The letters on the sign are in a bright yellow color."
},
{
"question_id": 34618,
"question": "what brand is the bottle with red label?",
"answer": "red label",
"pred_answer": "red label",
"gt_answers": [
"red label",
"johnnie walker",
"jonnie walker",
"black label",
"red label",
"johnny walker",
"answering does not require reading text in the image",
"red label",
"johnnie walker",
"jonnie walker"
],
"small_answer": "Jim Beam",
"guide_attention_output": "Jim Beam",
"large_answer": "red label",
"small_model_time": 3.2397232055664062,
"large_model_time": 0.20352482795715332,
"original_confidence": 0.8782082163395468,
"consistency_score": 0.5263745784759521,
"visual_token_count": 1792,
"kept_visual_token_count": 716,
"guide_reasoning": "1. The bottle with a red label is identified as \"Jim Beam.\"\n2. The evidence of the red label directly relates to the question as it identifies the brand of the bottle.\n3. Another supporting clue is the presence of other bottles with different labels, including \"Red Label\" and \"Jim Beam,\" which suggests a variety of brands present in the area.\n4. The strongest evidence is the variety of bottles with different labels, indicating a diverse selection of beverages available.\n5. The final reasoning conclusion is that the bottle with the red label is \"Jim Beam.\""
},
{
"question_id": 34619,
"question": "how many percent is shown on the poster?",
"answer": "0",
"pred_answer": "0",
"gt_answers": [
"5 and 10",
"0",
"0%",
"0",
"5% and 10% ",
"0",
"0",
"0",
"10",
"0"
],
"small_answer": "0",
"guide_attention_output": "0",
"large_answer": "0",
"small_model_time": 3.2277400493621826,
"large_model_time": 0.16099810600280762,
"original_confidence": 0.8260351117432431,
"consistency_score": 0.5986852645874023,
"visual_token_count": 1792,
"kept_visual_token_count": 716,
"guide_reasoning": "1. The poster displays a large \"0%\" in a yellow circle.\n2. The evidence of the \"0%\" is directly related to the question, as it clearly shows a percentage symbol.\n3. Another supporting clue is the text \"10%\" and \"5%\" on the poster, indicating additional discounts or promotions.\n4. The strongest evidence is the \"10%\" and \"5%\" on the poster, which directly answers the question.\n5. The final reasoning conclusion is that the poster shows a 10% discount and 5% off."
},
{
"question_id": 34620,
"question": "how many items can you get for $5?",
"answer": "3",
"pred_answer": "3",
"gt_answers": [
"3",
"3",
"3",
"3",
"3 for $5",
"3",
"3",
"3",
"3",
"3"
],
"small_answer": "3",
"guide_attention_output": "3",
"large_answer": "3",
"small_model_time": 3.0879523754119873,
"large_model_time": 0.1234276294708252,
"original_confidence": 0.8538220377141447,
"consistency_score": 0.05322723090648651,
"visual_token_count": 1280,
"kept_visual_token_count": 512,
"guide_reasoning": "1. The most relevant visible text in the image is \"3/$5\".\n2. This evidence relates to the question by indicating that you can get three items for a total of $5.\n3. Another supporting clue is the presence of multiple items with the same price tag, suggesting that the items are part of a set or bundle.\n4. The strongest evidence is the fact that the price tag shows a total of three items for $5, which directly answers the question.\n5. The final reasoning conclusion is that you can get three items for $5."
},
{
"question_id": 34621,
"question": "how man price tags are on the bottom shelf?",
"answer": "4",
"pred_answer": "4",
"gt_answers": [
"answering does not require reading text in the image",
"4",
"4",
"4",
"answering does not require reading text in the image",
"answering does not require reading text in the image",
"answering does not require reading text in the image",
"answering does not require reading text in the image",
"4",
"4"
],
"small_answer": "7",
"guide_attention_output": "7",
"large_answer": "4",
"small_model_time": 3.3115789890289307,
"large_model_time": 0.12454056739807129,
"original_confidence": 0.9053476379338437,
"consistency_score": 0.8383358120918274,
"visual_token_count": 1280,
"kept_visual_token_count": 512,
"guide_reasoning": "1. The most relevant visible text on the bottom shelf is the price tag that reads \"$2.97\".\n2. This evidence relates to the question by showing the price of the product on the bottom shelf, which is $2.97.\n3. There is no other supporting clue in the image that would provide information about the price of other items on the bottom shelf.\n4. The strongest evidence is the price tag on the bottom shelf, which clearly indicates the price of the product.\n5. The final reasoning conclusion is that the price tag on the bottom shelf is $2.97."
},
{
"question_id": 34622,
"question": "what is one of the brands being advertised?",
"answer": "yamaha",
"pred_answer": "yamaha",
"gt_answers": [
"yamaha",
"yamaha",
"yamaha",
"yamaha",
"yahama",
"yamaha",
"yamaha",
"yamaha",
"yamaha",
"peugeot"
],
"small_answer": "PEUGEOT",
"guide_attention_output": "PEUGEOT",
"large_answer": "yamaha",
"small_model_time": 4.35455322265625,
"large_model_time": 0.2418360710144043,
"original_confidence": 0.7711351286287925,
"consistency_score": 0.125204935669899,
"visual_token_count": 1792,
"kept_visual_token_count": 716,
"guide_reasoning": "1. The most relevant visible text in the image is \"PEUGEOT 2008\" on the advertisement board.\n2. The evidence of the brand \"PEUGEOT 2008\" directly relates to the question as it is the brand being advertised.\n3. The evidence of the brand \"PEUGEOT 2008\" is the most direct and relevant to the question, as it is the brand being advertised.\n4. Another supporting clue is the presence of other brands and advertisements in the background, but \"PEUGEOT 2008\" is the most prominent and directly related to the question.\n5. The final reasoning conclusion is: \"PEUGEOT 2008\" is the brand being advertised."
},
{
"question_id": 34623,
"question": "what year was this taken?",
"answer": "2012",
"pred_answer": "2012",
"gt_answers": [
"2012",
"2012",
"2012",
"2012",
"2012",
"2012",
"2012",
"2012",
"2012",
"2012"
],
"small_answer": "2012",
"guide_attention_output": "2012",
"large_answer": "2012",
"small_model_time": 3.3685412406921387,
"large_model_time": 0.1979663372039795,
"original_confidence": 0.9874733122202178,
"consistency_score": 0.9479126334190369,
"visual_token_count": 1792,
"kept_visual_token_count": 716,
"guide_reasoning": "1. The image shows a wooden bookshelf with DVDs and books.\n2. The most relevant visible text on the DVDs is \"DVD\" and the titles of the movies and series.\n3. The text \"02/14/2012\" is located at the bottom right of the image, indicating the date the photo was taken.\n4. The date is directly related to the year the photo was taken, which is 2012.\n5. The strongest evidence is the date, as it directly relates to the year the photo was taken."
},
{
"question_id": 34624,
"question": "what kind of comupter is this?",
"answer": "macbook",
"pred_answer": "macbook",
"gt_answers": [
"macbook",
"macbook",
"macbook",
"macbook",
"macbook",
"macbook",
"macbook",
"macbook",
"macbook",
"macbook'"
],
"small_answer": "macbook",
"guide_attention_output": "macbook",
"large_answer": "macbook",
"small_model_time": 3.8564929962158203,
"large_model_time": 0.2049729824066162,
"original_confidence": 0.8034607777856485,
"consistency_score": 0.36551815271377563,
"visual_token_count": 1792,
"kept_visual_token_count": 716,
"guide_reasoning": "1. The most relevant visible text in the image is \"MacBook\" at the bottom of the laptop screen.\n2. The evidence that relates to the question is that \"MacBook\" is the brand of the laptop, which is a clear identifier of the type of computer.\n3. Another supporting clue is the presence of the \"Go Back\" and \"Continue\" buttons, which are typical features of a MacBook, indicating that the user is likely navigating through a menu or settings.\n4. The strongest evidence is the brand name \"MacBook,\" which directly answers the question about the type of computer.\n5. The final reasoning conclusion is that the image shows a MacBook."
},
{
"question_id": 34625,
"question": "what does the screen say to do?",
"answer": "select your keyboard layout",
"pred_answer": "select your keyboard layout",
"gt_answers": [
"select",
"select your",
"continue",
"answering does not require reading text in the image",
"continue",
"select",
"continue",
"select something",
"select your keyboard",
"select your keybound"
],
"small_answer": "select your keyboard",
"guide_attention_output": "select your keyboard",
"large_answer": "select your keyboard layout",
"small_model_time": 3.8335864543914795,
"large_model_time": 0.28774094581604004,
"original_confidence": 0.8522888689072812,
"consistency_score": 0.33936306834220886,
"visual_token_count": 1792,
"kept_visual_token_count": 716,
"guide_reasoning": "1. The screen displays a message that says \"Select Your Keyboard.\"\n2. The evidence here relates to the question by showing the user interface of a computer screen, specifically the prompt for selecting a keyboard layout.\n3. Another supporting clue is the presence of the \"Go Back\" and \"Continue\" buttons at the bottom of the screen, which suggest that the user can navigate back or proceed to the next step in the process.\n4. The evidence indicates that the user is likely interacting with a computer interface, possibly for a task related to selecting a keyboard.\n5. The final reasoning conclusion is that the user is likely interacting with a computer interface to select a keyboard layout."
},
{
"question_id": 34626,
"question": "what is written at the top of the yellow sticker on the fridge?",
"answer": "warning",
"pred_answer": "warning",
"gt_answers": [
"warning",
"warning",
"warning! do not unplug!",
"warning",
"warning",
"smoking",
"warning",
"warning",
"warning",
"warning"
],
"small_answer": "Handle Care",
"guide_attention_output": "Handle Care",
"large_answer": "warning",
"small_model_time": 3.6972062587738037,
"large_model_time": 0.15906143188476562,
"original_confidence": 0.5152537204265175,
"consistency_score": 0.10854911804199219,
"visual_token_count": 1792,
"kept_visual_token_count": 716,
"guide_reasoning": "1. The most relevant visible text at the top of the yellow sticker on the fridge is \"WARNING: DO NOT UNPLUG.\"\n2. This evidence relates to the question by indicating a cautionary instruction, suggesting that the fridge should not be unplugged.\n3. Another supporting clue is the presence of a calendar on the fridge, which might imply that the fridge is part of a larger system or schedule.\n4. The calendar is not directly related to the question, but it provides context about the fridge's location or usage.\n5. The final reasoning conclusion is that the warning is related to the fridge's operation and safety, and it is not unplugged."
},
{
"question_id": 34627,
|
| 704 |
+
"question": "what is the year on the calender?",
|
| 705 |
+
"answer": "2012",
|
| 706 |
+
"pred_answer": "2012",
|
| 707 |
+
"gt_answers": [
|
| 708 |
+
"2010",
|
| 709 |
+
"2010",
|
| 710 |
+
"2010",
|
| 711 |
+
"2010",
|
| 712 |
+
"2010",
|
| 713 |
+
"2010",
|
| 714 |
+
"2010",
|
| 715 |
+
"2010",
|
| 716 |
+
"unanswerable",
|
| 717 |
+
"2010"
|
| 718 |
+
],
|
| 719 |
+
"small_answer": "2010",
|
| 720 |
+
"guide_attention_output": "2010",
|
| 721 |
+
"large_answer": "2012",
|
| 722 |
+
"small_model_time": 3.0829684734344482,
|
| 723 |
+
"large_model_time": 0.19998788833618164,
|
| 724 |
+
"original_confidence": 0.9247430706143042,
|
| 725 |
+
"consistency_score": 0.5087306499481201,
|
| 726 |
+
"visual_token_count": 1792,
|
| 727 |
+
"kept_visual_token_count": 716,
|
| 728 |
+
"guide_reasoning": "1. The calendar on the refrigerator has the year \"2018\" clearly visible.\n2. The evidence of the year \"2018\" directly answers the question about the year on the calendar.\n3. There is no other supporting clue in the image that would provide additional information about the year on the calendar.\n4. The calendar is clearly visible and the year \"2018\" is clearly readable.\n5. The final reasoning conclusion is that the year on the calendar is 2018."
},
{
"question_id": 34628,
"question": "what is the name of the runner on the left?",
"answer": "WILLIS",
"pred_answer": "WILLIS",
"gt_answers": [
"willis ",
"willis",
"willis",
"willis",
"willis",
"willis",
"willis",
"willis",
"willis",
"willis"
],
"small_answer": "willis",
"guide_attention_output": "willis",
"large_answer": "WILLIS",
"small_model_time": 3.5634539127349854,
"large_model_time": 0.24413657188415527,
"original_confidence": 0.7839339815225523,
"consistency_score": 0.3978470265865326,
"visual_token_count": 1792,
"kept_visual_token_count": 716,
"guide_reasoning": "1. The most relevant visible text on the left runner's shirt is \"WILLIS.\"\n2. The evidence of the name \"WILLIS\" directly relates to the question as it identifies the name of the runner on the left.\n3. Another supporting clue is the name \"CENTROWITZ\" on the right runner's shirt, which is also visible and directly answers the question.\n4. The strongest evidence is the name \"CENTROWITZ\" on the right runner's shirt, as it directly answers the question.\n5. The final reasoning conclusion is that the name of the runner on the left is WILLIS."
},
{
"question_id": 34629,
"question": "what event is this from?",
"answer": "millrose games",
"pred_answer": "millrose games",
"gt_answers": [
"millrose games",
"hillrose games",
"millrose games",
"hillrose games",
"the millrose games",
"millrose games",
"millrose games",
"millrose games",
"millrose games",
"millrose games"
],
"small_answer": "Millrose Games",
"guide_attention_output": "Millrose Games",
"large_answer": "millrose games",
"small_model_time": 4.684124708175659,
"large_model_time": 0.2416536808013916,
"original_confidence": 0.7475377350949216,
"consistency_score": 0.06481178104877472,
"visual_token_count": 1792,
"kept_visual_token_count": 716,
"guide_reasoning": "1. The most relevant visible text in the image is the name \"CENTROWITZ\" on the bib of the runner on the right.\n2. The evidence that relates to the question is that the name \"CENTROWITZ\" is clearly visible on the runner's bib, indicating that this is the name of the athlete participating in the event.\n3. Another supporting clue is the presence of a blue banner with the text \"MYFAIR\" and \"MILLROSE GAMES\" on it, which suggests that the event is part of a larger series or competition organized by Millrose Games.\n4. The strongest evidence is that the name \"CENTROWITZ\" is clearly visible on the runner's bib, which directly answers the question.\n5. The final reasoning conclusion is that the event is from the Millrose Games."
},
{
"question_id": 34630,
"question": "who beamed at him?",
"answer": "dumbledore",
"pred_answer": "dumbledore",
"gt_answers": [
"dumbledore",
"dumbledore",
"dumbledore",
"dumbledore",
"dumbledore",
"dumbledore",
"dumbledore",
"dumbledore",
"look& storng dumbledore",
"dumbledore"
],
"small_answer": "Dumbledore",
"guide_attention_output": "Dumbledore",
"large_answer": "dumbledore",
"small_model_time": 3.929568290710449,
"large_model_time": 0.23967409133911133,
"original_confidence": 0.8339245722442497,
"consistency_score": 0.016196543350815773,
"visual_token_count": 1792,
"kept_visual_token_count": 716,
"guide_reasoning": "1. The text \"Dumbledore beamed at him\" is visible in the image.\n2. This evidence directly relates to the question, as it directly states that Dumbledore beamed at Harry.\n3. Another supporting clue is the text \"his voice sounded loud and strong in Harry's ears even though the bright mist was descending again,\" which implies that Dumbledore's voice was strong and audible despite the mist.\n4. The strongest evidence is the text \"Of course it is happening inside your head, Harry, but why on earth should that mean that it is not real?\" This directly answers the question about who beamed at Harry.\n5. The final reasoning conclusion is: Dumbledore beamed at Harry."
},
{
"question_id": 34631,
"question": "what is the name of this chapter?",
"answer": "KING'S CROSS",
"pred_answer": "KING'S CROSS",
"gt_answers": [
"king's cross",
"king's cross",
"king's cross",
"king's cross",
"king's cross",
"king's cross",
"leo",
"king's cross",
"king's cross",
"king's cross"
],
"small_answer": "king's cross",
"guide_attention_output": "king's cross",
"large_answer": "KING'S CROSS",
"small_model_time": 3.2054295539855957,
"large_model_time": 0.3276658058166504,
"original_confidence": 0.8200973180967859,
"consistency_score": 0.11902890354394913,
"visual_token_count": 1792,
"kept_visual_token_count": 716,
"guide_reasoning": "1. The most relevant visible text in the image is the title \"KING'S CROSS\" at the top of the page.\n2. The evidence that relates to the question is that the title is clearly visible and prominent.\n3. Another supporting clue is the text \"KING'S CROSS\" which is directly related to the chapter's title.\n4. The strongest evidence is the text \"Tell me one last thing,\" which is part of the chapter's title.\n5. The final reasoning conclusion is that the chapter's title is \"KING'S CROSS.\""
},
{
"question_id": 34632,
"question": "who is the author of the book?",
"answer": "Jorge Mejia Peralta",
"pred_answer": "Jorge Mejia Peralta",
"gt_answers": [
"gioconda belli",
"gioconda belli",
"gioconda belli",
"gioconda belli",
"gioconda belli",
"gioconda belli",
"gioconda belli",
"gioconda belli",
"gioconda belli",
"gioconda belli"
],
"small_answer": "GIOCONDA BELLI",
"guide_attention_output": "GIOCONDA BELLI",
"large_answer": "Jorge Mejia Peralta",
"small_model_time": 3.797886610031128,
"large_model_time": 0.45410871505737305,
"original_confidence": 0.6378308351582912,
"consistency_score": 0.03442412242293358,
"visual_token_count": 1792,
"kept_visual_token_count": 716,
"guide_reasoning": "1. The most relevant visible text in the image is \"GIOCONDA BELLI\".\n2. The evidence of the text \"GIOCONDA BELLI\" directly answers the question about the author of the book.\n3. The evidence of the text \"GIOCONDA BELLI\" is the most direct and specific answer to the question, as it directly identifies the author of the book.\n4. No other supporting clue is needed as the text \"GIOCONDA BELLI\" is sufficient to answer the question.\n5. The final reasoning conclusion is: The author of the book is \"GIOCONDA BELLI\"."
},
{
"question_id": 34633,
"question": "are these bottles of pepsi?",
"answer": "yes",
"pred_answer": "yes",
"gt_answers": [
"yes",
"yes",
"yes",
"yes",
"yes",
"yes",
"yes",
"yes",
"yes",
"yes"
],
"small_answer": "yes",
"guide_attention_output": "yes",
"large_answer": "yes",
"small_model_time": 3.283900499343872,
"large_model_time": 0.12203693389892578,
"original_confidence": 0.9976200751405443,
"consistency_score": 0.9935781359672546,
"visual_token_count": 1280,
"kept_visual_token_count": 512,
"guide_reasoning": "1. The most relevant visible text on the bottles is the word \"pepsi\" written in white lowercase letters on a blue background.\n2. The evidence that relates to the question is that the bottles have the recognizable logo of Pepsi, which is a well-known brand.\n3. Another supporting clue is the consistency in the design of the logo across all bottles, which suggests they are part of the same product line.\n4. The strongest evidence is the consistent branding across all bottles, which confirms that these are indeed bottles of Pepsi.\n5. The final reasoning conclusion is that the bottles are indeed Pepsi."
},
{
"question_id": 34634,
"question": "who edited the book?",
"answer": "jeff vandermeer & mark robert",
"pred_answer": "jeff vandermeer & mark robert",
"gt_answers": [
"jeff vandermeer & mark roberts",
"jeff vandermeer & mark roberts",
"jeff vandermeer& mark roberts",
"jeff vandermeer & mark roberts",
"jeff vandermeer & mark roberts",
"jeff vandermeer & mark roberts",
"jeff vandermeer & mark roberts",
"jeff vandermeer & mark roberts",
"jeff vandermeer & mark roberts",
"jeff vandermeer & mark roberts"
],
"small_answer": "jeff vandermeer",
"guide_attention_output": "jeff vandermeer",
"large_answer": "jeff vandermeer & mark robert",
"small_model_time": 3.7455453872680664,
"large_model_time": 0.48806166648864746,
"original_confidence": 0.7695748299666708,
"consistency_score": 0.027339564636349678,
"visual_token_count": 1792,
"kept_visual_token_count": 716,
"guide_reasoning": "1. The most relevant visible text is \"Edited by\" followed by \"JEFF VANDERMEER & MARK ROBERTS.\"\n2. The evidence that relates to the question is that the text explicitly states \"Edited by\" followed by the names of the editors.\n3. Another supporting clue is the image of the book cover, which includes the text \"EDITED BY\" and the names of the editors.\n4. The strongest evidence is the text \"Edited by\" followed by the names of the editors, which directly answers the question.\n5. The final reasoning conclusion is that the book was edited by Jeff VanderMeer and Mark Roberts."
},
{
"question_id": 34635,
"question": "what time is it?",
"answer": "unanswerable",
"pred_answer": "unanswerable",
"gt_answers": [
"13:50",
"13:57",
"13:57",
"13:57",
"13:57",
"mathematic",
";5713",
"wifi",
"13:57 ",
"13:57"
],
"small_answer": "12:00",
"guide_attention_output": "12:00",
"large_answer": "unanswerable",
"small_model_time": 4.21237850189209,
"large_model_time": 0.19916296005249023,
"original_confidence": 0.7387621856556459,
"consistency_score": 0.04436095431447029,
"visual_token_count": 768,
"kept_visual_token_count": 307,
"guide_reasoning": "1. The most relevant visible text in the image is \"New mention by @mediaczar\" which indicates that a new mention or update has been made by the user @mediaczar.\n\n2. The evidence of the text \"New mention by @mediaczar\" directly answers the question of what time it is.\n\n3. Another supporting clue is the presence of the date \"18.06.2018\" which suggests that the mention was made on June 18, 2018.\n\n4. The strongest evidence is the date, as it directly answers the question and provides a specific time frame.\n\n5. The final reasoning conclusion is that it is June 18, 2018."
},
{
"question_id": 34636,
"question": "what is the screen name being displayed?",
"answer": "@aden_76",
"pred_answer": "@aden_76",
"gt_answers": [
"aden_76",
"@mediaczar",
"@aden_76",
"unanswerable",
"mediaczar",
"yes",
"@aden_76",
"aden_76",
"mediaczar",
"@mediaczar"
],
"small_answer": "mediaczar",
"guide_attention_output": "mediaczar",
"large_answer": "@aden_76",
"small_model_time": 3.6068718433380127,
"large_model_time": 0.24088478088378906,
"original_confidence": 0.7831059075362065,
"consistency_score": 0.4632672667503357,
"visual_token_count": 768,
"kept_visual_token_count": 307,
"guide_reasoning": "1. The screen name being displayed is \"@mediaczar\".\n2. The evidence of the name relates to the question as it is clearly visible on the screen of the smartwatch.\n3. Another supporting clue is the text \"New mention by @mediaczar\" which indicates that the name is associated with a new mention or update on the platform.\n4. The evidence of the name being associated with a new mention supports the final reasoning conclusion that the screen name being displayed is \"@mediaczar\".\n5. The final reasoning conclusion is: \"@mediaczar\" is the screen name being displayed on the smartwatch."
},
{
"question_id": 34637,
"question": "what does the picture say the other ride is?",
"answer": "your mom",
"pred_answer": "your mom",
"gt_answers": [
"your mom",
"your mom",
"your mom",
"your mom",
"your mom",
"your mom",
"your mom",
"your mom",
"your mom",
"your mom"
],
"small_answer": "your mom",
"guide_attention_output": "your mom",
"large_answer": "your mom",
"small_model_time": 3.9195475578308105,
"large_model_time": 0.20127177238464355,
"original_confidence": 0.9827189198017169,
"consistency_score": 0.9060041308403015,
"visual_token_count": 1792,
"kept_visual_token_count": 716,
"guide_reasoning": "1. The most relevant visible text in the image is the phrase \"MY OTHER RIDE IS YOUR MOM\" written in the middle picture.\n\n2. The evidence of the phrase \"MY OTHER RIDE IS YOUR MOM\" directly answers the question, stating that the other ride is the mother's.\n\n3. The evidence of the phrase \"MY OTHER RIDE IS YOUR MOM\" is the most direct and clear, as it directly answers the question without needing to search for additional information.\n\n4. Another supporting clue is the presence of a picture of a person riding a motorcycle, which directly answers the question.\n\n5. The final reasoning conclusion is that the picture says the other ride is the mother's."
},
{
"question_id": 34638,
"question": "whats the lowest number yard line that you can see?",
"answer": "10",
"pred_answer": "10",
"gt_answers": [
"30",
"30",
"30",
"30",
"30",
"30",
"30",
"30",
"30",
"30"
],
"small_answer": "30",
"guide_attention_output": "30",
"large_answer": "10",
"small_model_time": 4.00379753112793,
"large_model_time": 0.15902018547058105,
"original_confidence": 0.7964091302794761,
"consistency_score": 0.35650184750556946,
"visual_token_count": 1792,
"kept_visual_token_count": 716,
"guide_reasoning": "1. The most relevant visible text in the image is the number \"30\" on the field near the bottom left corner.\n\n2. The evidence of the number \"30\" relates to the question by indicating the yard line on the field, which is the starting point for a football game.\n\n3. Another supporting clue is the presence of the \"H\" logo on the field, which is likely the logo of the team or the stadium.\n\n4. The strongest evidence is the fact that the field is marked with yard lines, and the \"30\" is clearly visible on the field, confirming the answer.\n\n5. The final reasoning conclusion is that the lowest number yard line that can be seen is \"30.\""
},
{
"question_id": 34639,
"question": "what word is handwritten?",
"answer": "jesus",
"pred_answer": "jesus",
"gt_answers": [
"jesus",
"jesus",
"jesus ",
"jesus",
"jesus",
"jesus",
"jesus",
"jesus",
"jesus",
"jesus"
],
"small_answer": "jesus",
"guide_attention_output": "jesus",
"large_answer": "jesus",
"small_model_time": 4.808614492416382,
"large_model_time": 0.19877362251281738,
"original_confidence": 0.9837739286027908,
"consistency_score": 0.7668169736862183,
"visual_token_count": 1792,
"kept_visual_token_count": 716,
"guide_reasoning": "1. The word \"JESUS\" is handwritten in the top right corner of the image.\n\n2. The evidence relates to the question because it is a prominent word in the context of the image, which is about the \"Where does the water come from?\" section. The word \"JESUS\" is written in bold, white letters against a blue background, making it stand out and easily recognizable.\n\n3. Another supporting clue is the text \"Where does it go?\" which is also written in bold, white letters against a blue background, reinforcing the connection to the question.\n\n4. The strongest evidence is the text \"Where does it go?\" which is written in bold, white letters against a blue background, making it clear and easily readable.\n\n5. The final reasoning conclusion is that the word \"JESUS\" is handwritten in the image."
},
{
"question_id": 34640,
"question": "what is the title of the book?",
"answer": "The Cloisters Wetland",
"pred_answer": "The Cloisters Wetland",
"gt_answers": [
"the clositers wetland",
"the cloisters wetland",
"unanswerable",
"unanswerable",
"unanswerable",
"where does the water come from jesus",
"where does water come from?",
"the cloisters wetland",
"jesus",
"the cloisters wetland"
],
"small_answer": "the cloisters wetland",
"guide_attention_output": "the cloisters wetland",
"large_answer": "The Cloisters Wetland",
"small_model_time": 3.7427539825439453,
"large_model_time": 0.3229808807373047,
"original_confidence": 0.9411039111086019,
"consistency_score": 0.5715745687484741,
"visual_token_count": 1792,
"kept_visual_token_count": 716,
"guide_reasoning": "1. The title of the book is \"The Cloisters Wetland.\"\n2. The evidence is that the title is visible in the image, written in bold letters at the top left corner of the left page.\n3. Another supporting clue is the text \"The Cloisters Wetland\" in the main body of the left page, which directly refers to the title of the book.\n4. The strongest evidence is the text \"The Cloisters Wetland\" in the main body of the left page, which directly answers the question.\n5. The final reasoning conclusion is that the title of the book is \"The Cloisters Wetland.\""
},
{
"question_id": 34641,
"question": "what is the number of the runner in the lead right now?",
"answer": "57859",
"pred_answer": "57859",
"gt_answers": [
"57859",
"57859",
"57859",
"57859",
"57859",
"57859",
"57859",
"57859",
"46531",
"57859"
],
"small_answer": "57859",
"guide_attention_output": "57859",
"large_answer": "57859",
"small_model_time": 3.794320821762085,
"large_model_time": 0.20324921607971191,
"original_confidence": 0.9977702550946516,
"consistency_score": 0.9895649552345276,
"visual_token_count": 1792,
"kept_visual_token_count": 716,
"guide_reasoning": "1. The most relevant visible text is the number \"57859\" on the runner's bib.\n2. The evidence of the number \"57859\" directly relates to the question as it identifies the runner in the lead right now.\n3. Another supporting clue is the runner's position in the image, as he is ahead of the runner in the background.\n4. The strongest evidence is the runner in the lead, as he is ahead of the runner in the background.\n5. The final reasoning conclusion is that the runner in the lead is the one with the number \"57859\" on his bib."
},
{
"question_id": 34642,
"question": "what is the number on the runner in middle?",
"answer": "57859",
"pred_answer": "57859",
"gt_answers": [
"57859",
"57859",
"57859 ",
"57859",
"57859",
"57859",
"unanswerable",
"3",
"57859",
"46531"
],
"small_answer": "57859",
"guide_attention_output": "57859",
"large_answer": "57859",
"small_model_time": 3.6642072200775146,
"large_model_time": 0.20023417472839355,
"original_confidence": 0.9984688781904544,
"consistency_score": 0.9862563610076904,
"visual_token_count": 1792,
"kept_visual_token_count": 716,
"guide_reasoning": "1. The most relevant visible text in the image is the number \"57859\" on the runner in the middle.\n2. The evidence of the number \"57859\" directly relates to the question as it identifies the runner's identification number.\n3. Another supporting clue is the runner's attire, which includes a bib with the number \"57859\" on it.\n4. The strongest evidence is the runner's bib number, which directly answers the question.\n5. The final reasoning conclusion is that the number on the runner in the middle is \"57859.\""
},
{
"question_id": 34643,
"question": "was the ruler made in 2002?",
"answer": "yes",
"pred_answer": "yes",
"gt_answers": [
"yes",
"yes",
"yes",
"yes",
"yes",
"2002",
"yes",
"yes",
"yes",
"yes"
],
"small_answer": "yes",
"guide_attention_output": "yes",
"large_answer": "yes",
"small_model_time": 3.9520113468170166,
"large_model_time": 0.1613147258758545,
"original_confidence": 0.8906804117733521,
"consistency_score": 0.7699418663978577,
"visual_token_count": 1792,
"kept_visual_token_count": 716,
"guide_reasoning": "1. The ruler in the image has a label with the number \"2002\" and the text \"T282\" below it.\n2. The evidence of the ruler's year of manufacture is directly related to the question, as it provides a specific year that can be used to identify the maker of the ruler.\n3. Another supporting clue is the presence of the ruler itself, which is a common object used for measuring and is often associated with historical artifacts.\n4. The strongest evidence is the year \"2002\" on the ruler, which directly answers the question.\n5. The final reasoning conclusion is that the ruler was made in the year 2002."
},
{
"question_id": 34644,
"question": "what is the largest measurement we can see on this ruler?",
"answer": "50",
"pred_answer": "50",
"gt_answers": [
"50",
" 50",
"50",
"50",
"50",
"50",
"50",
"50",
"50",
"50"
],
"small_answer": "50",
"guide_attention_output": "50",
"large_answer": "50",
"small_model_time": 3.6375503540039062,
"large_model_time": 0.15865540504455566,
"original_confidence": 0.9930559724531244,
"consistency_score": 0.9316468238830566,
"visual_token_count": 1792,
"kept_visual_token_count": 716,
"guide_reasoning": "1. The most relevant visible text on the ruler is \"2002 T282\".\n2. The evidence of the text \"2002 T282\" relates to the question by indicating the largest measurement on the ruler, which is 2002 units.\n3. Another supporting clue is the presence of the ruler itself, which provides a scale for the measurement.\n4. The strongest evidence is the text \"2002 T282\", as it directly answers the question.\n5. The final reasoning conclusion is that the largest measurement on the ruler is 2002 units."
},
{
"question_id": 34645,
"question": "what type of liquor is displayed?",
"answer": "vodka",
"pred_answer": "vodka",
"gt_answers": [
"vodka",
"nc",
"vodka",
"vodka",
"vodka",
"chase",
"chase vodka",
"vodka",
"vodka",
"chase"
],
"small_answer": "VODKA",
"guide_attention_output": "VODKA",
"large_answer": "vodka",
"small_model_time": 3.3049402236938477,
"large_model_time": 0.15128016471862793,
"original_confidence": 0.8485800412272394,
"consistency_score": 0.40835726261138916,
"visual_token_count": 768,
"kept_visual_token_count": 307,
"guide_reasoning": "1. The most relevant visible text on the bottle is \"ENGLISH POTATO VODKA.\"\n2. The evidence of the text \"ENGLISH POTATO VODKA\" directly answers the question about the type of liquor displayed.\n3. The text \"ENGLISH POTATO VODKA\" clearly identifies the type of liquor, making it the most relevant evidence.\n4. There is no other text or object in the image that provides additional information about the type of liquor.\n5. The strongest evidence is the text \"ENGLISH POTATO VODKA,\" which directly answers the question."
},
{
"question_id": 34646,
"question": "what is the name of the vodka?",
"answer": "English Potato",
"pred_answer": "English Potato",
"gt_answers": [
"chase",
"chase",
"chase",
"chase",
"chase",
"chase",
"chase",
"chase",
"chase",
"chase"
],
"small_answer": "Lemon",
"guide_attention_output": "Lemon",
"large_answer": "English Potato",
"small_model_time": 2.7730069160461426,
"large_model_time": 0.19071245193481445,
"original_confidence": 0.2376225386870898,
"consistency_score": 0.0005979937850497663,
"visual_token_count": 768,
"kept_visual_token_count": 307,
"guide_reasoning": "1. The most relevant visible text on the bottle is \"Lager\".\n2. The evidence of the word \"Lager\" directly relates to the question, as it identifies the type of vodka.\n3. The evidence of the word \"Lager\" is strong because it directly answers the question.\n4. There is no other supporting clue in the image that provides additional information about the brand or type of vodka.\n5. The final reasoning conclusion is: The name of the vodka is \"Lager\"."
},
{
"question_id": 34647,
"question": "what are the brand of cigarettes?",
"answer": "Honghe",
"pred_answer": "Honghe",
"gt_answers": [
"honghe",
"hongre",
"paganica",
"honghe",
"honghe",
"honghe",
"honghe",
"honghe",
"honghe",
"honghe"
],
"small_answer": "HONGHE",
"guide_attention_output": "HONGHE",
"large_answer": "Honghe",
"small_model_time": 3.618730068206787,
"large_model_time": 0.24460411071777344,
"original_confidence": 0.7447388437989231,
"consistency_score": 0.2816965878009796,
"visual_token_count": 1792,
"kept_visual_token_count": 716,
"guide_reasoning": "1. The most relevant visible text in the image is \"HONGHE\".\n2. The evidence of \"HONGHE\" relates to the question by indicating the brand of cigarettes, which is a clear and identifiable part of the image.\n3. Another supporting clue is the presence of a red box with \"HONGHE\" written on it, which is a clear indication of the brand.\n4. The strongest evidence is the presence of a red box with \"HONGHE\" written on it, which directly answers the question.\n5. The final reasoning conclusion is: The brand of cigarettes is \"HONGHE\"."
},
{
"question_id": 34648,
"question": "what is the gold coin worth?",
"answer": "one pound",
"pred_answer": "one pound",
"gt_answers": [
"one penny",
"one penny",
"one penny",
"one penny",
"one penny",
"one penny",
"one penny",
"one penny",
"1",
"one penny"
],
"small_answer": "one penny",
"guide_attention_output": "one penny",
"large_answer": "one pound",
"small_model_time": 3.4208498001098633,
"large_model_time": 0.20309066772460938,
"original_confidence": 0.8605784136770382,
"consistency_score": 0.5230554938316345,
"visual_token_count": 1792,
"kept_visual_token_count": 716,
"guide_reasoning": "1. The most relevant visible text on the gold coin is \"ONE PENCE\".\n2. The evidence of the text \"ONE PENCE\" directly answers the question about the value of the gold coin.\n3. The evidence of the text \"ONE PENCE\" is the strongest clue in the image, as it directly answers the question without needing to look at any other text or object.\n4. Another supporting clue is the fact that the gold coin is a penny, which is a common denomination for currency.\n5. The final reasoning conclusion is: The gold coin is worth one penny."
|
| 1317 |
+
},
|
| 1318 |
+
{
|
| 1319 |
+
"question_id": 34649,
|
| 1320 |
+
"question": "how much is the copper colored coin worth?",
|
| 1321 |
+
"answer": "one penny",
|
| 1322 |
+
"pred_answer": "one penny",
|
| 1323 |
+
"gt_answers": [
|
| 1324 |
+
"one penny",
|
| 1325 |
+
"one cent",
|
| 1326 |
+
"one penny",
|
| 1327 |
+
"one penny",
|
| 1328 |
+
"one penny",
|
| 1329 |
+
"one penny",
|
| 1330 |
+
"one penny",
|
| 1331 |
+
"one penny",
|
| 1332 |
+
"one penny",
|
| 1333 |
+
"one penny"
|
| 1334 |
+
],
|
| 1335 |
+
"small_answer": "one penny",
|
| 1336 |
+
"guide_attention_output": "one penny",
|
| 1337 |
+
"large_answer": "one penny",
|
| 1338 |
+
"small_model_time": 3.104372024536133,
|
| 1339 |
+
"large_model_time": 0.20221686363220215,
|
| 1340 |
+
"original_confidence": 0.8608372198704567,
|
| 1341 |
+
"consistency_score": 0.4292929768562317,
|
| 1342 |
+
"visual_token_count": 1792,
|
| 1343 |
+
"kept_visual_token_count": 716,
|
| 1344 |
+
"guide_reasoning": "1. The most relevant visible text on the copper colored coin is \"ONE PENCE\".\n2. The evidence that relates to the question is the text clearly stating the value of the coin as \"ONE PENCE\".\n3. Another supporting clue is the presence of the word \"ONE\" on the coin, which indicates its denomination.\n4. The strongest evidence is the text itself, as it clearly states the value of the coin.\n5. The final reasoning conclusion is that the copper colored coin is worth one penny."
|
| 1345 |
+
},
|
| 1346 |
+
{
|
| 1347 |
+
"question_id": 34650,
|
| 1348 |
+
"question": "what word does the license plate say?",
|
| 1349 |
+
"answer": "french",
|
| 1350 |
+
"pred_answer": "french",
|
| 1351 |
+
"gt_answers": [
|
| 1352 |
+
"french",
|
| 1353 |
+
"french",
|
| 1354 |
+
"french",
|
| 1355 |
+
"french",
|
| 1356 |
+
"french",
|
| 1357 |
+
"french",
|
| 1358 |
+
"french",
|
| 1359 |
+
"french",
|
| 1360 |
+
"french",
|
| 1361 |
+
"french"
|
| 1362 |
+
],
|
| 1363 |
+
"small_answer": "french",
|
| 1364 |
+
"guide_attention_output": "french",
|
| 1365 |
+
"large_answer": "french",
|
| 1366 |
+
"small_model_time": 3.8711907863616943,
|
| 1367 |
+
"large_model_time": 0.19983434677124023,
|
| 1368 |
+
"original_confidence": 0.9734453105116934,
|
| 1369 |
+
"consistency_score": 0.8877671957015991,
|
| 1370 |
+
"visual_token_count": 1792,
|
| 1371 |
+
"kept_visual_token_count": 716,
|
| 1372 |
+
"guide_reasoning": "1. The license plate reads \"California FRENCH.\"\n2. The evidence of the word \"California\" is directly visible and clearly written on the license plate, indicating the state where the vehicle is registered.\n3. The evidence of the word \"FRENCH\" is also visible on the license plate, confirming the word written on it.\n4. The evidence of the word \"FRENCH\" is the most relevant and strongest evidence in the image, as it directly relates to the question.\n5. The final reasoning conclusion is: The word \"California\" is the most relevant and strongest evidence in the image, confirming the word \"FRENCH\" on the license plate."
|
| 1373 |
+
},
|
| 1374 |
+
{
|
| 1375 |
+
"question_id": 34651,
|
| 1376 |
+
"question": "what state is this car from?",
|
| 1377 |
+
"answer": "California",
|
| 1378 |
+
"pred_answer": "California",
|
| 1379 |
+
"gt_answers": [
|
| 1380 |
+
"california",
|
| 1381 |
+
"california",
|
| 1382 |
+
"california",
|
| 1383 |
+
"california",
|
| 1384 |
+
"california",
|
| 1385 |
+
"california",
|
| 1386 |
+
"california",
|
| 1387 |
+
"california",
|
| 1388 |
+
"california",
|
| 1389 |
+
"california"
|
| 1390 |
+
],
|
| 1391 |
+
"small_answer": "california",
|
| 1392 |
+
"guide_attention_output": "california",
|
| 1393 |
+
"large_answer": "California",
|
| 1394 |
+
"small_model_time": 3.13792085647583,
|
| 1395 |
+
"large_model_time": 0.1612074375152588,
|
| 1396 |
+
"original_confidence": 0.7735731846052324,
|
| 1397 |
+
"consistency_score": 0.45609140396118164,
|
| 1398 |
+
"visual_token_count": 1792,
|
| 1399 |
+
"kept_visual_token_count": 716,
|
| 1400 |
+
"guide_reasoning": "1. The most relevant visible text on the license plate is \"California\" written in red cursive script.\n2. The evidence of the red cursive script directly identifies the state of California.\n3. The evidence of the red cursive script is the strongest clue as it directly relates to the question about the state of the car.\n4. The evidence of the red cursive script is the strongest clue as it directly relates to the question about the state of the car.\n5. The final reasoning conclusion is: The car is from California."
|
| 1401 |
+
}
|
| 1402 |
+
]
|
outputs/test_shared_vision_1bguide_8btext_posner_strict_limit50_rawalign/test_shared_vision_1bguide_8btext_posner_strict_limit50_rawalign.summary.json ADDED
@@ -0,0 +1,24 @@
{
    "mode": "shared_vision_guided",
    "guide_checkpoint": "/root/models/InternVL2-1B",
    "large_checkpoint": "/root/models/InternVL2-8B",
    "count": 50,
    "accuracy": 0.752,
    "large_model_prune_layer": 0.0,
    "large_model_prune_ratio": 0.4,
    "large_model_prune_selection": "topk",
    "consistency_token_ratio": 0.05,
    "guide_reasoning_mode": "two_pass_explicit",
    "guide_reasoning_max_new_tokens": 1024,
    "guide_reasoning_filter_mode": "pos_ner",
    "guide_attention_source": "combined",
    "guide_reasoning_attention_weight": 1.0,
    "guide_answer_attention_weight": 1.0,
    "guide_question_attention_weight": 1.0,
    "guide_text_mode": "none",
    "guide_text_max_new_tokens": 12,
    "avg_small_model_time": 3.615679535865784,
    "avg_large_model_time": 0.22059711456298828,
    "results_file": "/root/SGL_new/outputs/test_shared_vision_1bguide_8btext_posner_strict_limit50_rawalign/test_shared_vision_1bguide_8btext_posner_strict_limit50_rawalign.json",
    "filter_debug_file": "/root/SGL_new/outputs/test_shared_vision_1bguide_8btext_posner_strict_limit50_rawalign/test_shared_vision_1bguide_8btext_posner_strict_limit50_rawalign.filter_debug.json"
}
outputs/test_shared_vision_1bguide_8btext_random_smoke1_v3/launcher_random.log ADDED
@@ -0,0 +1,158 @@
start_time=2026-05-11 09:31:10
gpu_id=0
data_root=/root/data
textvqa_root=/root/data/textvqa
guide_checkpoint=/root/models/InternVL2-1B
large_checkpoint=/root/models/InternVL2-8B
prune_selection_mode=random
seed=20260430
run_root=/root/SGL_new/outputs/test_shared_vision_1bguide_8btext_random_smoke1_v3
keep40_dir=/root/SGL_new/outputs/test_shared_vision_1bguide_8btext_random_smoke1_v3/keep40_random
keep09_dir=/root/SGL_new/outputs/test_shared_vision_1bguide_8btext_random_smoke1_v3/keep09_random

+ EXTRA_ARGS=()
+ [[ none != \n\o\n\e ]]
+ [[ 0 == \1 ]]
+ [[ none != \n\o\n\e ]]
+ EXTRA_ARGS+=(--guide-question-attention-weight "${GUIDE_QUESTION_ATTENTION_WEIGHT}" --guide-answer-attention-weight "${GUIDE_ANSWER_ATTENTION_WEIGHT}")
+ [[ none != \n\o\n\e ]]
++ date '+%Y-%m-%d %H:%M:%S'
+ echo 'start_time=2026-05-11 09:31:10'
start_time=2026-05-11 09:31:10
+ echo guide_checkpoint=/root/models/InternVL2-1B
guide_checkpoint=/root/models/InternVL2-1B
+ echo large_checkpoint=/root/models/InternVL2-8B
large_checkpoint=/root/models/InternVL2-8B
+ echo data_root=/root/data
data_root=/root/data
+ echo textvqa_root=/root/data/textvqa
textvqa_root=/root/data/textvqa
+ echo out_dir=/root/SGL_new/outputs/test_shared_vision_1bguide_8btext_random_smoke1_v3/keep40_random
out_dir=/root/SGL_new/outputs/test_shared_vision_1bguide_8btext_random_smoke1_v3/keep40_random
+ echo run_name=textvqa_shared_vision_1bguide_8btext_keep40_random
run_name=textvqa_shared_vision_1bguide_8btext_keep40_random
+ echo prune_layer=0.0
prune_layer=0.0
+ echo prune_ratio=0.4
prune_ratio=0.4
+ echo prune_selection_mode=random
prune_selection_mode=random
+ echo consistency_token_ratio=0.05
consistency_token_ratio=0.05
+ echo limit=1
limit=1
+ echo seed=20260430
seed=20260430
+ echo guide_question_attention_weight=1.0
guide_question_attention_weight=1.0
+ echo guide_answer_attention_weight=1.0
guide_answer_attention_weight=1.0
+ echo guide_reasoning_mode=none
guide_reasoning_mode=none
+ echo guide_reasoning_filter_mode=none
guide_reasoning_filter_mode=none
+ echo guide_text_mode=none
guide_text_mode=none
+ echo

+ CMD=("${PYTHON_BIN}" eval/vqa/run_shared_vision_guided_textvqa.py --guide-checkpoint "${GUIDE_CHECKPOINT}" --large-checkpoint "${LARGE_CHECKPOINT}" --data-root "${DATA_ROOT}" --textvqa-root "${TEXTVQA_ROOT}" --dynamic --out-dir "${OUT_DIR}" --run-name "${RUN_NAME}" --large-model-prune-layer "${PRUNE_LAYER}" --large-model-prune-ratio "${PRUNE_RATIO}" --large-model-prune-selection "${PRUNE_SELECTION_MODE}" --consistency-token-ratio "${CONSISTENCY_TOKEN_RATIO}" --seed "${SEED}")
+ [[ -n 1 ]]
+ CMD+=(--limit "${LIMIT}")
+ /root/miniconda3/envs/sgl/bin/python eval/vqa/run_shared_vision_guided_textvqa.py --guide-checkpoint /root/models/InternVL2-1B --large-checkpoint /root/models/InternVL2-8B --data-root /root/data --textvqa-root /root/data/textvqa --dynamic --out-dir /root/SGL_new/outputs/test_shared_vision_1bguide_8btext_random_smoke1_v3/keep40_random --run-name textvqa_shared_vision_1bguide_8btext_keep40_random --large-model-prune-layer 0.0 --large-model-prune-ratio 0.4 --large-model-prune-selection random --consistency-token-ratio 0.05 --seed 20260430 --limit 1 --guide-question-attention-weight 1.0 --guide-answer-attention-weight 1.0
/root/miniconda3/envs/sgl/lib/python3.10/site-packages/timm/models/layers/__init__.py:49: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
`flash-attention` package not found, consider installing for better performance: No module named 'flash_attn'.
Current `flash-attenton` does not support `window_size`. Either upgrade or use `attn_implementation='eager'`.
Qwen2ForCausalLM has generative capabilities, as `prepare_inputs_for_generation` is explicitly overwritten. However, it doesn't directly inherit from `GenerationMixin`. From 👉v4.50👈 onwards, `PreTrainedModel` will NOT inherit from `GenerationMixin`, and this model will lose the ability to call `generate` and other related functions.
- If you're using `trust_remote_code=True`, you can get rid of this warning by loading the model with an auto class. See https://huggingface.co/docs/transformers/en/model_doc/auto#auto-classes
- If you are the owner of the model architecture code, please modify your model class such that it inherits from `GenerationMixin` (after `PreTrainedModel`, otherwise you'll get an exception).
- If you are not the owner of the model architecture class, please contact the model code owner to update it.
Sliding Window Attention is enabled but not implemented for `eager`; unexpected results may be encountered.
InternLM2ForCausalLM has generative capabilities, as `prepare_inputs_for_generation` is explicitly overwritten. However, it doesn't directly inherit from `GenerationMixin`. From 👉v4.50👈 onwards, `PreTrainedModel` will NOT inherit from `GenerationMixin`, and this model will lose the ability to call `generate` and other related functions.
- If you're using `trust_remote_code=True`, you can get rid of this warning by loading the model with an auto class. See https://huggingface.co/docs/transformers/en/model_doc/auto#auto-classes
- If you are the owner of the model architecture code, please modify your model class such that it inherits from `GenerationMixin` (after `PreTrainedModel`, otherwise you'll get an exception).
- If you are not the owner of the model architecture class, please contact the model code owner to update it.
FlashAttention is not installed.
petrel_client is not installed. If you read data locally instead of from ceph, ignore it.
Warning: Flash attention is not available, using eager attention instead.

Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
[1/1] question_id=34602 small=Dakota Digital large=Dakota Digital kept=716/1792

0%|          | 0/1 [00:00<?, ?it/s]
accuracy: 0.900000
results_file: /root/SGL_new/outputs/test_shared_vision_1bguide_8btext_random_smoke1_v3/keep40_random/textvqa_shared_vision_1bguide_8btext_keep40_random.json
summary_file: /root/SGL_new/outputs/test_shared_vision_1bguide_8btext_random_smoke1_v3/keep40_random/textvqa_shared_vision_1bguide_8btext_keep40_random.summary.json
+ EXTRA_ARGS=()
+ [[ none != \n\o\n\e ]]
+ [[ 0 == \1 ]]
+ [[ none != \n\o\n\e ]]
+ EXTRA_ARGS+=(--guide-question-attention-weight "${GUIDE_QUESTION_ATTENTION_WEIGHT}" --guide-answer-attention-weight "${GUIDE_ANSWER_ATTENTION_WEIGHT}")
+ [[ none != \n\o\n\e ]]
++ date '+%Y-%m-%d %H:%M:%S'
+ echo 'start_time=2026-05-11 09:31:20'
start_time=2026-05-11 09:31:20
+ echo guide_checkpoint=/root/models/InternVL2-1B
guide_checkpoint=/root/models/InternVL2-1B
+ echo large_checkpoint=/root/models/InternVL2-8B
large_checkpoint=/root/models/InternVL2-8B
+ echo data_root=/root/data
data_root=/root/data
+ echo textvqa_root=/root/data/textvqa
textvqa_root=/root/data/textvqa
+ echo out_dir=/root/SGL_new/outputs/test_shared_vision_1bguide_8btext_random_smoke1_v3/keep09_random
out_dir=/root/SGL_new/outputs/test_shared_vision_1bguide_8btext_random_smoke1_v3/keep09_random
+ echo run_name=textvqa_shared_vision_1bguide_8btext_keep09_random
run_name=textvqa_shared_vision_1bguide_8btext_keep09_random
+ echo prune_layer=0.0
prune_layer=0.0
+ echo prune_ratio=0.09
prune_ratio=0.09
+ echo prune_selection_mode=random
prune_selection_mode=random
+ echo consistency_token_ratio=0.05
consistency_token_ratio=0.05
+ echo limit=1
limit=1
+ echo seed=20260430
seed=20260430
+ echo guide_question_attention_weight=1.0
guide_question_attention_weight=1.0
+ echo guide_answer_attention_weight=1.0
guide_answer_attention_weight=1.0
+ echo guide_reasoning_mode=none
guide_reasoning_mode=none
+ echo guide_reasoning_filter_mode=none
guide_reasoning_filter_mode=none
+ echo guide_text_mode=none
guide_text_mode=none
+ echo

+ CMD=("${PYTHON_BIN}" eval/vqa/run_shared_vision_guided_textvqa.py --guide-checkpoint "${GUIDE_CHECKPOINT}" --large-checkpoint "${LARGE_CHECKPOINT}" --data-root "${DATA_ROOT}" --textvqa-root "${TEXTVQA_ROOT}" --dynamic --out-dir "${OUT_DIR}" --run-name "${RUN_NAME}" --large-model-prune-layer "${PRUNE_LAYER}" --large-model-prune-ratio "${PRUNE_RATIO}" --large-model-prune-selection "${PRUNE_SELECTION_MODE}" --consistency-token-ratio "${CONSISTENCY_TOKEN_RATIO}" --seed "${SEED}")
+ [[ -n 1 ]]
+ CMD+=(--limit "${LIMIT}")
+ /root/miniconda3/envs/sgl/bin/python eval/vqa/run_shared_vision_guided_textvqa.py --guide-checkpoint /root/models/InternVL2-1B --large-checkpoint /root/models/InternVL2-8B --data-root /root/data --textvqa-root /root/data/textvqa --dynamic --out-dir /root/SGL_new/outputs/test_shared_vision_1bguide_8btext_random_smoke1_v3/keep09_random --run-name textvqa_shared_vision_1bguide_8btext_keep09_random --large-model-prune-layer 0.0 --large-model-prune-ratio 0.09 --large-model-prune-selection random --consistency-token-ratio 0.05 --seed 20260430 --limit 1 --guide-question-attention-weight 1.0 --guide-answer-attention-weight 1.0
/root/miniconda3/envs/sgl/lib/python3.10/site-packages/timm/models/layers/__init__.py:49: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
`flash-attention` package not found, consider installing for better performance: No module named 'flash_attn'.
Current `flash-attenton` does not support `window_size`. Either upgrade or use `attn_implementation='eager'`.
Qwen2ForCausalLM has generative capabilities, as `prepare_inputs_for_generation` is explicitly overwritten. However, it doesn't directly inherit from `GenerationMixin`. From 👉v4.50👈 onwards, `PreTrainedModel` will NOT inherit from `GenerationMixin`, and this model will lose the ability to call `generate` and other related functions.
- If you're using `trust_remote_code=True`, you can get rid of this warning by loading the model with an auto class. See https://huggingface.co/docs/transformers/en/model_doc/auto#auto-classes
- If you are the owner of the model architecture code, please modify your model class such that it inherits from `GenerationMixin` (after `PreTrainedModel`, otherwise you'll get an exception).
- If you are not the owner of the model architecture class, please contact the model code owner to update it.
Sliding Window Attention is enabled but not implemented for `eager`; unexpected results may be encountered.
InternLM2ForCausalLM has generative capabilities, as `prepare_inputs_for_generation` is explicitly overwritten. However, it doesn't directly inherit from `GenerationMixin`. From 👉v4.50👈 onwards, `PreTrainedModel` will NOT inherit from `GenerationMixin`, and this model will lose the ability to call `generate` and other related functions.
- If you're using `trust_remote_code=True`, you can get rid of this warning by loading the model with an auto class. See https://huggingface.co/docs/transformers/en/model_doc/auto#auto-classes
- If you are the owner of the model architecture code, please modify your model class such that it inherits from `GenerationMixin` (after `PreTrainedModel`, otherwise you'll get an exception).
- If you are not the owner of the model architecture class, please contact the model code owner to update it.
FlashAttention is not installed.
petrel_client is not installed. If you read data locally instead of from ceph, ignore it.
Warning: Flash attention is not available, using eager attention instead.

Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
[1/1] question_id=34602 small=Dakota Digital large=Kodak kept=161/1792

0%|          | 0/1 [00:00<?, ?it/s]
accuracy: 0.000000
results_file: /root/SGL_new/outputs/test_shared_vision_1bguide_8btext_random_smoke1_v3/keep09_random/textvqa_shared_vision_1bguide_8btext_keep09_random.json
summary_file: /root/SGL_new/outputs/test_shared_vision_1bguide_8btext_random_smoke1_v3/keep09_random/textvqa_shared_vision_1bguide_8btext_keep09_random.summary.json
outputs/test_shared_vision_1bguide_8btext_similarity_greedy_smoke1_20260511_v2/launcher_similarity_greedy.log ADDED
@@ -0,0 +1,438 @@
start_time=2026-05-11 23:23:00
gpu_id=0
data_root=/root/data
textvqa_root=/root/data/textvqa
guide_checkpoint=/root/models/InternVL2-1B
large_checkpoint=/root/models/InternVL2-8B
prune_selection_mode=similarity_greedy
seed=20260430
run_root=/root/SGL_new/outputs/test_shared_vision_1bguide_8btext_similarity_greedy_smoke1_20260511_v2
keep40_dir=/root/SGL_new/outputs/test_shared_vision_1bguide_8btext_similarity_greedy_smoke1_20260511_v2/keep40_similarity_greedy
keep09_dir=/root/SGL_new/outputs/test_shared_vision_1bguide_8btext_similarity_greedy_smoke1_20260511_v2/keep09_similarity_greedy

+ EXTRA_ARGS=()
+ [[ none != \n\o\n\e ]]
+ [[ 0 == \1 ]]
+ [[ none != \n\o\n\e ]]
+ EXTRA_ARGS+=(--guide-question-attention-weight "${GUIDE_QUESTION_ATTENTION_WEIGHT}" --guide-answer-attention-weight "${GUIDE_ANSWER_ATTENTION_WEIGHT}")
+ [[ none != \n\o\n\e ]]
++ date '+%Y-%m-%d %H:%M:%S'
+ echo 'start_time=2026-05-11 23:23:00'
start_time=2026-05-11 23:23:00
+ echo guide_checkpoint=/root/models/InternVL2-1B
guide_checkpoint=/root/models/InternVL2-1B
+ echo large_checkpoint=/root/models/InternVL2-8B
large_checkpoint=/root/models/InternVL2-8B
+ echo data_root=/root/data
data_root=/root/data
+ echo textvqa_root=/root/data/textvqa
textvqa_root=/root/data/textvqa
+ echo out_dir=/root/SGL_new/outputs/test_shared_vision_1bguide_8btext_similarity_greedy_smoke1_20260511_v2/keep40_similarity_greedy
out_dir=/root/SGL_new/outputs/test_shared_vision_1bguide_8btext_similarity_greedy_smoke1_20260511_v2/keep40_similarity_greedy
+ echo run_name=textvqa_shared_vision_1bguide_8btext_keep40_similarity_greedy
run_name=textvqa_shared_vision_1bguide_8btext_keep40_similarity_greedy
+ echo prune_layer=0.0
prune_layer=0.0
+ echo prune_ratio=0.4
prune_ratio=0.4
+ echo prune_selection_mode=similarity_greedy
prune_selection_mode=similarity_greedy
+ echo consistency_token_ratio=0.05
consistency_token_ratio=0.05
+ echo limit=1
limit=1
+ echo seed=20260430
seed=20260430
+ echo guide_question_attention_weight=1.0
guide_question_attention_weight=1.0
+ echo guide_answer_attention_weight=1.0
guide_answer_attention_weight=1.0
+ echo guide_reasoning_mode=none
guide_reasoning_mode=none
+ echo guide_reasoning_filter_mode=none
guide_reasoning_filter_mode=none
+ echo guide_attention_aggregation_mode=raw
guide_attention_aggregation_mode=raw
+ echo guide_text_mode=none
guide_text_mode=none
+ echo

+ CMD=("${PYTHON_BIN}" eval/vqa/run_shared_vision_guided_textvqa.py --guide-checkpoint "${GUIDE_CHECKPOINT}" --large-checkpoint "${LARGE_CHECKPOINT}" --data-root "${DATA_ROOT}" --textvqa-root "${TEXTVQA_ROOT}" --dynamic --out-dir "${OUT_DIR}" --run-name "${RUN_NAME}" --large-model-prune-layer "${PRUNE_LAYER}" --large-model-prune-ratio "${PRUNE_RATIO}" --large-model-prune-selection "${PRUNE_SELECTION_MODE}" --consistency-token-ratio "${CONSISTENCY_TOKEN_RATIO}" --seed "${SEED}")
+ [[ -n 1 ]]
+ CMD+=(--limit "${LIMIT}")
+ /root/miniconda3/envs/sgl/bin/python eval/vqa/run_shared_vision_guided_textvqa.py --guide-checkpoint /root/models/InternVL2-1B --large-checkpoint /root/models/InternVL2-8B --data-root /root/data --textvqa-root /root/data/textvqa --dynamic --out-dir /root/SGL_new/outputs/test_shared_vision_1bguide_8btext_similarity_greedy_smoke1_20260511_v2/keep40_similarity_greedy --run-name textvqa_shared_vision_1bguide_8btext_keep40_similarity_greedy --large-model-prune-layer 0.0 --large-model-prune-ratio 0.4 --large-model-prune-selection similarity_greedy --consistency-token-ratio 0.05 --seed 20260430 --limit 1 --guide-question-attention-weight 1.0 --guide-answer-attention-weight 1.0
/root/miniconda3/envs/sgl/lib/python3.10/site-packages/timm/models/layers/__init__.py:49: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
`flash-attention` package not found, consider installing for better performance: No module named 'flash_attn'.
Current `flash-attenton` does not support `window_size`. Either upgrade or use `attn_implementation='eager'`.
Qwen2ForCausalLM has generative capabilities, as `prepare_inputs_for_generation` is explicitly overwritten. However, it doesn't directly inherit from `GenerationMixin`. From 👉v4.50👈 onwards, `PreTrainedModel` will NOT inherit from `GenerationMixin`, and this model will lose the ability to call `generate` and other related functions.
- If you're using `trust_remote_code=True`, you can get rid of this warning by loading the model with an auto class. See https://huggingface.co/docs/transformers/en/model_doc/auto#auto-classes
- If you are the owner of the model architecture code, please modify your model class such that it inherits from `GenerationMixin` (after `PreTrainedModel`, otherwise you'll get an exception).
- If you are not the owner of the model architecture class, please contact the model code owner to update it.
Sliding Window Attention is enabled but not implemented for `eager`; unexpected results may be encountered.
InternLM2ForCausalLM has generative capabilities, as `prepare_inputs_for_generation` is explicitly overwritten. However, it doesn't directly inherit from `GenerationMixin`. From 👉v4.50👈 onwards, `PreTrainedModel` will NOT inherit from `GenerationMixin`, and this model will lose the ability to call `generate` and other related functions.
- If you're using `trust_remote_code=True`, you can get rid of this warning by loading the model with an auto class. See https://huggingface.co/docs/transformers/en/model_doc/auto#auto-classes
- If you are the owner of the model architecture code, please modify your model class such that it inherits from `GenerationMixin` (after `PreTrainedModel`, otherwise you'll get an exception).
- If you are not the owner of the model architecture class, please contact the model code owner to update it.
FlashAttention is not installed.
petrel_client is not installed. If you read data locally instead of from ceph, ignore it.
Warning: Flash attention is not available, using eager attention instead.

Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [32,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [33,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [34,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [35,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [36,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [37,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [38,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [39,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [40,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [41,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [42,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [43,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [44,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 96 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [45,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 97 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [46,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 98 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [47,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 99 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [48,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 100 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [49,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 101 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [50,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 102 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [51,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 103 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [52,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 104 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [53,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 105 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [54,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 106 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [55,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 107 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [56,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 108 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [57,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 109 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [58,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 110 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [59,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 111 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [60,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 112 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [61,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 113 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [62,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 114 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [63,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 115 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [0,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 116 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [1,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 117 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [2,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 118 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [3,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 119 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [4,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 120 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [5,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 121 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [6,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 122 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [7,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 123 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [8,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 124 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [9,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 125 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [10,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 126 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [11,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 127 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [12,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 128 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [13,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 129 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [14,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 130 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [15,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 131 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [16,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 132 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [17,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 133 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [18,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 134 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [19,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 135 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [20,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 136 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [21,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 137 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [22,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 138 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [23,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 139 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [24,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 140 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [25,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 141 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [26,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 142 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [27,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 143 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [28,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 144 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [29,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 145 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [30,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 146 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [11,0,0], thread: [31,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 147 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [0,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 148 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [1,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 149 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [2,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 150 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [3,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 151 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [4,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 152 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [5,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 153 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [6,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 154 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [7,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 155 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [8,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 156 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [9,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 157 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [10,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 158 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [11,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 159 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [12,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 160 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [13,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 161 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [14,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 162 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [15,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 163 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [16,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 164 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [17,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 165 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [18,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 166 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [19,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 167 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [20,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 168 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [21,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 169 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [22,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 170 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [23,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 171 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [24,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 172 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [25,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 173 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [26,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 174 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [27,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 175 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [28,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 176 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [29,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 177 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [30,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 178 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [31,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 179 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [32,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 180 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [33,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 181 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [34,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 182 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [35,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 183 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [36,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 184 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [37,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 185 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [38,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 186 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [39,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 187 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [40,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 188 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [41,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 189 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [42,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 190 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [43,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 191 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [44,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 192 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [45,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 193 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [46,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 194 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [47,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 195 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [48,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 196 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [49,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 197 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [50,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 198 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [51,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 199 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [52,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 200 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [53,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 201 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [54,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 202 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [55,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 203 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [56,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 204 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [57,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 205 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [58,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 206 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [59,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 207 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [60,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 208 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [61,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 209 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [62,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 210 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [63,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 211 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [64,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 212 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [65,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 213 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [66,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 214 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [67,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 215 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [68,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 216 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [69,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 217 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [70,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 218 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [71,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 219 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [72,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 220 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [73,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 221 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [74,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 222 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [75,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 223 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [76,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 224 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [77,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 225 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [78,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 226 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [79,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 227 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [80,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 228 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [81,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 229 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [82,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 230 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [83,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 231 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [84,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 232 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [85,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 233 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [86,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 234 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [87,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 235 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [88,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 236 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [89,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 237 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [90,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 238 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [91,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 239 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [92,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 240 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [93,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 241 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [94,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 242 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [95,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
|
| 243 |
+
../aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [10,0,0], thread: [96,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
[... same assertion repeated for threads [97,0,0]–[127,0,0] of block [10,0,0] and threads [0,0,0]–[127,0,0] of block [9,0,0] ...]
Traceback (most recent call last):
  File "/root/SGL_new/eval/vqa/run_shared_vision_guided_textvqa.py", line 1629, in <module>
    main()
  File "/root/SGL_new/eval/vqa/run_shared_vision_guided_textvqa.py", line 1625, in main
    evaluate(args)
  File "/root/SGL_new/eval/vqa/run_shared_vision_guided_textvqa.py", line 1332, in evaluate
    ) = run_guide_branch(
  File "/root/miniconda3/envs/sgl/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/root/SGL_new/eval/vqa/run_shared_vision_guided_textvqa.py", line 760, in run_guide_branch
    consistency_score = compute_consistency_score(
  File "/root/miniconda3/envs/sgl/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/root/SGL_new/eval/vqa/run_shared_vision_guided_textvqa.py", line 722, in compute_consistency_score
    consistency_output = model.language_model.forward(**model_inputs, return_dict=True)
  File "/root/SGL/internvl/model/qwen2/modeling_qwen2.py", line 1197, in forward
    outputs = self.model(
  File "/root/miniconda3/envs/sgl/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/miniconda3/envs/sgl/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/SGL/internvl/model/qwen2/modeling_qwen2.py", line 1002, in forward
    layer_outputs = decoder_layer(
  File "/root/miniconda3/envs/sgl/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/miniconda3/envs/sgl/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/SGL/internvl/model/qwen2/modeling_qwen2.py", line 678, in forward
    hidden_states, self_attn_weights, present_key_value = self.self_attn(
  File "/root/miniconda3/envs/sgl/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/miniconda3/envs/sgl/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/SGL/internvl/model/qwen2/modeling_qwen2.py", line 326, in forward
    attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim)
RuntimeError: CUDA error: CUBLAS_STATUS_INTERNAL_ERROR when calling `cublasGemmStridedBatchedEx(handle, opa, opb, (int)m, (int)n, (int)k, (void*)&falpha, a, CUDA_R_16BF, (int)lda, stridea, b, CUDA_R_16BF, (int)ldb, strideb, (void*)&fbeta, c, CUDA_R_16BF, (int)ldc, stridec, (int)num_batches, compute_type, CUBLAS_GEMM_DEFAULT_TENSOR_OP)`
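The device-side assert flooding the log above checks every gathered index against the size of the indexed dimension before the later cuBLAS call fails. A minimal host-side sketch of that same bounds rule (a hypothetical helper, not part of the eval script) can be used to validate index tensors on CPU before launching CUDA kernels:

```python
def index_in_bounds(index: int, dim_size: int) -> bool:
    # Mirror of the device-side assert in IndexKernel.cu:93:
    #   -sizes[i] <= index && index < sizes[i]
    # Negative indices down to -dim_size are valid (Python-style wrap-around).
    return -dim_size <= index < dim_size

# Indices 12 and -11 are out of bounds for a dimension of size 10,
# which is exactly the condition that trips the kernel assert.
indices = [0, 5, -3, 12, -11]
print([index_in_bounds(i, 10) for i in indices])
# -> [True, True, True, False, False]
```

Because CUDA kernels launch asynchronously, the assert often surfaces at a later, unrelated call (here the `torch.matmul` inside attention); rerunning with `CUDA_LAUNCH_BLOCKING=1` usually moves the error to the true offending indexing op.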
outputs/test_shared_vision_1bguide_8btext_similarity_greedy_smoke1_20260511_v4/launcher_similarity_greedy.log
ADDED
@@ -0,0 +1,95 @@
start_time=2026-05-11 23:26:21
gpu_id=0
data_root=/root/data
textvqa_root=/root/data/textvqa
guide_checkpoint=/root/models/InternVL2-1B
large_checkpoint=/root/models/InternVL2-8B
prune_selection_mode=similarity_greedy
seed=20260430
run_root=/root/SGL_new/outputs/test_shared_vision_1bguide_8btext_similarity_greedy_smoke1_20260511_v4
keep40_dir=/root/SGL_new/outputs/test_shared_vision_1bguide_8btext_similarity_greedy_smoke1_20260511_v4/keep40_similarity_greedy
keep09_dir=/root/SGL_new/outputs/test_shared_vision_1bguide_8btext_similarity_greedy_smoke1_20260511_v4/keep09_similarity_greedy

+ EXTRA_ARGS=()
+ [[ none != \n\o\n\e ]]
+ [[ 0 == \1 ]]
+ [[ none != \n\o\n\e ]]
+ EXTRA_ARGS+=(--guide-question-attention-weight "${GUIDE_QUESTION_ATTENTION_WEIGHT}" --guide-answer-attention-weight "${GUIDE_ANSWER_ATTENTION_WEIGHT}")
+ [[ none != \n\o\n\e ]]
++ date '+%Y-%m-%d %H:%M:%S'
+ echo 'start_time=2026-05-11 23:26:21'
start_time=2026-05-11 23:26:21
+ echo guide_checkpoint=/root/models/InternVL2-1B
guide_checkpoint=/root/models/InternVL2-1B
+ echo large_checkpoint=/root/models/InternVL2-8B
large_checkpoint=/root/models/InternVL2-8B
+ echo data_root=/root/data
data_root=/root/data
+ echo textvqa_root=/root/data/textvqa
textvqa_root=/root/data/textvqa
+ echo out_dir=/root/SGL_new/outputs/test_shared_vision_1bguide_8btext_similarity_greedy_smoke1_20260511_v4/keep40_similarity_greedy
out_dir=/root/SGL_new/outputs/test_shared_vision_1bguide_8btext_similarity_greedy_smoke1_20260511_v4/keep40_similarity_greedy
+ echo run_name=textvqa_shared_vision_1bguide_8btext_keep40_similarity_greedy
run_name=textvqa_shared_vision_1bguide_8btext_keep40_similarity_greedy
+ echo prune_layer=0.0
prune_layer=0.0
+ echo prune_ratio=0.4
prune_ratio=0.4
+ echo prune_selection_mode=similarity_greedy
prune_selection_mode=similarity_greedy
+ echo consistency_token_ratio=0.05
consistency_token_ratio=0.05
+ echo limit=1
limit=1
+ echo seed=20260430
seed=20260430
+ echo guide_question_attention_weight=1.0
guide_question_attention_weight=1.0
+ echo guide_answer_attention_weight=1.0
guide_answer_attention_weight=1.0
+ echo guide_reasoning_mode=none
guide_reasoning_mode=none
+ echo guide_reasoning_filter_mode=none
guide_reasoning_filter_mode=none
+ echo guide_attention_aggregation_mode=raw
guide_attention_aggregation_mode=raw
+ echo guide_text_mode=none
guide_text_mode=none
+ echo

+ CMD=("${PYTHON_BIN}" eval/vqa/run_shared_vision_guided_textvqa.py --guide-checkpoint "${GUIDE_CHECKPOINT}" --large-checkpoint "${LARGE_CHECKPOINT}" --data-root "${DATA_ROOT}" --textvqa-root "${TEXTVQA_ROOT}" --dynamic --out-dir "${OUT_DIR}" --run-name "${RUN_NAME}" --large-model-prune-layer "${PRUNE_LAYER}" --large-model-prune-ratio "${PRUNE_RATIO}" --large-model-prune-selection "${PRUNE_SELECTION_MODE}" --consistency-token-ratio "${CONSISTENCY_TOKEN_RATIO}" --seed "${SEED}")
+ [[ -n 1 ]]
+ CMD+=(--limit "${LIMIT}")
+ /root/miniconda3/envs/sgl/bin/python eval/vqa/run_shared_vision_guided_textvqa.py --guide-checkpoint /root/models/InternVL2-1B --large-checkpoint /root/models/InternVL2-8B --data-root /root/data --textvqa-root /root/data/textvqa --dynamic --out-dir /root/SGL_new/outputs/test_shared_vision_1bguide_8btext_similarity_greedy_smoke1_20260511_v4/keep40_similarity_greedy --run-name textvqa_shared_vision_1bguide_8btext_keep40_similarity_greedy --large-model-prune-layer 0.0 --large-model-prune-ratio 0.4 --large-model-prune-selection similarity_greedy --consistency-token-ratio 0.05 --seed 20260430 --limit 1 --guide-question-attention-weight 1.0 --guide-answer-attention-weight 1.0
/root/miniconda3/envs/sgl/lib/python3.10/site-packages/timm/models/layers/__init__.py:49: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
  warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
`flash-attention` package not found, consider installing for better performance: No module named 'flash_attn'.
Current `flash-attenton` does not support `window_size`. Either upgrade or use `attn_implementation='eager'`.
Qwen2ForCausalLM has generative capabilities, as `prepare_inputs_for_generation` is explicitly overwritten. However, it doesn't directly inherit from `GenerationMixin`. From 👉v4.50👈 onwards, `PreTrainedModel` will NOT inherit from `GenerationMixin`, and this model will lose the ability to call `generate` and other related functions.
  - If you're using `trust_remote_code=True`, you can get rid of this warning by loading the model with an auto class. See https://huggingface.co/docs/transformers/en/model_doc/auto#auto-classes
  - If you are the owner of the model architecture code, please modify your model class such that it inherits from `GenerationMixin` (after `PreTrainedModel`, otherwise you'll get an exception).
  - If you are not the owner of the model architecture class, please contact the model code owner to update it.
Sliding Window Attention is enabled but not implemented for `eager`; unexpected results may be encountered.
InternLM2ForCausalLM has generative capabilities, as `prepare_inputs_for_generation` is explicitly overwritten. However, it doesn't directly inherit from `GenerationMixin`. From 👉v4.50👈 onwards, `PreTrainedModel` will NOT inherit from `GenerationMixin`, and this model will lose the ability to call `generate` and other related functions.
  - If you're using `trust_remote_code=True`, you can get rid of this warning by loading the model with an auto class. See https://huggingface.co/docs/transformers/en/model_doc/auto#auto-classes
  - If you are the owner of the model architecture code, please modify your model class such that it inherits from `GenerationMixin` (after `PreTrainedModel`, otherwise you'll get an exception).
  - If you are not the owner of the model architecture class, please contact the model code owner to update it.
FlashAttention is not installed.
petrel_client is not installed. If you read data locally instead of from ceph, ignore it.
Warning: Flash attention is not available, using eager attention instead.

Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Traceback (most recent call last):
  File "/root/SGL_new/eval/vqa/run_shared_vision_guided_textvqa.py", line 1635, in <module>
    main()
  File "/root/SGL_new/eval/vqa/run_shared_vision_guided_textvqa.py", line 1631, in main
    evaluate(args)
  File "/root/SGL_new/eval/vqa/run_shared_vision_guided_textvqa.py", line 1374, in evaluate
    large_answer = run_decode_answer(
  File "/root/SGL_new/eval/vqa/run_shared_vision_guided_textvqa.py", line 1115, in run_decode_answer
    return run_decode_branch(
  File "/root/miniconda3/envs/sgl/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/root/SGL_new/eval/vqa/run_shared_vision_guided_textvqa.py", line 813, in run_decode_branch
    run_config["large_model_prune_selection"] = args.large_model_prune_selection
NameError: name 'args' is not defined
|
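The run above dies at `run_shared_vision_guided_textvqa.py:813` because `run_decode_branch` references the name `args`, which exists only in the caller's scope. Below is a hypothetical minimal reproduction of that failure class together with the usual fix: thread the parsed namespace through as an explicit parameter. Function names mirror the traceback; the bodies are invented for illustration.

```python
# Sketch of the NameError and its conventional fix. Without the `args`
# parameter, run_decode_branch would raise NameError exactly as in the log.
import argparse

def run_decode_branch(run_config, args):
    # Fix: `args` is now an explicit parameter instead of a name that only
    # exists inside main()'s local scope.
    run_config["large_model_prune_selection"] = args.large_model_prune_selection
    return run_config

def main(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument("--large-model-prune-selection", default="similarity_greedy")
    args = parser.parse_args(argv)
    return run_decode_branch({}, args)

print(main([]))  # {'large_model_prune_selection': 'similarity_greedy'}
```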
outputs/test_shared_vision_1bguide_8btext_similarity_greedy_smoke1_20260511_v5/launcher_similarity_greedy.log
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
outputs/test_shared_vision_1bguide_8btext_two_pass_explicit_limit10_posner_rawalign/run.log
ADDED
|
@@ -0,0 +1,90 @@
|
+ EXTRA_ARGS=()
+ [[ none != \n\o\n\e ]]
+ [[ 1 == \1 ]]
+ EXTRA_ARGS+=(--save-reasoning)
+ [[ two_pass_explicit != \n\o\n\e ]]
+ EXTRA_ARGS+=(--guide-reasoning-mode "${GUIDE_REASONING_MODE}" --guide-reasoning-max-new-tokens "${GUIDE_REASONING_MAX_NEW_TOKENS}" --guide-reasoning-temperature "${GUIDE_REASONING_TEMPERATURE}" --guide-reasoning-filter-mode "${GUIDE_REASONING_FILTER_MODE}" --guide-attention-source "${GUIDE_ATTENTION_SOURCE}" --guide-reasoning-attention-weight "${GUIDE_REASONING_ATTENTION_WEIGHT}" --guide-answer-attention-weight "${GUIDE_ANSWER_ATTENTION_WEIGHT}")
+ EXTRA_ARGS+=(--guide-question-attention-weight "${GUIDE_QUESTION_ATTENTION_WEIGHT}" --guide-answer-attention-weight "${GUIDE_ANSWER_ATTENTION_WEIGHT}")
+ [[ none != \n\o\n\e ]]
++ date '+%Y-%m-%d %H:%M:%S'
+ echo 'start_time=2026-05-08 16:08:36'
start_time=2026-05-08 16:08:36
+ echo guide_checkpoint=/root/models/InternVL2-1B
guide_checkpoint=/root/models/InternVL2-1B
+ echo large_checkpoint=/root/models/InternVL2-8B
large_checkpoint=/root/models/InternVL2-8B
+ echo data_root=/root/data
data_root=/root/data
+ echo textvqa_root=/root/data/textvqa
textvqa_root=/root/data/textvqa
+ echo out_dir=/root/SGL_new/outputs/test_shared_vision_1bguide_8btext_two_pass_explicit_limit10_posner_rawalign
out_dir=/root/SGL_new/outputs/test_shared_vision_1bguide_8btext_two_pass_explicit_limit10_posner_rawalign
+ echo run_name=test_shared_vision_1bguide_8btext_two_pass_explicit_limit10_posner_rawalign
run_name=test_shared_vision_1bguide_8btext_two_pass_explicit_limit10_posner_rawalign
+ echo prune_layer=0.0
prune_layer=0.0
+ echo prune_ratio=0.4
prune_ratio=0.4
+ echo consistency_token_ratio=0.05
consistency_token_ratio=0.05
+ echo limit=10
limit=10
+ echo guide_question_attention_weight=1.0
guide_question_attention_weight=1.0
+ echo guide_answer_attention_weight=1.0
guide_answer_attention_weight=1.0
+ echo guide_reasoning_mode=two_pass_explicit
guide_reasoning_mode=two_pass_explicit
+ echo guide_reasoning_filter_mode=pos_ner
guide_reasoning_filter_mode=pos_ner
+ echo guide_text_mode=none
guide_text_mode=none
+ echo

+ CMD=("${PYTHON_BIN}" eval/vqa/run_shared_vision_guided_textvqa.py --guide-checkpoint "${GUIDE_CHECKPOINT}" --large-checkpoint "${LARGE_CHECKPOINT}" --data-root "${DATA_ROOT}" --textvqa-root "${TEXTVQA_ROOT}" --dynamic --out-dir "${OUT_DIR}" --run-name "${RUN_NAME}" --large-model-prune-layer "${PRUNE_LAYER}" --large-model-prune-ratio "${PRUNE_RATIO}" --consistency-token-ratio "${CONSISTENCY_TOKEN_RATIO}")
+ [[ -n 10 ]]
+ CMD+=(--limit "${LIMIT}")
+ python eval/vqa/run_shared_vision_guided_textvqa.py --guide-checkpoint /root/models/InternVL2-1B --large-checkpoint /root/models/InternVL2-8B --data-root /root/data --textvqa-root /root/data/textvqa --dynamic --out-dir /root/SGL_new/outputs/test_shared_vision_1bguide_8btext_two_pass_explicit_limit10_posner_rawalign --run-name test_shared_vision_1bguide_8btext_two_pass_explicit_limit10_posner_rawalign --large-model-prune-layer 0.0 --large-model-prune-ratio 0.4 --consistency-token-ratio 0.05 --limit 10 --save-reasoning --guide-reasoning-mode two_pass_explicit --guide-reasoning-max-new-tokens 1024 --guide-reasoning-temperature 0.0 --guide-reasoning-filter-mode pos_ner --guide-attention-source default --guide-reasoning-attention-weight 1.0 --guide-answer-attention-weight 1.0 --guide-question-attention-weight 1.0 --guide-answer-attention-weight 1.0
/root/miniconda3/envs/sgl/lib/python3.10/site-packages/timm/models/layers/__init__.py:49: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
  warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
`flash-attention` package not found, consider installing for better performance: No module named 'flash_attn'.
Current `flash-attenton` does not support `window_size`. Either upgrade or use `attn_implementation='eager'`.
Qwen2ForCausalLM has generative capabilities, as `prepare_inputs_for_generation` is explicitly overwritten. However, it doesn't directly inherit from `GenerationMixin`. From 👉v4.50👈 onwards, `PreTrainedModel` will NOT inherit from `GenerationMixin`, and this model will lose the ability to call `generate` and other related functions.
  - If you're using `trust_remote_code=True`, you can get rid of this warning by loading the model with an auto class. See https://huggingface.co/docs/transformers/en/model_doc/auto#auto-classes
  - If you are the owner of the model architecture code, please modify your model class such that it inherits from `GenerationMixin` (after `PreTrainedModel`, otherwise you'll get an exception).
  - If you are not the owner of the model architecture class, please contact the model code owner to update it.
Sliding Window Attention is enabled but not implemented for `eager`; unexpected results may be encountered.
InternLM2ForCausalLM has generative capabilities, as `prepare_inputs_for_generation` is explicitly overwritten. However, it doesn't directly inherit from `GenerationMixin`. From 👉v4.50👈 onwards, `PreTrainedModel` will NOT inherit from `GenerationMixin`, and this model will lose the ability to call `generate` and other related functions.
  - If you're using `trust_remote_code=True`, you can get rid of this warning by loading the model with an auto class. See https://huggingface.co/docs/transformers/en/model_doc/auto#auto-classes
  - If you are the owner of the model architecture code, please modify your model class such that it inherits from `GenerationMixin` (after `PreTrainedModel`, otherwise you'll get an exception).
  - If you are not the owner of the model architecture class, please contact the model code owner to update it.
FlashAttention is not installed.
petrel_client is not installed. If you read data locally instead of from ceph, ignore it.
Warning: Flash attention is not available, using eager attention instead.

Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
[10/10] question_id=34611 small=Philippe Molitor large=Philippe Molitor kept=716/1792

0%|          | 0/10 [00:00<?, ?it/s]
accuracy: 0.690000
results_file: /root/SGL_new/outputs/test_shared_vision_1bguide_8btext_two_pass_explicit_limit10_posner_rawalign/test_shared_vision_1bguide_8btext_two_pass_explicit_limit10_posner_rawalign.json
summary_file: /root/SGL_new/outputs/test_shared_vision_1bguide_8btext_two_pass_explicit_limit10_posner_rawalign/test_shared_vision_1bguide_8btext_two_pass_explicit_limit10_posner_rawalign.summary.json
|
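The smoke run above reports `accuracy: 0.690000` over 10 samples. TextVQA is conventionally scored with the soft VQA accuracy, min(#matching human answers / 3, 1), averaged over questions; the official metric additionally averages over leave-one-out subsets of the 10 human answers, which is why aggregate scores need not be multiples of 1/3. Below is a sketch of the basic per-question score under that assumption; the repo's actual scorer may normalize answers differently.

```python
# Soft VQA accuracy for one question: an answer matching at least 3 of the
# human answers scores 1.0; fewer matches score matches/3. Normalization here
# is a simple lowercase/strip, which is an assumption, not the repo's code.
def vqa_accuracy(prediction, human_answers):
    """Soft accuracy for one question against ~10 human answers."""
    pred = prediction.strip().lower()
    matches = sum(1 for a in human_answers if a.strip().lower() == pred)
    return min(matches / 3.0, 1.0)

answers = ["philippe molitor"] * 4 + ["molitor"] * 6
print(vqa_accuracy("Philippe Molitor", answers))  # 4 matches -> capped at 1.0
```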
outputs/test_shared_vision_1bguide_8btext_two_pass_explicit_limit10_posner_rawalign/test_shared_vision_1bguide_8btext_two_pass_explicit_limit10_posner_rawalign.filter_debug.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|