2026-02-24 01:54:56,297 - INFO - Logging to: /data/shared/Qwen/experiments/swap_analysis/logs/nvila.log
2026-02-24 01:54:56,298 - INFO - === Loading & Creating Swap Pairs ===
2026-02-24 01:54:59,210 - INFO - Swap pair creation stats:
2026-02-24 01:54:59,210 - INFO -   left: 616/616
2026-02-24 01:54:59,210 - INFO -   right: 620/620
2026-02-24 01:54:59,211 - INFO -   above: 596/596
2026-02-24 01:54:59,211 - INFO -   under: 602/602
2026-02-24 01:54:59,211 - INFO -   far: 594/594
2026-02-24 01:54:59,211 - INFO -   close: 612/612
2026-02-24 01:54:59,211 - INFO - Total pairs: 3640
2026-02-24 01:54:59,400 - INFO - PyTorch version 2.3.0 available.
2026-02-24 01:54:59,833 - INFO - Loading HF dataset: FlagEval/EmbSpatial-Bench
2026-02-24 01:55:15,769 - INFO - Built bbox cache: 3640 entries (sample keys: ['mp3d_0', 'mp3d_1', 'mp3d_2', 'mp3d_3', 'mp3d_4'])
2026-02-24 01:55:15,797 - INFO - Matched 1206/1206 question_ids between TSV and HF dataset
2026-02-24 01:55:15,806 - INFO - Cross-group quads: 1039/1206 (ambiguous=112, no_bbox=55)
2026-02-24 01:55:15,806 - INFO - ============================================================
2026-02-24 01:55:15,807 - INFO - Processing nvila - 2m
2026-02-24 01:55:15,807 - INFO - Model path: /data/shared/Qwen/mydisk/output/DATA/NVILA-Lite-2B-DATA_SCALE_EXP_2M-20260205_003632
2026-02-24 01:55:15,807 - INFO - ============================================================
[2026-02-24 01:55:20,873] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect)
2026-02-24 01:55:29,489 - INFO - We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
2026-02-24 01:55:53,065 - INFO - Loaded NVILA from /data/shared/Qwen/mydisk/output/DATA/NVILA-Lite-2B-DATA_SCALE_EXP_2M-20260205_003632
2026-02-24 01:55:53,066 - INFO - Model has 28 layers. Extracting ALL.
2026-02-24 01:55:53,067 - INFO - --- Phase A: Extracting swap pair features ---
Swap pairs:   0%|          | 0/1200 [00:00