2026-02-13 22:47:40,745 - INFO - === Loading & Creating Swap Pairs ===
2026-02-13 22:47:43,750 - INFO - Swap pair creation stats:
2026-02-13 22:47:43,751 - INFO -   left: 616/616
2026-02-13 22:47:43,751 - INFO -   right: 620/620
2026-02-13 22:47:43,752 - INFO -   above: 596/596
2026-02-13 22:47:43,752 - INFO -   under: 602/602
2026-02-13 22:47:43,752 - INFO -   far: 594/594
2026-02-13 22:47:43,752 - INFO -   close: 612/612
2026-02-13 22:47:43,752 - INFO - Total pairs: 3640
2026-02-13 22:47:43,962 - INFO - PyTorch version 2.3.0 available.
2026-02-13 22:47:44,436 - INFO - Loading HF dataset: FlagEval/EmbSpatial-Bench
2026-02-13 22:48:00,290 - INFO - Built bbox cache: 3640 entries
2026-02-13 22:48:00,303 - INFO - Cross-group quads: 0/1206 (ambiguous=0, no_bbox=1206)
2026-02-13 22:48:00,304 - INFO - ============================================================
2026-02-13 22:48:00,304 - INFO - Processing nvila - 80k
2026-02-13 22:48:00,304 - INFO - Model path: /data/shared/Qwen/mydisk/output/DATA/NVILA-Lite-2B-DATA_SCALE_EXP_80K-20251108_180221
2026-02-13 22:48:00,304 - INFO - ============================================================
[2026-02-13 22:48:04,074] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect)
2026-02-13 22:48:13,461 - INFO - We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
2026-02-13 22:48:46,199 - INFO - Loaded NVILA from /data/shared/Qwen/mydisk/output/DATA/NVILA-Lite-2B-DATA_SCALE_EXP_80K-20251108_180221
2026-02-13 22:48:46,200 - INFO - Model has 28 layers. Extracting ALL.
2026-02-13 22:48:46,201 - INFO - --- Phase A: Extracting swap pair features ---
Swap pairs:   0%|          | 0/1200 [00:00