File size: 142,745 Bytes
a34effe
2026-03-10 17:54:50,378 - INFO - Logging to: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/logs/qwen_2m.log
2026-03-10 17:54:50,378 - INFO - 
============================================================
2026-03-10 17:54:50,378 - INFO - Processing qwen - 2m
2026-03-10 17:54:50,378 - INFO - Model path: /data/shared/Qwen/mydisk/output/Qwen/Qwen2.5-VL-3B-Instruct-data_scale_exp_2m-20260109_120517
2026-03-10 17:54:50,378 - INFO - ============================================================
2026-03-10 17:55:03,424 - ERROR - Failed qwen - 2m: Incorrect path_or_model_id: '/data/shared/Qwen/mydisk/output/Qwen/Qwen2.5-VL-3B-Instruct-data_scale_exp_2m-20260109_120517'. Please provide either the path to a local folder or the repo_id of a model on the Hub.
2026-03-10 17:55:03,426 - INFO - 
============================================================
2026-03-10 17:55:03,426 - INFO - === All scales complete ===
2026-03-10 17:55:03,426 - INFO - Results: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data
2026-03-10 17:55:03,426 - INFO - ============================================================
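The 17:54 run above failed because the model path was missing a directory component; the 20:07 retry below succeeds once the path includes `data_scale_exp/`. A minimal pre-flight check (a sketch, not part of the original pipeline) can catch a bad checkpoint path before `from_pretrained` is even called — it assumes a local HF-style checkpoint directory containing a `config.json`:

```shell
# Hypothetical pre-flight check: report whether a directory looks like a
# local Hugging Face checkpoint (exists and contains config.json).
check_ckpt() {
  if [ -d "$1" ] && [ -f "$1/config.json" ]; then
    echo "ok"
  else
    echo "missing"
  fi
}
```

Running this against each candidate path before launching the job would have turned the 13-second failed run into an immediate, readable error.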
2026-03-10 20:07:33,119 - INFO - Logging to: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/logs/qwen_2m.log
2026-03-10 20:07:33,119 - INFO - 
============================================================
2026-03-10 20:07:33,119 - INFO - Processing qwen - 2m
2026-03-10 20:07:33,119 - INFO - Model path: /data/shared/Qwen/mydisk/output/Qwen/data_scale_exp/Qwen2.5-VL-3B-Instruct-data_scale_exp_2m-20260109_120517
2026-03-10 20:07:33,119 - INFO - ============================================================
2026-03-10 20:08:24,622 - INFO - Loaded Qwen2.5-VL from /data/shared/Qwen/mydisk/output/Qwen/data_scale_exp/Qwen2.5-VL-3B-Instruct-data_scale_exp_2m-20260109_120517
2026-03-10 20:08:24,622 - INFO - Model has 36 layers. Extracting ALL.
2026-03-10 20:08:24,623 - INFO - 
--- Phase A: Extracting swap pair features ---
2026-03-10 20:08:24,650 - INFO - set VIDEO_TOTAL_PIXELS: 90316800
2026-03-10 20:08:27,752 - INFO -   #573    left   orig[O]="left" swap[O]="right"
2026-03-10 20:08:28,438 - INFO -   #119    left   orig[X]="right" swap[O]="right"
2026-03-10 20:08:29,205 - INFO -   #30     left   orig[O]="left" swap[O]="right"
2026-03-10 20:08:30,354 - INFO -   #308    left   orig[O]="left" swap[O]="right"
2026-03-10 20:08:31,135 - INFO -   #277    left   orig[O]="left" swap[O]="right"
2026-03-10 20:08:31,987 - INFO -   #257    left   orig[O]="left" swap[O]="right"
2026-03-10 20:08:32,612 - WARNING - Error on index 159: CUDA out of memory. Tried to allocate 16.10 GiB. GPU 0 has a total capacity of 79.14 GiB of which 9.35 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 8.66 GiB is allocated by PyTorch, and 577.02 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
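The OOM warning above (and the similar ones later in this run) ends with PyTorch's own suggestion for fragmentation-driven failures. Applying it is a one-line environment change before launching the process — the variable name and value come straight from the error message:

```shell
# Enable expandable segments in the CUDA caching allocator, as suggested
# by the OOM message, to reduce failures caused by fragmentation when
# "reserved but unallocated" memory is large.
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
```

This does not raise the GPU's capacity; it only lets the allocator grow existing segments instead of requesting new contiguous blocks, so very large single allocations (e.g. the 20.72 GiB attempts logged here) may still fail.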
2026-03-10 20:08:33,535 - INFO -   #115    left   orig[O]="left" swap[O]="right"
2026-03-10 20:08:34,329 - INFO -   #616    left   orig[O]="left" swap[O]="right"
2026-03-10 20:08:35,077 - INFO -   #498    left   orig[O]="left" swap[X]="left"
2026-03-10 20:08:35,925 - INFO -   #97     left   orig[O]="left" swap[O]="right"
2026-03-10 20:08:37,183 - INFO -   #538    left   orig[O]="left" swap[O]="right"
2026-03-10 20:08:38,107 - INFO -   #414    left   orig[O]="left" swap[X]="left"
2026-03-10 20:08:39,033 - INFO -   #36     left   orig[X]="right" swap[O]="right"
2026-03-10 20:08:40,222 - INFO -   #32     left   orig[O]="left" swap[O]="right"
2026-03-10 20:08:41,250 - INFO -   #103    left   orig[O]="left" swap[O]="right"
2026-03-10 20:08:42,050 - WARNING - Error on index 244: CUDA out of memory. Tried to allocate 20.72 GiB. GPU 0 has a total capacity of 79.14 GiB of which 9.00 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 8.99 GiB is allocated by PyTorch, and 571.32 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:08:42,849 - INFO -   #270    left   orig[O]="left" swap[O]="right"
2026-03-10 20:08:43,968 - INFO -   #476    left   orig[O]="left" swap[O]="right"
2026-03-10 20:08:45,223 - INFO -   #547    left   orig[O]="left" swap[O]="right"
2026-03-10 20:08:46,120 - INFO -   #638    left   orig[O]="left" swap[O]="right"
2026-03-10 20:08:46,977 - INFO -   #505    left   orig[O]="left" swap[O]="right"
2026-03-10 20:08:47,734 - INFO -   #227    left   orig[X]="right" swap[O]="right"
2026-03-10 20:08:48,431 - INFO -   #617    left   orig[O]="left" swap[O]="right"
2026-03-10 20:08:49,304 - INFO -   #410    left   orig[O]="left" swap[O]="right"
2026-03-10 20:08:50,144 - INFO -   #256    left   orig[O]="left" swap[O]="right"
2026-03-10 20:08:50,827 - INFO -   #428    left   orig[O]="left" swap[O]="right"
2026-03-10 20:08:51,532 - INFO -   #537    left   orig[O]="left" swap[O]="right"
2026-03-10 20:08:52,483 - INFO -   #316    left   orig[O]="left" swap[O]="right"
2026-03-10 20:08:56,522 - INFO -   #9      left   orig[O]="left" swap[O]="right"
2026-03-10 20:08:57,404 - INFO -   #187    left   orig[O]="left" swap[O]="right"
2026-03-10 20:08:58,604 - INFO -   #604    left   orig[O]="left" swap[O]="right"
2026-03-10 20:08:59,354 - INFO -   #358    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:00,416 - INFO -   #642    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:01,178 - INFO -   #185    left   orig[X]="right" swap[O]="right"
2026-03-10 20:09:02,232 - INFO -   #590    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:02,993 - INFO -   #354    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:03,585 - INFO -   #621    left   orig[X]="right" swap[O]="right"
2026-03-10 20:09:04,572 - INFO -   #595    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:05,793 - INFO -   #383    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:06,671 - INFO -   #104    left   orig[O]="left" swap[X]="left"
2026-03-10 20:09:07,510 - INFO -   #371    left   orig[X]="right" swap[O]="right"
2026-03-10 20:09:08,751 - INFO -   #367    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:09,580 - WARNING - Error on index 285: CUDA out of memory. Tried to allocate 15.38 GiB. GPU 0 has a total capacity of 79.14 GiB of which 10.04 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 8.61 GiB is allocated by PyTorch, and 438.33 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:09:10,341 - INFO -   #56     left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:11,197 - INFO -   #433    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:12,106 - INFO -   #139    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:13,022 - INFO -   #382    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:13,759 - INFO -   #74     left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:14,611 - INFO -   #324    left   orig[O]="left" swap[X]="left"
2026-03-10 20:09:15,514 - INFO -   #372    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:16,349 - INFO -   #215    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:17,022 - INFO -   #67     left   orig[O]="left" swap[O]="right"  [50/738]
2026-03-10 20:09:17,817 - INFO -   #496    left   orig[X]="right" swap[O]="right"
2026-03-10 20:09:18,437 - WARNING - Error on index 268: CUDA out of memory. Tried to allocate 15.38 GiB. GPU 0 has a total capacity of 79.14 GiB of which 9.55 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 8.61 GiB is allocated by PyTorch, and 930.28 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:09:19,163 - INFO -   #322    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:20,011 - INFO -   #466    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:20,739 - INFO -   #47     left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:21,630 - INFO -   #420    left   orig[X]="right" swap[O]="right"
2026-03-10 20:09:22,531 - INFO -   #122    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:23,240 - INFO -   #422    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:24,029 - INFO -   #57     left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:24,879 - INFO -   #211    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:25,633 - INFO -   #630    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:26,379 - INFO -   #474    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:27,571 - INFO -   #346    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:28,251 - INFO -   #409    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:29,063 - INFO -   #208    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:30,325 - INFO -   #487    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:30,939 - INFO -   #209    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:31,662 - INFO -   #204    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:32,386 - INFO -   #512    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:33,455 - INFO -   #353    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:34,284 - INFO -   #148    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:35,241 - INFO -   #368    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:36,061 - INFO -   #550    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:36,917 - INFO -   #347    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:37,738 - INFO -   #45     left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:38,475 - INFO -   #336    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:39,600 - INFO -   #494    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:40,387 - INFO -   #87     left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:41,382 - INFO -   #286    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:42,059 - INFO -   #374    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:43,021 - INFO -   #491    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:43,824 - INFO -   #554    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:44,913 - INFO -   #589    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:46,149 - INFO -   #448    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:46,906 - INFO -   #459    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:47,563 - INFO -   #400    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:48,309 - INFO -   #499    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:49,303 - INFO -   #535    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:50,378 - INFO -   #640    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:51,431 - INFO -   #562    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:52,278 - INFO -   #189    left   orig[X]="right" swap[X]="left"
2026-03-10 20:09:53,322 - INFO -   #597    left   orig[X]="right" swap[O]="right"
2026-03-10 20:09:53,836 - INFO -   #458    left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:56,653 - INFO -   #18     left   orig[O]="left" swap[O]="right"
2026-03-10 20:09:57,381 - INFO -   #540    left   orig[O]="left" swap[O]="right"
2026-03-10 20:10:00,221 - INFO -   #230    left   orig[O]="left" swap[O]="right"
2026-03-10 20:10:00,975 - INFO -   #381    left   orig[O]="left" swap[O]="right"
2026-03-10 20:10:01,808 - INFO -   #599    left   orig[O]="left" swap[O]="right"
2026-03-10 20:10:02,638 - INFO -   #117    left   orig[X]="right" swap[O]="right"
2026-03-10 20:10:03,493 - INFO -   #318    left   orig[X]="right" swap[O]="right"
2026-03-10 20:10:04,204 - INFO -   #453    left   orig[O]="left" swap[O]="right"  [100/738]
2026-03-10 20:10:05,998 - INFO -   #376    left   orig[O]="left" swap[O]="right"
2026-03-10 20:10:07,177 - INFO -   #278    left   orig[O]="left" swap[O]="right"
2026-03-10 20:10:08,592 - INFO -   #569    left   orig[O]="left" swap[O]="right"
2026-03-10 20:10:09,823 - INFO -   #436    left   orig[O]="left" swap[O]="right"
2026-03-10 20:10:10,652 - INFO -   #69     left   orig[O]="left" swap[O]="right"
2026-03-10 20:10:11,467 - WARNING - Error on index 144: CUDA out of memory. Tried to allocate 15.38 GiB. GPU 0 has a total capacity of 79.14 GiB of which 10.03 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 8.61 GiB is allocated by PyTorch, and 438.33 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:10:12,710 - INFO -   #478    left   orig[O]="left" swap[O]="right"
2026-03-10 20:10:13,806 - INFO -   #387    left   orig[O]="left" swap[O]="right"
2026-03-10 20:10:15,230 - INFO -   #395    left   orig[O]="left" swap[O]="right"
2026-03-10 20:10:16,411 - INFO -   #319    left   orig[O]="left" swap[O]="right"
2026-03-10 20:10:17,499 - INFO -   #241    left   orig[O]="left" swap[O]="right"
2026-03-10 20:10:18,336 - INFO -   #350    left   orig[O]="left" swap[O]="right"
2026-03-10 20:10:19,621 - INFO -   #429    left   orig[O]="left" swap[O]="right"
2026-03-10 20:10:20,447 - INFO -   #500    left   orig[O]="left" swap[O]="right"
2026-03-10 20:10:21,273 - INFO -   #345    left   orig[O]="left" swap[O]="right"
2026-03-10 20:10:22,301 - INFO -   #403    left   orig[O]="left" swap[O]="right"
2026-03-10 20:10:22,725 - WARNING - Error on index 25: CUDA out of memory. Tried to allocate 5.73 GiB. GPU 0 has a total capacity of 79.14 GiB of which 5.17 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 13.54 GiB is allocated by PyTorch, and 333.43 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:10:23,763 - INFO -   #488    left   orig[O]="left" swap[O]="right"
2026-03-10 20:10:26,984 - INFO -   #16     left   orig[O]="left" swap[O]="right"
2026-03-10 20:10:27,785 - INFO -   #95     right  orig[O]="right" swap[O]="left"
2026-03-10 20:10:28,563 - INFO -   #135    right  orig[O]="right" swap[O]="left"
2026-03-10 20:10:29,364 - INFO -   #557    right  orig[O]="right" swap[O]="left"
2026-03-10 20:10:30,190 - INFO -   #137    right  orig[O]="right" swap[O]="left"
2026-03-10 20:10:30,819 - INFO -   #603    right  orig[O]="right" swap[O]="left"
2026-03-10 20:10:31,560 - INFO -   #334    right  orig[O]="right" swap[O]="left"
2026-03-10 20:10:32,212 - INFO -   #526    right  orig[O]="right" swap[O]="left"
2026-03-10 20:10:33,043 - INFO -   #43     right  orig[O]="right" swap[O]="left"
2026-03-10 20:10:33,866 - INFO -   #312    right  orig[X]="left" swap[O]="left"
2026-03-10 20:10:34,517 - INFO -   #304    right  orig[O]="right" swap[O]="left"
2026-03-10 20:10:35,350 - INFO -   #641    right  orig[O]="right" swap[O]="left"
2026-03-10 20:10:36,586 - INFO -   #379    right  orig[O]="right" swap[O]="left"
2026-03-10 20:10:37,322 - INFO -   #457    right  orig[O]="right" swap[O]="left"
2026-03-10 20:10:38,054 - INFO -   #212    right  orig[O]="right" swap[O]="left"
2026-03-10 20:10:39,269 - INFO -   #475    right  orig[O]="right" swap[O]="left"
2026-03-10 20:10:39,913 - INFO -   #11     right  orig[O]="right" swap[X]="right"
2026-03-10 20:10:40,620 - INFO -   #644    right  orig[O]="right" swap[O]="left"
2026-03-10 20:10:41,445 - INFO -   #96     right  orig[X]="left" swap[O]="left"
2026-03-10 20:10:42,165 - INFO -   #623    right  orig[O]="right" swap[O]="left"
2026-03-10 20:10:42,887 - INFO -   #463    right  orig[O]="right" swap[O]="left"
2026-03-10 20:10:43,717 - INFO -   #220    right  orig[O]="right" swap[O]="left"
2026-03-10 20:10:44,771 - INFO -   #579    right  orig[O]="right" swap[O]="left"
2026-03-10 20:10:45,579 - INFO -   #281    right  orig[O]="right" swap[X]="right"
2026-03-10 20:10:46,405 - INFO -   #649    right  orig[O]="right" swap[O]="left"
2026-03-10 20:10:46,978 - INFO -   #236    right  orig[O]="right" swap[O]="left"
2026-03-10 20:10:47,743 - INFO -   #341    right  orig[O]="right" swap[O]="left"
2026-03-10 20:10:48,606 - INFO -   #645    right  orig[O]="right" swap[O]="left"
2026-03-10 20:10:49,844 - INFO -   #375    right  orig[O]="right" swap[O]="left"
2026-03-10 20:10:50,583 - INFO -   #2      right  orig[O]="right" swap[O]="left"
2026-03-10 20:10:51,188 - INFO -   #219    right  orig[O]="right" swap[O]="left"
2026-03-10 20:10:51,876 - INFO -   #434    right  orig[O]="right" swap[O]="left"
2026-03-10 20:10:52,576 - INFO -   #152    right  orig[O]="right" swap[O]="left"
2026-03-10 20:10:53,226 - INFO -   #435    right  orig[O]="right" swap[O]="left"  [150/738]
2026-03-10 20:10:53,950 - INFO -   #92     right  orig[O]="right" swap[X]="right"
2026-03-10 20:10:54,699 - INFO -   #647    right  orig[O]="right" swap[O]="left"
2026-03-10 20:10:55,545 - INFO -   #243    right  orig[O]="right" swap[O]="left"
2026-03-10 20:10:56,304 - INFO -   #574    right  orig[O]="right" swap[O]="left"
2026-03-10 20:10:57,075 - INFO -   #584    right  orig[O]="right" swap[X]="right"
2026-03-10 20:10:58,116 - INFO -   #533    right  orig[O]="right" swap[O]="left"
2026-03-10 20:10:58,808 - INFO -   #163    right  orig[O]="right" swap[O]="left"
2026-03-10 20:10:59,789 - INFO -   #648    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:01,046 - INFO -   #297    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:01,690 - INFO -   #138    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:02,529 - INFO -   #465    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:03,366 - INFO -   #627    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:04,185 - INFO -   #596    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:05,224 - INFO -   #529    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:06,077 - INFO -   #263    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:06,908 - INFO -   #404    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:09,676 - INFO -   #15     right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:10,240 - INFO -   #609    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:10,970 - INFO -   #293    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:11,780 - INFO -   #252    right  orig[O]="right" swap[X]="right"
2026-03-10 20:11:12,493 - INFO -   #195    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:13,211 - INFO -   #39     right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:13,854 - INFO -   #506    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:14,530 - INFO -   #54     right  orig[X]="left" swap[X]="right"
2026-03-10 20:11:15,278 - INFO -   #55     right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:15,965 - INFO -   #401    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:16,849 - INFO -   #51     right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:17,665 - INFO -   #460    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:18,489 - INFO -   #109    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:19,181 - INFO -   #534    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:20,011 - INFO -   #391    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:20,656 - WARNING - Error on index 141: CUDA out of memory. Tried to allocate 20.72 GiB. GPU 0 has a total capacity of 79.14 GiB of which 5.21 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 8.99 GiB is allocated by PyTorch, and 4.88 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:11:21,299 - INFO -   #594    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:21,970 - INFO -   #643    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:22,754 - INFO -   #175    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:23,401 - INFO -   #167    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:24,267 - INFO -   #254    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:24,879 - INFO -   #326    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:25,705 - INFO -   #555    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:26,341 - INFO -   #344    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:27,153 - INFO -   #373    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:27,744 - WARNING - Error on index 214: CUDA out of memory. Tried to allocate 15.38 GiB. GPU 0 has a total capacity of 79.14 GiB of which 5.21 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 8.61 GiB is allocated by PyTorch, and 5.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:11:28,470 - INFO -   #183    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:29,169 - INFO -   #41     right  orig[O]="right" swap[X]="right"
2026-03-10 20:11:29,879 - INFO -   #105    right  orig[O]="right" swap[X]="right"
2026-03-10 20:11:30,533 - INFO -   #518    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:31,415 - INFO -   #28     right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:32,153 - INFO -   #150    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:32,913 - INFO -   #624    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:33,933 - INFO -   #605    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:35,093 - INFO -   #224    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:36,040 - INFO -   #620    right  orig[O]="right" swap[O]="left"  [200/738]
2026-03-10 20:11:36,858 - INFO -   #377    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:38,043 - INFO -   #469    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:39,834 - INFO -   #530    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:40,675 - INFO -   #33     right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:41,365 - INFO -   #289    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:47,366 - INFO -   #259    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:48,221 - INFO -   #27     right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:48,850 - INFO -   #406    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:49,835 - INFO -   #380    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:50,662 - INFO -   #520    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:51,701 - INFO -   #560    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:52,505 - INFO -   #619    right  orig[X]="left" swap[O]="left"
2026-03-10 20:11:53,125 - INFO -   #213    right  orig[X]="left" swap[X]="right"
2026-03-10 20:11:53,821 - INFO -   #99     right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:54,642 - INFO -   #118    right  orig[O]="right" swap[X]="right"
2026-03-10 20:11:55,439 - INFO -   #274    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:56,082 - INFO -   #202    right  orig[X]="left" swap[O]="left"
2026-03-10 20:11:56,839 - INFO -   #582    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:57,680 - INFO -   #221    right  orig[X]="left" swap[O]="left"
2026-03-10 20:11:58,388 - INFO -   #639    right  orig[O]="right" swap[O]="left"
2026-03-10 20:11:59,094 - INFO -   #511    right  orig[X]="left" swap[X]="right"
2026-03-10 20:11:59,797 - INFO -   #228    right  orig[O]="right" swap[O]="left"
2026-03-10 20:12:00,524 - INFO -   #294    right  orig[O]="right" swap[O]="left"
2026-03-10 20:12:01,236 - INFO -   #193    right  orig[O]="right" swap[O]="left"
2026-03-10 20:12:02,725 - INFO -   #394    right  orig[O]="right" swap[O]="left"
2026-03-10 20:12:03,512 - INFO -   #292    right  orig[O]="right" swap[O]="left"
2026-03-10 20:12:04,382 - INFO -   #170    right  orig[O]="right" swap[O]="left"
2026-03-10 20:12:05,035 - INFO -   #68     right  orig[O]="right" swap[O]="left"
2026-03-10 20:12:06,193 - INFO -   #37     right  orig[O]="right" swap[O]="left"
2026-03-10 20:12:07,139 - INFO -   #276    right  orig[O]="right" swap[O]="left"
2026-03-10 20:12:08,392 - INFO -   #272    right  orig[O]="right" swap[O]="left"
2026-03-10 20:12:09,258 - INFO -   #176    right  orig[O]="right" swap[X]="right"
2026-03-10 20:12:11,470 - INFO -   #587    right  orig[O]="right" swap[O]="left"
2026-03-10 20:12:12,755 - INFO -   #447    right  orig[O]="right" swap[O]="left"
2026-03-10 20:12:19,124 - INFO -   #283    right  orig[O]="right" swap[X]="right"
2026-03-10 20:12:20,499 - INFO -   #192    right  orig[O]="right" swap[X]="right"
2026-03-10 20:12:21,831 - INFO -   #24     right  orig[O]="right" swap[O]="left"
2026-03-10 20:12:23,174 - INFO -   #313    right  orig[O]="right" swap[O]="left"
2026-03-10 20:12:24,124 - WARNING - Error on index 60: CUDA out of memory. Tried to allocate 12.16 GiB. GPU 0 has a total capacity of 79.14 GiB of which 1.90 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 20.46 GiB is allocated by PyTorch, and 1.92 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:12:25,873 - INFO -   #426    above  orig[O]="above" swap[O]="below"
2026-03-10 20:12:27,272 - INFO -   #335    above  orig[O]="above" swap[O]="below"
2026-03-10 20:12:27,766 - WARNING - Error on index 112: CUDA out of memory. Tried to allocate 5.73 GiB. GPU 0 has a total capacity of 79.14 GiB of which 1.90 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 20.70 GiB is allocated by PyTorch, and 1.67 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:12:29,054 - INFO -   #232    above  orig[O]="above" swap[O]="below"
2026-03-10 20:12:29,854 - INFO -   #179    above  orig[O]="above" swap[O]="below"
2026-03-10 20:12:30,641 - INFO -   #172    above  orig[O]="above" swap[O]="below"
2026-03-10 20:12:31,446 - INFO -   #473    above  orig[O]="above" swap[O]="below"
2026-03-10 20:12:32,125 - INFO -   #146    above  orig[O]="above" swap[O]="below"
2026-03-10 20:12:32,973 - INFO -   #444    above  orig[O]="above" swap[O]="below"
2026-03-10 20:12:33,615 - INFO -   #169    above  orig[O]="above" swap[O]="below"
2026-03-10 20:12:34,464 - INFO -   #258    above  orig[O]="above" swap[X]="above"
2026-03-10 20:12:35,283 - INFO -   #507    above  orig[O]="above" swap[X]="above"
2026-03-10 20:12:36,008 - INFO -   #601    above  orig[O]="above" swap[O]="below"  [250/738]
2026-03-10 20:12:36,604 - INFO -   #82     above  orig[O]="above" swap[X]="above"
2026-03-10 20:12:37,673 - INFO -   #468    above  orig[O]="above" swap[O]="below"
2026-03-10 20:12:38,343 - INFO -   #106    above  orig[O]="above" swap[O]="below"
2026-03-10 20:12:39,105 - INFO -   #21     above  orig[O]="above" swap[O]="below"
2026-03-10 20:12:39,905 - INFO -   #321    above  orig[O]="above" swap[X]="above"
2026-03-10 20:12:40,834 - WARNING - Error on index 253: CUDA out of memory. Tried to allocate 15.38 GiB. GPU 0 has a total capacity of 79.14 GiB of which 14.06 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 8.61 GiB is allocated by PyTorch, and 1.61 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:12:41,571 - INFO -   #443    above  orig[O]="above" swap[X]="above"
2026-03-10 20:12:42,268 - INFO -   #1      above  orig[O]="above" swap[O]="below"
2026-03-10 20:12:42,842 - INFO -   #44     above  orig[O]="above" swap[O]="below"
2026-03-10 20:12:43,553 - INFO -   #561    above  orig[O]="above" swap[O]="below"
2026-03-10 20:12:44,388 - INFO -   #392    above  orig[X]="below" swap[X]="above"
2026-03-10 20:12:45,418 - INFO -   #578    above  orig[O]="above" swap[X]="above"
2026-03-10 20:12:46,076 - INFO -   #121    above  orig[O]="above" swap[O]="below"
2026-03-10 20:12:46,799 - INFO -   #84     above  orig[O]="above" swap[O]="below"
2026-03-10 20:12:47,484 - INFO -   #190    above  orig[O]="above" swap[X]="above"
2026-03-10 20:12:48,197 - INFO -   #231    above  orig[O]="above" swap[X]="above"
2026-03-10 20:12:48,829 - INFO -   #229    above  orig[O]="above" swap[O]="below"
2026-03-10 20:12:49,461 - INFO -   #607    above  orig[O]="above" swap[O]="below"
2026-03-10 20:12:50,266 - INFO -   #455    above  orig[O]="above" swap[X]="above"
2026-03-10 20:12:50,847 - INFO -   #182    above  orig[O]="above" swap[O]="below"
2026-03-10 20:12:51,400 - INFO -   #23     above  orig[O]="above" swap[O]="below"
2026-03-10 20:12:52,102 - INFO -   #483    above  orig[O]="above" swap[X]="above"
2026-03-10 20:12:52,866 - INFO -   #593    above  orig[O]="above" swap[O]="below"
2026-03-10 20:12:53,444 - INFO -   #0      above  orig[O]="above" swap[O]="below"
2026-03-10 20:12:54,247 - INFO -   #598    above  orig[O]="above" swap[O]="below"
2026-03-10 20:12:54,656 - WARNING - Error on index 133: CUDA out of memory. Tried to allocate 5.73 GiB. GPU 0 has a total capacity of 79.14 GiB of which 2.59 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 20.70 GiB is allocated by PyTorch, and 1000.02 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:12:55,350 - INFO -   #416    above  orig[O]="above" swap[O]="below"
2026-03-10 20:12:56,572 - INFO -   #629    above  orig[O]="above" swap[O]="below"
2026-03-10 20:12:59,803 - INFO -   #218    above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:00,869 - INFO -   #147    above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:01,787 - INFO -   #199    above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:02,819 - INFO -   #356    above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:03,900 - INFO -   #386    above  orig[O]="above" swap[X]="above"
2026-03-10 20:13:05,360 - INFO -   #566    above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:06,615 - INFO -   #325    above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:07,359 - INFO -   #363    above  orig[O]="above" swap[X]="above"
2026-03-10 20:13:08,146 - INFO -   #586    above  orig[O]="above" swap[X]="above"
2026-03-10 20:13:09,088 - INFO -   #549    above  orig[O]="above" swap[X]="above"
2026-03-10 20:13:10,048 - INFO -   #102    above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:11,069 - INFO -   #149    above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:11,684 - INFO -   #467    above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:12,377 - INFO -   #452    above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:13,176 - INFO -   #264    above  orig[O]="above" swap[X]="above"
2026-03-10 20:13:14,000 - INFO -   #527    above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:15,225 - INFO -   #351    above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:15,781 - INFO -   #158    above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:16,549 - INFO -   #328    above  orig[O]="above" swap[X]="above"
2026-03-10 20:13:17,759 - INFO -   #539    above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:19,059 - INFO -   #349    above  orig[O]="above" swap[X]="above"
2026-03-10 20:13:20,272 - INFO -   #544    above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:20,829 - INFO -   #233    above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:21,733 - WARNING - Error on index 249: CUDA out of memory. Tried to allocate 15.38 GiB. GPU 0 has a total capacity of 79.14 GiB of which 14.04 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 8.61 GiB is allocated by PyTorch, and 1.61 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:13:22,539 - INFO -   #83     above  orig[O]="above" swap[X]="above"  [300/738]
2026-03-10 20:13:24,065 - INFO -   #536    above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:24,920 - INFO -   #239    above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:28,237 - INFO -   #42     above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:28,785 - INFO -   #91     above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:29,308 - INFO -   #31     above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:30,184 - INFO -   #260    above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:31,030 - INFO -   #486    above  orig[O]="above" swap[X]="above"
2026-03-10 20:13:31,766 - INFO -   #462    above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:32,429 - INFO -   #646    above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:33,237 - INFO -   #126    above  orig[O]="above" swap[X]="above"
2026-03-10 20:13:34,191 - INFO -   #17     above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:34,954 - INFO -   #315    above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:35,616 - INFO -   #194    above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:36,803 - INFO -   #390    above  orig[O]="above" swap[X]="above"
2026-03-10 20:13:37,682 - INFO -   #222    above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:40,419 - WARNING - Error on index 72: CUDA out of memory. Tried to allocate 20.72 GiB. GPU 0 has a total capacity of 79.14 GiB of which 13.06 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 8.99 GiB is allocated by PyTorch, and 2.19 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:13:41,144 - INFO -   #48     above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:41,856 - INFO -   #164    above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:42,387 - INFO -   #168    above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:43,140 - INFO -   #291    above  orig[O]="above" swap[X]="above"
2026-03-10 20:13:43,779 - INFO -   #251    above  orig[O]="above" swap[X]="above"
2026-03-10 20:13:44,188 - INFO -   #310    above  orig[O]="above" swap[X]="above"
2026-03-10 20:13:44,960 - INFO -   #541    above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:45,626 - INFO -   #510    above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:46,356 - INFO -   #402    above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:50,374 - INFO -   #162    above  orig[O]="above" swap[X]="above"
2026-03-10 20:13:51,063 - INFO -   #364    above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:51,636 - INFO -   #120    above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:52,810 - INFO -   #216    above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:53,877 - INFO -   #14     above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:54,258 - INFO -   #441    above  orig[O]="above" swap[X]="above"
2026-03-10 20:13:55,514 - INFO -   #439    above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:56,666 - INFO -   #156    above  orig[X]="below" swap[O]="below"
2026-03-10 20:13:57,222 - INFO -   #419    above  orig[O]="above" swap[X]="above"
2026-03-10 20:13:58,005 - INFO -   #317    above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:58,878 - INFO -   #171    above  orig[O]="above" swap[O]="below"
2026-03-10 20:13:59,728 - INFO -   #143    above  orig[O]="above" swap[X]="above"
2026-03-10 20:14:00,563 - INFO -   #223    above  orig[O]="above" swap[O]="below"
2026-03-10 20:14:01,368 - INFO -   #129    above  orig[O]="above" swap[X]="above"
2026-03-10 20:14:04,568 - INFO -   #203    above  orig[O]="above" swap[X]="above"
2026-03-10 20:14:05,319 - INFO -   #309    above  orig[O]="above" swap[O]="below"
2026-03-10 20:14:06,043 - INFO -   #86     above  orig[O]="above" swap[X]="above"
2026-03-10 20:14:06,691 - INFO -   #608    above  orig[O]="above" swap[O]="below"
2026-03-10 20:14:07,463 - INFO -   #207    above  orig[O]="above" swap[X]="above"
2026-03-10 20:14:08,418 - INFO -   #261    above  orig[O]="above" swap[O]="below"
2026-03-10 20:14:09,383 - INFO -   #79     above  orig[O]="above" swap[O]="below"
2026-03-10 20:14:10,235 - WARNING - Error on index 248: CUDA out of memory. Tried to allocate 20.72 GiB. GPU 0 has a total capacity of 79.14 GiB of which 8.84 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 8.99 GiB is allocated by PyTorch, and 1.24 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:14:10,882 - WARNING - Error on index 49: CUDA out of memory. Tried to allocate 20.72 GiB. GPU 0 has a total capacity of 79.14 GiB of which 8.84 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 8.99 GiB is allocated by PyTorch, and 1.24 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:14:11,476 - INFO -   #110    above  orig[O]="above" swap[O]="below"
2026-03-10 20:14:12,103 - INFO -   #78     above  orig[O]="above" swap[O]="below"
2026-03-10 20:14:13,294 - INFO -   #583    above  orig[O]="above" swap[O]="below"
2026-03-10 20:14:14,136 - INFO -   #40     above  orig[O]="above" swap[X]="above"
2026-03-10 20:14:14,616 - INFO -   #181    above  orig[O]="above" swap[O]="below"  [350/738]
2026-03-10 20:14:15,394 - INFO -   #524    above  orig[O]="above" swap[O]="below"
2026-03-10 20:14:15,892 - INFO -   #360    above  orig[O]="above" swap[O]="below"
2026-03-10 20:14:17,072 - INFO -   #576    above  orig[O]="above" swap[O]="below"
2026-03-10 20:14:17,618 - INFO -   #3      below  orig[O]="below" swap[O]="above"
2026-03-10 20:14:18,380 - INFO -   #6      below  orig[O]="below" swap[O]="above"
2026-03-10 20:14:19,002 - WARNING - Error on index 8: CUDA out of memory. Tried to allocate 15.38 GiB. GPU 0 has a total capacity of 79.14 GiB of which 8.87 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 8.61 GiB is allocated by PyTorch, and 1.61 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:14:19,635 - INFO -   #20     below  orig[O]="below" swap[O]="above"
2026-03-10 20:14:20,347 - INFO -   #22     below  orig[X]="above" swap[O]="above"
2026-03-10 20:14:21,053 - INFO -   #26     below  orig[X]="above" swap[O]="above"
2026-03-10 20:14:21,901 - INFO -   #29     below  orig[O]="below" swap[O]="above"
2026-03-10 20:14:22,771 - INFO -   #50     below  orig[O]="below" swap[O]="above"
2026-03-10 20:14:23,472 - INFO -   #65     below  orig[O]="below" swap[O]="above"
2026-03-10 20:14:24,050 - INFO -   #66     below  orig[O]="below" swap[O]="above"
2026-03-10 20:14:24,721 - WARNING - Error on index 70: CUDA out of memory. Tried to allocate 20.72 GiB. GPU 0 has a total capacity of 79.14 GiB of which 8.88 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 8.99 GiB is allocated by PyTorch, and 1.24 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:14:25,226 - WARNING - Error on index 73: CUDA out of memory. Tried to allocate 12.05 GiB. GPU 0 has a total capacity of 79.14 GiB of which 10.26 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 8.37 GiB is allocated by PyTorch, and 483.60 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:14:25,955 - INFO -   #75     below  orig[X]="above" swap[O]="above"
2026-03-10 20:14:26,561 - INFO -   #76     below  orig[O]="below" swap[O]="above"
2026-03-10 20:14:27,259 - INFO -   #77     below  orig[X]="above" swap[O]="above"
2026-03-10 20:14:27,969 - INFO -   #80     below  orig[X]="above" swap[O]="above"
2026-03-10 20:14:28,694 - INFO -   #85     below  orig[X]="above" swap[O]="above"
2026-03-10 20:14:29,185 - INFO -   #88     below  orig[O]="below" swap[O]="above"
2026-03-10 20:14:29,626 - INFO -   #90     below  orig[O]="below" swap[O]="above"
2026-03-10 20:14:30,223 - INFO -   #93     below  orig[X]="above" swap[O]="above"
2026-03-10 20:14:31,165 - INFO -   #94     below  orig[X]="above" swap[O]="above"
2026-03-10 20:14:32,546 - INFO -   #98     below  orig[X]="above" swap[O]="above"
2026-03-10 20:14:33,930 - INFO -   #100    below  orig[O]="below" swap[O]="above"
2026-03-10 20:14:35,460 - INFO -   #101    below  orig[X]="above" swap[O]="above"
2026-03-10 20:14:36,786 - INFO -   #107    below  orig[O]="below" swap[O]="above"
2026-03-10 20:14:38,388 - INFO -   #116    below  orig[O]="below" swap[O]="above"
2026-03-10 20:14:39,794 - INFO -   #123    below  orig[O]="below" swap[O]="above"
2026-03-10 20:14:40,841 - INFO -   #125    below  orig[O]="below" swap[O]="above"
2026-03-10 20:14:41,782 - INFO -   #127    below  orig[X]="above" swap[O]="above"
2026-03-10 20:14:43,083 - INFO -   #128    below  orig[O]="below" swap[O]="above"
2026-03-10 20:14:44,721 - INFO -   #132    below  orig[O]="below" swap[O]="above"
2026-03-10 20:14:46,122 - INFO -   #136    below  orig[X]="above" swap[O]="above"
2026-03-10 20:14:46,802 - WARNING - Error on index 142: CUDA out of memory. Tried to allocate 15.38 GiB. GPU 0 has a total capacity of 79.14 GiB of which 2.92 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 8.61 GiB is allocated by PyTorch, and 724.67 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:14:47,494 - WARNING - Error on index 145: CUDA out of memory. Tried to allocate 15.38 GiB. GPU 0 has a total capacity of 79.14 GiB of which 2.92 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 8.61 GiB is allocated by PyTorch, and 724.72 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:14:48,484 - INFO -   #161    below  orig[X]="above" swap[O]="above"
2026-03-10 20:14:49,883 - INFO -   #184    below  orig[X]="above" swap[X]="below"
2026-03-10 20:14:50,913 - INFO -   #186    below  orig[O]="below" swap[O]="above"
2026-03-10 20:14:51,310 - WARNING - Error on index 191: CUDA out of memory. Tried to allocate 6.21 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.86 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 7.90 GiB is allocated by PyTorch, and 476.50 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:14:54,541 - INFO -   #197    below  orig[O]="below" swap[O]="above"
2026-03-10 20:14:55,650 - INFO -   #200    below  orig[O]="below" swap[O]="above"
2026-03-10 20:14:56,203 - INFO -   #205    below  orig[X]="above" swap[O]="above"
2026-03-10 20:14:56,894 - INFO -   #206    below  orig[X]="above" swap[O]="above"
2026-03-10 20:14:57,683 - INFO -   #217    below  orig[O]="below" swap[O]="above"
2026-03-10 20:14:58,377 - WARNING - Error on index 235: CUDA out of memory. Tried to allocate 20.72 GiB. GPU 0 has a total capacity of 79.14 GiB of which 1.95 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 8.99 GiB is allocated by PyTorch, and 1.29 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:14:58,923 - WARNING - Error on index 237: CUDA out of memory. Tried to allocate 15.38 GiB. GPU 0 has a total capacity of 79.14 GiB of which 2.90 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 8.61 GiB is allocated by PyTorch, and 724.71 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:14:59,774 - INFO -   #242    below  orig[X]="above" swap[O]="above"
2026-03-10 20:15:00,812 - INFO -   #246    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:01,436 - INFO -   #247    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:02,240 - INFO -   #250    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:03,052 - INFO -   #267    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:03,293 - WARNING - Error on index 269: CUDA out of memory. Tried to allocate 1.59 GiB. GPU 0 has a total capacity of 79.14 GiB of which 1017.19 MiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 10.99 GiB is allocated by PyTorch, and 248.17 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:15:04,112 - INFO -   #280    below  orig[X]="above" swap[O]="above"
2026-03-10 20:15:04,767 - WARNING - Error on index 287: CUDA out of memory. Tried to allocate 20.72 GiB. GPU 0 has a total capacity of 79.14 GiB of which 2.58 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 8.99 GiB is allocated by PyTorch, and 671.33 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:15:05,702 - WARNING - Error on index 290: CUDA out of memory. Tried to allocate 15.38 GiB. GPU 0 has a total capacity of 79.14 GiB of which 14.65 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 8.61 GiB is allocated by PyTorch, and 1.03 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:15:06,230 - INFO -   #296    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:07,052 - INFO -   #298    below  orig[X]="above" swap[O]="above"
2026-03-10 20:15:07,812 - INFO -   #299    below  orig[X]="above" swap[O]="above"
2026-03-10 20:15:08,763 - INFO -   #300    below  orig[O]="below" swap[O]="above"  [400/738]
2026-03-10 20:15:09,941 - INFO -   #303    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:11,394 - INFO -   #305    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:12,487 - INFO -   #306    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:14,767 - INFO -   #311    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:15,746 - INFO -   #323    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:16,969 - INFO -   #338    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:18,175 - INFO -   #342    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:20,231 - INFO -   #352    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:21,203 - INFO -   #355    below  orig[X]="above" swap[X]="below"
2026-03-10 20:15:22,121 - INFO -   #357    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:23,521 - INFO -   #362    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:24,558 - INFO -   #365    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:25,768 - INFO -   #366    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:26,579 - INFO -   #370    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:27,291 - INFO -   #384    below  orig[X]="above" swap[O]="above"
2026-03-10 20:15:27,878 - INFO -   #388    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:28,533 - INFO -   #397    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:28,992 - INFO -   #399    below  orig[X]="above" swap[O]="above"
2026-03-10 20:15:30,299 - INFO -   #407    below  orig[O]="below" swap[X]="below"
2026-03-10 20:15:31,533 - INFO -   #408    below  orig[X]="above" swap[O]="above"
2026-03-10 20:15:32,271 - INFO -   #412    below  orig[X]="above" swap[O]="above"
2026-03-10 20:15:33,042 - INFO -   #415    below  orig[X]="above" swap[O]="above"
2026-03-10 20:15:33,960 - INFO -   #418    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:34,966 - INFO -   #427    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:36,670 - INFO -   #430    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:37,691 - INFO -   #437    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:38,636 - INFO -   #442    below  orig[X]="above" swap[O]="above"
2026-03-10 20:15:39,273 - INFO -   #445    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:39,948 - INFO -   #450    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:40,865 - INFO -   #454    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:41,748 - INFO -   #461    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:42,495 - INFO -   #470    below  orig[X]="above" swap[O]="above"
2026-03-10 20:15:43,696 - INFO -   #489    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:44,545 - INFO -   #492    below  orig[X]="above" swap[X]="below"
2026-03-10 20:15:45,259 - INFO -   #493    below  orig[X]="above" swap[O]="above"
2026-03-10 20:15:46,409 - INFO -   #495    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:47,100 - INFO -   #497    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:48,410 - INFO -   #501    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:49,114 - INFO -   #502    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:50,321 - INFO -   #504    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:51,575 - INFO -   #513    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:52,417 - INFO -   #516    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:53,442 - INFO -   #521    below  orig[X]="above" swap[O]="above"
2026-03-10 20:15:54,682 - INFO -   #522    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:55,414 - INFO -   #528    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:56,279 - INFO -   #531    below  orig[X]="above" swap[O]="above"
2026-03-10 20:15:56,915 - INFO -   #543    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:57,953 - INFO -   #545    below  orig[O]="below" swap[O]="above"
2026-03-10 20:15:58,993 - INFO -   #548    below  orig[X]="above" swap[O]="above"
2026-03-10 20:15:59,825 - INFO -   #551    below  orig[X]="above" swap[X]="below"  [450/738]
2026-03-10 20:16:00,540 - INFO -   #552    below  orig[O]="below" swap[O]="above"
2026-03-10 20:16:01,252 - INFO -   #553    below  orig[X]="above" swap[O]="above"
2026-03-10 20:16:02,024 - INFO -   #563    below  orig[O]="below" swap[O]="above"
2026-03-10 20:16:02,830 - INFO -   #564    below  orig[O]="below" swap[O]="above"
2026-03-10 20:16:03,512 - INFO -   #565    below  orig[O]="below" swap[O]="above"
2026-03-10 20:16:04,229 - INFO -   #572    below  orig[X]="above" swap[O]="above"
2026-03-10 20:16:04,948 - INFO -   #575    below  orig[X]="above" swap[O]="above"
2026-03-10 20:16:05,634 - INFO -   #577    below  orig[O]="below" swap[O]="above"
2026-03-10 20:16:06,294 - INFO -   #580    below  orig[O]="below" swap[O]="above"
2026-03-10 20:16:07,339 - INFO -   #585    below  orig[O]="below" swap[O]="above"
2026-03-10 20:16:08,160 - INFO -   #592    below  orig[O]="below" swap[O]="above"
2026-03-10 20:16:08,830 - INFO -   #610    below  orig[X]="above" swap[O]="above"
2026-03-10 20:16:09,709 - INFO -   #611    below  orig[O]="below" swap[O]="above"
2026-03-10 20:16:10,468 - INFO -   #622    below  orig[O]="below" swap[O]="above"
2026-03-10 20:16:11,298 - INFO -   #631    below  orig[O]="below" swap[O]="above"
2026-03-10 20:16:11,641 - WARNING - Error on index 1140: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.10 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.27 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:16:14,923 - INFO -   #614    far    orig[O]="far" swap[X]="far"
2026-03-10 20:16:15,845 - INFO -   #224    far    orig[O]="far" swap[X]="far"
2026-03-10 20:16:16,905 - INFO -   #274    far    orig[O]="far" swap[X]="far"
2026-03-10 20:16:17,190 - WARNING - Error on index 540: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.10 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.27 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:16:18,226 - INFO -   #243    far    orig[O]="far" swap[X]="far"
2026-03-10 20:16:19,189 - INFO -   #230    far    orig[O]="far" swap[X]="far"
2026-03-10 20:16:19,484 - WARNING - Error on index 1139: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.10 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:16:20,594 - INFO -   #315    far    orig[O]="far" swap[X]="far"
2026-03-10 20:16:20,884 - WARNING - Error on index 556: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.10 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:16:21,189 - WARNING - Error on index 580: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.10 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:16:21,489 - WARNING - Error on index 414: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.10 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:16:24,816 - INFO -   #701    far    orig[O]="far" swap[X]="far"
2026-03-10 20:16:25,110 - WARNING - Error on index 405: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.10 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:16:25,355 - WARNING - Error on index 1190: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.10 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:16:25,628 - WARNING - Error on index 1032: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.10 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:16:26,586 - INFO -   #997    far    orig[O]="far" swap[X]="far"
2026-03-10 20:16:26,873 - WARNING - Error on index 514: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.10 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:16:30,246 - INFO -   #119    far    orig[O]="far" swap[X]="far"
2026-03-10 20:16:33,430 - INFO -   #193    far    orig[O]="far" swap[X]="far"
2026-03-10 20:16:34,909 - INFO -   #878    far    orig[O]="far" swap[X]="far"
2026-03-10 20:16:35,196 - WARNING - Error on index 571: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.10 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:16:38,391 - INFO -   #107    far    orig[O]="far" swap[X]="far"
2026-03-10 20:16:41,610 - INFO -   #9      far    orig[O]="far" swap[X]="far"
2026-03-10 20:16:45,104 - INFO -   #676    far    orig[O]="far" swap[X]="far"
2026-03-10 20:16:46,575 - INFO -   #269    far    orig[O]="far" swap[X]="far"
2026-03-10 20:16:46,939 - WARNING - Error on index 536: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.10 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:16:48,727 - INFO -   #323    far    orig[O]="far" swap[X]="far"
2026-03-10 20:16:50,971 - INFO -   #916    far    orig[O]="far" swap[X]="far"
2026-03-10 20:16:51,194 - WARNING - Error on index 1136: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.02 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:16:53,044 - INFO -   #886    far    orig[O]="far" swap[X]="far"
2026-03-10 20:16:56,673 - INFO -   #22     far    orig[O]="far" swap[X]="far"
2026-03-10 20:16:58,391 - INFO -   #239    far    orig[O]="far" swap[X]="far"
2026-03-10 20:17:02,727 - INFO -   #158    far    orig[O]="far" swap[X]="far"
2026-03-10 20:17:04,350 - INFO -   #301    far    orig[O]="far" swap[X]="far"
2026-03-10 20:17:04,803 - WARNING - Error on index 1125: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.02 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:17:08,097 - INFO -   #83     far    orig[O]="far" swap[X]="far"
2026-03-10 20:17:12,509 - INFO -   #765    far    orig[O]="far" swap[X]="far"
2026-03-10 20:17:14,124 - INFO -   #299    far    orig[O]="far" swap[X]="far"
2026-03-10 20:17:16,117 - INFO -   #890    far    orig[O]="far" swap[X]="far"
2026-03-10 20:17:17,111 - INFO -   #265    far    orig[O]="far" swap[X]="far"
2026-03-10 20:17:21,455 - INFO -   #101    far    orig[O]="far" swap[X]="far"
2026-03-10 20:17:25,847 - INFO -   #626    far    orig[O]="far" swap[X]="far"
2026-03-10 20:17:30,221 - INFO -   #751    far    orig[O]="far" swap[X]="far"
2026-03-10 20:17:34,565 - INFO -   #97     far    orig[O]="far" swap[X]="far"
2026-03-10 20:17:38,954 - INFO -   #734    far    orig[O]="far" swap[X]="far"
2026-03-10 20:17:39,338 - WARNING - Error on index 1178: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.02 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:17:39,675 - WARNING - Error on index 504: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.02 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:17:41,117 - INFO -   #220    far    orig[O]="far" swap[X]="far"
2026-03-10 20:17:44,875 - INFO -   #719    far    orig[O]="far" swap[X]="far"
2026-03-10 20:17:47,122 - INFO -   #837    far    orig[O]="far" swap[X]="far"  [500/738]
2026-03-10 20:17:48,587 - INFO -   #314    far    orig[O]="far" swap[X]="far"
2026-03-10 20:17:48,936 - WARNING - Error on index 479: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.02 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:17:50,612 - INFO -   #324    far    orig[O]="far" swap[X]="far"
2026-03-10 20:17:52,105 - INFO -   #346    far    orig[O]="far" swap[X]="far"
2026-03-10 20:17:53,879 - INFO -   #851    far    orig[O]="far" swap[X]="far"
2026-03-10 20:17:58,698 - INFO -   #60     far    orig[O]="far" swap[X]="far"
2026-03-10 20:18:00,525 - INFO -   #356    far    orig[O]="far" swap[X]="far"
2026-03-10 20:18:04,838 - INFO -   #673    far    orig[O]="far" swap[X]="far"
2026-03-10 20:18:06,455 - INFO -   #847    far    orig[O]="far" swap[X]="far"
2026-03-10 20:18:06,794 - WARNING - Error on index 499: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.02 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:18:07,108 - WARNING - Error on index 546: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.02 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:18:08,519 - INFO -   #320    far    orig[O]="far" swap[X]="far"
2026-03-10 20:18:09,707 - INFO -   #232    far    orig[O]="far" swap[X]="far"
2026-03-10 20:18:14,040 - INFO -   #786    far    orig[O]="far" swap[X]="far"
2026-03-10 20:18:18,421 - INFO -   #91     far    orig[O]="far" swap[X]="far"
2026-03-10 20:18:19,789 - INFO -   #965    far    orig[O]="far" swap[X]="far"
2026-03-10 20:18:20,114 - WARNING - Error on index 447: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.02 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:18:21,224 - INFO -   #399    far    orig[O]="far" swap[X]="far"
2026-03-10 20:18:23,137 - INFO -   #950    far    orig[O]="far" swap[X]="far"
2026-03-10 20:18:26,645 - INFO -   #714    far    orig[O]="far" swap[X]="far"
2026-03-10 20:18:31,055 - INFO -   #622    far    orig[O]="far" swap[X]="far"
2026-03-10 20:18:31,405 - WARNING - Error on index 457: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.02 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:18:31,734 - WARNING - Error on index 451: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.02 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:18:35,783 - INFO -   #59     far    orig[O]="far" swap[X]="far"
2026-03-10 20:18:37,473 - INFO -   #389    far    orig[O]="far" swap[X]="far"
2026-03-10 20:18:38,848 - INFO -   #816    far    orig[O]="far" swap[X]="far"
2026-03-10 20:18:42,843 - INFO -   #664    far    orig[O]="far" swap[X]="far"
2026-03-10 20:18:43,167 - WARNING - Error on index 574: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.02 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:18:46,920 - INFO -   #148    far    orig[O]="far" swap[X]="far"
2026-03-10 20:18:47,269 - WARNING - Error on index 1033: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.02 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:18:51,862 - INFO -   #715    far    orig[O]="far" swap[X]="far"
2026-03-10 20:18:53,844 - INFO -   #818    far    orig[O]="far" swap[X]="far"
2026-03-10 20:18:57,083 - INFO -   #672    far    orig[O]="far" swap[X]="far"
2026-03-10 20:19:01,496 - INFO -   #64     far    orig[O]="far" swap[X]="far"
2026-03-10 20:19:01,864 - WARNING - Error on index 1187: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.02 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:19:04,179 - INFO -   #910    far    orig[O]="far" swap[X]="far"
2026-03-10 20:19:05,859 - INFO -   #989    far    orig[O]="far" swap[X]="far"
2026-03-10 20:19:06,092 - WARNING - Error on index 1145: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.02 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:19:10,100 - INFO -   #185    far    orig[O]="far" swap[X]="far"
2026-03-10 20:19:10,406 - WARNING - Error on index 594: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.02 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:19:11,353 - INFO -   #981    far    orig[O]="far" swap[X]="far"
2026-03-10 20:19:11,675 - WARNING - Error on index 1000: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.02 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:19:12,758 - INFO -   #272    far    orig[O]="far" swap[X]="far"
2026-03-10 20:19:16,338 - INFO -   #46     far    orig[O]="far" swap[X]="far"
2026-03-10 20:19:20,707 - INFO -   #124    far    orig[O]="far" swap[X]="far"
2026-03-10 20:19:25,110 - INFO -   #613    far    orig[O]="far" swap[X]="far"
2026-03-10 20:19:25,441 - WARNING - Error on index 431: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.02 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:19:26,509 - INFO -   #338    far    orig[O]="far" swap[X]="far"
2026-03-10 20:19:26,831 - WARNING - Error on index 1110: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.02 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:19:28,010 - INFO -   #809    far    orig[O]="far" swap[X]="far"
2026-03-10 20:19:29,856 - INFO -   #316    far    orig[O]="far" swap[X]="far"
2026-03-10 20:19:30,204 - WARNING - Error on index 432: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.02 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:19:33,591 - INFO -   #620    far    orig[O]="far" swap[X]="far"
2026-03-10 20:19:33,883 - WARNING - Error on index 523: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.02 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:19:37,600 - INFO -   #133    far    orig[O]="far" swap[X]="far"
2026-03-10 20:19:37,898 - WARNING - Error on index 1041: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.02 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:19:39,758 - INFO -   #927    far    orig[O]="far" swap[X]="far"
2026-03-10 20:19:40,065 - WARNING - Error on index 591: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.02 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:19:41,802 - INFO -   #203    far    orig[O]="far" swap[X]="far"
2026-03-10 20:19:42,125 - WARNING - Error on index 1117: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.02 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:19:45,916 - INFO -   #52     far    orig[O]="far" swap[X]="far"
2026-03-10 20:19:46,221 - WARNING - Error on index 1101: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.02 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:19:47,525 - INFO -   #984    far    orig[O]="far" swap[X]="far"
2026-03-10 20:19:51,280 - INFO -   #4      far    orig[O]="far" swap[X]="far"
2026-03-10 20:19:51,571 - WARNING - Error on index 532: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.02 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:19:53,422 - INFO -   #953    far    orig[O]="far" swap[X]="far"
2026-03-10 20:19:54,885 - INFO -   #829    far    orig[O]="far" swap[X]="far"
2026-03-10 20:19:55,273 - WARNING - Error on index 549: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.02 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:19:55,601 - WARNING - Error on index 1176: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.02 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:19:59,945 - INFO -   #739    far    orig[O]="far" swap[X]="far"
2026-03-10 20:20:04,304 - INFO -   #766    far    orig[O]="far" swap[X]="far"
2026-03-10 20:20:08,672 - INFO -   #764    far    orig[O]="far" swap[X]="far"
2026-03-10 20:20:08,920 - WARNING - Error on index 420: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.02 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:20:12,875 - INFO -   #742    close  orig[X]="far" swap[O]="far"  [550/738]
2026-03-10 20:20:14,481 - INFO -   #872    close  orig[X]="far" swap[O]="far"
2026-03-10 20:20:18,509 - INFO -   #129    close  orig[X]="far" swap[O]="far"
2026-03-10 20:20:22,870 - INFO -   #682    close  orig[X]="far" swap[O]="far"
2026-03-10 20:20:26,402 - INFO -   #649    close  orig[X]="far" swap[O]="far"
2026-03-10 20:20:27,394 - INFO -   #251    close  orig[X]="far" swap[O]="far"
2026-03-10 20:20:31,123 - INFO -   #611    close  orig[X]="far" swap[O]="far"
2026-03-10 20:20:31,407 - WARNING - Error on index 1040: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.02 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:20:35,175 - INFO -   #642    close  orig[X]="far" swap[O]="far"
2026-03-10 20:20:36,992 - INFO -   #832    close  orig[X]="far" swap[O]="far"
2026-03-10 20:20:40,919 - INFO -   #675    close  orig[X]="far" swap[O]="far"
2026-03-10 20:20:43,009 - INFO -   #823    close  orig[X]="far" swap[O]="far"
2026-03-10 20:20:47,444 - INFO -   #602    close  orig[X]="far" swap[O]="far"
2026-03-10 20:20:47,805 - WARNING - Error on index 1128: CUDA out of memory. Tried to allocate 3.17 GiB. GPU 0 has a total capacity of 79.14 GiB of which 3.02 GiB is free. Process 2008515 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 14.72 GiB is allocated by PyTorch, and 1.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2026-03-10 20:20:50,021 - INFO -   #259    close  orig[X]="far" swap[O]="far"
2026-03-10 20:20:50,361 - WARNING - Error on index 404: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:20:52,662 - INFO -   #855    close  orig[X]="far" swap[O]="far"
2026-03-10 20:20:56,251 - INFO -   #771    close  orig[X]="far" swap[O]="far"
2026-03-10 20:20:57,598 - INFO -   #367    close  orig[X]="far" swap[O]="far"
2026-03-10 20:20:57,832 - WARNING - Error on index 1164: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:21:01,743 - INFO -   #616    close  orig[X]="far" swap[O]="far"
2026-03-10 20:21:03,563 - INFO -   #827    close  orig[X]="far" swap[O]="far"
2026-03-10 20:21:03,788 - WARNING - Error on index 1118: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:21:07,981 - INFO -   #0      close  orig[X]="far" swap[O]="far"
2026-03-10 20:21:12,296 - INFO -   #632    close  orig[X]="far" swap[O]="far"
2026-03-10 20:21:12,585 - WARNING - Error on index 585: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:21:12,916 - WARNING - Error on index 441: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:21:14,468 - INFO -   #870    close  orig[X]="far" swap[O]="far"
2026-03-10 20:21:18,184 - INFO -   #668    close  orig[X]="far" swap[O]="far"
2026-03-10 20:21:19,894 - INFO -   #944    close  orig[X]="far" swap[O]="far"
2026-03-10 20:21:21,598 - INFO -   #895    close  orig[X]="far" swap[O]="far"
2026-03-10 20:21:21,897 - WARNING - Error on index 1133: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:21:22,272 - WARNING - Error on index 444: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:21:22,639 - WARNING - Error on index 1051: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:21:24,940 - INFO -   #969    close  orig[X]="far" swap[O]="far"
2026-03-10 20:21:26,903 - INFO -   #359    close  orig[X]="far" swap[O]="far"
2026-03-10 20:21:31,191 - INFO -   #167    close  orig[X]="far" swap[O]="far"
2026-03-10 20:21:31,555 - WARNING - Error on index 576: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:21:31,890 - WARNING - Error on index 1056: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:21:36,270 - INFO -   #691    close  orig[X]="far" swap[O]="far"
2026-03-10 20:21:40,247 - INFO -   #187    close  orig[X]="far" swap[O]="far"
2026-03-10 20:21:40,569 - WARNING - Error on index 486: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:21:43,991 - INFO -   #643    close  orig[X]="far" swap[O]="far"
2026-03-10 20:21:44,320 - WARNING - Error on index 468: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:21:44,660 - WARNING - Error on index 423: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:21:45,823 - INFO -   #303    close  orig[X]="far" swap[O]="far"
2026-03-10 20:21:50,055 - INFO -   #44     close  orig[X]="far" swap[O]="far"
2026-03-10 20:21:54,393 - INFO -   #84     close  orig[X]="far" swap[O]="far"
2026-03-10 20:21:54,738 - WARNING - Error on index 506: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:21:56,501 - INFO -   #975    close  orig[X]="far" swap[O]="far"
2026-03-10 20:22:00,246 - INFO -   #140    close  orig[X]="far" swap[O]="far"
2026-03-10 20:22:01,878 - INFO -   #926    close  orig[X]="far" swap[O]="far"
2026-03-10 20:22:02,904 - INFO -   #844    close  orig[X]="far" swap[O]="far"
2026-03-10 20:22:03,312 - WARNING - Error on index 415: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:22:07,733 - INFO -   #783    close  orig[X]="far" swap[O]="far"
2026-03-10 20:22:08,083 - WARNING - Error on index 1018: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:22:09,837 - INFO -   #820    close  orig[X]="far" swap[O]="far"
2026-03-10 20:22:10,114 - WARNING - Error on index 505: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:22:11,650 - INFO -   #304    close  orig[X]="far" swap[O]="far"
2026-03-10 20:22:15,421 - INFO -   #8      close  orig[X]="far" swap[O]="far"
2026-03-10 20:22:17,144 - INFO -   #210    close  orig[X]="far" swap[O]="far"
2026-03-10 20:22:18,748 - INFO -   #860    close  orig[X]="far" swap[O]="far"
2026-03-10 20:22:19,098 - WARNING - Error on index 458: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:22:21,399 - INFO -   #369    close  orig[X]="far" swap[O]="far"
2026-03-10 20:22:21,746 - WARNING - Error on index 1060: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:22:23,528 - INFO -   #941    close  orig[X]="far" swap[O]="far"
2026-03-10 20:22:27,740 - INFO -   #90     close  orig[X]="far" swap[O]="far"
2026-03-10 20:22:28,044 - WARNING - Error on index 512: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:22:29,590 - INFO -   #237    close  orig[X]="far" swap[O]="far"
2026-03-10 20:22:31,883 - INFO -   #928    close  orig[X]="far" swap[O]="far"
2026-03-10 20:22:33,381 - INFO -   #271    close  orig[X]="far" swap[O]="far"
2026-03-10 20:22:33,728 - WARNING - Error on index 1066: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:22:38,210 - INFO -   #652    close  orig[X]="far" swap[O]="far"
2026-03-10 20:22:39,327 - INFO -   #896    close  orig[X]="far" swap[O]="far"
2026-03-10 20:22:39,643 - WARNING - Error on index 1035: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:22:41,791 - INFO -   #861    close  orig[X]="far" swap[O]="far"
2026-03-10 20:22:44,051 - INFO -   #905    close  orig[X]="far" swap[O]="far"
2026-03-10 20:22:45,838 - INFO -   #343    close  orig[X]="far" swap[O]="far"
2026-03-10 20:22:46,190 - WARNING - Error on index 1099: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:22:48,335 - INFO -   #909    close  orig[X]="far" swap[O]="far"
2026-03-10 20:22:48,625 - WARNING - Error on index 530: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:22:48,867 - WARNING - Error on index 510: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:22:49,172 - WARNING - Error on index 562: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:22:51,424 - INFO -   #994    close  orig[X]="far" swap[O]="far"
2026-03-10 20:22:51,769 - WARNING - Error on index 497: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:22:52,162 - WARNING - Error on index 560: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:22:53,866 - INFO -   #891    close  orig[X]="far" swap[O]="far"
2026-03-10 20:22:57,882 - INFO -   #154    close  orig[X]="far" swap[O]="far"
2026-03-10 20:22:58,194 - WARNING - Error on index 581: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:22:58,498 - WARNING - Error on index 1116: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:22:58,813 - WARNING - Error on index 557: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:22:59,869 - INFO -   #354    close  orig[X]="far" swap[O]="far"
2026-03-10 20:23:00,861 - INFO -   #344    close  orig[X]="far" swap[O]="far"
2026-03-10 20:23:02,595 - INFO -   #906    close  orig[X]="far" swap[O]="far"
2026-03-10 20:23:02,919 - WARNING - Error on index 553: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:23:06,681 - INFO -   #74     close  orig[X]="far" swap[O]="far"
2026-03-10 20:23:10,872 - INFO -   #128    close  orig[X]="far" swap[O]="far"
2026-03-10 20:23:15,259 - INFO -   #149    close  orig[X]="far" swap[O]="far"
2026-03-10 20:23:16,980 - INFO -   #226    close  orig[X]="far" swap[O]="far"
2026-03-10 20:23:17,318 - WARNING - Error on index 1168: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:23:21,709 - INFO -   #711    close  orig[X]="far" swap[O]="far"
2026-03-10 20:23:26,079 - INFO -   #151    close  orig[X]="far" swap[O]="far"
2026-03-10 20:23:29,672 - INFO -   #726    close  orig[X]="far" swap[O]="far"
2026-03-10 20:23:30,038 - WARNING - Error on index 1076: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:23:34,654 - INFO -   #55     close  orig[X]="far" swap[O]="far"
2026-03-10 20:23:35,008 - WARNING - Error on index 437: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:23:35,373 - WARNING - Error on index 430: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:23:37,173 - INFO -   #351    close  orig[X]="far" swap[O]="far"
2026-03-10 20:23:37,408 - WARNING - Error on index 554: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:23:37,744 - WARNING - Error on index 484: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:23:38,066 - WARNING - Error on index 1039: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:23:42,295 - INFO -   #54     close  orig[X]="far" swap[O]="far"
2026-03-10 20:23:43,498 - INFO -   #202    close  orig[X]="far" swap[O]="far"
2026-03-10 20:23:45,135 - INFO -   #848    close  orig[X]="far" swap[O]="far"
2026-03-10 20:23:45,500 - WARNING - Error on index 1148: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:23:45,850 - WARNING - Error on index 1088: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:23:47,148 - INFO -   #915    close  orig[X]="far" swap[O]="far"
2026-03-10 20:23:50,884 - INFO -   #784    close  orig[X]="far" swap[O]="far"
2026-03-10 20:23:51,231 - WARNING - Error on index 599: CUDA out of memory. Tried to allocate 3.17 GiB. (full allocator details identical to the first OOM warning above)
2026-03-10 20:23:55,583 - INFO -   #717    close  orig[X]="far" swap[O]="far"
2026-03-10 20:23:59,835 - INFO -   #18     close  orig[X]="far" swap[O]="far"
2026-03-10 20:24:01,534 - INFO -   #866    close  orig[X]="far" swap[O]="far"
2026-03-10 20:24:01,535 - INFO - Extracted 627 swap pair records
2026-03-10 20:24:01,536 - INFO -     left (n=117): acc_orig=89.7%, acc_swap=95.7%, acc_both=86.3%
2026-03-10 20:24:01,536 - INFO -    right (n=121): acc_orig=93.4%, acc_swap=88.4%, acc_both=84.3%
2026-03-10 20:24:01,536 - INFO -    above (n=115): acc_orig=98.3%, acc_swap=70.4%, acc_both=69.6%
2026-03-10 20:24:01,536 - INFO -    below (n=112): acc_orig=66.1%, acc_swap=95.5%, acc_both=65.2%
2026-03-10 20:24:01,536 - INFO -      far (n=84): acc_orig=100.0%, acc_swap=0.0%, acc_both=0.0%
2026-03-10 20:24:01,537 - INFO -    close (n=78): acc_orig=0.0%, acc_swap=100.0%, acc_both=0.0%
2026-03-10 20:24:01,537 - INFO - 
--- Phase C_A: Analysis (swap pairs) ---
2026-03-10 20:24:01,537 - WARNING -   [!] Category 'far' unreliable at scale=2m: acc_orig=100.0%, acc_swap=0.0%
2026-03-10 20:24:01,537 - WARNING -   [!] Category 'close' unreliable at scale=2m: acc_orig=0.0%, acc_swap=100.0%
2026-03-10 20:24:01,537 - WARNING -   Unreliable categories: ['far', 'close']
2026-03-10 20:24:02,444 - INFO -   Both-correct pairs: 356/627
2026-03-10 20:24:03,339 - INFO -   Sign-corrected [horizontal, L35]: 0.4071 +/- 0.3249
2026-03-10 20:24:03,339 - INFO -   Sign-corrected [vertical, L35]: 0.2845 +/- 0.3294
2026-03-10 20:24:03,339 - INFO -   Sign-corrected [distance, L35]: 0.0435 +/- 0.2682
2026-03-10 20:24:03,339 - INFO -   Accuracy orig=78.0%, swap=77.4%, both=56.8%
2026-03-10 20:24:03,339 - INFO - 
--- Phase D_A: Saving Phase A results ---
2026-03-10 20:25:51,705 - INFO - Saved vectors NPZ with correctness metadata for scale=2m
2026-03-10 20:25:51,780 - INFO - Saved results for scale=2m (all_pairs) to /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m
2026-03-10 20:25:51,822 - INFO - Saved results for scale=2m (both_correct) to /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m
2026-03-10 20:25:51,822 - INFO - 
--- Phase E_A: Per-scale plots (swap-pair data) ---
2026-03-10 20:25:52,489 - INFO - Saved: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/within_cat_consistency/within_cat_consistency_2m.png
2026-03-10 20:25:53,089 - INFO - Saved: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/sign_corrected/sign_corrected_consistency_2m.png
2026-03-10 20:25:53,862 - INFO - Saved: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/within_cat_consistency/within_cat_consistency_2m.png
2026-03-10 20:25:54,472 - INFO - Saved: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/sign_corrected/sign_corrected_consistency_2m.png
2026-03-10 20:27:07,554 - INFO - Saved PCA plots to /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/pca
2026-03-10 20:27:43,424 - INFO - Saved 3D PCA plots to /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/pca_3d
2026-03-10 20:28:41,462 - INFO - Saved PCA plots to /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/pca
2026-03-10 20:29:11,801 - INFO - Saved 3D PCA plots to /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/pca_3d
2026-03-10 20:29:12,758 - INFO - Saved: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/pred_stats/pred_stats_2m.png
2026-03-10 20:29:12,759 - INFO - 
--- Accuracy Charts [2m] ---
2026-03-10 20:29:13,767 - INFO - Saved: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/accuracy/accuracy_group_bars.png
2026-03-10 20:29:14,289 - INFO - Saved: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/accuracy/accuracy_trajectory.png
2026-03-10 20:29:15,075 - INFO - Saved: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/accuracy/accuracy_category.png
2026-03-10 20:29:15,713 - INFO - Saved: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/accuracy/category_accuracy_2m.png
2026-03-10 20:29:17,402 - INFO - Saved: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/accuracy/accuracy_chart.png
2026-03-10 20:29:17,402 - INFO - 
--- All-Layer Heatmaps [2m] ---
2026-03-10 20:29:17,406 - INFO -   [qwen/2m] Generating heatmaps for 36 layers...
2026-03-10 20:29:18,137 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/heatmap/heatmap_2m_L0.png
2026-03-10 20:29:18,761 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/heatmap/heatmap_2m_L0.png
2026-03-10 20:29:19,475 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/heatmap/heatmap_2m_L1.png
2026-03-10 20:29:20,096 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/heatmap/heatmap_2m_L1.png
2026-03-10 20:29:20,827 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/heatmap/heatmap_2m_L2.png
2026-03-10 20:29:21,451 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/heatmap/heatmap_2m_L2.png
2026-03-10 20:29:22,175 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/heatmap/heatmap_2m_L3.png
2026-03-10 20:29:22,808 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/heatmap/heatmap_2m_L3.png
2026-03-10 20:29:23,876 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/heatmap/heatmap_2m_L4.png
2026-03-10 20:29:24,499 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/heatmap/heatmap_2m_L4.png
2026-03-10 20:29:25,222 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/heatmap/heatmap_2m_L5.png
2026-03-10 20:29:25,833 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/heatmap/heatmap_2m_L5.png
2026-03-10 20:29:26,559 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/heatmap/heatmap_2m_L6.png
2026-03-10 20:29:27,169 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/heatmap/heatmap_2m_L6.png
2026-03-10 20:29:27,873 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/heatmap/heatmap_2m_L7.png
2026-03-10 20:29:28,483 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/heatmap/heatmap_2m_L7.png
2026-03-10 20:29:29,181 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/heatmap/heatmap_2m_L8.png
2026-03-10 20:29:29,779 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/heatmap/heatmap_2m_L8.png
2026-03-10 20:29:30,469 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/heatmap/heatmap_2m_L9.png
2026-03-10 20:29:31,083 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/heatmap/heatmap_2m_L9.png
2026-03-10 20:29:31,819 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/heatmap/heatmap_2m_L10.png
2026-03-10 20:29:32,421 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/heatmap/heatmap_2m_L10.png
2026-03-10 20:29:33,118 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/heatmap/heatmap_2m_L11.png
2026-03-10 20:29:33,713 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/heatmap/heatmap_2m_L11.png
2026-03-10 20:29:34,770 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/heatmap/heatmap_2m_L12.png
2026-03-10 20:29:35,372 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/heatmap/heatmap_2m_L12.png
2026-03-10 20:29:36,073 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/heatmap/heatmap_2m_L13.png
2026-03-10 20:29:36,674 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/heatmap/heatmap_2m_L13.png
2026-03-10 20:29:37,372 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/heatmap/heatmap_2m_L14.png
2026-03-10 20:29:37,968 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/heatmap/heatmap_2m_L14.png
2026-03-10 20:29:38,676 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/heatmap/heatmap_2m_L15.png
2026-03-10 20:29:39,278 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/heatmap/heatmap_2m_L15.png
2026-03-10 20:29:39,987 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/heatmap/heatmap_2m_L16.png
2026-03-10 20:29:40,591 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/heatmap/heatmap_2m_L16.png
2026-03-10 20:29:41,295 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/heatmap/heatmap_2m_L17.png
2026-03-10 20:29:41,899 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/heatmap/heatmap_2m_L17.png
2026-03-10 20:29:42,593 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/heatmap/heatmap_2m_L18.png
2026-03-10 20:29:43,187 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/heatmap/heatmap_2m_L18.png
2026-03-10 20:29:43,882 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/heatmap/heatmap_2m_L19.png
2026-03-10 20:29:44,474 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/heatmap/heatmap_2m_L19.png
2026-03-10 20:29:45,513 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/heatmap/heatmap_2m_L20.png
2026-03-10 20:29:46,107 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/heatmap/heatmap_2m_L20.png
2026-03-10 20:29:46,802 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/heatmap/heatmap_2m_L21.png
2026-03-10 20:29:47,397 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/heatmap/heatmap_2m_L21.png
2026-03-10 20:29:48,095 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/heatmap/heatmap_2m_L22.png
2026-03-10 20:29:48,689 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/heatmap/heatmap_2m_L22.png
2026-03-10 20:29:49,374 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/heatmap/heatmap_2m_L23.png
2026-03-10 20:29:49,969 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/heatmap/heatmap_2m_L23.png
2026-03-10 20:29:50,658 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/heatmap/heatmap_2m_L24.png
2026-03-10 20:29:51,248 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/heatmap/heatmap_2m_L24.png
2026-03-10 20:29:51,941 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/heatmap/heatmap_2m_L25.png
2026-03-10 20:29:52,533 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/heatmap/heatmap_2m_L25.png
2026-03-10 20:29:53,233 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/heatmap/heatmap_2m_L26.png
2026-03-10 20:29:53,832 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/heatmap/heatmap_2m_L26.png
2026-03-10 20:29:54,525 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/heatmap/heatmap_2m_L27.png
2026-03-10 20:29:55,119 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/heatmap/heatmap_2m_L27.png
2026-03-10 20:29:55,811 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/heatmap/heatmap_2m_L28.png
2026-03-10 20:29:56,767 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/heatmap/heatmap_2m_L28.png
2026-03-10 20:29:57,461 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/heatmap/heatmap_2m_L29.png
2026-03-10 20:29:58,055 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/heatmap/heatmap_2m_L29.png
2026-03-10 20:29:58,749 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/heatmap/heatmap_2m_L30.png
2026-03-10 20:29:59,339 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/heatmap/heatmap_2m_L30.png
2026-03-10 20:30:00,033 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/heatmap/heatmap_2m_L31.png
2026-03-10 20:30:00,623 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/heatmap/heatmap_2m_L31.png
2026-03-10 20:30:01,318 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/heatmap/heatmap_2m_L32.png
2026-03-10 20:30:01,910 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/heatmap/heatmap_2m_L32.png
2026-03-10 20:30:02,607 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/heatmap/heatmap_2m_L33.png
2026-03-10 20:30:03,200 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/heatmap/heatmap_2m_L33.png
2026-03-10 20:30:03,897 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/heatmap/heatmap_2m_L34.png
2026-03-10 20:30:04,503 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/heatmap/heatmap_2m_L34.png
2026-03-10 20:30:05,202 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/heatmap/heatmap_2m_L35.png
2026-03-10 20:30:05,801 - INFO - Saved delta heatmap: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/heatmap/heatmap_2m_L35.png
2026-03-10 20:30:05,801 - INFO -   [qwen/2m] Saved 72 heatmaps
2026-03-10 20:30:05,802 - INFO - 
--- All-Layer PCA [2m] ---
2026-03-10 20:30:05,802 - INFO -   [qwen/2m] Generating all-layer 2D PCA...
2026-03-10 20:31:18,206 - INFO - Saved PCA plots to /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/pca
2026-03-10 20:31:18,207 - INFO -   [qwen/2m] Generating all-layer 3D PCA...
2026-03-10 20:31:53,498 - INFO - Saved 3D PCA plots to /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/all/pca_3d
2026-03-10 20:31:53,499 - INFO -   [qwen/2m] Generating both-correct 2D PCA...
2026-03-10 20:32:59,216 - INFO - Saved PCA plots to /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/pca
2026-03-10 20:32:59,217 - INFO -   [qwen/2m] Generating both-correct 3D PCA...
2026-03-10 20:33:33,944 - INFO - Saved 3D PCA plots to /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data/qwen_2m/plots/both_correct/pca_3d
2026-03-10 20:33:33,946 - INFO - 
--- Phase B: Cross-group extraction [SKIPPED: --skip-phase-b] ---
2026-03-10 20:33:34,131 - INFO - 
  Scale 2m complete.
2026-03-10 20:33:34,133 - INFO - 
============================================================
2026-03-10 20:33:34,133 - INFO - === All scales complete ===
2026-03-10 20:33:34,133 - INFO - Results: /data/shared/Qwen/experiments/swap_analysis_cvbench/short_answer/saved_data
2026-03-10 20:33:34,133 - INFO - ============================================================