Commit 7bf1ffd (verified), committed by Knowing · 1 parent: 4229dfd

Add files using upload-large-folder tool

This view is limited to 50 files because the commit contains too many changes.
Files changed (50):
  1. ABLATION_0225_FreqSelect/.hydra/config.yaml +185 -0
  2. ABLATION_0225_FreqSelect/.hydra/hydra.yaml +165 -0
  3. ABLATION_0225_FreqSelect/.hydra/overrides.yaml +4 -0
  4. ABLATION_0225_FreqSelect/wandb/debug-internal.log +12 -0
  5. ABLATION_0225_FreqSelect/wandb/debug.log +21 -0
  6. ABLATION_0225_FreqSelect/wandb/run-20260224_222739-y7wvpmyy/files/config.yaml +307 -0
  7. ABLATION_0225_FreqSelect/wandb/run-20260224_222739-y7wvpmyy/files/output.log +0 -0
  8. ABLATION_0225_FreqSelect/wandb/run-20260224_222739-y7wvpmyy/files/requirements.txt +172 -0
  9. ABLATION_0225_FreqSelect/wandb/run-20260224_222739-y7wvpmyy/files/wandb-metadata.json +93 -0
  10. ABLATION_0225_FreqSelect/wandb/run-20260224_222739-y7wvpmyy/files/wandb-summary.json +1 -0
  11. ABLATION_0225_FreqSelect/wandb/run-20260224_222739-y7wvpmyy/logs/debug-core.log +15 -0
  12. ABLATION_0225_FreqSelect/wandb/run-20260224_222739-y7wvpmyy/logs/debug-internal.log +12 -0
  13. ABLATION_0225_FreqSelect/wandb/run-20260224_222739-y7wvpmyy/logs/debug.log +21 -0
  14. ABLATION_0225_OURS/.hydra/config.yaml +185 -0
  15. ABLATION_0225_OURS/.hydra/hydra.yaml +164 -0
  16. ABLATION_0225_OURS/.hydra/overrides.yaml +3 -0
  17. ABLATION_0225_OURS/wandb/debug-internal.log +11 -0
  18. ABLATION_0225_OURS/wandb/debug.log +21 -0
  19. ABLATION_0225_OURS/wandb/run-20260224_191508-0b125b6z/files/config.yaml +306 -0
  20. ABLATION_0225_OURS/wandb/run-20260224_191508-0b125b6z/files/output.log +0 -0
  21. ABLATION_0225_OURS/wandb/run-20260224_191508-0b125b6z/files/requirements.txt +172 -0
  22. ABLATION_0225_OURS/wandb/run-20260224_191508-0b125b6z/files/wandb-metadata.json +92 -0
  23. ABLATION_0225_OURS/wandb/run-20260224_191508-0b125b6z/files/wandb-summary.json +1 -0
  24. ABLATION_0225_OURS/wandb/run-20260224_191508-0b125b6z/logs/debug-core.log +15 -0
  25. ABLATION_0225_OURS/wandb/run-20260224_191508-0b125b6z/logs/debug-internal.log +11 -0
  26. ABLATION_0225_OURS/wandb/run-20260224_191508-0b125b6z/logs/debug.log +21 -0
  27. ABLATION_0225_noRefineModule/.hydra/config.yaml +185 -0
  28. ABLATION_0225_noRefineModule/.hydra/hydra.yaml +165 -0
  29. ABLATION_0225_noRefineModule/.hydra/overrides.yaml +4 -0
  30. ABLATION_0225_noRefineModule/main.log +128 -0
  31. ABLATION_0225_noRefineModule/peak_vram_memory.json +6 -0
  32. ABLATION_0225_noRefineModule/train_ddp_process_3.log +66 -0
  33. ABLATION_0225_noRefineModule/train_ddp_process_4.log +66 -0
  34. ABLATION_0225_noRefineModule/train_ddp_process_7.log +66 -0
  35. ABLATION_0225_noRefineModule/wandb/debug-internal.log +11 -0
  36. ABLATION_0225_noRefineModule/wandb/debug.log +21 -0
  37. ABLATION_0225_noRefineModule/wandb/run-20260225_073227-2f0bcys0/files/config.yaml +307 -0
  38. ABLATION_0225_noRefineModule/wandb/run-20260225_073227-2f0bcys0/files/output.log +0 -0
  39. ABLATION_0225_noRefineModule/wandb/run-20260225_073227-2f0bcys0/files/requirements.txt +172 -0
  40. ABLATION_0225_noRefineModule/wandb/run-20260225_073227-2f0bcys0/files/wandb-metadata.json +93 -0
  41. ABLATION_0225_noRefineModule/wandb/run-20260225_073227-2f0bcys0/files/wandb-summary.json +1 -0
  42. ABLATION_0225_noRefineModule/wandb/run-20260225_073227-2f0bcys0/logs/debug-core.log +15 -0
  43. ABLATION_0225_noRefineModule/wandb/run-20260225_073227-2f0bcys0/logs/debug-internal.log +11 -0
  44. ABLATION_0225_noRefineModule/wandb/run-20260225_073227-2f0bcys0/logs/debug.log +21 -0
  45. ABLATION_0225_randomSelect/main.log +116 -0
  46. ABLATION_0225_randomSelect/train_ddp_process_1.log +60 -0
  47. ABLATION_0225_randomSelect/train_ddp_process_2.log +60 -0
  48. ABLATION_0225_randomSelect/train_ddp_process_4.log +60 -0
  49. ABLATION_0225_randomSelect/train_ddp_process_5.log +60 -0
  50. ABLATION_0225_randomSelect/train_ddp_process_6.log +60 -0
ABLATION_0225_FreqSelect/.hydra/config.yaml ADDED
@@ -0,0 +1,185 @@
model:
  encoder:
    name: dcsplat
    input_image_shape:
    - 518
    - 518
    head_mode: pcd
    num_level: 3
    gs_param_dim: 256
    align_corners: false
    use_voxelize: true
  decoder:
    name: splatting_cuda
    background_color:
    - 0.0
    - 0.0
    - 0.0
    make_scale_invariant: false
  density_control:
    name: density_control_module
    mean_dim: 32
    gs_param_dim: 256
    refinement_layer_num: 1
    num_level: 3
    grad_mode: absgrad
    use_mean_features: true
    refinement_type: voxelize
    refinement_hidden_dim: 32
    aggregation_mode: mean
    num_heads: 1
    score_mode: frequency
    latent_dim: 128
    num_latents: 64
    num_self_attn_per_block: 2
    voxel_size: 0.001
    aux_refine: false
    refine_error: false
    use_refine_module: true
    voxelize_activate: true
    use_depth: false
render_loss:
  mse:
    weight: 1.0
  lpips:
    weight: 0.05
    apply_after_step: 0
density_control_loss:
  error_score:
    weight: 0.01
    log_scale: false
    grad_scale: 10000.0
    mode: original
direct_loss:
  l1:
    weight: 0.8
  ssim:
    weight: 0.2
wandb:
  project: DCSplat
  entity: scene-representation-group
  name: ABLATION_0225_FreqSelect
  mode: online
  tags:
  - re10k
  - 256x256
mode: train
data_loader:
  train:
    num_workers: 16
    persistent_workers: true
    batch_size: 16
    seed: 1234
  test:
    num_workers: 4
    persistent_workers: false
    batch_size: 1
    seed: 2345
  val:
    num_workers: 1
    persistent_workers: true
    batch_size: 1
    seed: 3456
optimizer:
  lr: 0.0002
  warm_up_steps: 25
  backbone_lr_multiplier: 0.1
  backbone_trainable: T+H
  accumulate: 1
checkpointing:
  load: null
  every_n_train_steps: 1500
  save_top_k: 2
  save_weights_only: false
train:
  extended_visualization: false
  print_log_every_n_steps: 10
  camera_loss: 10.0
  one_sample_validation: null
  align_corners: false
  intrinsic_scaling: false
  verbose: false
  beta_dist_param:
  - 0.5
  - 4.0
  use_refine_aux: false
  train_target_set: true
  train_gs_num: 1
  ext_scale_detach: false
  cam_scale_mode: sum
  scene_scale_reg_loss: 0.01
  train_aux: true
  vggt_cam_loss: true
  vggt_distil: false
  context_view_train: false
test:
  output_path: test/ablation/re10k
  align_pose: false
  pose_align_steps: 100
  rot_opt_lr: 0.005
  trans_opt_lr: 0.005
  compute_scores: true
  save_image: false
  save_video: false
  save_active_mask_image: false
  save_error_score_image: false
  save_compare: false
  pred_intrinsic: false
  error_threshold: 0.4
  error_threshold_list:
  - 0.2
  - 0.4
  - 0.6
  - 0.8
  - 1.0
  threshold_mode: ratio
  nvs_view_N_list:
  - 3
  - 6
  - 16
  - 32
  - 64
seed: 111123
trainer:
  max_steps: 3001
  val_check_interval: 250
  gradient_clip_val: 0.5
  num_nodes: 1
dataset:
  re10k:
    make_baseline_1: true
    relative_pose: true
    augment: true
    background_color:
    - 0.0
    - 0.0
    - 0.0
    overfit_to_scene: null
    skip_bad_shape: true
    view_sampler:
      name: bounded
      num_target_views: 4
      num_context_views: 2
      min_distance_between_context_views: 45
      max_distance_between_context_views: 90
      min_distance_to_context_views: 0
      warm_up_steps: 1000
      initial_min_distance_between_context_views: 25
      initial_max_distance_between_context_views: 25
      same_target_gap: false
      num_target_set: 3
    name: re10k
    roots:
    - datasets/re10k
    input_image_shape:
    - 256
    - 256
    original_image_shape:
    - 360
    - 640
    cameras_are_circular: false
    baseline_min: 0.001
    baseline_max: 10000000000.0
    max_fov: 100.0
    dynamic_context_views: true
    max_context_views_per_gpu: 24
ABLATION_0225_FreqSelect/.hydra/hydra.yaml ADDED
@@ -0,0 +1,165 @@
hydra:
  run:
    dir: outputs/ablation/re10k/${wandb.name}
  sweep:
    dir: multirun/${now:%Y-%m-%d}/${now:%H-%M-%S}
    subdir: ${hydra.job.num}
  launcher:
    _target_: hydra._internal.core_plugins.basic_launcher.BasicLauncher
  sweeper:
    _target_: hydra._internal.core_plugins.basic_sweeper.BasicSweeper
    max_batch_size: null
    params: null
  help:
    app_name: ${hydra.job.name}
    header: '${hydra.help.app_name} is powered by Hydra.

      '
    footer: 'Powered by Hydra (https://hydra.cc)

      Use --hydra-help to view Hydra specific help

      '
    template: '${hydra.help.header}

      == Configuration groups ==

      Compose your configuration from those groups (group=option)


      $APP_CONFIG_GROUPS


      == Config ==

      Override anything in the config (foo.bar=value)


      $CONFIG


      ${hydra.help.footer}

      '
  hydra_help:
    template: 'Hydra (${hydra.runtime.version})

      See https://hydra.cc for more info.


      == Flags ==

      $FLAGS_HELP


      == Configuration groups ==

      Compose your configuration from those groups (For example, append hydra/job_logging=disabled
      to command line)


      $HYDRA_CONFIG_GROUPS


      Use ''--cfg hydra'' to Show the Hydra config.

      '
    hydra_help: ???
  hydra_logging:
    version: 1
    formatters:
      simple:
        format: '[%(asctime)s][HYDRA] %(message)s'
    handlers:
      console:
        class: logging.StreamHandler
        formatter: simple
        stream: ext://sys.stdout
    root:
      level: INFO
      handlers:
      - console
    loggers:
      logging_example:
        level: DEBUG
    disable_existing_loggers: false
  job_logging:
    version: 1
    formatters:
      simple:
        format: '[%(asctime)s][%(name)s][%(levelname)s] - %(message)s'
    handlers:
      console:
        class: logging.StreamHandler
        formatter: simple
        stream: ext://sys.stdout
      file:
        class: logging.FileHandler
        formatter: simple
        filename: ${hydra.runtime.output_dir}/${hydra.job.name}.log
    root:
      level: INFO
      handlers:
      - console
      - file
    disable_existing_loggers: false
  env: {}
  mode: RUN
  searchpath: []
  callbacks: {}
  output_subdir: .hydra
  overrides:
    hydra:
    - hydra.mode=RUN
    task:
    - +experiment=re10k_ablation_24v
    - wandb.mode=online
    - wandb.name=ABLATION_0225_FreqSelect
    - model.density_control.score_mode=frequency
  job:
    name: main
    chdir: null
    override_dirname: +experiment=re10k_ablation_24v,model.density_control.score_mode=frequency,wandb.mode=online,wandb.name=ABLATION_0225_FreqSelect
    id: ???
    num: ???
    config_name: main
    env_set: {}
    env_copy: []
    config:
      override_dirname:
        kv_sep: '='
        item_sep: ','
        exclude_keys: []
  runtime:
    version: 1.3.2
    version_base: '1.3'
    cwd: /workspace/code/CVPR2026
    config_sources:
    - path: hydra.conf
      schema: pkg
      provider: hydra
    - path: /workspace/code/CVPR2026/config
      schema: file
      provider: main
    - path: ''
      schema: structured
      provider: schema
    output_dir: /workspace/code/CVPR2026/outputs/ablation/re10k/ABLATION_0225_FreqSelect
    choices:
      experiment: re10k_ablation_24v
      dataset@dataset.re10k: re10k
      dataset/view_sampler_dataset_specific_config@dataset.re10k.view_sampler: bounded_re10k
      dataset/view_sampler@dataset.re10k.view_sampler: bounded
      model/density_control: density_control_module
      model/decoder: splatting_cuda
      model/encoder: dcsplat
      hydra/env: default
      hydra/callbacks: null
      hydra/job_logging: default
      hydra/hydra_logging: default
      hydra/hydra_help: default
      hydra/help: default
      hydra/sweeper: basic
      hydra/launcher: basic
      hydra/output: default
  verbose: false
ABLATION_0225_FreqSelect/.hydra/overrides.yaml ADDED
@@ -0,0 +1,4 @@
- +experiment=re10k_ablation_24v
- wandb.mode=online
- wandb.name=ABLATION_0225_FreqSelect
- model.density_control.score_mode=frequency
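The overrides above use Hydra's dotted-key syntax to reach into the nested config (e.g. `model.density_control.score_mode=frequency` flips the selection criterion for this ablation). As a rough illustration of how such dotted overrides address a nested config tree — this is a simplified pure-Python sketch, not Hydra's actual override grammar or implementation:

```python
# Illustrative sketch: apply Hydra-style "a.b.c=value" overrides to a nested
# dict. Real Hydra also parses types, "+"/"~" prefixes, sweeps, etc.
def apply_override(cfg: dict, override: str) -> None:
    key_path, _, raw_value = override.partition("=")
    keys = key_path.lstrip("+").split(".")  # "+experiment=..." adds a new key
    node = cfg
    for key in keys[:-1]:
        node = node.setdefault(key, {})  # walk/create intermediate nodes
    node[keys[-1]] = raw_value           # values kept as strings for simplicity

cfg = {"model": {"density_control": {"score_mode": "error"}}, "wandb": {}}
for ov in ["wandb.mode=online",
           "wandb.name=ABLATION_0225_FreqSelect",
           "model.density_control.score_mode=frequency"]:
    apply_override(cfg, ov)

print(cfg["model"]["density_control"]["score_mode"])  # frequency
```

With the four overrides from this file, the base `re10k_ablation_24v` experiment config is reused and only the score mode and run identity change between ablations.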
ABLATION_0225_FreqSelect/wandb/debug-internal.log ADDED
@@ -0,0 +1,12 @@
{"time":"2026-02-24T22:27:39.882209485Z","level":"INFO","msg":"stream: starting","core version":"0.25.0"}
{"time":"2026-02-24T22:27:40.294571378Z","level":"INFO","msg":"stream: created new stream","id":"y7wvpmyy"}
{"time":"2026-02-24T22:27:40.2947114Z","level":"INFO","msg":"handler: started","stream_id":"y7wvpmyy"}
{"time":"2026-02-24T22:27:40.294855053Z","level":"INFO","msg":"stream: started","id":"y7wvpmyy"}
{"time":"2026-02-24T22:27:40.294904223Z","level":"INFO","msg":"sender: started","stream_id":"y7wvpmyy"}
{"time":"2026-02-24T22:27:40.294940724Z","level":"INFO","msg":"writer: started","stream_id":"y7wvpmyy"}
{"time":"2026-02-25T01:00:56.785103175Z","level":"INFO","msg":"api: retrying HTTP error","status":502,"url":"https://api.wandb.ai/files/know/DCSplat/y7wvpmyy/file_stream","body":"\n<html><head>\n<meta http-equiv=\"content-type\" content=\"text/html;charset=utf-8\">\n<title>502 Server Error</title>\n</head>\n<body text=#000000 bgcolor=#ffffff>\n<h1>Error: Server Error</h1>\n<h2>The server encountered a temporary error and could not complete your request.<p>Please try again in 30 seconds.</h2>\n<h2></h2>\n</body></html>\n"}
{"time":"2026-02-25T01:39:55.965783052Z","level":"INFO","msg":"stream: closing","id":"y7wvpmyy"}
{"time":"2026-02-25T01:39:56.929029575Z","level":"INFO","msg":"fileTransfer: Close: file transfer manager closed"}
{"time":"2026-02-25T01:39:57.1548805Z","level":"INFO","msg":"handler: closed","stream_id":"y7wvpmyy"}
{"time":"2026-02-25T01:39:57.155103083Z","level":"INFO","msg":"sender: closed","stream_id":"y7wvpmyy"}
{"time":"2026-02-25T01:39:57.155127144Z","level":"INFO","msg":"stream: closed","id":"y7wvpmyy"}
ABLATION_0225_FreqSelect/wandb/debug.log ADDED
@@ -0,0 +1,21 @@
2026-02-24 22:27:39,587 INFO MainThread:113743 [wandb_setup.py:_flush():81] Current SDK version is 0.25.0
2026-02-24 22:27:39,588 INFO MainThread:113743 [wandb_setup.py:_flush():81] Configure stats pid to 113743
2026-02-24 22:27:39,588 INFO MainThread:113743 [wandb_setup.py:_flush():81] Loading settings from environment variables
2026-02-24 22:27:39,588 INFO MainThread:113743 [wandb_init.py:setup_run_log_directory():717] Logging user logs to /workspace/code/CVPR2026/outputs/ablation/re10k/ABLATION_0225_FreqSelect/wandb/run-20260224_222739-y7wvpmyy/logs/debug.log
2026-02-24 22:27:39,588 INFO MainThread:113743 [wandb_init.py:setup_run_log_directory():718] Logging internal logs to /workspace/code/CVPR2026/outputs/ablation/re10k/ABLATION_0225_FreqSelect/wandb/run-20260224_222739-y7wvpmyy/logs/debug-internal.log
2026-02-24 22:27:39,588 INFO MainThread:113743 [wandb_init.py:init():844] calling init triggers
2026-02-24 22:27:39,588 INFO MainThread:113743 [wandb_init.py:init():849] wandb.init called with sweep_config: {}
config: {'model': {'encoder': {'name': 'dcsplat', 'input_image_shape': [518, 518], 'head_mode': 'pcd', 'num_level': 3, 'gs_param_dim': 256, 'align_corners': False, 'use_voxelize': True}, 'decoder': {'name': 'splatting_cuda', 'background_color': [0.0, 0.0, 0.0], 'make_scale_invariant': False}, 'density_control': {'name': 'density_control_module', 'mean_dim': 32, 'gs_param_dim': 256, 'refinement_layer_num': 1, 'num_level': 3, 'grad_mode': 'absgrad', 'use_mean_features': True, 'refinement_type': 'voxelize', 'refinement_hidden_dim': 32, 'aggregation_mode': 'mean', 'num_heads': 1, 'score_mode': 'frequency', 'latent_dim': 128, 'num_latents': 64, 'num_self_attn_per_block': 2, 'voxel_size': 0.001, 'aux_refine': False, 'refine_error': False, 'use_refine_module': True, 'voxelize_activate': True, 'use_depth': False}}, 'render_loss': {'mse': {'weight': 1.0}, 'lpips': {'weight': 0.05, 'apply_after_step': 0}}, 'density_control_loss': {'error_score': {'weight': 0.01, 'log_scale': False, 'grad_scale': 10000.0, 'mode': 'original'}}, 'direct_loss': {'l1': {'weight': 0.8}, 'ssim': {'weight': 0.2}}, 'wandb': {'project': 'DCSplat', 'entity': 'scene-representation-group', 'name': 'ABLATION_0225_FreqSelect', 'mode': 'online', 'tags': ['re10k', '256x256']}, 'mode': 'train', 'data_loader': {'train': {'num_workers': 16, 'persistent_workers': True, 'batch_size': 16, 'seed': 1234}, 'test': {'num_workers': 4, 'persistent_workers': False, 'batch_size': 1, 'seed': 2345}, 'val': {'num_workers': 1, 'persistent_workers': True, 'batch_size': 1, 'seed': 3456}}, 'optimizer': {'lr': 0.0002, 'warm_up_steps': 25, 'backbone_lr_multiplier': 0.1, 'backbone_trainable': 'T+H', 'accumulate': 1}, 'checkpointing': {'load': None, 'every_n_train_steps': 1500, 'save_top_k': 2, 'save_weights_only': False}, 'train': {'extended_visualization': False, 'print_log_every_n_steps': 10, 'camera_loss': 10.0, 'one_sample_validation': None, 'align_corners': False, 'intrinsic_scaling': False, 'verbose': False, 'beta_dist_param': [0.5, 4.0], 'use_refine_aux': False, 'train_target_set': True, 'train_gs_num': 1, 'ext_scale_detach': False, 'cam_scale_mode': 'sum', 'scene_scale_reg_loss': 0.01, 'train_aux': True, 'vggt_cam_loss': True, 'vggt_distil': False, 'context_view_train': False}, 'test': {'output_path': 'test/ablation/re10k', 'align_pose': False, 'pose_align_steps': 100, 'rot_opt_lr': 0.005, 'trans_opt_lr': 0.005, 'compute_scores': True, 'save_image': False, 'save_video': False, 'save_active_mask_image': False, 'save_error_score_image': False, 'save_compare': False, 'pred_intrinsic': False, 'error_threshold': 0.4, 'error_threshold_list': [0.2, 0.4, 0.6, 0.8, 1.0], 'threshold_mode': 'ratio', 'nvs_view_N_list': [3, 6, 16, 32, 64]}, 'seed': 111123, 'trainer': {'max_steps': 3001, 'val_check_interval': 250, 'gradient_clip_val': 0.5, 'num_nodes': 1}, 'dataset': {'re10k': {'make_baseline_1': True, 'relative_pose': True, 'augment': True, 'background_color': [0.0, 0.0, 0.0], 'overfit_to_scene': None, 'skip_bad_shape': True, 'view_sampler': {'name': 'bounded', 'num_target_views': 4, 'num_context_views': 2, 'min_distance_between_context_views': 45, 'max_distance_between_context_views': 90, 'min_distance_to_context_views': 0, 'warm_up_steps': 1000, 'initial_min_distance_between_context_views': 25, 'initial_max_distance_between_context_views': 25, 'same_target_gap': False, 'num_target_set': 3}, 'name': 're10k', 'roots': ['datasets/re10k'], 'input_image_shape': [256, 256], 'original_image_shape': [360, 640], 'cameras_are_circular': False, 'baseline_min': 0.001, 'baseline_max': 10000000000.0, 'max_fov': 100.0, 'dynamic_context_views': True, 'max_context_views_per_gpu': 24}}, '_wandb': {}}
2026-02-24 22:27:39,588 INFO MainThread:113743 [wandb_init.py:init():892] starting backend
2026-02-24 22:27:39,873 INFO MainThread:113743 [wandb_init.py:init():895] sending inform_init request
2026-02-24 22:27:39,880 INFO MainThread:113743 [wandb_init.py:init():903] backend started and connected
2026-02-24 22:27:39,887 INFO MainThread:113743 [wandb_init.py:init():973] updated telemetry
2026-02-24 22:27:39,894 INFO MainThread:113743 [wandb_init.py:init():997] communicating run to backend with 90.0 second timeout
2026-02-24 22:27:41,506 INFO MainThread:113743 [wandb_init.py:init():1042] starting run threads in backend
2026-02-24 22:27:41,632 INFO MainThread:113743 [wandb_run.py:_console_start():2524] atexit reg
2026-02-24 22:27:41,632 INFO MainThread:113743 [wandb_run.py:_redirect():2373] redirect: wrap_raw
2026-02-24 22:27:41,632 INFO MainThread:113743 [wandb_run.py:_redirect():2442] Wrapping output streams.
2026-02-24 22:27:41,632 INFO MainThread:113743 [wandb_run.py:_redirect():2465] Redirects installed.
2026-02-24 22:27:41,635 INFO MainThread:113743 [wandb_init.py:init():1082] run started, returning control to user process
2026-02-25 01:39:55,965 INFO wandb-AsyncioManager-main:113743 [service_client.py:_forward_responses():134] Reached EOF.
2026-02-25 01:39:55,965 INFO wandb-AsyncioManager-main:113743 [mailbox.py:close():155] Closing mailbox, abandoning 1 handles.
ABLATION_0225_FreqSelect/wandb/run-20260224_222739-y7wvpmyy/files/config.yaml ADDED
@@ -0,0 +1,307 @@
_wandb:
  value:
    cli_version: 0.25.0
    e:
      1aoh34iwmaamch760bz6silmn5l3ie5b:
        args:
        - +experiment=re10k_ablation_24v
        - wandb.mode=online
        - wandb.name=ABLATION_0225_FreqSelect
        - model.density_control.score_mode=frequency
        cpu_count: 128
        cpu_count_logical: 256
        cudaVersion: "13.1"
        disk:
          /:
            total: "1170378588160"
            used: "636725506048"
        email: dna9041@korea.ac.kr
        executable: /venv/main/bin/python
        git:
          commit: 2512754c6c27ca5150bf17fbcbdde3f192fd53cc
          remote: git@github.com:K-nowing/CVPR2026.git
        gpu: NVIDIA H200
        gpu_count: 8
        gpu_nvidia:
        - architecture: Hopper
          cudaCores: 16896
          memoryTotal: "150754820096"
          name: NVIDIA H200
          uuid: GPU-2649ab80-a3a6-5a1c-0fa5-12bc11bd75e9
        - architecture: Hopper
          cudaCores: 16896
          memoryTotal: "150754820096"
          name: NVIDIA H200
          uuid: GPU-e92921d9-c681-246f-af93-637e0dc938ca
        - architecture: Hopper
          cudaCores: 16896
          memoryTotal: "150754820096"
          name: NVIDIA H200
          uuid: GPU-ffe12ffc-9bb7-82de-5692-1ec0ee2e68d8
        - architecture: Hopper
          cudaCores: 16896
          memoryTotal: "150754820096"
          name: NVIDIA H200
          uuid: GPU-499e5acd-b6ab-2010-c51b-ee9b5aa65825
        - architecture: Hopper
          cudaCores: 16896
          memoryTotal: "150754820096"
          name: NVIDIA H200
          uuid: GPU-3b2522d9-1c72-e49b-2c30-96165680b74a
        - architecture: Hopper
          cudaCores: 16896
          memoryTotal: "150754820096"
          name: NVIDIA H200
          uuid: GPU-a9a280c5-b2f9-dc1e-a8a9-7326a74001ff
        - architecture: Hopper
          cudaCores: 16896
          memoryTotal: "150754820096"
          name: NVIDIA H200
          uuid: GPU-07d0167b-a6a1-1900-2d27-7c6c11598409
        - architecture: Hopper
          cudaCores: 16896
          memoryTotal: "150754820096"
          name: NVIDIA H200
          uuid: GPU-8362a999-20d1-c27b-5d18-032d23f859ab
        host: 27d18dedec6d
        memory:
          total: "1622948257792"
        os: Linux-6.8.0-90-generic-x86_64-with-glibc2.39
        program: -m src.main
        python: CPython 3.12.12
        root: /workspace/code/CVPR2026/outputs/ablation/re10k/ABLATION_0225_FreqSelect
        startedAt: "2026-02-24T22:27:39.584882Z"
        writerId: 1aoh34iwmaamch760bz6silmn5l3ie5b
    m:
    - "1": trainer/global_step
      "6":
      - 3
      "7": []
    - "2": '*'
      "5": 1
      "6":
      - 1
      "7": []
    python_version: 3.12.12
    t:
      "1":
      - 1
      - 41
      - 49
      - 50
      - 106
      "2":
      - 1
      - 41
      - 49
      - 50
      - 106
      "3":
      - 7
      - 13
      - 15
      - 16
      - 66
      "4": 3.12.12
      "5": 0.25.0
      "12": 0.25.0
      "13": linux-x86_64
checkpointing:
  value:
    every_n_train_steps: 1500
    load: null
    save_top_k: 2
    save_weights_only: false
data_loader:
  value:
    test:
      batch_size: 1
      num_workers: 4
      persistent_workers: false
      seed: 2345
    train:
      batch_size: 16
      num_workers: 16
      persistent_workers: true
      seed: 1234
    val:
      batch_size: 1
      num_workers: 1
      persistent_workers: true
      seed: 3456
dataset:
  value:
    re10k:
      augment: true
      background_color:
      - 0
      - 0
      - 0
      baseline_max: 1e+10
      baseline_min: 0.001
      cameras_are_circular: false
      dynamic_context_views: true
      input_image_shape:
      - 256
      - 256
      make_baseline_1: true
      max_context_views_per_gpu: 24
      max_fov: 100
      name: re10k
      original_image_shape:
      - 360
      - 640
      overfit_to_scene: null
      relative_pose: true
      roots:
      - datasets/re10k
      skip_bad_shape: true
      view_sampler:
        initial_max_distance_between_context_views: 25
        initial_min_distance_between_context_views: 25
        max_distance_between_context_views: 90
        min_distance_between_context_views: 45
        min_distance_to_context_views: 0
        name: bounded
        num_context_views: 2
        num_target_set: 3
        num_target_views: 4
        same_target_gap: false
        warm_up_steps: 1000
density_control_loss:
  value:
    error_score:
      grad_scale: 10000
      log_scale: false
      mode: original
      weight: 0.01
direct_loss:
  value:
    l1:
      weight: 0.8
    ssim:
      weight: 0.2
mode:
  value: train
model:
  value:
    decoder:
      background_color:
      - 0
      - 0
      - 0
      make_scale_invariant: false
      name: splatting_cuda
    density_control:
      aggregation_mode: mean
      aux_refine: false
      grad_mode: absgrad
      gs_param_dim: 256
      latent_dim: 128
      mean_dim: 32
      name: density_control_module
      num_heads: 1
      num_latents: 64
      num_level: 3
      num_self_attn_per_block: 2
      refine_error: false
      refinement_hidden_dim: 32
      refinement_layer_num: 1
      refinement_type: voxelize
      score_mode: frequency
      use_depth: false
      use_mean_features: true
      use_refine_module: true
      voxel_size: 0.001
      voxelize_activate: true
    encoder:
      align_corners: false
      gs_param_dim: 256
      head_mode: pcd
      input_image_shape:
      - 518
      - 518
      name: dcsplat
      num_level: 3
      use_voxelize: true
optimizer:
  value:
    accumulate: 1
    backbone_lr_multiplier: 0.1
    backbone_trainable: T+H
    lr: 0.0002
    warm_up_steps: 25
render_loss:
  value:
    lpips:
      apply_after_step: 0
      weight: 0.05
    mse:
      weight: 1
seed:
  value: 111123
test:
  value:
    align_pose: false
    compute_scores: true
    error_threshold: 0.4
    error_threshold_list:
    - 0.2
    - 0.4
    - 0.6
    - 0.8
    - 1
    nvs_view_N_list:
    - 3
    - 6
    - 16
    - 32
    - 64
    output_path: test/ablation/re10k
    pose_align_steps: 100
    pred_intrinsic: false
    rot_opt_lr: 0.005
    save_active_mask_image: false
    save_compare: false
    save_error_score_image: false
    save_image: false
    save_video: false
    threshold_mode: ratio
    trans_opt_lr: 0.005
train:
  value:
    align_corners: false
    beta_dist_param:
    - 0.5
    - 4
    cam_scale_mode: sum
    camera_loss: 10
    context_view_train: false
    ext_scale_detach: false
    extended_visualization: false
    intrinsic_scaling: false
    one_sample_validation: null
    print_log_every_n_steps: 10
    scene_scale_reg_loss: 0.01
    train_aux: true
    train_gs_num: 1
    train_target_set: true
    use_refine_aux: false
    verbose: false
    vggt_cam_loss: true
    vggt_distil: false
trainer:
  value:
    gradient_clip_val: 0.5
    max_steps: 3001
    num_nodes: 1
    val_check_interval: 250
wandb:
  value:
    entity: scene-representation-group
    mode: online
    name: ABLATION_0225_FreqSelect
    project: DCSplat
    tags:
    - re10k
    - 256x256
ABLATION_0225_FreqSelect/wandb/run-20260224_222739-y7wvpmyy/files/output.log ADDED
The diff for this file is too large to render.
 
ABLATION_0225_FreqSelect/wandb/run-20260224_222739-y7wvpmyy/files/requirements.txt ADDED
@@ -0,0 +1,172 @@
wheel==0.45.1
pytz==2025.2
easydict==1.13
antlr4-python3-runtime==4.9.3
wadler_lindig==0.1.7
urllib3==2.5.0
tzdata==2025.2
typing-inspection==0.4.1
tabulate==0.9.0
smmap==5.0.2
kornia_rs==0.1.9
setuptools==78.1.1
safetensors==0.5.3
PyYAML==6.0.2
PySocks==1.7.1
pyparsing==3.2.5
pydantic_core==2.33.2
pycparser==2.23
protobuf==6.32.1
propcache==0.3.2
proglog==0.1.12
fsspec==2024.6.1
platformdirs==4.4.0
pip==25.2
pillow==10.4.0
frozenlist==1.7.0
packaging==24.2
opt_einsum==3.4.0
numpy==1.26.4
ninja==1.13.0
fonttools==4.60.0
networkx==3.4.2
multidict==6.6.4
mdurl==0.1.2
MarkupSafe==3.0.2
kiwisolver==1.4.9
imageio-ffmpeg==0.6.0
idna==3.7
hf-xet==1.1.10
gmpy2==2.2.1
einops==0.8.1
filelock==3.17.0
decorator==4.4.2
dacite==1.9.2
cycler==0.12.1
colorama==0.4.6
click==8.3.0
nvidia-nvtx-cu12==12.8.90
charset-normalizer==3.3.2
certifi==2025.8.3
beartype==0.19.0
attrs==25.3.0
async-timeout==5.0.1
annotated-types==0.7.0
aiohappyeyeballs==2.6.1
yarl==1.20.1
tifffile==2025.5.10
sentry-sdk==2.39.0
scipy==1.15.3
pydantic==2.11.9
pandas==2.3.2
opencv-python==4.11.0.86
omegaconf==2.3.0
markdown-it-py==4.0.0
lightning-utilities==0.14.3
lazy_loader==0.4
jaxtyping==0.2.37
imageio==2.37.0
gitdb==4.0.12
contourpy==1.3.2
colorspacious==1.1.2
cffi==1.17.1
aiosignal==1.4.0
scikit-video==1.1.11
scikit-image==0.25.2
rich==14.1.0
moviepy==1.0.3
matplotlib==3.10.6
hydra-core==1.3.2
nvidia-nccl-cu12==2.27.3
huggingface-hub==0.35.1
GitPython==3.1.45
brotlicffi==1.0.9.2
aiohttp==3.12.15
torchmetrics==1.8.2
opt-einsum-fx==0.1.4
kornia==0.8.1
pytorch-lightning==2.5.1
lpips==0.1.4
e3nn==0.6.0
lightning==2.5.1
nvidia-cusparselt-cu12==0.7.1
triton==3.4.0
nvidia-nvjitlink-cu12==12.8.93
nvidia-curand-cu12==10.3.9.90
nvidia-cufile-cu12==1.13.1.3
nvidia-cuda-runtime-cu12==12.8.90
nvidia-cuda-nvrtc-cu12==12.8.93
nvidia-cuda-cupti-cu12==12.8.90
nvidia-cublas-cu12==12.8.4.1
nvidia-cusparse-cu12==12.5.8.93
nvidia-cufft-cu12==11.3.3.83
nvidia-cudnn-cu12==9.10.2.21
nvidia-cusolver-cu12==11.7.3.90
torch==2.8.0+cu128
torchvision==0.23.0+cu128
torchaudio==2.8.0+cu128
torch_scatter==2.1.2+pt28cu128
gsplat==1.5.3
wandb==0.25.0
cuda-bindings==13.0.3
cuda-pathfinder==1.3.3
Jinja2==3.1.6
mpmath==1.3.0
nvidia-cublas==13.1.0.3
nvidia-cuda-cupti==13.0.85
nvidia-cuda-nvrtc==13.0.88
nvidia-cuda-runtime==13.0.96
nvidia-cudnn-cu13==9.15.1.9
nvidia-cufft==12.0.0.61
nvidia-cufile==1.15.1.6
nvidia-curand==10.4.0.35
nvidia-cusolver==12.0.4.66
nvidia-cusparse==12.6.3.3
nvidia-cusparselt-cu13==0.8.0
nvidia-nccl-cu13==2.28.9
nvidia-nvjitlink==13.0.88
nvidia-nvshmem-cu13==3.4.5
nvidia-nvtx==13.0.85
requests==2.32.5
sentencepiece==0.2.1
sympy==1.14.0
torchcodec==0.10.0
torchdata==0.10.0
torchtext==0.6.0
anyio==4.12.0
asttokens==3.0.1
comm==0.2.3
debugpy==1.8.19
executing==2.2.1
h11==0.16.0
httpcore==1.0.9
httpx==0.28.1
ipykernel==7.1.0
ipython==9.8.0
ipython_pygments_lexers==1.1.1
ipywidgets==8.1.8
jedi==0.19.2
jupyter_client==8.7.0
jupyter_core==5.9.1
jupyterlab_widgets==3.0.16
matplotlib-inline==0.2.1
nest-asyncio==1.6.0
parso==0.8.5
pexpect==4.9.0
prompt_toolkit==3.0.52
psutil==7.2.1
ptyprocess==0.7.0
pure_eval==0.2.3
Pygments==2.19.2
python-dateutil==2.9.0.post0
pyzmq==27.1.0
shellingham==1.5.4
six==1.17.0
stack-data==0.6.3
tornado==6.5.4
tqdm==4.67.1
traitlets==5.14.3
typer-slim==0.21.0
typing_extensions==4.15.0
wcwidth==0.2.14
+ widgetsnbextension==4.0.15
ABLATION_0225_FreqSelect/wandb/run-20260224_222739-y7wvpmyy/files/wandb-metadata.json ADDED
@@ -0,0 +1,93 @@
+ {
+ "os": "Linux-6.8.0-90-generic-x86_64-with-glibc2.39",
+ "python": "CPython 3.12.12",
+ "startedAt": "2026-02-24T22:27:39.584882Z",
+ "args": [
+ "+experiment=re10k_ablation_24v",
+ "wandb.mode=online",
+ "wandb.name=ABLATION_0225_FreqSelect",
+ "model.density_control.score_mode=frequency"
+ ],
+ "program": "-m src.main",
+ "git": {
+ "remote": "git@github.com:K-nowing/CVPR2026.git",
+ "commit": "2512754c6c27ca5150bf17fbcbdde3f192fd53cc"
+ },
+ "email": "dna9041@korea.ac.kr",
+ "root": "/workspace/code/CVPR2026/outputs/ablation/re10k/ABLATION_0225_FreqSelect",
+ "host": "27d18dedec6d",
+ "executable": "/venv/main/bin/python",
+ "cpu_count": 128,
+ "cpu_count_logical": 256,
+ "gpu": "NVIDIA H200",
+ "gpu_count": 8,
+ "disk": {
+ "/": {
+ "total": "1170378588160",
+ "used": "636725506048"
+ }
+ },
+ "memory": {
+ "total": "1622948257792"
+ },
+ "gpu_nvidia": [
+ {
+ "name": "NVIDIA H200",
+ "memoryTotal": "150754820096",
+ "cudaCores": 16896,
+ "architecture": "Hopper",
+ "uuid": "GPU-2649ab80-a3a6-5a1c-0fa5-12bc11bd75e9"
+ },
+ {
+ "name": "NVIDIA H200",
+ "memoryTotal": "150754820096",
+ "cudaCores": 16896,
+ "architecture": "Hopper",
+ "uuid": "GPU-e92921d9-c681-246f-af93-637e0dc938ca"
+ },
+ {
+ "name": "NVIDIA H200",
+ "memoryTotal": "150754820096",
+ "cudaCores": 16896,
+ "architecture": "Hopper",
+ "uuid": "GPU-ffe12ffc-9bb7-82de-5692-1ec0ee2e68d8"
+ },
+ {
+ "name": "NVIDIA H200",
+ "memoryTotal": "150754820096",
+ "cudaCores": 16896,
+ "architecture": "Hopper",
+ "uuid": "GPU-499e5acd-b6ab-2010-c51b-ee9b5aa65825"
+ },
+ {
+ "name": "NVIDIA H200",
+ "memoryTotal": "150754820096",
+ "cudaCores": 16896,
+ "architecture": "Hopper",
+ "uuid": "GPU-3b2522d9-1c72-e49b-2c30-96165680b74a"
+ },
+ {
+ "name": "NVIDIA H200",
+ "memoryTotal": "150754820096",
+ "cudaCores": 16896,
+ "architecture": "Hopper",
+ "uuid": "GPU-a9a280c5-b2f9-dc1e-a8a9-7326a74001ff"
+ },
+ {
+ "name": "NVIDIA H200",
+ "memoryTotal": "150754820096",
+ "cudaCores": 16896,
+ "architecture": "Hopper",
+ "uuid": "GPU-07d0167b-a6a1-1900-2d27-7c6c11598409"
+ },
+ {
+ "name": "NVIDIA H200",
+ "memoryTotal": "150754820096",
+ "cudaCores": 16896,
+ "architecture": "Hopper",
+ "uuid": "GPU-8362a999-20d1-c27b-5d18-032d23f859ab"
+ }
+ ],
+ "cudaVersion": "13.1",
+ "writerId": "1aoh34iwmaamch760bz6silmn5l3ie5b"
+ }
ABLATION_0225_FreqSelect/wandb/run-20260224_222739-y7wvpmyy/files/wandb-summary.json ADDED
@@ -0,0 +1 @@
+ {"loss/total":0.10145657509565353,"loss/final_3dgs/lpips":0.009992067702114582,"val/lpips":0.15560707449913025,"loss/camera":0.00027895145467482507,"lr-AdamW/pg1-momentum":0.9,"loss/aux_0/lpips":0.011460522189736366,"loss/aux_2/mse":0.013906704261898994,"loss/scene_scale_reg":0.00029438614728860557,"loss/aux_0/mse":0.014569984748959541,"lr-AdamW/pg2":2e-05,"val/psnr":22.323665618896484,"loss/aux_0/error_score":0.8076989054679871,"loss/aux_2/lpips":0.0103166364133358,"epoch":0,"train/psnr_probabilistic":18.699142456054688,"train/error_scores":{"filenames":["media/images/train/error_scores_201_6255176ede93e5c4c605.png"],"captions":[["0621c7675fab1418"]],"_type":"images/separated","width":1328,"height":2120,"format":"png","count":1},"loss/aux_1/mse":0.014023929834365845,"train/comparison":{"height":2154,"format":"png","count":1,"filenames":["media/images/train/comparison_202_2d515c3482668baeba0f.png"],"captions":[["0621c7675fab1418"]],"_type":"images/separated","width":1328},"error_scores":{"format":"png","count":1,"filenames":["media/images/error_scores_199_bbf557521907e54e9e40.png"],"captions":["a76028640ffa1ef9"],"_type":"images/separated","width":800,"height":536},"loss/aux_1/lpips":0.010416326113045216,"train/scene_scale":1.0072107315063477,"_step":202,"_timestamp":1.771983588695968e+09,"val/gaussian_num_ratio":0.3998870849609375,"trainer/global_step":3001,"loss/final_3dgs/mse":0.013686501421034336,"val/ssim":0.8440837860107422,"loss/aux_1/error_score":0.4816555380821228,"active_mask_imgs":{"filenames":["media/images/active_mask_imgs_198_24c7ded6b719c7a30450.png"],"captions":["a76028640ffa1ef9"],"_type":"images/separated","width":536,"height":800,"format":"png","count":1},"comparison":{"width":1064,"height":1098,"format":"png","count":1,"filenames":["media/images/comparison_197_e0879eb637c4b3dfe984.png"],"captions":["a76028640ffa1ef9"],"_type":"images/separated"},"_wandb":{"runtime":11534},"lr-AdamW/pg1":2.003594834351718e-05,"info/global_step":3000,"lr-AdamW/pg2-momentum":0.9}
ABLATION_0225_FreqSelect/wandb/run-20260224_222739-y7wvpmyy/logs/debug-core.log ADDED
@@ -0,0 +1,15 @@
+ {"time":"2026-02-24T22:27:39.691505272Z","level":"INFO","msg":"main: starting server","port-filename":"/tmp/tmphvccr9ry/port-113743.txt","pid":113743,"log-level":0,"disable-analytics":false,"shutdown-on-parent-exit":false,"enable-dcgm-profiling":false}
+ {"time":"2026-02-24T22:27:39.692335245Z","level":"INFO","msg":"server: will exit if parent process dies","ppid":113743}
+ {"time":"2026-02-24T22:27:39.692317115Z","level":"INFO","msg":"server: accepting connections","addr":{"Name":"/tmp/wandb-113743-116175-1521879483/socket","Net":"unix"}}
+ {"time":"2026-02-24T22:27:39.872966329Z","level":"INFO","msg":"connection: ManageConnectionData: new connection created","id":"1(@)"}
+ {"time":"2026-02-24T22:27:39.882057082Z","level":"INFO","msg":"handleInformInit: received","streamId":"y7wvpmyy","id":"1(@)"}
+ {"time":"2026-02-24T22:27:40.294862883Z","level":"INFO","msg":"handleInformInit: stream started","streamId":"y7wvpmyy","id":"1(@)"}
+ {"time":"2026-02-24T22:27:46.739505276Z","level":"INFO","msg":"connection: cancelling request","id":"1(@)","requestId":"ml9idastztfo"}
+ {"time":"2026-02-25T01:39:55.96564956Z","level":"INFO","msg":"handleInformTeardown: server teardown initiated","id":"1(@)"}
+ {"time":"2026-02-25T01:39:55.965789472Z","level":"INFO","msg":"server is shutting down"}
+ {"time":"2026-02-25T01:39:55.965784692Z","level":"INFO","msg":"connection: closing","id":"1(@)"}
+ {"time":"2026-02-25T01:39:55.965834972Z","level":"INFO","msg":"connection: closed successfully","id":"1(@)"}
+ {"time":"2026-02-25T01:39:55.965861283Z","level":"INFO","msg":"server: listener closed","addr":{"Name":"/tmp/wandb-113743-116175-1521879483/socket","Net":"unix"}}
+ {"time":"2026-02-25T01:39:57.156626442Z","level":"INFO","msg":"handleInformTeardown: server shutdown complete","id":"1(@)"}
+ {"time":"2026-02-25T01:39:57.156689353Z","level":"INFO","msg":"connection: ManageConnectionData: connection closed","id":"1(@)"}
+ {"time":"2026-02-25T01:39:57.156724243Z","level":"INFO","msg":"server is closed"}
ABLATION_0225_FreqSelect/wandb/run-20260224_222739-y7wvpmyy/logs/debug-internal.log ADDED
@@ -0,0 +1,12 @@
+ {"time":"2026-02-24T22:27:39.882209485Z","level":"INFO","msg":"stream: starting","core version":"0.25.0"}
+ {"time":"2026-02-24T22:27:40.294571378Z","level":"INFO","msg":"stream: created new stream","id":"y7wvpmyy"}
+ {"time":"2026-02-24T22:27:40.2947114Z","level":"INFO","msg":"handler: started","stream_id":"y7wvpmyy"}
+ {"time":"2026-02-24T22:27:40.294855053Z","level":"INFO","msg":"stream: started","id":"y7wvpmyy"}
+ {"time":"2026-02-24T22:27:40.294904223Z","level":"INFO","msg":"sender: started","stream_id":"y7wvpmyy"}
+ {"time":"2026-02-24T22:27:40.294940724Z","level":"INFO","msg":"writer: started","stream_id":"y7wvpmyy"}
+ {"time":"2026-02-25T01:00:56.785103175Z","level":"INFO","msg":"api: retrying HTTP error","status":502,"url":"https://api.wandb.ai/files/know/DCSplat/y7wvpmyy/file_stream","body":"\n<html><head>\n<meta http-equiv=\"content-type\" content=\"text/html;charset=utf-8\">\n<title>502 Server Error</title>\n</head>\n<body text=#000000 bgcolor=#ffffff>\n<h1>Error: Server Error</h1>\n<h2>The server encountered a temporary error and could not complete your request.<p>Please try again in 30 seconds.</h2>\n<h2></h2>\n</body></html>\n"}
+ {"time":"2026-02-25T01:39:55.965783052Z","level":"INFO","msg":"stream: closing","id":"y7wvpmyy"}
+ {"time":"2026-02-25T01:39:56.929029575Z","level":"INFO","msg":"fileTransfer: Close: file transfer manager closed"}
+ {"time":"2026-02-25T01:39:57.1548805Z","level":"INFO","msg":"handler: closed","stream_id":"y7wvpmyy"}
+ {"time":"2026-02-25T01:39:57.155103083Z","level":"INFO","msg":"sender: closed","stream_id":"y7wvpmyy"}
+ {"time":"2026-02-25T01:39:57.155127144Z","level":"INFO","msg":"stream: closed","id":"y7wvpmyy"}
ABLATION_0225_FreqSelect/wandb/run-20260224_222739-y7wvpmyy/logs/debug.log ADDED
@@ -0,0 +1,21 @@
+ 2026-02-24 22:27:39,587 INFO MainThread:113743 [wandb_setup.py:_flush():81] Current SDK version is 0.25.0
+ 2026-02-24 22:27:39,588 INFO MainThread:113743 [wandb_setup.py:_flush():81] Configure stats pid to 113743
+ 2026-02-24 22:27:39,588 INFO MainThread:113743 [wandb_setup.py:_flush():81] Loading settings from environment variables
+ 2026-02-24 22:27:39,588 INFO MainThread:113743 [wandb_init.py:setup_run_log_directory():717] Logging user logs to /workspace/code/CVPR2026/outputs/ablation/re10k/ABLATION_0225_FreqSelect/wandb/run-20260224_222739-y7wvpmyy/logs/debug.log
+ 2026-02-24 22:27:39,588 INFO MainThread:113743 [wandb_init.py:setup_run_log_directory():718] Logging internal logs to /workspace/code/CVPR2026/outputs/ablation/re10k/ABLATION_0225_FreqSelect/wandb/run-20260224_222739-y7wvpmyy/logs/debug-internal.log
+ 2026-02-24 22:27:39,588 INFO MainThread:113743 [wandb_init.py:init():844] calling init triggers
+ 2026-02-24 22:27:39,588 INFO MainThread:113743 [wandb_init.py:init():849] wandb.init called with sweep_config: {}
+ config: {'model': {'encoder': {'name': 'dcsplat', 'input_image_shape': [518, 518], 'head_mode': 'pcd', 'num_level': 3, 'gs_param_dim': 256, 'align_corners': False, 'use_voxelize': True}, 'decoder': {'name': 'splatting_cuda', 'background_color': [0.0, 0.0, 0.0], 'make_scale_invariant': False}, 'density_control': {'name': 'density_control_module', 'mean_dim': 32, 'gs_param_dim': 256, 'refinement_layer_num': 1, 'num_level': 3, 'grad_mode': 'absgrad', 'use_mean_features': True, 'refinement_type': 'voxelize', 'refinement_hidden_dim': 32, 'aggregation_mode': 'mean', 'num_heads': 1, 'score_mode': 'frequency', 'latent_dim': 128, 'num_latents': 64, 'num_self_attn_per_block': 2, 'voxel_size': 0.001, 'aux_refine': False, 'refine_error': False, 'use_refine_module': True, 'voxelize_activate': True, 'use_depth': False}}, 'render_loss': {'mse': {'weight': 1.0}, 'lpips': {'weight': 0.05, 'apply_after_step': 0}}, 'density_control_loss': {'error_score': {'weight': 0.01, 'log_scale': False, 'grad_scale': 10000.0, 'mode': 'original'}}, 'direct_loss': {'l1': {'weight': 0.8}, 'ssim': {'weight': 0.2}}, 'wandb': {'project': 'DCSplat', 'entity': 'scene-representation-group', 'name': 'ABLATION_0225_FreqSelect', 'mode': 'online', 'tags': ['re10k', '256x256']}, 'mode': 'train', 'data_loader': {'train': {'num_workers': 16, 'persistent_workers': True, 'batch_size': 16, 'seed': 1234}, 'test': {'num_workers': 4, 'persistent_workers': False, 'batch_size': 1, 'seed': 2345}, 'val': {'num_workers': 1, 'persistent_workers': True, 'batch_size': 1, 'seed': 3456}}, 'optimizer': {'lr': 0.0002, 'warm_up_steps': 25, 'backbone_lr_multiplier': 0.1, 'backbone_trainable': 'T+H', 'accumulate': 1}, 'checkpointing': {'load': None, 'every_n_train_steps': 1500, 'save_top_k': 2, 'save_weights_only': False}, 'train': {'extended_visualization': False, 'print_log_every_n_steps': 10, 'camera_loss': 10.0, 'one_sample_validation': None, 'align_corners': False, 'intrinsic_scaling': False, 'verbose': False, 'beta_dist_param': [0.5, 4.0], 'use_refine_aux': False, 'train_target_set': True, 'train_gs_num': 1, 'ext_scale_detach': False, 'cam_scale_mode': 'sum', 'scene_scale_reg_loss': 0.01, 'train_aux': True, 'vggt_cam_loss': True, 'vggt_distil': False, 'context_view_train': False}, 'test': {'output_path': 'test/ablation/re10k', 'align_pose': False, 'pose_align_steps': 100, 'rot_opt_lr': 0.005, 'trans_opt_lr': 0.005, 'compute_scores': True, 'save_image': False, 'save_video': False, 'save_active_mask_image': False, 'save_error_score_image': False, 'save_compare': False, 'pred_intrinsic': False, 'error_threshold': 0.4, 'error_threshold_list': [0.2, 0.4, 0.6, 0.8, 1.0], 'threshold_mode': 'ratio', 'nvs_view_N_list': [3, 6, 16, 32, 64]}, 'seed': 111123, 'trainer': {'max_steps': 3001, 'val_check_interval': 250, 'gradient_clip_val': 0.5, 'num_nodes': 1}, 'dataset': {'re10k': {'make_baseline_1': True, 'relative_pose': True, 'augment': True, 'background_color': [0.0, 0.0, 0.0], 'overfit_to_scene': None, 'skip_bad_shape': True, 'view_sampler': {'name': 'bounded', 'num_target_views': 4, 'num_context_views': 2, 'min_distance_between_context_views': 45, 'max_distance_between_context_views': 90, 'min_distance_to_context_views': 0, 'warm_up_steps': 1000, 'initial_min_distance_between_context_views': 25, 'initial_max_distance_between_context_views': 25, 'same_target_gap': False, 'num_target_set': 3}, 'name': 're10k', 'roots': ['datasets/re10k'], 'input_image_shape': [256, 256], 'original_image_shape': [360, 640], 'cameras_are_circular': False, 'baseline_min': 0.001, 'baseline_max': 10000000000.0, 'max_fov': 100.0, 'dynamic_context_views': True, 'max_context_views_per_gpu': 24}}, '_wandb': {}}
+ 2026-02-24 22:27:39,588 INFO MainThread:113743 [wandb_init.py:init():892] starting backend
+ 2026-02-24 22:27:39,873 INFO MainThread:113743 [wandb_init.py:init():895] sending inform_init request
+ 2026-02-24 22:27:39,880 INFO MainThread:113743 [wandb_init.py:init():903] backend started and connected
+ 2026-02-24 22:27:39,887 INFO MainThread:113743 [wandb_init.py:init():973] updated telemetry
+ 2026-02-24 22:27:39,894 INFO MainThread:113743 [wandb_init.py:init():997] communicating run to backend with 90.0 second timeout
+ 2026-02-24 22:27:41,506 INFO MainThread:113743 [wandb_init.py:init():1042] starting run threads in backend
+ 2026-02-24 22:27:41,632 INFO MainThread:113743 [wandb_run.py:_console_start():2524] atexit reg
+ 2026-02-24 22:27:41,632 INFO MainThread:113743 [wandb_run.py:_redirect():2373] redirect: wrap_raw
+ 2026-02-24 22:27:41,632 INFO MainThread:113743 [wandb_run.py:_redirect():2442] Wrapping output streams.
+ 2026-02-24 22:27:41,632 INFO MainThread:113743 [wandb_run.py:_redirect():2465] Redirects installed.
+ 2026-02-24 22:27:41,635 INFO MainThread:113743 [wandb_init.py:init():1082] run started, returning control to user process
+ 2026-02-25 01:39:55,965 INFO wandb-AsyncioManager-main:113743 [service_client.py:_forward_responses():134] Reached EOF.
+ 2026-02-25 01:39:55,965 INFO wandb-AsyncioManager-main:113743 [mailbox.py:close():155] Closing mailbox, abandoning 1 handles.
ABLATION_0225_OURS/.hydra/config.yaml ADDED
@@ -0,0 +1,185 @@
+ model:
+   encoder:
+     name: dcsplat
+     input_image_shape:
+     - 518
+     - 518
+     head_mode: pcd
+     num_level: 3
+     gs_param_dim: 256
+     align_corners: false
+     use_voxelize: true
+   decoder:
+     name: splatting_cuda
+     background_color:
+     - 0.0
+     - 0.0
+     - 0.0
+     make_scale_invariant: false
+   density_control:
+     name: density_control_module
+     mean_dim: 32
+     gs_param_dim: 256
+     refinement_layer_num: 1
+     num_level: 3
+     grad_mode: absgrad
+     use_mean_features: true
+     refinement_type: voxelize
+     refinement_hidden_dim: 32
+     aggregation_mode: mean
+     num_heads: 1
+     score_mode: absgrad
+     latent_dim: 128
+     num_latents: 64
+     num_self_attn_per_block: 2
+     voxel_size: 0.001
+     aux_refine: false
+     refine_error: false
+     use_refine_module: true
+     voxelize_activate: true
+     use_depth: false
+ render_loss:
+   mse:
+     weight: 1.0
+   lpips:
+     weight: 0.05
+     apply_after_step: 0
+ density_control_loss:
+   error_score:
+     weight: 0.01
+     log_scale: false
+     grad_scale: 10000.0
+     mode: original
+ direct_loss:
+   l1:
+     weight: 0.8
+   ssim:
+     weight: 0.2
+ wandb:
+   project: DCSplat
+   entity: scene-representation-group
+   name: ABLATION_0225_OURS
+   mode: online
+   tags:
+   - re10k
+   - 256x256
+ mode: train
+ data_loader:
+   train:
+     num_workers: 16
+     persistent_workers: true
+     batch_size: 16
+     seed: 1234
+   test:
+     num_workers: 4
+     persistent_workers: false
+     batch_size: 1
+     seed: 2345
+   val:
+     num_workers: 1
+     persistent_workers: true
+     batch_size: 1
+     seed: 3456
+ optimizer:
+   lr: 0.0002
+   warm_up_steps: 25
+   backbone_lr_multiplier: 0.1
+   backbone_trainable: T+H
+   accumulate: 1
+ checkpointing:
+   load: null
+   every_n_train_steps: 1500
+   save_top_k: 2
+   save_weights_only: false
+ train:
+   extended_visualization: false
+   print_log_every_n_steps: 10
+   camera_loss: 10.0
+   one_sample_validation: null
+   align_corners: false
+   intrinsic_scaling: false
+   verbose: false
+   beta_dist_param:
+   - 0.5
+   - 4.0
+   use_refine_aux: false
+   train_target_set: true
+   train_gs_num: 1
+   ext_scale_detach: false
+   cam_scale_mode: sum
+   scene_scale_reg_loss: 0.01
+   train_aux: true
+   vggt_cam_loss: true
+   vggt_distil: false
+   context_view_train: false
+ test:
+   output_path: test/ablation/re10k
+   align_pose: false
+   pose_align_steps: 100
+   rot_opt_lr: 0.005
+   trans_opt_lr: 0.005
+   compute_scores: true
+   save_image: false
+   save_video: false
+   save_active_mask_image: false
+   save_error_score_image: false
+   save_compare: false
+   pred_intrinsic: false
+   error_threshold: 0.4
+   error_threshold_list:
+   - 0.2
+   - 0.4
+   - 0.6
+   - 0.8
+   - 1.0
+   threshold_mode: ratio
+   nvs_view_N_list:
+   - 3
+   - 6
+   - 16
+   - 32
+   - 64
+ seed: 111123
+ trainer:
+   max_steps: 3001
+   val_check_interval: 250
+   gradient_clip_val: 0.5
+   num_nodes: 1
+ dataset:
+   re10k:
+     make_baseline_1: true
+     relative_pose: true
+     augment: true
+     background_color:
+     - 0.0
+     - 0.0
+     - 0.0
+     overfit_to_scene: null
+     skip_bad_shape: true
+     view_sampler:
+       name: bounded
+       num_target_views: 4
+       num_context_views: 2
+       min_distance_between_context_views: 45
+       max_distance_between_context_views: 90
+       min_distance_to_context_views: 0
+       warm_up_steps: 1000
+       initial_min_distance_between_context_views: 25
+       initial_max_distance_between_context_views: 25
+       same_target_gap: false
+       num_target_set: 3
+     name: re10k
+     roots:
+     - datasets/re10k
+     input_image_shape:
+     - 256
+     - 256
+     original_image_shape:
+     - 360
+     - 640
+     cameras_are_circular: false
+     baseline_min: 0.001
+     baseline_max: 10000000000.0
+     max_fov: 100.0
+     dynamic_context_views: true
+     max_context_views_per_gpu: 24
ABLATION_0225_OURS/.hydra/hydra.yaml ADDED
@@ -0,0 +1,164 @@
+ hydra:
+   run:
+     dir: outputs/ablation/re10k/${wandb.name}
+   sweep:
+     dir: multirun/${now:%Y-%m-%d}/${now:%H-%M-%S}
+     subdir: ${hydra.job.num}
+   launcher:
+     _target_: hydra._internal.core_plugins.basic_launcher.BasicLauncher
+   sweeper:
+     _target_: hydra._internal.core_plugins.basic_sweeper.BasicSweeper
+     max_batch_size: null
+     params: null
+   help:
+     app_name: ${hydra.job.name}
+     header: '${hydra.help.app_name} is powered by Hydra.
+
+       '
+     footer: 'Powered by Hydra (https://hydra.cc)
+
+       Use --hydra-help to view Hydra specific help
+
+       '
+     template: '${hydra.help.header}
+
+       == Configuration groups ==
+
+       Compose your configuration from those groups (group=option)
+
+
+       $APP_CONFIG_GROUPS
+
+
+       == Config ==
+
+       Override anything in the config (foo.bar=value)
+
+
+       $CONFIG
+
+
+       ${hydra.help.footer}
+
+       '
+   hydra_help:
+     template: 'Hydra (${hydra.runtime.version})
+
+       See https://hydra.cc for more info.
+
+
+       == Flags ==
+
+       $FLAGS_HELP
+
+
+       == Configuration groups ==
+
+       Compose your configuration from those groups (For example, append hydra/job_logging=disabled
+       to command line)
+
+
+       $HYDRA_CONFIG_GROUPS
+
+
+       Use ''--cfg hydra'' to Show the Hydra config.
+
+       '
+     hydra_help: ???
+   hydra_logging:
+     version: 1
+     formatters:
+       simple:
+         format: '[%(asctime)s][HYDRA] %(message)s'
+     handlers:
+       console:
+         class: logging.StreamHandler
+         formatter: simple
+         stream: ext://sys.stdout
+     root:
+       level: INFO
+       handlers:
+       - console
+     loggers:
+       logging_example:
+         level: DEBUG
+     disable_existing_loggers: false
+   job_logging:
+     version: 1
+     formatters:
+       simple:
+         format: '[%(asctime)s][%(name)s][%(levelname)s] - %(message)s'
+     handlers:
+       console:
+         class: logging.StreamHandler
+         formatter: simple
+         stream: ext://sys.stdout
+       file:
+         class: logging.FileHandler
+         formatter: simple
+         filename: ${hydra.runtime.output_dir}/${hydra.job.name}.log
+     root:
+       level: INFO
+       handlers:
+       - console
+       - file
+     disable_existing_loggers: false
+   env: {}
+   mode: RUN
+   searchpath: []
+   callbacks: {}
+   output_subdir: .hydra
+   overrides:
+     hydra:
+     - hydra.mode=RUN
+     task:
+     - +experiment=re10k_ablation_24v
+     - wandb.mode=online
+     - wandb.name=ABLATION_0225_OURS
+   job:
+     name: main
+     chdir: null
+     override_dirname: +experiment=re10k_ablation_24v,wandb.mode=online,wandb.name=ABLATION_0225_OURS
+     id: ???
+     num: ???
+     config_name: main
+     env_set: {}
+     env_copy: []
+     config:
+       override_dirname:
+         kv_sep: '='
+         item_sep: ','
+         exclude_keys: []
+   runtime:
+     version: 1.3.2
+     version_base: '1.3'
+     cwd: /workspace/code/CVPR2026
+     config_sources:
+     - path: hydra.conf
+       schema: pkg
+       provider: hydra
+     - path: /workspace/code/CVPR2026/config
+       schema: file
+       provider: main
+     - path: ''
+       schema: structured
+       provider: schema
+     output_dir: /workspace/code/CVPR2026/outputs/ablation/re10k/ABLATION_0225_OURS
+     choices:
+       experiment: re10k_ablation_24v
+       dataset@dataset.re10k: re10k
+       dataset/view_sampler_dataset_specific_config@dataset.re10k.view_sampler: bounded_re10k
+       dataset/view_sampler@dataset.re10k.view_sampler: bounded
+       model/density_control: density_control_module
+       model/decoder: splatting_cuda
+       model/encoder: dcsplat
+       hydra/env: default
+       hydra/callbacks: null
+       hydra/job_logging: default
+       hydra/hydra_logging: default
+       hydra/hydra_help: default
+       hydra/help: default
+       hydra/sweeper: basic
+       hydra/launcher: basic
+       hydra/output: default
+   verbose: false
ABLATION_0225_OURS/.hydra/overrides.yaml ADDED
@@ -0,0 +1,3 @@
+ - +experiment=re10k_ablation_24v
+ - wandb.mode=online
+ - wandb.name=ABLATION_0225_OURS
ABLATION_0225_OURS/wandb/debug-internal.log ADDED
@@ -0,0 +1,11 @@
+ {"time":"2026-02-24T19:15:08.591653472Z","level":"INFO","msg":"stream: starting","core version":"0.25.0"}
+ {"time":"2026-02-24T19:15:09.22244861Z","level":"INFO","msg":"stream: created new stream","id":"0b125b6z"}
+ {"time":"2026-02-24T19:15:09.222653934Z","level":"INFO","msg":"handler: started","stream_id":"0b125b6z"}
+ {"time":"2026-02-24T19:15:09.222865877Z","level":"INFO","msg":"stream: started","id":"0b125b6z"}
+ {"time":"2026-02-24T19:15:09.222943579Z","level":"INFO","msg":"writer: started","stream_id":"0b125b6z"}
+ {"time":"2026-02-24T19:15:09.222946409Z","level":"INFO","msg":"sender: started","stream_id":"0b125b6z"}
+ {"time":"2026-02-24T22:26:34.518352356Z","level":"INFO","msg":"stream: closing","id":"0b125b6z"}
+ {"time":"2026-02-24T22:26:35.362766174Z","level":"INFO","msg":"fileTransfer: Close: file transfer manager closed"}
+ {"time":"2026-02-24T22:26:35.604459738Z","level":"INFO","msg":"handler: closed","stream_id":"0b125b6z"}
+ {"time":"2026-02-24T22:26:35.604786383Z","level":"INFO","msg":"sender: closed","stream_id":"0b125b6z"}
+ {"time":"2026-02-24T22:26:35.604815153Z","level":"INFO","msg":"stream: closed","id":"0b125b6z"}
ABLATION_0225_OURS/wandb/debug.log ADDED
@@ -0,0 +1,21 @@
1
+ 2026-02-24 19:15:08,307 INFO MainThread:90349 [wandb_setup.py:_flush():81] Current SDK version is 0.25.0
2
+ 2026-02-24 19:15:08,307 INFO MainThread:90349 [wandb_setup.py:_flush():81] Configure stats pid to 90349
3
+ 2026-02-24 19:15:08,307 INFO MainThread:90349 [wandb_setup.py:_flush():81] Loading settings from environment variables
4
+ 2026-02-24 19:15:08,307 INFO MainThread:90349 [wandb_init.py:setup_run_log_directory():717] Logging user logs to /workspace/code/CVPR2026/outputs/ablation/re10k/ABLATION_0225_OURS/wandb/run-20260224_191508-0b125b6z/logs/debug.log
5
+ 2026-02-24 19:15:08,307 INFO MainThread:90349 [wandb_init.py:setup_run_log_directory():718] Logging internal logs to /workspace/code/CVPR2026/outputs/ablation/re10k/ABLATION_0225_OURS/wandb/run-20260224_191508-0b125b6z/logs/debug-internal.log
6
+ 2026-02-24 19:15:08,307 INFO MainThread:90349 [wandb_init.py:init():844] calling init triggers
7
+ 2026-02-24 19:15:08,307 INFO MainThread:90349 [wandb_init.py:init():849] wandb.init called with sweep_config: {}
8
+ config: {'model': {'encoder': {'name': 'dcsplat', 'input_image_shape': [518, 518], 'head_mode': 'pcd', 'num_level': 3, 'gs_param_dim': 256, 'align_corners': False, 'use_voxelize': True}, 'decoder': {'name': 'splatting_cuda', 'background_color': [0.0, 0.0, 0.0], 'make_scale_invariant': False}, 'density_control': {'name': 'density_control_module', 'mean_dim': 32, 'gs_param_dim': 256, 'refinement_layer_num': 1, 'num_level': 3, 'grad_mode': 'absgrad', 'use_mean_features': True, 'refinement_type': 'voxelize', 'refinement_hidden_dim': 32, 'aggregation_mode': 'mean', 'num_heads': 1, 'score_mode': 'absgrad', 'latent_dim': 128, 'num_latents': 64, 'num_self_attn_per_block': 2, 'voxel_size': 0.001, 'aux_refine': False, 'refine_error': False, 'use_refine_module': True, 'voxelize_activate': True, 'use_depth': False}}, 'render_loss': {'mse': {'weight': 1.0}, 'lpips': {'weight': 0.05, 'apply_after_step': 0}}, 'density_control_loss': {'error_score': {'weight': 0.01, 'log_scale': False, 'grad_scale': 10000.0, 'mode': 'original'}}, 'direct_loss': {'l1': {'weight': 0.8}, 'ssim': {'weight': 0.2}}, 'wandb': {'project': 'DCSplat', 'entity': 'scene-representation-group', 'name': 'ABLATION_0225_OURS', 'mode': 'online', 'tags': ['re10k', '256x256']}, 'mode': 'train', 'data_loader': {'train': {'num_workers': 16, 'persistent_workers': True, 'batch_size': 16, 'seed': 1234}, 'test': {'num_workers': 4, 'persistent_workers': False, 'batch_size': 1, 'seed': 2345}, 'val': {'num_workers': 1, 'persistent_workers': True, 'batch_size': 1, 'seed': 3456}}, 'optimizer': {'lr': 0.0002, 'warm_up_steps': 25, 'backbone_lr_multiplier': 0.1, 'backbone_trainable': 'T+H', 'accumulate': 1}, 'checkpointing': {'load': None, 'every_n_train_steps': 1500, 'save_top_k': 2, 'save_weights_only': False}, 'train': {'extended_visualization': False, 'print_log_every_n_steps': 10, 'camera_loss': 10.0, 'one_sample_validation': None, 'align_corners': False, 'intrinsic_scaling': False, 'verbose': False, 'beta_dist_param': 
[0.5, 4.0], 'use_refine_aux': False, 'train_target_set': True, 'train_gs_num': 1, 'ext_scale_detach': False, 'cam_scale_mode': 'sum', 'scene_scale_reg_loss': 0.01, 'train_aux': True, 'vggt_cam_loss': True, 'vggt_distil': False, 'context_view_train': False}, 'test': {'output_path': 'test/ablation/re10k', 'align_pose': False, 'pose_align_steps': 100, 'rot_opt_lr': 0.005, 'trans_opt_lr': 0.005, 'compute_scores': True, 'save_image': False, 'save_video': False, 'save_active_mask_image': False, 'save_error_score_image': False, 'save_compare': False, 'pred_intrinsic': False, 'error_threshold': 0.4, 'error_threshold_list': [0.2, 0.4, 0.6, 0.8, 1.0], 'threshold_mode': 'ratio', 'nvs_view_N_list': [3, 6, 16, 32, 64]}, 'seed': 111123, 'trainer': {'max_steps': 3001, 'val_check_interval': 250, 'gradient_clip_val': 0.5, 'num_nodes': 1}, 'dataset': {'re10k': {'make_baseline_1': True, 'relative_pose': True, 'augment': True, 'background_color': [0.0, 0.0, 0.0], 'overfit_to_scene': None, 'skip_bad_shape': True, 'view_sampler': {'name': 'bounded', 'num_target_views': 4, 'num_context_views': 2, 'min_distance_between_context_views': 45, 'max_distance_between_context_views': 90, 'min_distance_to_context_views': 0, 'warm_up_steps': 1000, 'initial_min_distance_between_context_views': 25, 'initial_max_distance_between_context_views': 25, 'same_target_gap': False, 'num_target_set': 3}, 'name': 're10k', 'roots': ['datasets/re10k'], 'input_image_shape': [256, 256], 'original_image_shape': [360, 640], 'cameras_are_circular': False, 'baseline_min': 0.001, 'baseline_max': 10000000000.0, 'max_fov': 100.0, 'dynamic_context_views': True, 'max_context_views_per_gpu': 24}}, '_wandb': {}}
+ 2026-02-24 19:15:08,307 INFO MainThread:90349 [wandb_init.py:init():892] starting backend
+ 2026-02-24 19:15:08,582 INFO MainThread:90349 [wandb_init.py:init():895] sending inform_init request
+ 2026-02-24 19:15:08,588 INFO MainThread:90349 [wandb_init.py:init():903] backend started and connected
+ 2026-02-24 19:15:08,591 INFO MainThread:90349 [wandb_init.py:init():973] updated telemetry
+ 2026-02-24 19:15:08,598 INFO MainThread:90349 [wandb_init.py:init():997] communicating run to backend with 90.0 second timeout
+ 2026-02-24 19:15:10,455 INFO MainThread:90349 [wandb_init.py:init():1042] starting run threads in backend
+ 2026-02-24 19:15:10,580 INFO MainThread:90349 [wandb_run.py:_console_start():2524] atexit reg
+ 2026-02-24 19:15:10,580 INFO MainThread:90349 [wandb_run.py:_redirect():2373] redirect: wrap_raw
+ 2026-02-24 19:15:10,580 INFO MainThread:90349 [wandb_run.py:_redirect():2442] Wrapping output streams.
+ 2026-02-24 19:15:10,582 INFO MainThread:90349 [wandb_run.py:_redirect():2465] Redirects installed.
+ 2026-02-24 19:15:10,584 INFO MainThread:90349 [wandb_init.py:init():1082] run started, returning control to user process
+ 2026-02-24 22:26:34,518 INFO wandb-AsyncioManager-main:90349 [service_client.py:_forward_responses():134] Reached EOF.
+ 2026-02-24 22:26:34,518 INFO wandb-AsyncioManager-main:90349 [mailbox.py:close():155] Closing mailbox, abandoning 1 handles.
ABLATION_0225_OURS/wandb/run-20260224_191508-0b125b6z/files/config.yaml ADDED
@@ -0,0 +1,306 @@
1
+ _wandb:
2
+ value:
3
+ cli_version: 0.25.0
4
+ e:
5
+ lma14qrq4ffkxha58hrfyhtyvrmlfx2i:
6
+ args:
7
+ - +experiment=re10k_ablation_24v
8
+ - wandb.mode=online
9
+ - wandb.name=ABLATION_0225_OURS
10
+ cpu_count: 128
11
+ cpu_count_logical: 256
12
+ cudaVersion: "13.1"
13
+ disk:
14
+ /:
15
+ total: "1170378588160"
16
+ used: "612674392064"
17
+ email: dna9041@korea.ac.kr
18
+ executable: /venv/main/bin/python
19
+ git:
20
+ commit: 2512754c6c27ca5150bf17fbcbdde3f192fd53cc
21
+ remote: git@github.com:K-nowing/CVPR2026.git
22
+ gpu: NVIDIA H200
23
+ gpu_count: 8
24
+ gpu_nvidia:
25
+ - architecture: Hopper
26
+ cudaCores: 16896
27
+ memoryTotal: "150754820096"
28
+ name: NVIDIA H200
29
+ uuid: GPU-2649ab80-a3a6-5a1c-0fa5-12bc11bd75e9
30
+ - architecture: Hopper
31
+ cudaCores: 16896
32
+ memoryTotal: "150754820096"
33
+ name: NVIDIA H200
34
+ uuid: GPU-e92921d9-c681-246f-af93-637e0dc938ca
35
+ - architecture: Hopper
36
+ cudaCores: 16896
37
+ memoryTotal: "150754820096"
38
+ name: NVIDIA H200
39
+ uuid: GPU-ffe12ffc-9bb7-82de-5692-1ec0ee2e68d8
40
+ - architecture: Hopper
41
+ cudaCores: 16896
42
+ memoryTotal: "150754820096"
43
+ name: NVIDIA H200
44
+ uuid: GPU-499e5acd-b6ab-2010-c51b-ee9b5aa65825
45
+ - architecture: Hopper
46
+ cudaCores: 16896
47
+ memoryTotal: "150754820096"
48
+ name: NVIDIA H200
49
+ uuid: GPU-3b2522d9-1c72-e49b-2c30-96165680b74a
50
+ - architecture: Hopper
51
+ cudaCores: 16896
52
+ memoryTotal: "150754820096"
53
+ name: NVIDIA H200
54
+ uuid: GPU-a9a280c5-b2f9-dc1e-a8a9-7326a74001ff
55
+ - architecture: Hopper
56
+ cudaCores: 16896
57
+ memoryTotal: "150754820096"
58
+ name: NVIDIA H200
59
+ uuid: GPU-07d0167b-a6a1-1900-2d27-7c6c11598409
60
+ - architecture: Hopper
61
+ cudaCores: 16896
62
+ memoryTotal: "150754820096"
63
+ name: NVIDIA H200
64
+ uuid: GPU-8362a999-20d1-c27b-5d18-032d23f859ab
65
+ host: 27d18dedec6d
66
+ memory:
67
+ total: "1622948257792"
68
+ os: Linux-6.8.0-90-generic-x86_64-with-glibc2.39
69
+ program: -m src.main
70
+ python: CPython 3.12.12
71
+ root: /workspace/code/CVPR2026/outputs/ablation/re10k/ABLATION_0225_OURS
72
+ startedAt: "2026-02-24T19:15:08.304921Z"
73
+ writerId: lma14qrq4ffkxha58hrfyhtyvrmlfx2i
74
+ m:
75
+ - "1": trainer/global_step
76
+ "6":
77
+ - 3
78
+ "7": []
79
+ - "2": '*'
80
+ "5": 1
81
+ "6":
82
+ - 1
83
+ "7": []
84
+ python_version: 3.12.12
85
+ t:
86
+ "1":
87
+ - 1
88
+ - 41
89
+ - 49
90
+ - 50
91
+ - 106
92
+ "2":
93
+ - 1
94
+ - 41
95
+ - 49
96
+ - 50
97
+ - 106
98
+ "3":
99
+ - 7
100
+ - 13
101
+ - 15
102
+ - 16
103
+ - 66
104
+ "4": 3.12.12
105
+ "5": 0.25.0
106
+ "12": 0.25.0
107
+ "13": linux-x86_64
108
+ checkpointing:
109
+ value:
110
+ every_n_train_steps: 1500
111
+ load: null
112
+ save_top_k: 2
113
+ save_weights_only: false
114
+ data_loader:
115
+ value:
116
+ test:
117
+ batch_size: 1
118
+ num_workers: 4
119
+ persistent_workers: false
120
+ seed: 2345
121
+ train:
122
+ batch_size: 16
123
+ num_workers: 16
124
+ persistent_workers: true
125
+ seed: 1234
126
+ val:
127
+ batch_size: 1
128
+ num_workers: 1
129
+ persistent_workers: true
130
+ seed: 3456
131
+ dataset:
132
+ value:
133
+ re10k:
134
+ augment: true
135
+ background_color:
136
+ - 0
137
+ - 0
138
+ - 0
139
+ baseline_max: 1e+10
140
+ baseline_min: 0.001
141
+ cameras_are_circular: false
142
+ dynamic_context_views: true
143
+ input_image_shape:
144
+ - 256
145
+ - 256
146
+ make_baseline_1: true
147
+ max_context_views_per_gpu: 24
148
+ max_fov: 100
149
+ name: re10k
150
+ original_image_shape:
151
+ - 360
152
+ - 640
153
+ overfit_to_scene: null
154
+ relative_pose: true
155
+ roots:
156
+ - datasets/re10k
157
+ skip_bad_shape: true
158
+ view_sampler:
159
+ initial_max_distance_between_context_views: 25
160
+ initial_min_distance_between_context_views: 25
161
+ max_distance_between_context_views: 90
162
+ min_distance_between_context_views: 45
163
+ min_distance_to_context_views: 0
164
+ name: bounded
165
+ num_context_views: 2
166
+ num_target_set: 3
167
+ num_target_views: 4
168
+ same_target_gap: false
169
+ warm_up_steps: 1000
170
+ density_control_loss:
171
+ value:
172
+ error_score:
173
+ grad_scale: 10000
174
+ log_scale: false
175
+ mode: original
176
+ weight: 0.01
177
+ direct_loss:
178
+ value:
179
+ l1:
180
+ weight: 0.8
181
+ ssim:
182
+ weight: 0.2
183
+ mode:
184
+ value: train
185
+ model:
186
+ value:
187
+ decoder:
188
+ background_color:
189
+ - 0
190
+ - 0
191
+ - 0
192
+ make_scale_invariant: false
193
+ name: splatting_cuda
194
+ density_control:
195
+ aggregation_mode: mean
196
+ aux_refine: false
197
+ grad_mode: absgrad
198
+ gs_param_dim: 256
199
+ latent_dim: 128
200
+ mean_dim: 32
201
+ name: density_control_module
202
+ num_heads: 1
203
+ num_latents: 64
204
+ num_level: 3
205
+ num_self_attn_per_block: 2
206
+ refine_error: false
207
+ refinement_hidden_dim: 32
208
+ refinement_layer_num: 1
209
+ refinement_type: voxelize
210
+ score_mode: absgrad
211
+ use_depth: false
212
+ use_mean_features: true
213
+ use_refine_module: true
214
+ voxel_size: 0.001
215
+ voxelize_activate: true
216
+ encoder:
217
+ align_corners: false
218
+ gs_param_dim: 256
219
+ head_mode: pcd
220
+ input_image_shape:
221
+ - 518
222
+ - 518
223
+ name: dcsplat
224
+ num_level: 3
225
+ use_voxelize: true
226
+ optimizer:
227
+ value:
228
+ accumulate: 1
229
+ backbone_lr_multiplier: 0.1
230
+ backbone_trainable: T+H
231
+ lr: 0.0002
232
+ warm_up_steps: 25
233
+ render_loss:
234
+ value:
235
+ lpips:
236
+ apply_after_step: 0
237
+ weight: 0.05
238
+ mse:
239
+ weight: 1
240
+ seed:
241
+ value: 111123
242
+ test:
243
+ value:
244
+ align_pose: false
245
+ compute_scores: true
246
+ error_threshold: 0.4
247
+ error_threshold_list:
248
+ - 0.2
249
+ - 0.4
250
+ - 0.6
251
+ - 0.8
252
+ - 1
253
+ nvs_view_N_list:
254
+ - 3
255
+ - 6
256
+ - 16
257
+ - 32
258
+ - 64
259
+ output_path: test/ablation/re10k
260
+ pose_align_steps: 100
261
+ pred_intrinsic: false
262
+ rot_opt_lr: 0.005
263
+ save_active_mask_image: false
264
+ save_compare: false
265
+ save_error_score_image: false
266
+ save_image: false
267
+ save_video: false
268
+ threshold_mode: ratio
269
+ trans_opt_lr: 0.005
270
+ train:
271
+ value:
272
+ align_corners: false
273
+ beta_dist_param:
274
+ - 0.5
275
+ - 4
276
+ cam_scale_mode: sum
277
+ camera_loss: 10
278
+ context_view_train: false
279
+ ext_scale_detach: false
280
+ extended_visualization: false
281
+ intrinsic_scaling: false
282
+ one_sample_validation: null
283
+ print_log_every_n_steps: 10
284
+ scene_scale_reg_loss: 0.01
285
+ train_aux: true
286
+ train_gs_num: 1
287
+ train_target_set: true
288
+ use_refine_aux: false
289
+ verbose: false
290
+ vggt_cam_loss: true
291
+ vggt_distil: false
292
+ trainer:
293
+ value:
294
+ gradient_clip_val: 0.5
295
+ max_steps: 3001
296
+ num_nodes: 1
297
+ val_check_interval: 250
298
+ wandb:
299
+ value:
300
+ entity: scene-representation-group
301
+ mode: online
302
+ name: ABLATION_0225_OURS
303
+ project: DCSplat
304
+ tags:
305
+ - re10k
306
+ - 256x256
ABLATION_0225_OURS/wandb/run-20260224_191508-0b125b6z/files/output.log ADDED
The diff for this file is too large to render. See raw diff
 
ABLATION_0225_OURS/wandb/run-20260224_191508-0b125b6z/files/requirements.txt ADDED
@@ -0,0 +1,172 @@
1
+ wheel==0.45.1
2
+ pytz==2025.2
3
+ easydict==1.13
4
+ antlr4-python3-runtime==4.9.3
5
+ wadler_lindig==0.1.7
6
+ urllib3==2.5.0
7
+ tzdata==2025.2
8
+ typing-inspection==0.4.1
9
+ tabulate==0.9.0
10
+ smmap==5.0.2
11
+ kornia_rs==0.1.9
12
+ setuptools==78.1.1
13
+ safetensors==0.5.3
14
+ PyYAML==6.0.2
15
+ PySocks==1.7.1
16
+ pyparsing==3.2.5
17
+ pydantic_core==2.33.2
18
+ pycparser==2.23
19
+ protobuf==6.32.1
20
+ propcache==0.3.2
21
+ proglog==0.1.12
22
+ fsspec==2024.6.1
23
+ platformdirs==4.4.0
24
+ pip==25.2
25
+ pillow==10.4.0
26
+ frozenlist==1.7.0
27
+ packaging==24.2
28
+ opt_einsum==3.4.0
29
+ numpy==1.26.4
30
+ ninja==1.13.0
31
+ fonttools==4.60.0
32
+ networkx==3.4.2
33
+ multidict==6.6.4
34
+ mdurl==0.1.2
35
+ MarkupSafe==3.0.2
36
+ kiwisolver==1.4.9
37
+ imageio-ffmpeg==0.6.0
38
+ idna==3.7
39
+ hf-xet==1.1.10
40
+ gmpy2==2.2.1
41
+ einops==0.8.1
42
+ filelock==3.17.0
43
+ decorator==4.4.2
44
+ dacite==1.9.2
45
+ cycler==0.12.1
46
+ colorama==0.4.6
47
+ click==8.3.0
48
+ nvidia-nvtx-cu12==12.8.90
49
+ charset-normalizer==3.3.2
50
+ certifi==2025.8.3
51
+ beartype==0.19.0
52
+ attrs==25.3.0
53
+ async-timeout==5.0.1
54
+ annotated-types==0.7.0
55
+ aiohappyeyeballs==2.6.1
56
+ yarl==1.20.1
57
+ tifffile==2025.5.10
58
+ sentry-sdk==2.39.0
59
+ scipy==1.15.3
60
+ pydantic==2.11.9
61
+ pandas==2.3.2
62
+ opencv-python==4.11.0.86
63
+ omegaconf==2.3.0
64
+ markdown-it-py==4.0.0
65
+ lightning-utilities==0.14.3
66
+ lazy_loader==0.4
67
+ jaxtyping==0.2.37
68
+ imageio==2.37.0
69
+ gitdb==4.0.12
70
+ contourpy==1.3.2
71
+ colorspacious==1.1.2
72
+ cffi==1.17.1
73
+ aiosignal==1.4.0
74
+ scikit-video==1.1.11
75
+ scikit-image==0.25.2
76
+ rich==14.1.0
77
+ moviepy==1.0.3
78
+ matplotlib==3.10.6
79
+ hydra-core==1.3.2
80
+ nvidia-nccl-cu12==2.27.3
81
+ huggingface-hub==0.35.1
82
+ GitPython==3.1.45
83
+ brotlicffi==1.0.9.2
84
+ aiohttp==3.12.15
85
+ torchmetrics==1.8.2
86
+ opt-einsum-fx==0.1.4
87
+ kornia==0.8.1
88
+ pytorch-lightning==2.5.1
89
+ lpips==0.1.4
90
+ e3nn==0.6.0
91
+ lightning==2.5.1
92
+ nvidia-cusparselt-cu12==0.7.1
93
+ triton==3.4.0
94
+ nvidia-nvjitlink-cu12==12.8.93
95
+ nvidia-curand-cu12==10.3.9.90
96
+ nvidia-cufile-cu12==1.13.1.3
97
+ nvidia-cuda-runtime-cu12==12.8.90
98
+ nvidia-cuda-nvrtc-cu12==12.8.93
99
+ nvidia-cuda-cupti-cu12==12.8.90
100
+ nvidia-cublas-cu12==12.8.4.1
101
+ nvidia-cusparse-cu12==12.5.8.93
102
+ nvidia-cufft-cu12==11.3.3.83
103
+ nvidia-cudnn-cu12==9.10.2.21
104
+ nvidia-cusolver-cu12==11.7.3.90
105
+ torch==2.8.0+cu128
106
+ torchvision==0.23.0+cu128
107
+ torchaudio==2.8.0+cu128
108
+ torch_scatter==2.1.2+pt28cu128
109
+ gsplat==1.5.3
110
+ wandb==0.25.0
111
+ cuda-bindings==13.0.3
112
+ cuda-pathfinder==1.3.3
113
+ Jinja2==3.1.6
114
+ mpmath==1.3.0
115
+ nvidia-cublas==13.1.0.3
116
+ nvidia-cuda-cupti==13.0.85
117
+ nvidia-cuda-nvrtc==13.0.88
118
+ nvidia-cuda-runtime==13.0.96
119
+ nvidia-cudnn-cu13==9.15.1.9
120
+ nvidia-cufft==12.0.0.61
121
+ nvidia-cufile==1.15.1.6
122
+ nvidia-curand==10.4.0.35
123
+ nvidia-cusolver==12.0.4.66
124
+ nvidia-cusparse==12.6.3.3
125
+ nvidia-cusparselt-cu13==0.8.0
126
+ nvidia-nccl-cu13==2.28.9
127
+ nvidia-nvjitlink==13.0.88
128
+ nvidia-nvshmem-cu13==3.4.5
129
+ nvidia-nvtx==13.0.85
130
+ requests==2.32.5
131
+ sentencepiece==0.2.1
132
+ sympy==1.14.0
133
+ torchcodec==0.10.0
134
+ torchdata==0.10.0
135
+ torchtext==0.6.0
136
+ anyio==4.12.0
137
+ asttokens==3.0.1
138
+ comm==0.2.3
139
+ debugpy==1.8.19
140
+ executing==2.2.1
141
+ h11==0.16.0
142
+ httpcore==1.0.9
143
+ httpx==0.28.1
144
+ ipykernel==7.1.0
145
+ ipython==9.8.0
146
+ ipython_pygments_lexers==1.1.1
147
+ ipywidgets==8.1.8
148
+ jedi==0.19.2
149
+ jupyter_client==8.7.0
150
+ jupyter_core==5.9.1
151
+ jupyterlab_widgets==3.0.16
152
+ matplotlib-inline==0.2.1
153
+ nest-asyncio==1.6.0
154
+ parso==0.8.5
155
+ pexpect==4.9.0
156
+ prompt_toolkit==3.0.52
157
+ psutil==7.2.1
158
+ ptyprocess==0.7.0
159
+ pure_eval==0.2.3
160
+ Pygments==2.19.2
161
+ python-dateutil==2.9.0.post0
162
+ pyzmq==27.1.0
163
+ shellingham==1.5.4
164
+ six==1.17.0
165
+ stack-data==0.6.3
166
+ tornado==6.5.4
167
+ tqdm==4.67.1
168
+ traitlets==5.14.3
169
+ typer-slim==0.21.0
170
+ typing_extensions==4.15.0
171
+ wcwidth==0.2.14
172
+ widgetsnbextension==4.0.15
ABLATION_0225_OURS/wandb/run-20260224_191508-0b125b6z/files/wandb-metadata.json ADDED
@@ -0,0 +1,92 @@
+ {
+ "os": "Linux-6.8.0-90-generic-x86_64-with-glibc2.39",
+ "python": "CPython 3.12.12",
+ "startedAt": "2026-02-24T19:15:08.304921Z",
+ "args": [
+ "+experiment=re10k_ablation_24v",
+ "wandb.mode=online",
+ "wandb.name=ABLATION_0225_OURS"
+ ],
+ "program": "-m src.main",
+ "git": {
+ "remote": "git@github.com:K-nowing/CVPR2026.git",
+ "commit": "2512754c6c27ca5150bf17fbcbdde3f192fd53cc"
+ },
+ "email": "dna9041@korea.ac.kr",
+ "root": "/workspace/code/CVPR2026/outputs/ablation/re10k/ABLATION_0225_OURS",
+ "host": "27d18dedec6d",
+ "executable": "/venv/main/bin/python",
+ "cpu_count": 128,
+ "cpu_count_logical": 256,
+ "gpu": "NVIDIA H200",
+ "gpu_count": 8,
+ "disk": {
+ "/": {
+ "total": "1170378588160",
+ "used": "612674392064"
+ }
+ },
+ "memory": {
+ "total": "1622948257792"
+ },
+ "gpu_nvidia": [
+ {
+ "name": "NVIDIA H200",
+ "memoryTotal": "150754820096",
+ "cudaCores": 16896,
+ "architecture": "Hopper",
+ "uuid": "GPU-2649ab80-a3a6-5a1c-0fa5-12bc11bd75e9"
+ },
+ {
+ "name": "NVIDIA H200",
+ "memoryTotal": "150754820096",
+ "cudaCores": 16896,
+ "architecture": "Hopper",
+ "uuid": "GPU-e92921d9-c681-246f-af93-637e0dc938ca"
+ },
+ {
+ "name": "NVIDIA H200",
+ "memoryTotal": "150754820096",
+ "cudaCores": 16896,
+ "architecture": "Hopper",
+ "uuid": "GPU-ffe12ffc-9bb7-82de-5692-1ec0ee2e68d8"
+ },
+ {
+ "name": "NVIDIA H200",
+ "memoryTotal": "150754820096",
+ "cudaCores": 16896,
+ "architecture": "Hopper",
+ "uuid": "GPU-499e5acd-b6ab-2010-c51b-ee9b5aa65825"
+ },
+ {
+ "name": "NVIDIA H200",
+ "memoryTotal": "150754820096",
+ "cudaCores": 16896,
+ "architecture": "Hopper",
+ "uuid": "GPU-3b2522d9-1c72-e49b-2c30-96165680b74a"
+ },
+ {
+ "name": "NVIDIA H200",
+ "memoryTotal": "150754820096",
+ "cudaCores": 16896,
+ "architecture": "Hopper",
+ "uuid": "GPU-a9a280c5-b2f9-dc1e-a8a9-7326a74001ff"
+ },
+ {
+ "name": "NVIDIA H200",
+ "memoryTotal": "150754820096",
+ "cudaCores": 16896,
+ "architecture": "Hopper",
+ "uuid": "GPU-07d0167b-a6a1-1900-2d27-7c6c11598409"
+ },
+ {
+ "name": "NVIDIA H200",
+ "memoryTotal": "150754820096",
+ "cudaCores": 16896,
+ "architecture": "Hopper",
+ "uuid": "GPU-8362a999-20d1-c27b-5d18-032d23f859ab"
+ }
+ ],
+ "cudaVersion": "13.1",
+ "writerId": "lma14qrq4ffkxha58hrfyhtyvrmlfx2i"
+ }
ABLATION_0225_OURS/wandb/run-20260224_191508-0b125b6z/files/wandb-summary.json ADDED
@@ -0,0 +1 @@
+ {"val/psnr":21.833919525146484,"val/gaussian_num_ratio":0.3997955322265625,"lr-AdamW/pg2-momentum":0.9,"loss/aux_1/mse":0.023314595222473145,"lr-AdamW/pg1-momentum":0.9,"_runtime":11484,"val/lpips":0.15058016777038574,"loss/aux_2/lpips":0.010988885536789894,"loss/aux_2/mse":0.021367380395531654,"active_mask_imgs":{"format":"png","count":1,"filenames":["media/images/active_mask_imgs_198_c40793305ed32fbebf33.png"],"captions":["a76028640ffa1ef9"],"_type":"images/separated","width":536,"height":800},"_wandb":{"runtime":11484},"loss/aux_1/error_score":0.26384127140045166,"lr-AdamW/pg1":2.003594834351718e-05,"lr-AdamW/pg2":2e-05,"val/ssim":0.8224983215332031,"epoch":0,"comparison":{"format":"png","count":1,"filenames":["media/images/comparison_197_7a08eead29b131fa3472.png"],"captions":["a76028640ffa1ef9"],"_type":"images/separated","width":1064,"height":1098},"loss/aux_0/lpips":0.011281725019216537,"loss/aux_0/error_score":0.38854989409446716,"train/scene_scale":1.007591724395752,"_timestamp":1.7719719874732008e+09,"loss/final_3dgs/mse":0.017216090112924576,"train/psnr_probabilistic":18.861385345458984,"train/comparison":{"captions":[["0621c7675fab1418"]],"_type":"images/separated","width":1328,"height":2154,"format":"png","count":1,"filenames":["media/images/train/comparison_202_cd1b6f4b037275c862f9.png"]},"loss/final_3dgs/lpips":0.010056810453534126,"trainer/global_step":3001,"_step":202,"loss/camera":0.0006347345770336688,"train/error_scores":{"count":1,"filenames":["media/images/train/error_scores_201_ff4206b6e1e67b9747a0.png"],"captions":[["0621c7675fab1418"]],"_type":"images/separated","width":1328,"height":2120,"format":"png"},"loss/total":0.13550999760627747,"info/global_step":3000,"loss/aux_0/mse":0.01684199832379818,"loss/scene_scale_reg":0.00027991057140752673,"loss/aux_1/lpips":0.011291351169347763,"error_scores":{"height":536,"format":"png","count":1,"filenames":["media/images/error_scores_199_8210df2265e5ec17f86c.png"],"captions":["a76028640ffa1ef9"],"_type":"images/separated","width":800}}
ABLATION_0225_OURS/wandb/run-20260224_191508-0b125b6z/logs/debug-core.log ADDED
@@ -0,0 +1,15 @@
+ {"time":"2026-02-24T19:15:08.401067312Z","level":"INFO","msg":"main: starting server","port-filename":"/tmp/tmpolh0ef_f/port-90349.txt","pid":90349,"log-level":0,"disable-analytics":false,"shutdown-on-parent-exit":false,"enable-dcgm-profiling":false}
+ {"time":"2026-02-24T19:15:08.401700012Z","level":"INFO","msg":"server: will exit if parent process dies","ppid":90349}
+ {"time":"2026-02-24T19:15:08.401675802Z","level":"INFO","msg":"server: accepting connections","addr":{"Name":"/tmp/wandb-90349-93084-214034883/socket","Net":"unix"}}
+ {"time":"2026-02-24T19:15:08.582167376Z","level":"INFO","msg":"connection: ManageConnectionData: new connection created","id":"1(@)"}
+ {"time":"2026-02-24T19:15:08.591392017Z","level":"INFO","msg":"handleInformInit: received","streamId":"0b125b6z","id":"1(@)"}
+ {"time":"2026-02-24T19:15:09.222880558Z","level":"INFO","msg":"handleInformInit: stream started","streamId":"0b125b6z","id":"1(@)"}
+ {"time":"2026-02-24T19:15:15.586457954Z","level":"INFO","msg":"connection: cancelling request","id":"1(@)","requestId":"oxc0a3k6ggl8"}
+ {"time":"2026-02-24T22:26:34.518248714Z","level":"INFO","msg":"handleInformTeardown: server teardown initiated","id":"1(@)"}
+ {"time":"2026-02-24T22:26:34.518338446Z","level":"INFO","msg":"connection: closing","id":"1(@)"}
+ {"time":"2026-02-24T22:26:34.518373096Z","level":"INFO","msg":"server is shutting down"}
+ {"time":"2026-02-24T22:26:34.518414607Z","level":"INFO","msg":"connection: closed successfully","id":"1(@)"}
+ {"time":"2026-02-24T22:26:34.519469613Z","level":"INFO","msg":"server: listener closed","addr":{"Name":"/tmp/wandb-90349-93084-214034883/socket","Net":"unix"}}
+ {"time":"2026-02-24T22:26:35.605950781Z","level":"INFO","msg":"handleInformTeardown: server shutdown complete","id":"1(@)"}
+ {"time":"2026-02-24T22:26:35.605994712Z","level":"INFO","msg":"connection: ManageConnectionData: connection closed","id":"1(@)"}
+ {"time":"2026-02-24T22:26:35.606019742Z","level":"INFO","msg":"server is closed"}
ABLATION_0225_OURS/wandb/run-20260224_191508-0b125b6z/logs/debug-internal.log ADDED
@@ -0,0 +1,11 @@
+ {"time":"2026-02-24T19:15:08.591653472Z","level":"INFO","msg":"stream: starting","core version":"0.25.0"}
+ {"time":"2026-02-24T19:15:09.22244861Z","level":"INFO","msg":"stream: created new stream","id":"0b125b6z"}
+ {"time":"2026-02-24T19:15:09.222653934Z","level":"INFO","msg":"handler: started","stream_id":"0b125b6z"}
+ {"time":"2026-02-24T19:15:09.222865877Z","level":"INFO","msg":"stream: started","id":"0b125b6z"}
+ {"time":"2026-02-24T19:15:09.222943579Z","level":"INFO","msg":"writer: started","stream_id":"0b125b6z"}
+ {"time":"2026-02-24T19:15:09.222946409Z","level":"INFO","msg":"sender: started","stream_id":"0b125b6z"}
+ {"time":"2026-02-24T22:26:34.518352356Z","level":"INFO","msg":"stream: closing","id":"0b125b6z"}
+ {"time":"2026-02-24T22:26:35.362766174Z","level":"INFO","msg":"fileTransfer: Close: file transfer manager closed"}
+ {"time":"2026-02-24T22:26:35.604459738Z","level":"INFO","msg":"handler: closed","stream_id":"0b125b6z"}
+ {"time":"2026-02-24T22:26:35.604786383Z","level":"INFO","msg":"sender: closed","stream_id":"0b125b6z"}
+ {"time":"2026-02-24T22:26:35.604815153Z","level":"INFO","msg":"stream: closed","id":"0b125b6z"}
ABLATION_0225_OURS/wandb/run-20260224_191508-0b125b6z/logs/debug.log ADDED
@@ -0,0 +1,21 @@
+ 2026-02-24 19:15:08,307 INFO MainThread:90349 [wandb_setup.py:_flush():81] Current SDK version is 0.25.0
+ 2026-02-24 19:15:08,307 INFO MainThread:90349 [wandb_setup.py:_flush():81] Configure stats pid to 90349
+ 2026-02-24 19:15:08,307 INFO MainThread:90349 [wandb_setup.py:_flush():81] Loading settings from environment variables
+ 2026-02-24 19:15:08,307 INFO MainThread:90349 [wandb_init.py:setup_run_log_directory():717] Logging user logs to /workspace/code/CVPR2026/outputs/ablation/re10k/ABLATION_0225_OURS/wandb/run-20260224_191508-0b125b6z/logs/debug.log
+ 2026-02-24 19:15:08,307 INFO MainThread:90349 [wandb_init.py:setup_run_log_directory():718] Logging internal logs to /workspace/code/CVPR2026/outputs/ablation/re10k/ABLATION_0225_OURS/wandb/run-20260224_191508-0b125b6z/logs/debug-internal.log
+ 2026-02-24 19:15:08,307 INFO MainThread:90349 [wandb_init.py:init():844] calling init triggers
+ 2026-02-24 19:15:08,307 INFO MainThread:90349 [wandb_init.py:init():849] wandb.init called with sweep_config: {}
+ config: {'model': {'encoder': {'name': 'dcsplat', 'input_image_shape': [518, 518], 'head_mode': 'pcd', 'num_level': 3, 'gs_param_dim': 256, 'align_corners': False, 'use_voxelize': True}, 'decoder': {'name': 'splatting_cuda', 'background_color': [0.0, 0.0, 0.0], 'make_scale_invariant': False}, 'density_control': {'name': 'density_control_module', 'mean_dim': 32, 'gs_param_dim': 256, 'refinement_layer_num': 1, 'num_level': 3, 'grad_mode': 'absgrad', 'use_mean_features': True, 'refinement_type': 'voxelize', 'refinement_hidden_dim': 32, 'aggregation_mode': 'mean', 'num_heads': 1, 'score_mode': 'absgrad', 'latent_dim': 128, 'num_latents': 64, 'num_self_attn_per_block': 2, 'voxel_size': 0.001, 'aux_refine': False, 'refine_error': False, 'use_refine_module': True, 'voxelize_activate': True, 'use_depth': False}}, 'render_loss': {'mse': {'weight': 1.0}, 'lpips': {'weight': 0.05, 'apply_after_step': 0}}, 'density_control_loss': {'error_score': {'weight': 0.01, 'log_scale': False, 'grad_scale': 10000.0, 'mode': 'original'}}, 'direct_loss': {'l1': {'weight': 0.8}, 'ssim': {'weight': 0.2}}, 'wandb': {'project': 'DCSplat', 'entity': 'scene-representation-group', 'name': 'ABLATION_0225_OURS', 'mode': 'online', 'tags': ['re10k', '256x256']}, 'mode': 'train', 'data_loader': {'train': {'num_workers': 16, 'persistent_workers': True, 'batch_size': 16, 'seed': 1234}, 'test': {'num_workers': 4, 'persistent_workers': False, 'batch_size': 1, 'seed': 2345}, 'val': {'num_workers': 1, 'persistent_workers': True, 'batch_size': 1, 'seed': 3456}}, 'optimizer': {'lr': 0.0002, 'warm_up_steps': 25, 'backbone_lr_multiplier': 0.1, 'backbone_trainable': 'T+H', 'accumulate': 1}, 'checkpointing': {'load': None, 'every_n_train_steps': 1500, 'save_top_k': 2, 'save_weights_only': False}, 'train': {'extended_visualization': False, 'print_log_every_n_steps': 10, 'camera_loss': 10.0, 'one_sample_validation': None, 'align_corners': False, 'intrinsic_scaling': False, 'verbose': False, 'beta_dist_param': 
[0.5, 4.0], 'use_refine_aux': False, 'train_target_set': True, 'train_gs_num': 1, 'ext_scale_detach': False, 'cam_scale_mode': 'sum', 'scene_scale_reg_loss': 0.01, 'train_aux': True, 'vggt_cam_loss': True, 'vggt_distil': False, 'context_view_train': False}, 'test': {'output_path': 'test/ablation/re10k', 'align_pose': False, 'pose_align_steps': 100, 'rot_opt_lr': 0.005, 'trans_opt_lr': 0.005, 'compute_scores': True, 'save_image': False, 'save_video': False, 'save_active_mask_image': False, 'save_error_score_image': False, 'save_compare': False, 'pred_intrinsic': False, 'error_threshold': 0.4, 'error_threshold_list': [0.2, 0.4, 0.6, 0.8, 1.0], 'threshold_mode': 'ratio', 'nvs_view_N_list': [3, 6, 16, 32, 64]}, 'seed': 111123, 'trainer': {'max_steps': 3001, 'val_check_interval': 250, 'gradient_clip_val': 0.5, 'num_nodes': 1}, 'dataset': {'re10k': {'make_baseline_1': True, 'relative_pose': True, 'augment': True, 'background_color': [0.0, 0.0, 0.0], 'overfit_to_scene': None, 'skip_bad_shape': True, 'view_sampler': {'name': 'bounded', 'num_target_views': 4, 'num_context_views': 2, 'min_distance_between_context_views': 45, 'max_distance_between_context_views': 90, 'min_distance_to_context_views': 0, 'warm_up_steps': 1000, 'initial_min_distance_between_context_views': 25, 'initial_max_distance_between_context_views': 25, 'same_target_gap': False, 'num_target_set': 3}, 'name': 're10k', 'roots': ['datasets/re10k'], 'input_image_shape': [256, 256], 'original_image_shape': [360, 640], 'cameras_are_circular': False, 'baseline_min': 0.001, 'baseline_max': 10000000000.0, 'max_fov': 100.0, 'dynamic_context_views': True, 'max_context_views_per_gpu': 24}}, '_wandb': {}}
+ 2026-02-24 19:15:08,307 INFO MainThread:90349 [wandb_init.py:init():892] starting backend
+ 2026-02-24 19:15:08,582 INFO MainThread:90349 [wandb_init.py:init():895] sending inform_init request
+ 2026-02-24 19:15:08,588 INFO MainThread:90349 [wandb_init.py:init():903] backend started and connected
+ 2026-02-24 19:15:08,591 INFO MainThread:90349 [wandb_init.py:init():973] updated telemetry
+ 2026-02-24 19:15:08,598 INFO MainThread:90349 [wandb_init.py:init():997] communicating run to backend with 90.0 second timeout
+ 2026-02-24 19:15:10,455 INFO MainThread:90349 [wandb_init.py:init():1042] starting run threads in backend
+ 2026-02-24 19:15:10,580 INFO MainThread:90349 [wandb_run.py:_console_start():2524] atexit reg
+ 2026-02-24 19:15:10,580 INFO MainThread:90349 [wandb_run.py:_redirect():2373] redirect: wrap_raw
+ 2026-02-24 19:15:10,580 INFO MainThread:90349 [wandb_run.py:_redirect():2442] Wrapping output streams.
+ 2026-02-24 19:15:10,582 INFO MainThread:90349 [wandb_run.py:_redirect():2465] Redirects installed.
+ 2026-02-24 19:15:10,584 INFO MainThread:90349 [wandb_init.py:init():1082] run started, returning control to user process
+ 2026-02-24 22:26:34,518 INFO wandb-AsyncioManager-main:90349 [service_client.py:_forward_responses():134] Reached EOF.
+ 2026-02-24 22:26:34,518 INFO wandb-AsyncioManager-main:90349 [mailbox.py:close():155] Closing mailbox, abandoning 1 handles.
ABLATION_0225_noRefineModule/.hydra/config.yaml ADDED
@@ -0,0 +1,185 @@
1
+ model:
2
+ encoder:
3
+ name: dcsplat
4
+ input_image_shape:
5
+ - 518
6
+ - 518
7
+ head_mode: pcd
8
+ num_level: 3
9
+ gs_param_dim: 256
10
+ align_corners: false
11
+ use_voxelize: true
12
+ decoder:
13
+ name: splatting_cuda
14
+ background_color:
15
+ - 0.0
16
+ - 0.0
17
+ - 0.0
18
+ make_scale_invariant: false
19
+ density_control:
20
+ name: density_control_module
21
+ mean_dim: 32
22
+ gs_param_dim: 256
23
+ refinement_layer_num: 1
24
+ num_level: 3
25
+ grad_mode: absgrad
26
+ use_mean_features: true
27
+ refinement_type: voxelize
28
+ refinement_hidden_dim: 32
29
+ aggregation_mode: mean
30
+ num_heads: 1
31
+ score_mode: absgrad
32
+ latent_dim: 128
33
+ num_latents: 64
34
+ num_self_attn_per_block: 2
35
+ voxel_size: 0.001
36
+ aux_refine: false
37
+ refine_error: false
38
+ use_refine_module: false
+ voxelize_activate: true
+ use_depth: false
+ render_loss:
+ mse:
+ weight: 1.0
+ lpips:
+ weight: 0.05
+ apply_after_step: 0
+ density_control_loss:
+ error_score:
+ weight: 0.01
+ log_scale: false
+ grad_scale: 10000.0
+ mode: original
+ direct_loss:
+ l1:
+ weight: 0.8
+ ssim:
+ weight: 0.2
+ wandb:
+ project: DCSplat
+ entity: scene-representation-group
+ name: ABLATION_0225_noRefineModule
+ mode: online
+ tags:
+ - re10k
+ - 256x256
+ mode: train
+ data_loader:
+ train:
+ num_workers: 16
+ persistent_workers: true
+ batch_size: 16
+ seed: 1234
+ test:
+ num_workers: 4
+ persistent_workers: false
+ batch_size: 1
+ seed: 2345
+ val:
+ num_workers: 1
+ persistent_workers: true
+ batch_size: 1
+ seed: 3456
+ optimizer:
+ lr: 0.0002
+ warm_up_steps: 25
+ backbone_lr_multiplier: 0.1
+ backbone_trainable: T+H
+ accumulate: 1
+ checkpointing:
+ load: null
+ every_n_train_steps: 1500
+ save_top_k: 2
+ save_weights_only: false
+ train:
+ extended_visualization: false
+ print_log_every_n_steps: 10
+ camera_loss: 10.0
+ one_sample_validation: null
+ align_corners: false
+ intrinsic_scaling: false
+ verbose: false
+ beta_dist_param:
+ - 0.5
+ - 4.0
+ use_refine_aux: false
+ train_target_set: true
+ train_gs_num: 1
+ ext_scale_detach: false
+ cam_scale_mode: sum
+ scene_scale_reg_loss: 0.01
+ train_aux: true
+ vggt_cam_loss: true
+ vggt_distil: false
+ context_view_train: false
+ test:
+ output_path: test/ablation/re10k
+ align_pose: false
+ pose_align_steps: 100
+ rot_opt_lr: 0.005
+ trans_opt_lr: 0.005
+ compute_scores: true
+ save_image: false
+ save_video: false
+ save_active_mask_image: false
+ save_error_score_image: false
+ save_compare: false
+ pred_intrinsic: false
+ error_threshold: 0.4
+ error_threshold_list:
+ - 0.2
+ - 0.4
+ - 0.6
+ - 0.8
+ - 1.0
+ threshold_mode: ratio
+ nvs_view_N_list:
+ - 3
+ - 6
+ - 16
+ - 32
+ - 64
+ seed: 111123
+ trainer:
+ max_steps: 3001
+ val_check_interval: 250
+ gradient_clip_val: 0.5
+ num_nodes: 1
+ dataset:
+ re10k:
+ make_baseline_1: true
+ relative_pose: true
+ augment: true
+ background_color:
+ - 0.0
+ - 0.0
+ - 0.0
+ overfit_to_scene: null
+ skip_bad_shape: true
+ view_sampler:
+ name: bounded
+ num_target_views: 4
+ num_context_views: 2
+ min_distance_between_context_views: 45
+ max_distance_between_context_views: 90
+ min_distance_to_context_views: 0
+ warm_up_steps: 1000
+ initial_min_distance_between_context_views: 25
+ initial_max_distance_between_context_views: 25
+ same_target_gap: false
+ num_target_set: 3
+ name: re10k
+ roots:
+ - datasets/re10k
+ input_image_shape:
+ - 256
+ - 256
+ original_image_shape:
+ - 360
+ - 640
+ cameras_are_circular: false
+ baseline_min: 0.001
+ baseline_max: 10000000000.0
+ max_fov: 100.0
+ dynamic_context_views: true
+ max_context_views_per_gpu: 24
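The bounded view sampler above warms up its context-view gap: it starts at a fixed distance of 25 frames (`initial_min/max_distance_between_context_views`) and reaches the final 45–90 range after `warm_up_steps: 1000`. A minimal sketch of such a schedule, assuming simple linear interpolation (the actual sampler's schedule may differ, and the function name here is illustrative):

```python
def context_distance_bounds(step, warm_up_steps=1000,
                            initial=(25, 25), final=(45, 90)):
    """Linearly widen the (min, max) frame gap between context views
    over the warm-up period, then hold the final bounds.

    Sketch only: values mirror the view_sampler config above, but the
    real implementation may use a different interpolation.
    """
    t = min(step / warm_up_steps, 1.0)  # warm-up progress in [0, 1]
    lo = round(initial[0] + t * (final[0] - initial[0]))
    hi = round(initial[1] + t * (final[1] - initial[1]))
    return lo, hi
```

At step 0 this yields the narrow (25, 25) gap; from step 1000 onward it stays at (45, 90), matching the bounds in the config.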
ABLATION_0225_noRefineModule/.hydra/hydra.yaml ADDED
@@ -0,0 +1,165 @@
+ hydra:
+ run:
+ dir: outputs/ablation/re10k/${wandb.name}
+ sweep:
+ dir: multirun/${now:%Y-%m-%d}/${now:%H-%M-%S}
+ subdir: ${hydra.job.num}
+ launcher:
+ _target_: hydra._internal.core_plugins.basic_launcher.BasicLauncher
+ sweeper:
+ _target_: hydra._internal.core_plugins.basic_sweeper.BasicSweeper
+ max_batch_size: null
+ params: null
+ help:
+ app_name: ${hydra.job.name}
+ header: '${hydra.help.app_name} is powered by Hydra.
+
+ '
+ footer: 'Powered by Hydra (https://hydra.cc)
+
+ Use --hydra-help to view Hydra specific help
+
+ '
+ template: '${hydra.help.header}
+
+ == Configuration groups ==
+
+ Compose your configuration from those groups (group=option)
+
+
+ $APP_CONFIG_GROUPS
+
+
+ == Config ==
+
+ Override anything in the config (foo.bar=value)
+
+
+ $CONFIG
+
+
+ ${hydra.help.footer}
+
+ '
+ hydra_help:
+ template: 'Hydra (${hydra.runtime.version})
+
+ See https://hydra.cc for more info.
+
+
+ == Flags ==
+
+ $FLAGS_HELP
+
+
+ == Configuration groups ==
+
+ Compose your configuration from those groups (For example, append hydra/job_logging=disabled
+ to command line)
+
+
+ $HYDRA_CONFIG_GROUPS
+
+
+ Use ''--cfg hydra'' to Show the Hydra config.
+
+ '
+ hydra_help: ???
+ hydra_logging:
+ version: 1
+ formatters:
+ simple:
+ format: '[%(asctime)s][HYDRA] %(message)s'
+ handlers:
+ console:
+ class: logging.StreamHandler
+ formatter: simple
+ stream: ext://sys.stdout
+ root:
+ level: INFO
+ handlers:
+ - console
+ loggers:
+ logging_example:
+ level: DEBUG
+ disable_existing_loggers: false
+ job_logging:
+ version: 1
+ formatters:
+ simple:
+ format: '[%(asctime)s][%(name)s][%(levelname)s] - %(message)s'
+ handlers:
+ console:
+ class: logging.StreamHandler
+ formatter: simple
+ stream: ext://sys.stdout
+ file:
+ class: logging.FileHandler
+ formatter: simple
+ filename: ${hydra.runtime.output_dir}/${hydra.job.name}.log
+ root:
+ level: INFO
+ handlers:
+ - console
+ - file
+ disable_existing_loggers: false
+ env: {}
+ mode: RUN
+ searchpath: []
+ callbacks: {}
+ output_subdir: .hydra
+ overrides:
+ hydra:
+ - hydra.mode=RUN
+ task:
+ - +experiment=re10k_ablation_24v
+ - wandb.mode=online
+ - wandb.name=ABLATION_0225_noRefineModule
+ - model.density_control.use_refine_module=false
+ job:
+ name: main
+ chdir: null
+ override_dirname: +experiment=re10k_ablation_24v,model.density_control.use_refine_module=false,wandb.mode=online,wandb.name=ABLATION_0225_noRefineModule
+ id: ???
+ num: ???
+ config_name: main
+ env_set: {}
+ env_copy: []
+ config:
+ override_dirname:
+ kv_sep: '='
+ item_sep: ','
+ exclude_keys: []
+ runtime:
+ version: 1.3.2
+ version_base: '1.3'
+ cwd: /workspace/code/CVPR2026
+ config_sources:
+ - path: hydra.conf
+ schema: pkg
+ provider: hydra
+ - path: /workspace/code/CVPR2026/config
+ schema: file
+ provider: main
+ - path: ''
+ schema: structured
+ provider: schema
+ output_dir: /workspace/code/CVPR2026/outputs/ablation/re10k/ABLATION_0225_noRefineModule
+ choices:
+ experiment: re10k_ablation_24v
+ dataset@dataset.re10k: re10k
+ dataset/view_sampler_dataset_specific_config@dataset.re10k.view_sampler: bounded_re10k
+ dataset/view_sampler@dataset.re10k.view_sampler: bounded
+ model/density_control: density_control_module
+ model/decoder: splatting_cuda
+ model/encoder: dcsplat
+ hydra/env: default
+ hydra/callbacks: null
+ hydra/job_logging: default
+ hydra/hydra_logging: default
+ hydra/hydra_help: default
+ hydra/help: default
+ hydra/sweeper: basic
+ hydra/launcher: basic
+ hydra/output: default
+ verbose: false
ABLATION_0225_noRefineModule/.hydra/overrides.yaml ADDED
@@ -0,0 +1,4 @@
+ - +experiment=re10k_ablation_24v
+ - wandb.mode=online
+ - wandb.name=ABLATION_0225_noRefineModule
+ - model.density_control.use_refine_module=false
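Each entry in this overrides file is a dotted key path that Hydra merges into the composed config (a leading `+` appends a config group). A minimal sketch of that merge on a plain nested dict, assuming only the simple `key.path=value` form seen here (real Hydra uses OmegaConf and a richer override grammar):

```python
def apply_override(cfg: dict, override: str) -> None:
    """Apply one Hydra-style dotted override to a nested dict (sketch)."""
    override = override.lstrip("+")          # "+experiment=..." appends a group
    keypath, _, raw = override.partition("=")
    # Hydra parses YAML scalars; here we only handle booleans for brevity.
    value = {"true": True, "false": False}.get(raw, raw)
    node = cfg
    *parents, leaf = keypath.split(".")
    for key in parents:
        node = node.setdefault(key, {})
    node[leaf] = value

cfg = {}
for line in [
    "+experiment=re10k_ablation_24v",
    "wandb.mode=online",
    "wandb.name=ABLATION_0225_noRefineModule",
    "model.density_control.use_refine_module=false",
]:
    apply_override(cfg, line)
```

After the loop, `cfg["model"]["density_control"]["use_refine_module"]` is `False`, which is exactly the ablation switch this run flips.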
ABLATION_0225_noRefineModule/main.log ADDED
@@ -0,0 +1,128 @@
+ [2026-02-25 07:31:34,037][dinov2][INFO] - using MLP layer as FFN
+ [2026-02-25 07:31:40,112][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
+ warnings.warn(
+
+ [2026-02-25 07:31:40,112][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=VGG16_Weights.IMAGENET1K_V1`. You can also use `weights=VGG16_Weights.DEFAULT` to get the most up-to-date weights.
+ warnings.warn(msg)
+
+ [2026-02-25 07:32:30,542][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/lightning/pytorch/trainer/connectors/data_connector.py:425: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=31` in the `DataLoader` to improve performance.
+
+ [2026-02-25 07:32:30,543][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torch/distributed/distributed_c10d.py:4807: UserWarning: No device id is provided via `init_process_group` or `barrier `. Using the current device set by the user.
+ warnings.warn( # warn only once
+
+ [2026-02-25 07:32:33,093][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 07:32:33,103][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/lightning/pytorch/utilities/data.py:79: Trying to infer the `batch_size` from an ambiguous collection. The batch size we found is 1. To avoid any miscalculations, use `self.log(..., batch_size=batch_size)`.
+
+ [2026-02-25 07:32:33,104][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
+ warnings.warn(
+
+ [2026-02-25 07:32:33,104][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=VGG16_Weights.IMAGENET1K_V1`. You can also use `weights=VGG16_Weights.DEFAULT` to get the most up-to-date weights.
+ warnings.warn(msg)
+
+ [2026-02-25 07:32:34,792][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torch/functional.py:554: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /pytorch/aten/src/ATen/native/TensorShape.cpp:4322.)
+ return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
+
+ [2026-02-25 07:32:35,076][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/lightning/pytorch/trainer/connectors/logger_connector/result.py:434: It is recommended to use `self.log('val/psnr', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
+
+ [2026-02-25 07:32:35,077][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/lightning/pytorch/trainer/connectors/logger_connector/result.py:434: It is recommended to use `self.log('val/lpips', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
+
+ [2026-02-25 07:32:35,077][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/lightning/pytorch/trainer/connectors/logger_connector/result.py:434: It is recommended to use `self.log('val/ssim', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
+
+ [2026-02-25 07:32:35,078][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/lightning/pytorch/trainer/connectors/logger_connector/result.py:434: It is recommended to use `self.log('val/gaussian_num_ratio', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
+
+ [2026-02-25 07:32:35,078][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/lightning/pytorch/trainer/connectors/logger_connector/result.py:434: It is recommended to use `self.log('info/global_step', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
+
+ [2026-02-25 07:32:44,871][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torch/autograd/graph.py:829: UserWarning: Grad strides do not match bucket view strides. This may indicate grad was not created according to the gradient layout contract, or that the param's strides changed since DDP was constructed. This is not an error, but may impair performance.
+ grad.sizes() = [57, 256, 1, 1], strides() = [256, 1, 256, 256]
+ bucket_view.sizes() = [57, 256, 1, 1], strides() = [256, 1, 1, 1] (Triggered internally at /pytorch/torch/csrc/distributed/c10d/reducer.cpp:334.)
+ return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
+
+ [2026-02-25 07:32:44,967][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 07:34:17,416][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torch/optim/lr_scheduler.py:209: UserWarning: The epoch parameter in `scheduler.step()` was not necessary and is being deprecated where possible. Please use `scheduler.step()` to step the scheduler. During the deprecation, if epoch is different from None, the closed form is used instead of the new chainable form, where available. Please open an issue if you are unable to replicate your use case: https://github.com/pytorch/pytorch/issues/new/choose.
+ warnings.warn(EPOCH_DEPRECATION_WARNING, UserWarning)
+
+ [2026-02-25 07:45:01,533][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 07:48:10,917][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 07:57:27,231][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 08:03:33,811][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 08:09:48,816][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 08:19:01,130][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 08:22:10,768][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 08:34:25,661][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 08:34:29,312][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 08:46:50,776][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 08:49:55,355][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 08:59:12,245][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 09:05:35,984][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 09:11:48,010][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 09:21:06,680][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 09:24:15,287][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 09:36:29,623][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 09:36:33,850][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 09:48:56,864][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 09:52:05,306][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 10:01:25,665][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 10:07:31,359][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 10:13:42,512][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 10:22:55,254][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 10:26:05,189][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 10:38:39,652][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 10:38:43,134][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
ABLATION_0225_noRefineModule/peak_vram_memory.json ADDED
@@ -0,0 +1,6 @@
+ {
+ "peak_memory_allocated_gb": 96.07,
+ "peak_memory_reserved_gb": 136.279,
+ "total_elapsed_hours": 3.12,
+ "mode": "train"
+ }
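Stats files in this shape are straightforward to produce at the end of a run. A minimal sketch, assuming the byte counts come from CUDA allocator statistics such as `torch.cuda.max_memory_allocated()` / `torch.cuda.max_memory_reserved()` (the function name and rounding here are illustrative, not the repository's actual code):

```python
import json


def write_peak_vram(path, peak_alloc_bytes, peak_reserved_bytes,
                    elapsed_seconds, mode="train"):
    """Dump peak-VRAM stats in the same shape as peak_vram_memory.json.

    Sketch: in a real run, peak_alloc_bytes / peak_reserved_bytes would be
    taken from torch.cuda.max_memory_allocated / max_memory_reserved.
    """
    gib = 1024 ** 3
    stats = {
        "peak_memory_allocated_gb": round(peak_alloc_bytes / gib, 3),
        "peak_memory_reserved_gb": round(peak_reserved_bytes / gib, 3),
        "total_elapsed_hours": round(elapsed_seconds / 3600, 2),
        "mode": mode,
    }
    with open(path, "w") as f:
        json.dump(stats, f, indent=4)
    return stats
```

For example, `write_peak_vram("peak_vram_memory.json", 3 * 1024**3, 4 * 1024**3, 7200.0)` records 3.0 GB allocated, 4.0 GB reserved, and 2.0 elapsed hours.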
ABLATION_0225_noRefineModule/train_ddp_process_3.log ADDED
@@ -0,0 +1,66 @@
1
+ [2026-02-25 07:31:50,767][dinov2][INFO] - using MLP layer as FFN
2
+ [2026-02-25 07:32:08,454][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
3
+ warnings.warn(
4
+
5
+ [2026-02-25 07:32:08,455][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=VGG16_Weights.IMAGENET1K_V1`. You can also use `weights=VGG16_Weights.DEFAULT` to get the most up-to-date weights.
6
+ warnings.warn(msg)
7
+
8
+ [2026-02-25 07:32:30,542][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torch/distributed/distributed_c10d.py:4807: UserWarning: No device id is provided via `init_process_group` or `barrier `. Using the current device set by the user.
9
+ warnings.warn( # warn only once
10
+
11
+ [2026-02-25 07:32:44,868][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torch/autograd/graph.py:829: UserWarning: Grad strides do not match bucket view strides. This may indicate grad was not created according to the gradient layout contract, or that the param's strides changed since DDP was constructed. This is not an error, but may impair performance.
12
+ grad.sizes() = [57, 256, 1, 1], strides() = [256, 1, 256, 256]
13
+ bucket_view.sizes() = [57, 256, 1, 1], strides() = [256, 1, 1, 1] (Triggered internally at /pytorch/torch/csrc/distributed/c10d/reducer.cpp:334.)
14
+ return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
15
+
16
+ [2026-02-25 07:32:45,002][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
17
+ result[selector] = overlay
18
+
19
+ [2026-02-25 07:34:17,440][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torch/optim/lr_scheduler.py:209: UserWarning: The epoch parameter in `scheduler.step()` was not necessary and is being deprecated where possible. Please use `scheduler.step()` to step the scheduler. During the deprecation, if epoch is different from None, the closed form is used instead of the new chainable form, where available. Please open an issue if you are unable to replicate your use case: https://github.com/pytorch/pytorch/issues/new/choose.
20
+ warnings.warn(EPOCH_DEPRECATION_WARNING, UserWarning)
21
+
22
+ [2026-02-25 07:45:01,533][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
23
+ result[selector] = overlay
24
+
25
+ [2026-02-25 07:57:27,232][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 08:09:48,815][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 08:22:10,768][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 08:34:29,312][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 08:46:50,775][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 08:59:12,243][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 09:11:48,007][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 09:24:15,287][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 09:36:33,848][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 09:48:56,863][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 10:01:25,665][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 10:13:42,514][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 10:26:05,189][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 10:38:43,134][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
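The `layout.py:105` warning repeated throughout this log recommends the tuple form of advanced indexing. A minimal sketch of that fix, using hypothetical `seq`/`overlay` values (the actual shapes in `src/visualization/layout.py` are not shown in the log):

```python
import torch

# Hypothetical per-dimension index lists; the real values in layout.py are unknown.
result = torch.zeros(3, 3)
seq = [[0, 1], [1, 2]]                 # plain list of sequences: deprecated as a multi-dim index
overlay = torch.tensor([5.0, 7.0])

# result[seq] = overlay  # deprecated; reinterpreted as result[torch.tensor(seq)] in pytorch 2.9
result[tuple(seq)] = overlay           # explicit tuple: writes to (0, 1) and (1, 2)
```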
ABLATION_0225_noRefineModule/train_ddp_process_4.log ADDED
@@ -0,0 +1,66 @@
+ [2026-02-25 07:31:50,601][dinov2][INFO] - using MLP layer as FFN
+ [2026-02-25 07:32:19,908][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
+ warnings.warn(
+
+ [2026-02-25 07:32:19,908][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=VGG16_Weights.IMAGENET1K_V1`. You can also use `weights=VGG16_Weights.DEFAULT` to get the most up-to-date weights.
+ warnings.warn(msg)
+
+ [2026-02-25 07:32:30,542][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torch/distributed/distributed_c10d.py:4807: UserWarning: No device id is provided via `init_process_group` or `barrier `. Using the current device set by the user.
+ warnings.warn( # warn only once
+
+ [2026-02-25 07:32:44,872][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torch/autograd/graph.py:829: UserWarning: Grad strides do not match bucket view strides. This may indicate grad was not created according to the gradient layout contract, or that the param's strides changed since DDP was constructed. This is not an error, but may impair performance.
+ grad.sizes() = [57, 256, 1, 1], strides() = [256, 1, 256, 256]
+ bucket_view.sizes() = [57, 256, 1, 1], strides() = [256, 1, 1, 1] (Triggered internally at /pytorch/torch/csrc/distributed/c10d/reducer.cpp:334.)
+ return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
+
+ [2026-02-25 07:32:45,084][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 07:34:17,446][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torch/optim/lr_scheduler.py:209: UserWarning: The epoch parameter in `scheduler.step()` was not necessary and is being deprecated where possible. Please use `scheduler.step()` to step the scheduler. During the deprecation, if epoch is different from None, the closed form is used instead of the new chainable form, where available. Please open an issue if you are unable to replicate your use case: https://github.com/pytorch/pytorch/issues/new/choose.
+ warnings.warn(EPOCH_DEPRECATION_WARNING, UserWarning)
+
+ [2026-02-25 07:45:01,534][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 07:57:27,231][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 08:09:48,816][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 08:22:10,768][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 08:34:29,312][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 08:46:50,775][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 08:59:12,243][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 09:11:48,007][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 09:24:15,287][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 09:36:33,848][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 09:48:56,863][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 10:01:25,665][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 10:13:42,512][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 10:26:05,190][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 10:38:43,134][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
66
+
ABLATION_0225_noRefineModule/train_ddp_process_7.log ADDED
@@ -0,0 +1,66 @@
+ [2026-02-25 07:31:50,806][dinov2][INFO] - using MLP layer as FFN
+ [2026-02-25 07:32:14,953][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
+ warnings.warn(
+
+ [2026-02-25 07:32:14,956][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=VGG16_Weights.IMAGENET1K_V1`. You can also use `weights=VGG16_Weights.DEFAULT` to get the most up-to-date weights.
+ warnings.warn(msg)
+
+ [2026-02-25 07:32:30,542][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torch/distributed/distributed_c10d.py:4807: UserWarning: No device id is provided via `init_process_group` or `barrier `. Using the current device set by the user.
+ warnings.warn( # warn only once
+
+ [2026-02-25 07:32:44,356][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torch/autograd/graph.py:829: UserWarning: Grad strides do not match bucket view strides. This may indicate grad was not created according to the gradient layout contract, or that the param's strides changed since DDP was constructed. This is not an error, but may impair performance.
+ grad.sizes() = [57, 256, 1, 1], strides() = [256, 1, 256, 256]
+ bucket_view.sizes() = [57, 256, 1, 1], strides() = [256, 1, 1, 1] (Triggered internally at /pytorch/torch/csrc/distributed/c10d/reducer.cpp:334.)
+ return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
+
+ [2026-02-25 07:32:44,996][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 07:34:17,417][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torch/optim/lr_scheduler.py:209: UserWarning: The epoch parameter in `scheduler.step()` was not necessary and is being deprecated where possible. Please use `scheduler.step()` to step the scheduler. During the deprecation, if epoch is different from None, the closed form is used instead of the new chainable form, where available. Please open an issue if you are unable to replicate your use case: https://github.com/pytorch/pytorch/issues/new/choose.
+ warnings.warn(EPOCH_DEPRECATION_WARNING, UserWarning)
+
+ [2026-02-25 07:45:01,533][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 07:57:27,231][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 08:09:48,816][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 08:22:10,770][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 08:34:29,312][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 08:46:50,775][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 08:59:12,244][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 09:11:48,007][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 09:24:15,287][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 09:36:33,848][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 09:48:56,863][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 10:01:25,665][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 10:13:42,512][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 10:26:05,189][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 10:38:43,142][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
66
+
ABLATION_0225_noRefineModule/wandb/debug-internal.log ADDED
@@ -0,0 +1,11 @@
+ {"time":"2026-02-25T07:32:27.611867617Z","level":"INFO","msg":"stream: starting","core version":"0.25.0"}
+ {"time":"2026-02-25T07:32:28.03755666Z","level":"INFO","msg":"stream: created new stream","id":"2f0bcys0"}
+ {"time":"2026-02-25T07:32:28.037863635Z","level":"INFO","msg":"handler: started","stream_id":"2f0bcys0"}
+ {"time":"2026-02-25T07:32:28.037970207Z","level":"INFO","msg":"stream: started","id":"2f0bcys0"}
+ {"time":"2026-02-25T07:32:28.038020847Z","level":"INFO","msg":"writer: started","stream_id":"2f0bcys0"}
+ {"time":"2026-02-25T07:32:28.038027757Z","level":"INFO","msg":"sender: started","stream_id":"2f0bcys0"}
+ {"time":"2026-02-25T10:38:52.520830581Z","level":"INFO","msg":"stream: closing","id":"2f0bcys0"}
+ {"time":"2026-02-25T10:38:53.390340772Z","level":"INFO","msg":"fileTransfer: Close: file transfer manager closed"}
+ {"time":"2026-02-25T10:38:53.699950002Z","level":"INFO","msg":"handler: closed","stream_id":"2f0bcys0"}
+ {"time":"2026-02-25T10:38:53.700227926Z","level":"INFO","msg":"sender: closed","stream_id":"2f0bcys0"}
+ {"time":"2026-02-25T10:38:53.700251656Z","level":"INFO","msg":"stream: closed","id":"2f0bcys0"}
ABLATION_0225_noRefineModule/wandb/debug.log ADDED
@@ -0,0 +1,21 @@
+ 2026-02-25 07:32:27,354 INFO MainThread:137621 [wandb_setup.py:_flush():81] Current SDK version is 0.25.0
+ 2026-02-25 07:32:27,354 INFO MainThread:137621 [wandb_setup.py:_flush():81] Configure stats pid to 137621
+ 2026-02-25 07:32:27,354 INFO MainThread:137621 [wandb_setup.py:_flush():81] Loading settings from environment variables
+ 2026-02-25 07:32:27,354 INFO MainThread:137621 [wandb_init.py:setup_run_log_directory():717] Logging user logs to /workspace/code/CVPR2026/outputs/ablation/re10k/ABLATION_0225_noRefineModule/wandb/run-20260225_073227-2f0bcys0/logs/debug.log
+ 2026-02-25 07:32:27,354 INFO MainThread:137621 [wandb_init.py:setup_run_log_directory():718] Logging internal logs to /workspace/code/CVPR2026/outputs/ablation/re10k/ABLATION_0225_noRefineModule/wandb/run-20260225_073227-2f0bcys0/logs/debug-internal.log
+ 2026-02-25 07:32:27,354 INFO MainThread:137621 [wandb_init.py:init():844] calling init triggers
+ 2026-02-25 07:32:27,354 INFO MainThread:137621 [wandb_init.py:init():849] wandb.init called with sweep_config: {}
+ config: {'model': {'encoder': {'name': 'dcsplat', 'input_image_shape': [518, 518], 'head_mode': 'pcd', 'num_level': 3, 'gs_param_dim': 256, 'align_corners': False, 'use_voxelize': True}, 'decoder': {'name': 'splatting_cuda', 'background_color': [0.0, 0.0, 0.0], 'make_scale_invariant': False}, 'density_control': {'name': 'density_control_module', 'mean_dim': 32, 'gs_param_dim': 256, 'refinement_layer_num': 1, 'num_level': 3, 'grad_mode': 'absgrad', 'use_mean_features': True, 'refinement_type': 'voxelize', 'refinement_hidden_dim': 32, 'aggregation_mode': 'mean', 'num_heads': 1, 'score_mode': 'absgrad', 'latent_dim': 128, 'num_latents': 64, 'num_self_attn_per_block': 2, 'voxel_size': 0.001, 'aux_refine': False, 'refine_error': False, 'use_refine_module': False, 'voxelize_activate': True, 'use_depth': False}}, 'render_loss': {'mse': {'weight': 1.0}, 'lpips': {'weight': 0.05, 'apply_after_step': 0}}, 'density_control_loss': {'error_score': {'weight': 0.01, 'log_scale': False, 'grad_scale': 10000.0, 'mode': 'original'}}, 'direct_loss': {'l1': {'weight': 0.8}, 'ssim': {'weight': 0.2}}, 'wandb': {'project': 'DCSplat', 'entity': 'scene-representation-group', 'name': 'ABLATION_0225_noRefineModule', 'mode': 'online', 'tags': ['re10k', '256x256']}, 'mode': 'train', 'data_loader': {'train': {'num_workers': 16, 'persistent_workers': True, 'batch_size': 16, 'seed': 1234}, 'test': {'num_workers': 4, 'persistent_workers': False, 'batch_size': 1, 'seed': 2345}, 'val': {'num_workers': 1, 'persistent_workers': True, 'batch_size': 1, 'seed': 3456}}, 'optimizer': {'lr': 0.0002, 'warm_up_steps': 25, 'backbone_lr_multiplier': 0.1, 'backbone_trainable': 'T+H', 'accumulate': 1}, 'checkpointing': {'load': None, 'every_n_train_steps': 1500, 'save_top_k': 2, 'save_weights_only': False}, 'train': {'extended_visualization': False, 'print_log_every_n_steps': 10, 'camera_loss': 10.0, 'one_sample_validation': None, 'align_corners': False, 'intrinsic_scaling': False, 'verbose': False, 
'beta_dist_param': [0.5, 4.0], 'use_refine_aux': False, 'train_target_set': True, 'train_gs_num': 1, 'ext_scale_detach': False, 'cam_scale_mode': 'sum', 'scene_scale_reg_loss': 0.01, 'train_aux': True, 'vggt_cam_loss': True, 'vggt_distil': False, 'context_view_train': False}, 'test': {'output_path': 'test/ablation/re10k', 'align_pose': False, 'pose_align_steps': 100, 'rot_opt_lr': 0.005, 'trans_opt_lr': 0.005, 'compute_scores': True, 'save_image': False, 'save_video': False, 'save_active_mask_image': False, 'save_error_score_image': False, 'save_compare': False, 'pred_intrinsic': False, 'error_threshold': 0.4, 'error_threshold_list': [0.2, 0.4, 0.6, 0.8, 1.0], 'threshold_mode': 'ratio', 'nvs_view_N_list': [3, 6, 16, 32, 64]}, 'seed': 111123, 'trainer': {'max_steps': 3001, 'val_check_interval': 250, 'gradient_clip_val': 0.5, 'num_nodes': 1}, 'dataset': {'re10k': {'make_baseline_1': True, 'relative_pose': True, 'augment': True, 'background_color': [0.0, 0.0, 0.0], 'overfit_to_scene': None, 'skip_bad_shape': True, 'view_sampler': {'name': 'bounded', 'num_target_views': 4, 'num_context_views': 2, 'min_distance_between_context_views': 45, 'max_distance_between_context_views': 90, 'min_distance_to_context_views': 0, 'warm_up_steps': 1000, 'initial_min_distance_between_context_views': 25, 'initial_max_distance_between_context_views': 25, 'same_target_gap': False, 'num_target_set': 3}, 'name': 're10k', 'roots': ['datasets/re10k'], 'input_image_shape': [256, 256], 'original_image_shape': [360, 640], 'cameras_are_circular': False, 'baseline_min': 0.001, 'baseline_max': 10000000000.0, 'max_fov': 100.0, 'dynamic_context_views': True, 'max_context_views_per_gpu': 24}}, '_wandb': {}}
+ 2026-02-25 07:32:27,354 INFO MainThread:137621 [wandb_init.py:init():892] starting backend
+ 2026-02-25 07:32:27,602 INFO MainThread:137621 [wandb_init.py:init():895] sending inform_init request
+ 2026-02-25 07:32:27,609 INFO MainThread:137621 [wandb_init.py:init():903] backend started and connected
+ 2026-02-25 07:32:27,613 INFO MainThread:137621 [wandb_init.py:init():973] updated telemetry
+ 2026-02-25 07:32:27,622 INFO MainThread:137621 [wandb_init.py:init():997] communicating run to backend with 90.0 second timeout
+ 2026-02-25 07:32:28,628 INFO MainThread:137621 [wandb_init.py:init():1042] starting run threads in backend
+ 2026-02-25 07:32:28,738 INFO MainThread:137621 [wandb_run.py:_console_start():2524] atexit reg
+ 2026-02-25 07:32:28,738 INFO MainThread:137621 [wandb_run.py:_redirect():2373] redirect: wrap_raw
+ 2026-02-25 07:32:28,738 INFO MainThread:137621 [wandb_run.py:_redirect():2442] Wrapping output streams.
+ 2026-02-25 07:32:28,738 INFO MainThread:137621 [wandb_run.py:_redirect():2465] Redirects installed.
+ 2026-02-25 07:32:28,740 INFO MainThread:137621 [wandb_init.py:init():1082] run started, returning control to user process
+ 2026-02-25 10:38:52,520 INFO wandb-AsyncioManager-main:137621 [service_client.py:_forward_responses():134] Reached EOF.
+ 2026-02-25 10:38:52,520 INFO wandb-AsyncioManager-main:137621 [mailbox.py:close():155] Closing mailbox, abandoning 1 handles.
ABLATION_0225_noRefineModule/wandb/run-20260225_073227-2f0bcys0/files/config.yaml ADDED
@@ -0,0 +1,307 @@
+ _wandb:
+   value:
+     cli_version: 0.25.0
+     e:
+       z1winms0ab80rmcbaynf075otkwpygrq:
+         args:
+           - +experiment=re10k_ablation_24v
+           - wandb.mode=online
+           - wandb.name=ABLATION_0225_noRefineModule
+           - model.density_control.use_refine_module=false
+         cpu_count: 128
+         cpu_count_logical: 256
+         cudaVersion: "13.1"
+         disk:
+           /:
+             total: "1170378588160"
+             used: "708558733312"
+         email: dna9041@korea.ac.kr
+         executable: /venv/main/bin/python
+         git:
+           commit: 2512754c6c27ca5150bf17fbcbdde3f192fd53cc
+           remote: git@github.com:K-nowing/CVPR2026.git
+         gpu: NVIDIA H200
+         gpu_count: 8
+         gpu_nvidia:
+           - architecture: Hopper
+             cudaCores: 16896
+             memoryTotal: "150754820096"
+             name: NVIDIA H200
+             uuid: GPU-2649ab80-a3a6-5a1c-0fa5-12bc11bd75e9
+           - architecture: Hopper
+             cudaCores: 16896
+             memoryTotal: "150754820096"
+             name: NVIDIA H200
+             uuid: GPU-e92921d9-c681-246f-af93-637e0dc938ca
+           - architecture: Hopper
+             cudaCores: 16896
+             memoryTotal: "150754820096"
+             name: NVIDIA H200
+             uuid: GPU-ffe12ffc-9bb7-82de-5692-1ec0ee2e68d8
+           - architecture: Hopper
+             cudaCores: 16896
+             memoryTotal: "150754820096"
+             name: NVIDIA H200
+             uuid: GPU-499e5acd-b6ab-2010-c51b-ee9b5aa65825
+           - architecture: Hopper
+             cudaCores: 16896
+             memoryTotal: "150754820096"
+             name: NVIDIA H200
+             uuid: GPU-3b2522d9-1c72-e49b-2c30-96165680b74a
+           - architecture: Hopper
+             cudaCores: 16896
+             memoryTotal: "150754820096"
+             name: NVIDIA H200
+             uuid: GPU-a9a280c5-b2f9-dc1e-a8a9-7326a74001ff
+           - architecture: Hopper
+             cudaCores: 16896
+             memoryTotal: "150754820096"
+             name: NVIDIA H200
+             uuid: GPU-07d0167b-a6a1-1900-2d27-7c6c11598409
+           - architecture: Hopper
+             cudaCores: 16896
+             memoryTotal: "150754820096"
+             name: NVIDIA H200
+             uuid: GPU-8362a999-20d1-c27b-5d18-032d23f859ab
+         host: 27d18dedec6d
+         memory:
+           total: "1622948257792"
+         os: Linux-6.8.0-90-generic-x86_64-with-glibc2.39
+         program: -m src.main
+         python: CPython 3.12.12
+         root: /workspace/code/CVPR2026/outputs/ablation/re10k/ABLATION_0225_noRefineModule
+         startedAt: "2026-02-25T07:32:27.352870Z"
+         writerId: z1winms0ab80rmcbaynf075otkwpygrq
+     m:
+       - "1": trainer/global_step
+         "6":
+           - 3
+         "7": []
+       - "2": '*'
+         "5": 1
+         "6":
+           - 1
+         "7": []
+     python_version: 3.12.12
+     t:
+       "1":
+         - 1
+         - 41
+         - 49
+         - 50
+         - 106
+       "2":
+         - 1
+         - 41
+         - 49
+         - 50
+         - 106
+       "3":
+         - 7
+         - 13
+         - 15
+         - 16
+         - 66
+       "4": 3.12.12
+       "5": 0.25.0
+       "12": 0.25.0
+       "13": linux-x86_64
+ checkpointing:
+   value:
+     every_n_train_steps: 1500
+     load: null
+     save_top_k: 2
+     save_weights_only: false
+ data_loader:
+   value:
+     test:
+       batch_size: 1
+       num_workers: 4
+       persistent_workers: false
+       seed: 2345
+     train:
+       batch_size: 16
+       num_workers: 16
+       persistent_workers: true
+       seed: 1234
+     val:
+       batch_size: 1
+       num_workers: 1
+       persistent_workers: true
+       seed: 3456
+ dataset:
+   value:
+     re10k:
+       augment: true
+       background_color:
+         - 0
+         - 0
+         - 0
+       baseline_max: 1e+10
+       baseline_min: 0.001
+       cameras_are_circular: false
+       dynamic_context_views: true
+       input_image_shape:
+         - 256
+         - 256
+       make_baseline_1: true
+       max_context_views_per_gpu: 24
+       max_fov: 100
+       name: re10k
+       original_image_shape:
+         - 360
+         - 640
+       overfit_to_scene: null
+       relative_pose: true
+       roots:
+         - datasets/re10k
+       skip_bad_shape: true
+       view_sampler:
+         initial_max_distance_between_context_views: 25
+         initial_min_distance_between_context_views: 25
+         max_distance_between_context_views: 90
+         min_distance_between_context_views: 45
+         min_distance_to_context_views: 0
+         name: bounded
+         num_context_views: 2
+         num_target_set: 3
+         num_target_views: 4
+         same_target_gap: false
+         warm_up_steps: 1000
+ density_control_loss:
+   value:
+     error_score:
+       grad_scale: 10000
+       log_scale: false
+       mode: original
+       weight: 0.01
+ direct_loss:
+   value:
+     l1:
+       weight: 0.8
+     ssim:
+       weight: 0.2
+ mode:
+   value: train
+ model:
+   value:
+     decoder:
+       background_color:
+         - 0
+         - 0
+         - 0
+       make_scale_invariant: false
+       name: splatting_cuda
+     density_control:
+       aggregation_mode: mean
+       aux_refine: false
+       grad_mode: absgrad
+       gs_param_dim: 256
+       latent_dim: 128
+       mean_dim: 32
+       name: density_control_module
+       num_heads: 1
+       num_latents: 64
+       num_level: 3
+       num_self_attn_per_block: 2
+       refine_error: false
+       refinement_hidden_dim: 32
+       refinement_layer_num: 1
+       refinement_type: voxelize
+       score_mode: absgrad
+       use_depth: false
+       use_mean_features: true
+       use_refine_module: false
+       voxel_size: 0.001
+       voxelize_activate: true
+     encoder:
+       align_corners: false
+       gs_param_dim: 256
+       head_mode: pcd
+       input_image_shape:
+         - 518
+         - 518
+       name: dcsplat
+       num_level: 3
+       use_voxelize: true
+ optimizer:
+   value:
+     accumulate: 1
+     backbone_lr_multiplier: 0.1
+     backbone_trainable: T+H
+     lr: 0.0002
+     warm_up_steps: 25
+ render_loss:
+   value:
+     lpips:
+       apply_after_step: 0
+       weight: 0.05
+     mse:
+       weight: 1
+ seed:
+   value: 111123
+ test:
+   value:
+     align_pose: false
+     compute_scores: true
+     error_threshold: 0.4
+     error_threshold_list:
+       - 0.2
+       - 0.4
+       - 0.6
+       - 0.8
+       - 1
+     nvs_view_N_list:
+       - 3
+       - 6
+       - 16
+       - 32
+       - 64
+     output_path: test/ablation/re10k
+     pose_align_steps: 100
+     pred_intrinsic: false
+     rot_opt_lr: 0.005
+     save_active_mask_image: false
+     save_compare: false
+     save_error_score_image: false
+     save_image: false
+     save_video: false
+     threshold_mode: ratio
+     trans_opt_lr: 0.005
+ train:
+   value:
+     align_corners: false
+     beta_dist_param:
+       - 0.5
+       - 4
+     cam_scale_mode: sum
+     camera_loss: 10
+     context_view_train: false
+     ext_scale_detach: false
+     extended_visualization: false
+     intrinsic_scaling: false
+     one_sample_validation: null
+     print_log_every_n_steps: 10
+     scene_scale_reg_loss: 0.01
+     train_aux: true
+     train_gs_num: 1
+     train_target_set: true
+     use_refine_aux: false
+     verbose: false
+     vggt_cam_loss: true
+     vggt_distil: false
+ trainer:
+   value:
+     gradient_clip_val: 0.5
+     max_steps: 3001
+     num_nodes: 1
+     val_check_interval: 250
+ wandb:
+   value:
+     entity: scene-representation-group
+     mode: online
+     name: ABLATION_0225_noRefineModule
+     project: DCSplat
+     tags:
+       - re10k
+       - 256x256
ABLATION_0225_noRefineModule/wandb/run-20260225_073227-2f0bcys0/files/output.log ADDED
The diff for this file is too large to render. See raw diff
 
ABLATION_0225_noRefineModule/wandb/run-20260225_073227-2f0bcys0/files/requirements.txt ADDED
@@ -0,0 +1,172 @@
+ wheel==0.45.1
+ pytz==2025.2
+ easydict==1.13
+ antlr4-python3-runtime==4.9.3
+ wadler_lindig==0.1.7
+ urllib3==2.5.0
+ tzdata==2025.2
+ typing-inspection==0.4.1
+ tabulate==0.9.0
+ smmap==5.0.2
+ kornia_rs==0.1.9
+ setuptools==78.1.1
+ safetensors==0.5.3
+ PyYAML==6.0.2
+ PySocks==1.7.1
+ pyparsing==3.2.5
+ pydantic_core==2.33.2
+ pycparser==2.23
+ protobuf==6.32.1
+ propcache==0.3.2
+ proglog==0.1.12
+ fsspec==2024.6.1
+ platformdirs==4.4.0
+ pip==25.2
+ pillow==10.4.0
+ frozenlist==1.7.0
+ packaging==24.2
+ opt_einsum==3.4.0
+ numpy==1.26.4
+ ninja==1.13.0
+ fonttools==4.60.0
+ networkx==3.4.2
+ multidict==6.6.4
+ mdurl==0.1.2
+ MarkupSafe==3.0.2
+ kiwisolver==1.4.9
+ imageio-ffmpeg==0.6.0
+ idna==3.7
+ hf-xet==1.1.10
+ gmpy2==2.2.1
+ einops==0.8.1
+ filelock==3.17.0
+ decorator==4.4.2
+ dacite==1.9.2
+ cycler==0.12.1
+ colorama==0.4.6
+ click==8.3.0
+ nvidia-nvtx-cu12==12.8.90
+ charset-normalizer==3.3.2
+ certifi==2025.8.3
+ beartype==0.19.0
+ attrs==25.3.0
+ async-timeout==5.0.1
+ annotated-types==0.7.0
+ aiohappyeyeballs==2.6.1
+ yarl==1.20.1
+ tifffile==2025.5.10
+ sentry-sdk==2.39.0
+ scipy==1.15.3
+ pydantic==2.11.9
+ pandas==2.3.2
+ opencv-python==4.11.0.86
+ omegaconf==2.3.0
+ markdown-it-py==4.0.0
+ lightning-utilities==0.14.3
+ lazy_loader==0.4
+ jaxtyping==0.2.37
+ imageio==2.37.0
+ gitdb==4.0.12
+ contourpy==1.3.2
+ colorspacious==1.1.2
+ cffi==1.17.1
+ aiosignal==1.4.0
+ scikit-video==1.1.11
+ scikit-image==0.25.2
+ rich==14.1.0
+ moviepy==1.0.3
+ matplotlib==3.10.6
+ hydra-core==1.3.2
+ nvidia-nccl-cu12==2.27.3
+ huggingface-hub==0.35.1
+ GitPython==3.1.45
+ brotlicffi==1.0.9.2
+ aiohttp==3.12.15
+ torchmetrics==1.8.2
+ opt-einsum-fx==0.1.4
+ kornia==0.8.1
+ pytorch-lightning==2.5.1
+ lpips==0.1.4
+ e3nn==0.6.0
+ lightning==2.5.1
+ nvidia-cusparselt-cu12==0.7.1
+ triton==3.4.0
+ nvidia-nvjitlink-cu12==12.8.93
+ nvidia-curand-cu12==10.3.9.90
+ nvidia-cufile-cu12==1.13.1.3
+ nvidia-cuda-runtime-cu12==12.8.90
+ nvidia-cuda-nvrtc-cu12==12.8.93
+ nvidia-cuda-cupti-cu12==12.8.90
+ nvidia-cublas-cu12==12.8.4.1
+ nvidia-cusparse-cu12==12.5.8.93
+ nvidia-cufft-cu12==11.3.3.83
+ nvidia-cudnn-cu12==9.10.2.21
+ nvidia-cusolver-cu12==11.7.3.90
+ torch==2.8.0+cu128
+ torchvision==0.23.0+cu128
+ torchaudio==2.8.0+cu128
+ torch_scatter==2.1.2+pt28cu128
+ gsplat==1.5.3
+ wandb==0.25.0
+ cuda-bindings==13.0.3
+ cuda-pathfinder==1.3.3
+ Jinja2==3.1.6
+ mpmath==1.3.0
+ nvidia-cublas==13.1.0.3
+ nvidia-cuda-cupti==13.0.85
+ nvidia-cuda-nvrtc==13.0.88
+ nvidia-cuda-runtime==13.0.96
+ nvidia-cudnn-cu13==9.15.1.9
+ nvidia-cufft==12.0.0.61
+ nvidia-cufile==1.15.1.6
+ nvidia-curand==10.4.0.35
+ nvidia-cusolver==12.0.4.66
+ nvidia-cusparse==12.6.3.3
+ nvidia-cusparselt-cu13==0.8.0
+ nvidia-nccl-cu13==2.28.9
+ nvidia-nvjitlink==13.0.88
+ nvidia-nvshmem-cu13==3.4.5
+ nvidia-nvtx==13.0.85
+ requests==2.32.5
+ sentencepiece==0.2.1
+ sympy==1.14.0
+ torchcodec==0.10.0
+ torchdata==0.10.0
+ torchtext==0.6.0
+ anyio==4.12.0
+ asttokens==3.0.1
+ comm==0.2.3
+ debugpy==1.8.19
+ executing==2.2.1
+ h11==0.16.0
+ httpcore==1.0.9
+ httpx==0.28.1
+ ipykernel==7.1.0
+ ipython==9.8.0
+ ipython_pygments_lexers==1.1.1
+ ipywidgets==8.1.8
+ jedi==0.19.2
+ jupyter_client==8.7.0
+ jupyter_core==5.9.1
+ jupyterlab_widgets==3.0.16
+ matplotlib-inline==0.2.1
+ nest-asyncio==1.6.0
+ parso==0.8.5
+ pexpect==4.9.0
+ prompt_toolkit==3.0.52
+ psutil==7.2.1
+ ptyprocess==0.7.0
+ pure_eval==0.2.3
+ Pygments==2.19.2
+ python-dateutil==2.9.0.post0
+ pyzmq==27.1.0
+ shellingham==1.5.4
+ six==1.17.0
+ stack-data==0.6.3
+ tornado==6.5.4
+ tqdm==4.67.1
+ traitlets==5.14.3
+ typer-slim==0.21.0
+ typing_extensions==4.15.0
+ wcwidth==0.2.14
+ widgetsnbextension==4.0.15
ABLATION_0225_noRefineModule/wandb/run-20260225_073227-2f0bcys0/files/wandb-metadata.json ADDED
@@ -0,0 +1,93 @@
+ {
+   "os": "Linux-6.8.0-90-generic-x86_64-with-glibc2.39",
+   "python": "CPython 3.12.12",
+   "startedAt": "2026-02-25T07:32:27.352870Z",
+   "args": [
+     "+experiment=re10k_ablation_24v",
+     "wandb.mode=online",
+     "wandb.name=ABLATION_0225_noRefineModule",
+     "model.density_control.use_refine_module=false"
+   ],
+   "program": "-m src.main",
+   "git": {
+     "remote": "git@github.com:K-nowing/CVPR2026.git",
+     "commit": "2512754c6c27ca5150bf17fbcbdde3f192fd53cc"
+   },
+   "email": "dna9041@korea.ac.kr",
+   "root": "/workspace/code/CVPR2026/outputs/ablation/re10k/ABLATION_0225_noRefineModule",
+   "host": "27d18dedec6d",
+   "executable": "/venv/main/bin/python",
+   "cpu_count": 128,
+   "cpu_count_logical": 256,
+   "gpu": "NVIDIA H200",
+   "gpu_count": 8,
+   "disk": {
+     "/": {
+       "total": "1170378588160",
+       "used": "708558733312"
+     }
+   },
+   "memory": {
+     "total": "1622948257792"
+   },
+   "gpu_nvidia": [
+     {
+       "name": "NVIDIA H200",
+       "memoryTotal": "150754820096",
+       "cudaCores": 16896,
+       "architecture": "Hopper",
+       "uuid": "GPU-2649ab80-a3a6-5a1c-0fa5-12bc11bd75e9"
+     },
+     {
+       "name": "NVIDIA H200",
+       "memoryTotal": "150754820096",
+       "cudaCores": 16896,
+       "architecture": "Hopper",
+       "uuid": "GPU-e92921d9-c681-246f-af93-637e0dc938ca"
+     },
+     {
+       "name": "NVIDIA H200",
+       "memoryTotal": "150754820096",
+       "cudaCores": 16896,
+       "architecture": "Hopper",
+       "uuid": "GPU-ffe12ffc-9bb7-82de-5692-1ec0ee2e68d8"
+     },
+     {
+       "name": "NVIDIA H200",
+       "memoryTotal": "150754820096",
+       "cudaCores": 16896,
+       "architecture": "Hopper",
+       "uuid": "GPU-499e5acd-b6ab-2010-c51b-ee9b5aa65825"
+     },
+     {
+       "name": "NVIDIA H200",
+       "memoryTotal": "150754820096",
+       "cudaCores": 16896,
+       "architecture": "Hopper",
+       "uuid": "GPU-3b2522d9-1c72-e49b-2c30-96165680b74a"
+     },
+     {
+       "name": "NVIDIA H200",
+       "memoryTotal": "150754820096",
+       "cudaCores": 16896,
+       "architecture": "Hopper",
+       "uuid": "GPU-a9a280c5-b2f9-dc1e-a8a9-7326a74001ff"
+     },
+     {
+       "name": "NVIDIA H200",
+       "memoryTotal": "150754820096",
+       "cudaCores": 16896,
+       "architecture": "Hopper",
+       "uuid": "GPU-07d0167b-a6a1-1900-2d27-7c6c11598409"
+     },
+     {
+       "name": "NVIDIA H200",
+       "memoryTotal": "150754820096",
+       "cudaCores": 16896,
+       "architecture": "Hopper",
+       "uuid": "GPU-8362a999-20d1-c27b-5d18-032d23f859ab"
+     }
+   ],
+   "cudaVersion": "13.1",
+   "writerId": "z1winms0ab80rmcbaynf075otkwpygrq"
+ }
ABLATION_0225_noRefineModule/wandb/run-20260225_073227-2f0bcys0/files/wandb-summary.json ADDED
@@ -0,0 +1 @@
+ {"trainer/global_step":3001,"_wandb":{"runtime":11183},"loss/aux_1/lpips":0.009582722559571266,"loss/aux_1/error_score":0.23707404732704163,"loss/aux_0/error_score":0.37395116686820984,"_timestamp":1.7720159252726524e+09,"train/psnr_probabilistic":20.766376495361328,"loss/aux_1/mse":0.011007795110344887,"val/psnr":21.622608184814453,"train/error_scores":{"count":1,"filenames":["media/images/train/error_scores_201_99cdf460841ea0543ea7.png"],"captions":[["0621c7675fab1418"]],"_type":"images/separated","width":1328,"height":2120,"format":"png"},"active_mask_imgs":{"filenames":["media/images/active_mask_imgs_198_d9a5bace8f25f1101b30.png"],"captions":["a76028640ffa1ef9"],"_type":"images/separated","width":536,"height":800,"format":"png","count":1},"loss/aux_0/lpips":0.010782335884869099,"lr-AdamW/pg1-momentum":0.9,"epoch":0,"loss/total":0.08648455888032913,"lr-AdamW/pg2":2e-05,"loss/final_3dgs/mse":0.009266000241041183,"error_scores":{"_type":"images/separated","width":800,"height":536,"format":"png","count":1,"filenames":["media/images/error_scores_199_e79b447934cce3e14bdb.png"],"captions":["a76028640ffa1ef9"]},"train/comparison":{"_type":"images/separated","width":1328,"height":2154,"format":"png","count":1,"filenames":["media/images/train/comparison_202_b6bf8b4d2d9219d977fa.png"],"captions":[["0621c7675fab1418"]]},"lr-AdamW/pg2-momentum":0.9,"train/scene_scale":1.0070030689239502,"comparison":{"count":1,"filenames":["media/images/comparison_197_d1042a2aa788751a412f.png"],"captions":["a76028640ffa1ef9"],"_type":"images/separated","width":1064,"height":1098,"format":"png"},"val/gaussian_num_ratio":0.3997650146484375,"loss/aux_0/mse":0.009491334669291973,"_runtime":11183,"info/global_step":3000,"_step":202,"loss/aux_2/mse":0.010797698982059956,"loss/scene_scale_reg":0.00019978880300186574,"lr-AdamW/pg1":2.003594834351718e-05,"val/ssim":0.8318922519683838,"loss/final_3dgs/lpips":0.00893520936369896,"loss/aux_2/lpips":0.009327557869255543,"loss/camera":9.838641562964767e-05,"val/lpips":0.1536986380815506}
ABLATION_0225_noRefineModule/wandb/run-20260225_073227-2f0bcys0/logs/debug-core.log ADDED
@@ -0,0 +1,15 @@
+ {"time":"2026-02-25T07:32:27.422522053Z","level":"INFO","msg":"main: starting server","port-filename":"/tmp/tmpwu965jc4/port-137621.txt","pid":137621,"log-level":0,"disable-analytics":false,"shutdown-on-parent-exit":false,"enable-dcgm-profiling":false}
+ {"time":"2026-02-25T07:32:27.423426767Z","level":"INFO","msg":"server: will exit if parent process dies","ppid":137621}
+ {"time":"2026-02-25T07:32:27.423393077Z","level":"INFO","msg":"server: accepting connections","addr":{"Name":"/tmp/wandb-137621-140053-2081743564/socket","Net":"unix"}}
+ {"time":"2026-02-25T07:32:27.602024695Z","level":"INFO","msg":"connection: ManageConnectionData: new connection created","id":"1(@)"}
+ {"time":"2026-02-25T07:32:27.611595513Z","level":"INFO","msg":"handleInformInit: received","streamId":"2f0bcys0","id":"1(@)"}
+ {"time":"2026-02-25T07:32:28.037979247Z","level":"INFO","msg":"handleInformInit: stream started","streamId":"2f0bcys0","id":"1(@)"}
+ {"time":"2026-02-25T07:32:33.742044945Z","level":"INFO","msg":"connection: cancelling request","id":"1(@)","requestId":"v0xp4cjc1l9g"}
+ {"time":"2026-02-25T10:38:52.520680299Z","level":"INFO","msg":"handleInformTeardown: server teardown initiated","id":"1(@)"}
+ {"time":"2026-02-25T10:38:52.520838241Z","level":"INFO","msg":"server is shutting down"}
+ {"time":"2026-02-25T10:38:52.520822771Z","level":"INFO","msg":"connection: closing","id":"1(@)"}
+ {"time":"2026-02-25T10:38:52.520922373Z","level":"INFO","msg":"connection: closed successfully","id":"1(@)"}
+ {"time":"2026-02-25T10:38:52.520970143Z","level":"INFO","msg":"server: listener closed","addr":{"Name":"/tmp/wandb-137621-140053-2081743564/socket","Net":"unix"}}
+ {"time":"2026-02-25T10:38:53.701442926Z","level":"INFO","msg":"handleInformTeardown: server shutdown complete","id":"1(@)"}
+ {"time":"2026-02-25T10:38:53.701488686Z","level":"INFO","msg":"connection: ManageConnectionData: connection closed","id":"1(@)"}
+ {"time":"2026-02-25T10:38:53.701513197Z","level":"INFO","msg":"server is closed"}
ABLATION_0225_noRefineModule/wandb/run-20260225_073227-2f0bcys0/logs/debug-internal.log ADDED
@@ -0,0 +1,11 @@
+ {"time":"2026-02-25T07:32:27.611867617Z","level":"INFO","msg":"stream: starting","core version":"0.25.0"}
+ {"time":"2026-02-25T07:32:28.03755666Z","level":"INFO","msg":"stream: created new stream","id":"2f0bcys0"}
+ {"time":"2026-02-25T07:32:28.037863635Z","level":"INFO","msg":"handler: started","stream_id":"2f0bcys0"}
+ {"time":"2026-02-25T07:32:28.037970207Z","level":"INFO","msg":"stream: started","id":"2f0bcys0"}
+ {"time":"2026-02-25T07:32:28.038020847Z","level":"INFO","msg":"writer: started","stream_id":"2f0bcys0"}
+ {"time":"2026-02-25T07:32:28.038027757Z","level":"INFO","msg":"sender: started","stream_id":"2f0bcys0"}
+ {"time":"2026-02-25T10:38:52.520830581Z","level":"INFO","msg":"stream: closing","id":"2f0bcys0"}
+ {"time":"2026-02-25T10:38:53.390340772Z","level":"INFO","msg":"fileTransfer: Close: file transfer manager closed"}
+ {"time":"2026-02-25T10:38:53.699950002Z","level":"INFO","msg":"handler: closed","stream_id":"2f0bcys0"}
+ {"time":"2026-02-25T10:38:53.700227926Z","level":"INFO","msg":"sender: closed","stream_id":"2f0bcys0"}
+ {"time":"2026-02-25T10:38:53.700251656Z","level":"INFO","msg":"stream: closed","id":"2f0bcys0"}
ABLATION_0225_noRefineModule/wandb/run-20260225_073227-2f0bcys0/logs/debug.log ADDED
@@ -0,0 +1,21 @@
+ 2026-02-25 07:32:27,354 INFO MainThread:137621 [wandb_setup.py:_flush():81] Current SDK version is 0.25.0
+ 2026-02-25 07:32:27,354 INFO MainThread:137621 [wandb_setup.py:_flush():81] Configure stats pid to 137621
+ 2026-02-25 07:32:27,354 INFO MainThread:137621 [wandb_setup.py:_flush():81] Loading settings from environment variables
+ 2026-02-25 07:32:27,354 INFO MainThread:137621 [wandb_init.py:setup_run_log_directory():717] Logging user logs to /workspace/code/CVPR2026/outputs/ablation/re10k/ABLATION_0225_noRefineModule/wandb/run-20260225_073227-2f0bcys0/logs/debug.log
+ 2026-02-25 07:32:27,354 INFO MainThread:137621 [wandb_init.py:setup_run_log_directory():718] Logging internal logs to /workspace/code/CVPR2026/outputs/ablation/re10k/ABLATION_0225_noRefineModule/wandb/run-20260225_073227-2f0bcys0/logs/debug-internal.log
+ 2026-02-25 07:32:27,354 INFO MainThread:137621 [wandb_init.py:init():844] calling init triggers
+ 2026-02-25 07:32:27,354 INFO MainThread:137621 [wandb_init.py:init():849] wandb.init called with sweep_config: {}
+ config: {'model': {'encoder': {'name': 'dcsplat', 'input_image_shape': [518, 518], 'head_mode': 'pcd', 'num_level': 3, 'gs_param_dim': 256, 'align_corners': False, 'use_voxelize': True}, 'decoder': {'name': 'splatting_cuda', 'background_color': [0.0, 0.0, 0.0], 'make_scale_invariant': False}, 'density_control': {'name': 'density_control_module', 'mean_dim': 32, 'gs_param_dim': 256, 'refinement_layer_num': 1, 'num_level': 3, 'grad_mode': 'absgrad', 'use_mean_features': True, 'refinement_type': 'voxelize', 'refinement_hidden_dim': 32, 'aggregation_mode': 'mean', 'num_heads': 1, 'score_mode': 'absgrad', 'latent_dim': 128, 'num_latents': 64, 'num_self_attn_per_block': 2, 'voxel_size': 0.001, 'aux_refine': False, 'refine_error': False, 'use_refine_module': False, 'voxelize_activate': True, 'use_depth': False}}, 'render_loss': {'mse': {'weight': 1.0}, 'lpips': {'weight': 0.05, 'apply_after_step': 0}}, 'density_control_loss': {'error_score': {'weight': 0.01, 'log_scale': False, 'grad_scale': 10000.0, 'mode': 'original'}}, 'direct_loss': {'l1': {'weight': 0.8}, 'ssim': {'weight': 0.2}}, 'wandb': {'project': 'DCSplat', 'entity': 'scene-representation-group', 'name': 'ABLATION_0225_noRefineModule', 'mode': 'online', 'tags': ['re10k', '256x256']}, 'mode': 'train', 'data_loader': {'train': {'num_workers': 16, 'persistent_workers': True, 'batch_size': 16, 'seed': 1234}, 'test': {'num_workers': 4, 'persistent_workers': False, 'batch_size': 1, 'seed': 2345}, 'val': {'num_workers': 1, 'persistent_workers': True, 'batch_size': 1, 'seed': 3456}}, 'optimizer': {'lr': 0.0002, 'warm_up_steps': 25, 'backbone_lr_multiplier': 0.1, 'backbone_trainable': 'T+H', 'accumulate': 1}, 'checkpointing': {'load': None, 'every_n_train_steps': 1500, 'save_top_k': 2, 'save_weights_only': False}, 'train': {'extended_visualization': False, 'print_log_every_n_steps': 10, 'camera_loss': 10.0, 'one_sample_validation': None, 'align_corners': False, 'intrinsic_scaling': False, 'verbose': False, 
'beta_dist_param': [0.5, 4.0], 'use_refine_aux': False, 'train_target_set': True, 'train_gs_num': 1, 'ext_scale_detach': False, 'cam_scale_mode': 'sum', 'scene_scale_reg_loss': 0.01, 'train_aux': True, 'vggt_cam_loss': True, 'vggt_distil': False, 'context_view_train': False}, 'test': {'output_path': 'test/ablation/re10k', 'align_pose': False, 'pose_align_steps': 100, 'rot_opt_lr': 0.005, 'trans_opt_lr': 0.005, 'compute_scores': True, 'save_image': False, 'save_video': False, 'save_active_mask_image': False, 'save_error_score_image': False, 'save_compare': False, 'pred_intrinsic': False, 'error_threshold': 0.4, 'error_threshold_list': [0.2, 0.4, 0.6, 0.8, 1.0], 'threshold_mode': 'ratio', 'nvs_view_N_list': [3, 6, 16, 32, 64]}, 'seed': 111123, 'trainer': {'max_steps': 3001, 'val_check_interval': 250, 'gradient_clip_val': 0.5, 'num_nodes': 1}, 'dataset': {'re10k': {'make_baseline_1': True, 'relative_pose': True, 'augment': True, 'background_color': [0.0, 0.0, 0.0], 'overfit_to_scene': None, 'skip_bad_shape': True, 'view_sampler': {'name': 'bounded', 'num_target_views': 4, 'num_context_views': 2, 'min_distance_between_context_views': 45, 'max_distance_between_context_views': 90, 'min_distance_to_context_views': 0, 'warm_up_steps': 1000, 'initial_min_distance_between_context_views': 25, 'initial_max_distance_between_context_views': 25, 'same_target_gap': False, 'num_target_set': 3}, 'name': 're10k', 'roots': ['datasets/re10k'], 'input_image_shape': [256, 256], 'original_image_shape': [360, 640], 'cameras_are_circular': False, 'baseline_min': 0.001, 'baseline_max': 10000000000.0, 'max_fov': 100.0, 'dynamic_context_views': True, 'max_context_views_per_gpu': 24}}, '_wandb': {}}
+ 2026-02-25 07:32:27,354 INFO MainThread:137621 [wandb_init.py:init():892] starting backend
+ 2026-02-25 07:32:27,602 INFO MainThread:137621 [wandb_init.py:init():895] sending inform_init request
+ 2026-02-25 07:32:27,609 INFO MainThread:137621 [wandb_init.py:init():903] backend started and connected
+ 2026-02-25 07:32:27,613 INFO MainThread:137621 [wandb_init.py:init():973] updated telemetry
+ 2026-02-25 07:32:27,622 INFO MainThread:137621 [wandb_init.py:init():997] communicating run to backend with 90.0 second timeout
+ 2026-02-25 07:32:28,628 INFO MainThread:137621 [wandb_init.py:init():1042] starting run threads in backend
+ 2026-02-25 07:32:28,738 INFO MainThread:137621 [wandb_run.py:_console_start():2524] atexit reg
+ 2026-02-25 07:32:28,738 INFO MainThread:137621 [wandb_run.py:_redirect():2373] redirect: wrap_raw
+ 2026-02-25 07:32:28,738 INFO MainThread:137621 [wandb_run.py:_redirect():2442] Wrapping output streams.
+ 2026-02-25 07:32:28,738 INFO MainThread:137621 [wandb_run.py:_redirect():2465] Redirects installed.
+ 2026-02-25 07:32:28,740 INFO MainThread:137621 [wandb_init.py:init():1082] run started, returning control to user process
+ 2026-02-25 10:38:52,520 INFO wandb-AsyncioManager-main:137621 [service_client.py:_forward_responses():134] Reached EOF.
+ 2026-02-25 10:38:52,520 INFO wandb-AsyncioManager-main:137621 [mailbox.py:close():155] Closing mailbox, abandoning 1 handles.
ABLATION_0225_randomSelect/main.log ADDED
@@ -0,0 +1,116 @@
1
+ [2026-02-25 10:39:03,453][dinov2][INFO] - using MLP layer as FFN
2
+ [2026-02-25 10:39:09,556][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
3
+ warnings.warn(
4
+
5
+ [2026-02-25 10:39:09,556][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=VGG16_Weights.IMAGENET1K_V1`. You can also use `weights=VGG16_Weights.DEFAULT` to get the most up-to-date weights.
6
+ warnings.warn(msg)
7
+
8
+ [2026-02-25 10:39:59,700][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/lightning/pytorch/trainer/connectors/data_connector.py:425: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=31` in the `DataLoader` to improve performance.
+
+ [2026-02-25 10:39:59,701][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torch/distributed/distributed_c10d.py:4807: UserWarning: No device id is provided via `init_process_group` or `barrier `. Using the current device set by the user.
+ warnings.warn( # warn only once
+
+ [2026-02-25 10:40:02,283][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 10:40:02,292][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/lightning/pytorch/utilities/data.py:79: Trying to infer the `batch_size` from an ambiguous collection. The batch size we found is 1. To avoid any miscalculations, use `self.log(..., batch_size=batch_size)`.
+
+ [2026-02-25 10:40:02,292][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
+ warnings.warn(
+
+ [2026-02-25 10:40:02,293][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=VGG16_Weights.IMAGENET1K_V1`. You can also use `weights=VGG16_Weights.DEFAULT` to get the most up-to-date weights.
+ warnings.warn(msg)
+
+ [2026-02-25 10:40:03,984][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torch/functional.py:554: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /pytorch/aten/src/ATen/native/TensorShape.cpp:4322.)
+ return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
+
+ [2026-02-25 10:40:04,284][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/lightning/pytorch/trainer/connectors/logger_connector/result.py:434: It is recommended to use `self.log('val/psnr', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
+
+ [2026-02-25 10:40:04,285][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/lightning/pytorch/trainer/connectors/logger_connector/result.py:434: It is recommended to use `self.log('val/lpips', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
+
+ [2026-02-25 10:40:04,286][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/lightning/pytorch/trainer/connectors/logger_connector/result.py:434: It is recommended to use `self.log('val/ssim', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
+
+ [2026-02-25 10:40:04,286][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/lightning/pytorch/trainer/connectors/logger_connector/result.py:434: It is recommended to use `self.log('val/gaussian_num_ratio', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
+
+ [2026-02-25 10:40:04,286][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/lightning/pytorch/trainer/connectors/logger_connector/result.py:434: It is recommended to use `self.log('info/global_step', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
+
+ [2026-02-25 10:40:13,358][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torch/autograd/graph.py:829: UserWarning: Grad strides do not match bucket view strides. This may indicate grad was not created according to the gradient layout contract, or that the param's strides changed since DDP was constructed. This is not an error, but may impair performance.
+ grad.sizes() = [256, 256, 1, 1], strides() = [256, 1, 256, 256]
+ bucket_view.sizes() = [256, 256, 1, 1], strides() = [256, 1, 1, 1] (Triggered internally at /pytorch/torch/csrc/distributed/c10d/reducer.cpp:334.)
+ return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
+
+ [2026-02-25 10:40:13,429][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 10:41:49,278][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torch/optim/lr_scheduler.py:209: UserWarning: The epoch parameter in `scheduler.step()` was not necessary and is being deprecated where possible. Please use `scheduler.step()` to step the scheduler. During the deprecation, if epoch is different from None, the closed form is used instead of the new chainable form, where available. Please open an issue if you are unable to replicate your use case: https://github.com/pytorch/pytorch/issues/new/choose.
+ warnings.warn(EPOCH_DEPRECATION_WARNING, UserWarning)
+
+ [2026-02-25 10:52:55,864][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 10:56:11,418][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 11:05:44,777][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 11:12:03,856][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 11:18:29,813][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 11:27:57,672][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 11:31:13,251][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 11:43:50,712][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 11:43:54,528][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 11:56:39,754][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 11:59:50,137][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 12:09:23,940][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 12:16:01,606][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 12:22:24,120][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 12:32:00,789][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 12:35:14,407][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 12:47:51,723][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 12:47:56,086][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 13:00:42,868][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 13:03:57,852][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 13:13:32,568][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 13:19:49,814][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 13:26:12,524][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
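The warning repeated throughout this log states its own fix: wrap a dynamically built index sequence in `tuple(...)` before indexing. A minimal sketch of the pattern (NumPy here for a dependency-light illustration; the names `result`, `selector`, and `overlay` mirror those in `layout.py:105`, but the shapes are invented):

```python
import numpy as np

# Illustrative shapes only; layout.py's real tensors differ.
result = np.zeros((2, 3, 4))
overlay = np.ones(4)
selector = [0, 1]  # an index sequence built at runtime

# result[selector] would be advanced indexing (it picks rows 0 and 1);
# tuple(selector) makes it basic indexing of the single cell [0, 1, :],
# which is what "use x[tuple(seq)] instead of x[seq]" asks for.
result[tuple(selector)] = overlay
```

The same one-character-class change (`result[tuple(selector)] = overlay`) would silence the PyTorch 2.9 deprecation without altering behavior.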
ABLATION_0225_randomSelect/train_ddp_process_1.log ADDED
@@ -0,0 +1,60 @@
+ [2026-02-25 10:39:19,964][dinov2][INFO] - using MLP layer as FFN
+ [2026-02-25 10:39:37,919][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
+ warnings.warn(
+
+ [2026-02-25 10:39:37,920][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=VGG16_Weights.IMAGENET1K_V1`. You can also use `weights=VGG16_Weights.DEFAULT` to get the most up-to-date weights.
+ warnings.warn(msg)
+
+ [2026-02-25 10:39:59,701][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torch/distributed/distributed_c10d.py:4807: UserWarning: No device id is provided via `init_process_group` or `barrier `. Using the current device set by the user.
+ warnings.warn( # warn only once
+
+ [2026-02-25 10:40:13,353][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torch/autograd/graph.py:829: UserWarning: Grad strides do not match bucket view strides. This may indicate grad was not created according to the gradient layout contract, or that the param's strides changed since DDP was constructed. This is not an error, but may impair performance.
+ grad.sizes() = [256, 256, 1, 1], strides() = [256, 1, 256, 256]
+ bucket_view.sizes() = [256, 256, 1, 1], strides() = [256, 1, 1, 1] (Triggered internally at /pytorch/torch/csrc/distributed/c10d/reducer.cpp:334.)
+ return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
+
+ [2026-02-25 10:40:13,462][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 10:41:49,277][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torch/optim/lr_scheduler.py:209: UserWarning: The epoch parameter in `scheduler.step()` was not necessary and is being deprecated where possible. Please use `scheduler.step()` to step the scheduler. During the deprecation, if epoch is different from None, the closed form is used instead of the new chainable form, where available. Please open an issue if you are unable to replicate your use case: https://github.com/pytorch/pytorch/issues/new/choose.
+ warnings.warn(EPOCH_DEPRECATION_WARNING, UserWarning)
+
+ [2026-02-25 10:52:55,864][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 11:05:44,777][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 11:18:29,813][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 11:31:13,252][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 11:43:54,528][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 11:56:39,755][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 12:09:23,940][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 12:22:24,117][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 12:35:14,407][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 12:47:56,084][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 13:00:42,869][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 13:13:32,567][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 13:26:12,524][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
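The `lr_scheduler.py:209` warning in each rank's log asks for `scheduler.step()` with no `epoch` argument. A toy sketch of the chainable form (the model, optimizer, and `StepLR` settings here are invented for illustration; the run's real optimizer/scheduler config lives in the Hydra YAML, not in this log):

```python
import torch

# Toy setup; values are illustrative, not the run's actual hyperparameters.
model = torch.nn.Linear(2, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=1, gamma=0.5)

for _ in range(2):
    opt.step()
    sched.step()  # no epoch argument: the chainable form the warning asks for
```

With `step_size=1` and `gamma=0.5`, two scheduler steps halve the learning rate twice (0.1 → 0.05 → 0.025), with no deprecated closed-form path taken.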
ABLATION_0225_randomSelect/train_ddp_process_2.log ADDED
@@ -0,0 +1,60 @@
+ [2026-02-25 10:39:19,879][dinov2][INFO] - using MLP layer as FFN
+ [2026-02-25 10:39:48,721][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
+ warnings.warn(
+
+ [2026-02-25 10:39:48,721][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=VGG16_Weights.IMAGENET1K_V1`. You can also use `weights=VGG16_Weights.DEFAULT` to get the most up-to-date weights.
+ warnings.warn(msg)
+
+ [2026-02-25 10:39:59,701][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torch/distributed/distributed_c10d.py:4807: UserWarning: No device id is provided via `init_process_group` or `barrier `. Using the current device set by the user.
+ warnings.warn( # warn only once
+
+ [2026-02-25 10:40:13,356][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torch/autograd/graph.py:829: UserWarning: Grad strides do not match bucket view strides. This may indicate grad was not created according to the gradient layout contract, or that the param's strides changed since DDP was constructed. This is not an error, but may impair performance.
+ grad.sizes() = [256, 256, 1, 1], strides() = [256, 1, 256, 256]
+ bucket_view.sizes() = [256, 256, 1, 1], strides() = [256, 1, 1, 1] (Triggered internally at /pytorch/torch/csrc/distributed/c10d/reducer.cpp:334.)
+ return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
+
+ [2026-02-25 10:40:13,461][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 10:41:49,278][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torch/optim/lr_scheduler.py:209: UserWarning: The epoch parameter in `scheduler.step()` was not necessary and is being deprecated where possible. Please use `scheduler.step()` to step the scheduler. During the deprecation, if epoch is different from None, the closed form is used instead of the new chainable form, where available. Please open an issue if you are unable to replicate your use case: https://github.com/pytorch/pytorch/issues/new/choose.
+ warnings.warn(EPOCH_DEPRECATION_WARNING, UserWarning)
+
+ [2026-02-25 10:52:55,864][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 11:05:44,777][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
26
+ result[selector] = overlay
27
+
28
+ [2026-02-25 11:18:29,813][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
29
+ result[selector] = overlay
30
+
31
+ [2026-02-25 11:31:13,250][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
32
+ result[selector] = overlay
33
+
34
+ [2026-02-25 11:43:54,529][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
35
+ result[selector] = overlay
36
+
37
+ [2026-02-25 11:56:39,755][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
38
+ result[selector] = overlay
39
+
40
+ [2026-02-25 12:09:23,942][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
41
+ result[selector] = overlay
42
+
43
+ [2026-02-25 12:22:24,117][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
44
+ result[selector] = overlay
45
+
46
+ [2026-02-25 12:35:14,408][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
47
+ result[selector] = overlay
48
+
49
+ [2026-02-25 12:47:56,084][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
50
+ result[selector] = overlay
51
+
52
+ [2026-02-25 13:00:42,869][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
53
+ result[selector] = overlay
54
+
55
+ [2026-02-25 13:13:32,568][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
56
+ result[selector] = overlay
57
+
58
+ [2026-02-25 13:26:12,524][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
59
+ result[selector] = overlay
60
+
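Editor's note: the recurring `layout.py:105` warning asks for `x[tuple(seq)]` instead of `x[seq]`. The sketch below is a pure-Python stand-in (not torch; `Grid2D` is a hypothetical class invented for illustration) showing the semantic difference the warning is about: a tuple key means "one index sequence per axis", while a plain list is a single fancy index, which is why PyTorch 2.9 will reinterpret non-tuple sequences.

```python
# Hypothetical 2-D container that mimics the tuple-vs-list indexing split
# behind the "non-tuple sequence for multidimensional indexing" warning.
class Grid2D:
    def __init__(self, rows):
        self.rows = rows

    def __getitem__(self, key):
        if isinstance(key, tuple):
            # Tuple key: advanced per-axis indexing, elements picked pairwise.
            rows_idx, cols_idx = key
            return [self.rows[r][c] for r, c in zip(rows_idx, cols_idx)]
        # Non-tuple sequence: treated as ONE index along axis 0
        # (this is the reading torch >= 2.9 adopts, hence the warning).
        return [self.rows[r] for r in key]

x = Grid2D([[0, 1, 2],
            [3, 4, 5],
            [6, 7, 8]])
seq = [[0, 2], [1, 0]]      # one index sequence per axis
paired = x[tuple(seq)]      # explicit multi-axis form: picks (0,1) and (2,0)
rows = x[[0, 2]]            # list form: whole rows 0 and 2
```

Under these assumptions, the fix in `layout.py` would be a one-token change (`result[tuple(selector)] = overlay`), which also silences the warning.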
ABLATION_0225_randomSelect/train_ddp_process_4.log ADDED
@@ -0,0 +1,60 @@
+ [2026-02-25 10:39:19,958][dinov2][INFO] - using MLP layer as FFN
+ [2026-02-25 10:39:38,019][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
+ warnings.warn(
+
+ [2026-02-25 10:39:38,020][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=VGG16_Weights.IMAGENET1K_V1`. You can also use `weights=VGG16_Weights.DEFAULT` to get the most up-to-date weights.
+ warnings.warn(msg)
+
+ [2026-02-25 10:39:59,701][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torch/distributed/distributed_c10d.py:4807: UserWarning: No device id is provided via `init_process_group` or `barrier `. Using the current device set by the user.
+ warnings.warn( # warn only once
+
+ [2026-02-25 10:40:13,354][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torch/autograd/graph.py:829: UserWarning: Grad strides do not match bucket view strides. This may indicate grad was not created according to the gradient layout contract, or that the param's strides changed since DDP was constructed. This is not an error, but may impair performance.
+ grad.sizes() = [256, 256, 1, 1], strides() = [256, 1, 256, 256]
+ bucket_view.sizes() = [256, 256, 1, 1], strides() = [256, 1, 1, 1] (Triggered internally at /pytorch/torch/csrc/distributed/c10d/reducer.cpp:334.)
+ return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
+
+ [2026-02-25 10:40:13,462][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 10:41:49,306][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torch/optim/lr_scheduler.py:209: UserWarning: The epoch parameter in `scheduler.step()` was not necessary and is being deprecated where possible. Please use `scheduler.step()` to step the scheduler. During the deprecation, if epoch is different from None, the closed form is used instead of the new chainable form, where available. Please open an issue if you are unable to replicate your use case: https://github.com/pytorch/pytorch/issues/new/choose.
+ warnings.warn(EPOCH_DEPRECATION_WARNING, UserWarning)
+
+ [2026-02-25 10:52:55,864][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 11:05:44,777][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 11:18:29,814][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 11:31:13,251][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 11:43:54,528][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 11:56:39,755][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 12:09:23,940][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 12:22:24,117][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 12:35:14,408][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 12:47:56,084][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 13:00:42,869][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 13:13:32,567][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 13:26:12,524][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
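Editor's note: the `lr_scheduler.py:209` warning in each process log flags the call pattern `scheduler.step(epoch)`. The sketch below is a hand-rolled stand-in (a hypothetical `StepLR` class, not `torch.optim.lr_scheduler.StepLR`) showing the recommended pattern: call `step()` with no epoch argument and let the scheduler keep its own epoch counter.

```python
# Minimal step-decay scheduler illustrating the chainable form:
# step() takes no epoch argument and tracks last_epoch internally.
class StepLR:
    def __init__(self, base_lr, step_size, gamma):
        self.base_lr, self.step_size, self.gamma = base_lr, step_size, gamma
        self.last_epoch = -1
        self.lr = base_lr
        self.step()  # initialize at epoch 0, mirroring torch's behavior

    def step(self):
        # No `epoch` parameter: the counter advances by one per call.
        self.last_epoch += 1
        self.lr = self.base_lr * self.gamma ** (self.last_epoch // self.step_size)

sched = StepLR(base_lr=0.1, step_size=2, gamma=0.5)
lrs = [sched.lr]
for _ in range(3):      # one step() call per training epoch
    sched.step()
    lrs.append(sched.lr)
```

With this calling convention the training loop never passes an epoch index, which is exactly what the deprecation message asks for; passing one only selects the closed-form path during the transition period.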
ABLATION_0225_randomSelect/train_ddp_process_5.log ADDED
@@ -0,0 +1,60 @@
+ [2026-02-25 10:39:19,922][dinov2][INFO] - using MLP layer as FFN
+ [2026-02-25 10:39:48,615][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
+ warnings.warn(
+
+ [2026-02-25 10:39:48,615][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=VGG16_Weights.IMAGENET1K_V1`. You can also use `weights=VGG16_Weights.DEFAULT` to get the most up-to-date weights.
+ warnings.warn(msg)
+
+ [2026-02-25 10:39:59,701][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torch/distributed/distributed_c10d.py:4807: UserWarning: No device id is provided via `init_process_group` or `barrier `. Using the current device set by the user.
+ warnings.warn( # warn only once
+
+ [2026-02-25 10:40:12,614][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torch/autograd/graph.py:829: UserWarning: Grad strides do not match bucket view strides. This may indicate grad was not created according to the gradient layout contract, or that the param's strides changed since DDP was constructed. This is not an error, but may impair performance.
+ grad.sizes() = [256, 256, 1, 1], strides() = [256, 1, 256, 256]
+ bucket_view.sizes() = [256, 256, 1, 1], strides() = [256, 1, 1, 1] (Triggered internally at /pytorch/torch/csrc/distributed/c10d/reducer.cpp:334.)
+ return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
+
+ [2026-02-25 10:40:13,463][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 10:41:49,305][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torch/optim/lr_scheduler.py:209: UserWarning: The epoch parameter in `scheduler.step()` was not necessary and is being deprecated where possible. Please use `scheduler.step()` to step the scheduler. During the deprecation, if epoch is different from None, the closed form is used instead of the new chainable form, where available. Please open an issue if you are unable to replicate your use case: https://github.com/pytorch/pytorch/issues/new/choose.
+ warnings.warn(EPOCH_DEPRECATION_WARNING, UserWarning)
+
+ [2026-02-25 10:52:55,865][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 11:05:44,777][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 11:18:29,814][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 11:31:13,251][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 11:43:54,528][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 11:56:39,756][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 12:09:23,941][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 12:22:24,117][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 12:35:14,407][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 12:47:56,084][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 13:00:42,869][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 13:13:32,570][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 13:26:12,524][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
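Editor's note: the one-time `graph.py:829` warning in each process log reports `grad` strides `[256, 1, 256, 256]` against a bucket view with strides `[256, 1, 1, 1]` for a `[256, 256, 1, 1]` conv weight. The pure-Python sketch below (no torch; `strides_for` is a helper invented for illustration) reproduces both numbers, on the assumption that the gradient was materialized in channels-last (NHWC) memory order while DDP's flat bucket uses contiguous (NCHW) order, which is the usual cause of this layout-contract mismatch.

```python
# Compute row-major strides for a shape when memory is traversed in a
# given axis order; this reproduces the stride pairs in the DDP warning.
def strides_for(shape, order):
    """Strides (in elements) for `shape` stored with `order` innermost-last."""
    strides = [0] * len(shape)
    step = 1
    for axis in reversed(order):  # innermost axis varies fastest
        strides[axis] = step
        step *= shape[axis]
    return strides

shape = [256, 256, 1, 1]                          # N, C, H, W of the 1x1 conv weight
contiguous = strides_for(shape, [0, 1, 2, 3])      # NCHW: DDP's bucket view
channels_last = strides_for(shape, [0, 2, 3, 1])   # NHWC: the produced grad
```

Under that assumption the mismatch is benign (as the warning itself says) but forces a copy into the bucket each backward pass; keeping the model and inputs in one memory format, or making the offending grads contiguous, would avoid it.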
ABLATION_0225_randomSelect/train_ddp_process_6.log ADDED
@@ -0,0 +1,60 @@
+ [2026-02-25 10:39:19,844][dinov2][INFO] - using MLP layer as FFN
+ [2026-02-25 10:39:46,759][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
+ warnings.warn(
+
+ [2026-02-25 10:39:46,759][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=VGG16_Weights.IMAGENET1K_V1`. You can also use `weights=VGG16_Weights.DEFAULT` to get the most up-to-date weights.
+ warnings.warn(msg)
+
+ [2026-02-25 10:39:59,701][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torch/distributed/distributed_c10d.py:4807: UserWarning: No device id is provided via `init_process_group` or `barrier `. Using the current device set by the user.
+ warnings.warn( # warn only once
+
+ [2026-02-25 10:40:12,842][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torch/autograd/graph.py:829: UserWarning: Grad strides do not match bucket view strides. This may indicate grad was not created according to the gradient layout contract, or that the param's strides changed since DDP was constructed. This is not an error, but may impair performance.
+ grad.sizes() = [256, 256, 1, 1], strides() = [256, 1, 256, 256]
+ bucket_view.sizes() = [256, 256, 1, 1], strides() = [256, 1, 1, 1] (Triggered internally at /pytorch/torch/csrc/distributed/c10d/reducer.cpp:334.)
+ return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
+
+ [2026-02-25 10:40:13,462][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 10:41:49,277][py.warnings][WARNING] - /venv/main/lib/python3.12/site-packages/torch/optim/lr_scheduler.py:209: UserWarning: The epoch parameter in `scheduler.step()` was not necessary and is being deprecated where possible. Please use `scheduler.step()` to step the scheduler. During the deprecation, if epoch is different from None, the closed form is used instead of the new chainable form, where available. Please open an issue if you are unable to replicate your use case: https://github.com/pytorch/pytorch/issues/new/choose.
+ warnings.warn(EPOCH_DEPRECATION_WARNING, UserWarning)
+
+ [2026-02-25 10:52:55,864][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 11:05:44,777][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 11:18:29,813][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 11:31:13,251][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 11:43:54,528][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 11:56:39,755][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 12:09:23,940][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 12:22:24,118][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 12:35:14,410][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
+ result[selector] = overlay
+
+ [2026-02-25 12:47:56,084][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
50
+ result[selector] = overlay
51
+
52
+ [2026-02-25 13:00:42,869][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
53
+ result[selector] = overlay
54
+
55
+ [2026-02-25 13:13:32,567][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
56
+ result[selector] = overlay
57
+
58
+ [2026-02-25 13:26:12,524][py.warnings][WARNING] - /workspace/code/CVPR2026/src/visualization/layout.py:105: UserWarning: Using a non-tuple sequence for multidimensional indexing is deprecated and will be changed in pytorch 2.9; use x[tuple(seq)] instead of x[seq]. In pytorch 2.9 this will be interpreted as tensor index, x[torch.tensor(seq)], which will result either in an error or a different result (Triggered internally at /pytorch/torch/csrc/autograd/python_variable_indexing.cpp:316.)
59
+ result[selector] = overlay
60
+
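The warning above points at `layout.py:105` (`result[selector] = overlay`) and recommends `x[tuple(seq)]` over `x[seq]`. A minimal sketch of the suggested fix, using NumPy (which has the analogous indexing rule) since the actual `layout.py` source is not shown here; the selector built as a list of slices is an assumption about what triggers the warning:

```python
import numpy as np

# Hypothetical setup mirroring the warning site: a selector built as a
# list of slices, then used for multidimensional indexing.
result = np.zeros((4, 4))
overlay = np.ones((2, 2))

selector = [slice(0, 2), slice(0, 2)]  # non-tuple sequence of indices

# Deprecated form (what the warning flags):  result[selector] = overlay
# Fixed form: convert the sequence to a tuple before indexing, so it is
# treated as one index per dimension rather than as an advanced index.
result[tuple(selector)] = overlay

print(result.sum())  # the 2x2 block was written: 4.0
```

With a plain list, newer PyTorch/NumPy interpret the sequence as a tensor/array index (advanced indexing), which selects whole rows instead of addressing one dimension per element; wrapping it in `tuple(...)` preserves the intended per-dimension semantics and silences the warning.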