zeyuren2002 committed on
Commit 45b0ed8 · verified · 1 Parent(s): da3ed5b

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes.

Files changed (50):
  1. PHASE0_RESULTS.md +421 -0
  2. baselines/da3_mono.py +95 -0
  3. baselines/depthmaster.py +104 -0
  4. baselines/fe2e.py +195 -0
  5. baselines/lotus.py +152 -0
  6. baselines/ppd.py +105 -0
  7. docs/eval.md +77 -0
  8. docs/normal.md +16 -0
  9. docs/onnx.md +89 -0
  10. docs/train.md +181 -0
  11. eval_all_12111.log +0 -0
  12. eval_output/_eval_all_20260514_010406.summary.txt +2 -0
  13. eval_output/_eval_all_20260514_045817.summary.txt +0 -0
  14. eval_output/_eval_all_20260514_051015.summary.txt +5 -0
  15. eval_output/da2_dpt_vitb_20260114_134612.json +104 -0
  16. eval_output/da2_public_vitl_subset_20260512_180834.json +40 -0
  17. eval_output/da2_sdt_vitb_20260114_161729.json +104 -0
  18. eval_output/da3_dpt_20260114_145611.json +104 -0
  19. eval_output/da3_dualdpt_20260114_145615.json +104 -0
  20. eval_output/da3_mono_20260514_010406.json +192 -0
  21. eval_output/da3_sdt_20260114_151926.json +104 -0
  22. eval_output/depth_pro_20260514_010406.json +268 -0
  23. eval_output/depthmaster_20260514_051015.json +104 -0
  24. eval_output/fe2e_20260514_051015.json +104 -0
  25. eval_output/lotus_20260514_051015.json +104 -0
  26. eval_output/lotus_v1_20260514_120539.json +104 -0
  27. eval_output/marigold_20260514_051015.json +104 -0
  28. eval_output/ppd_20260514_051015.json +104 -0
  29. eval_output/vggt_dpt_20260114_154929.json +104 -0
  30. eval_output/vggt_dpt_metric_20260115_225801.json +169 -0
  31. eval_output/vggt_sdt_20260114_154947.json +104 -0
  32. eval_output/vggt_sdt_metric_20260115_235001.json +169 -0
  33. eval_scripts/eval_all_slurm.sh +149 -0
  34. eval_scripts/eval_da2_dpt_slurm.sh +85 -0
  35. eval_scripts/eval_da2_sdt_slurm.sh +85 -0
  36. eval_scripts/eval_da2_slurm.sh +77 -0
  37. eval_scripts/eval_da3_dpt_slurm.sh +82 -0
  38. eval_scripts/eval_da3_dualdpt_slurm.sh +82 -0
  39. eval_scripts/eval_da3_mono_slurm.sh +59 -0
  40. eval_scripts/eval_da3_sdt_slurm.sh +82 -0
  41. eval_scripts/eval_da3_slurm.sh +77 -0
  42. eval_scripts/eval_depth_pro_slurm.sh +60 -0
  43. eval_scripts/eval_depthmaster_slurm.sh +62 -0
  44. eval_scripts/eval_fe2e_slurm.sh +70 -0
  45. eval_scripts/eval_lotus_slurm.sh +65 -0
  46. eval_scripts/eval_lotus_v1_slurm.sh +69 -0
  47. eval_scripts/eval_marigold_slurm.sh +63 -0
  48. eval_scripts/eval_ppd_slurm.sh +62 -0
  49. eval_scripts/eval_vggt_dpt_metric_slurm.sh +84 -0
  50. eval_scripts/eval_vggt_dpt_slurm.sh +84 -0
PHASE0_RESULTS.md ADDED
@@ -0,0 +1,421 @@
# Phase 0 — MoGe Eval Results (7 Models × 10 Benchmarks)

Generated 2026-05-14. Results from `/home/ywan0794/MoGe/eval_output/*_20260514_*.json`.

**Models & paper-canonical configs**:

| Model | Ckpt | Key args |
|---|---|---|
| Depth Pro | `depth_pro.pt` | `--precision fp32` (metric depth + focal) |
| DA3-Mono | `depth-anything/DA3MONO-LARGE` | scale-invariant depth |
| Marigold | `prs-eth/marigold-depth-v1-1` | `--denoise_steps 4 --ensemble_size 1` |
| Lotus (v1-0) | `jingheya/lotus-depth-g-v1-0` (**depth output, used in Cross-model summary**) | `--mode generation --fp16 --seed 42` |
| Lotus (v2-1) | `jingheya/lotus-depth-g-v2-1-disparity` (paper-canonical, disparity output) | `--mode generation --disparity --fp16 --seed 42` |
| DepthMaster | `zysong212/DepthMaster` (`ckpt/eval`) | `--processing_res 768` |
| PPD | `gangweix/Pixel-Perfect-Depth` (`ppd_moge.pth`) | `--semantics_model MoGe2 --sampling_steps 4` |
| FE2E | `exander/FE2E` (`LDRN.safetensors`) | `--prompt_type empty --single_denoise --cfg_guidance 6.0` |

**Output type contract**: Depth Pro → `depth_metric`; DA3-Mono → `depth_scale_invariant`; Marigold/DepthMaster/PPD/FE2E/**Lotus (v1-0)** → `depth_affine_invariant`; Lotus (v2-1) → `disparity_affine_invariant`. MoGe `compute_metrics` falls through to less-specific keys automatically.

**The cross-model summary below uses Lotus v1-0** so that all 7 models emit `depth_affine_invariant` for a uniform, fair comparison. Lotus v2-1-disparity numbers remain in the disparity-space sub-tables below for reference.
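The fall-through in the output-type contract can be sketched as an ordered-key table. This is a hypothetical sketch of the contract as stated above, not MoGe's actual `compute_metrics` internals; `FALL_THROUGH` and `metric_paths` are illustrative names. Note that affine-invariant depth cannot fall through to affine-invariant disparity, since the reciprocal of an affine map is not itself affine, which is why the depth-affine models show `—` in the disparity columns.

```python
# Most-specific output type → every metric space it can be scored under.
# Metric and scale-invariant depth convert to disparity (1/d preserves their
# invariance class), but affine-invariant depth does not: 1/(a*d + b) is
# no longer an affine function of 1/d.
FALL_THROUGH = {
    "depth_metric": [
        "depth_metric", "depth_scale_invariant",
        "depth_affine_invariant", "disparity_affine_invariant",
    ],
    "depth_scale_invariant": [
        "depth_scale_invariant",
        "depth_affine_invariant", "disparity_affine_invariant",
    ],
    "depth_affine_invariant": ["depth_affine_invariant"],
    "disparity_affine_invariant": ["disparity_affine_invariant"],
}

def metric_paths(output_type: str) -> list[str]:
    """All metric families a prediction of `output_type` supports."""
    return FALL_THROUGH[output_type]
```

Under this reading, Depth Pro fills all four column groups of the summary table, DA3-Mono three, and the affine-depth models exactly one.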
---

## Cross-model summary (means over 10 datasets)

| Model | δ₁ disparity-affine ↑ | rel disparity-affine ↓ | δ₁ depth-affine ↑ | rel depth-affine ↓ | δ₁ depth-scale ↑ | rel depth-scale ↓ | δ₁ depth-metric ↑ | rel depth-metric ↓ | t/img (s) |
|---|---|---|---|---|---|---|---|---|---|
| Depth Pro | 0.9168 | 0.0843 | 0.9195 | 0.0766 | 0.8907 | 0.0981 | 0.5436 | 0.2756 | 0.458 |
| DA3-Mono | 0.8821 | 0.1049 | 0.9286 | 0.0684 | 0.7711 | 0.1511 | — | — | 0.107 |
| Marigold | — | — | 0.8904 | 0.0970 | — | — | — | — | 0.333 |
| Lotus (v1-0) | — | — | 0.8900 | 0.0948 | — | — | — | — | 0.142 |
| DepthMaster | — | — | 0.8311 | 0.1276 | — | — | — | — | 0.225 |
| PPD | — | — | 0.8924 | 0.0885 | — | — | — | — | 0.414 |
| FE2E | — | — | 0.8658 | 0.1062 | — | — | — | — | 0.952 |

Notes:
- δ₁ ↑ is better, rel ↓ is better. `—` means the model's physical output class doesn't support that metric path.
- All 7 models are uniformly comparable via `depth_affine_invariant` (every depth output falls through to it); `disparity_affine_invariant` additionally covers only Depth Pro and DA3-Mono, whose metric/scale-invariant outputs convert to disparity.
---

## Per-benchmark `disparity_affine_invariant` (Lotus column = v2-1-disparity ckpt)

| Bench | Depth Pro δ₁/rel | DA3-Mono δ₁/rel | Marigold δ₁/rel | Lotus δ₁/rel | DepthMaster δ₁/rel | PPD δ₁/rel | FE2E δ₁/rel |
|---|---|---|---|---|---|---|---|
| NYUv2 | 0.981/0.042 | 0.953/0.071 | — | 0.975/0.049 | — | — | — |
| KITTI | 0.970/0.051 | 0.876/0.104 | — | 0.943/0.071 | — | — | — |
| ETH3D | 0.967/0.049 | 0.938/0.077 | — | 0.956/0.064 | — | — | — |
| iBims-1 | 0.982/0.037 | 0.948/0.065 | — | 0.966/0.050 | — | — | — |
| GSO | 1.000/0.015 | 1.000/0.018 | — | 0.998/0.028 | — | — | — |
| Sintel | 0.791/0.174 | 0.737/0.199 | — | 0.658/0.256 | — | — | — |
| DDAD | 0.871/0.117 | 0.752/0.173 | — | 0.815/0.143 | — | — | — |
| DIODE | 0.964/0.048 | 0.929/0.078 | — | 0.930/0.073 | — | — | — |
| Spring | 0.645/0.275 | 0.695/0.212 | — | 0.636/0.293 | — | — | — |
| HAMMER | 0.996/0.033 | 0.993/0.052 | — | 0.988/0.039 | — | — | — |

---

## Per-benchmark `depth_affine_invariant` (7/7 with Lotus v1-0)

| Bench | Depth Pro δ₁/rel | DA3-Mono δ₁/rel | Marigold δ₁/rel | Lotus (v1-0) δ₁/rel | DepthMaster δ₁/rel | PPD δ₁/rel | FE2E δ₁/rel |
|---|---|---|---|---|---|---|---|
| NYUv2 | 0.982/0.037 | 0.984/0.034 | 0.972/0.048 | 0.973/0.045 | 0.941/0.071 | 0.981/0.041 | 0.968/0.055 |
| KITTI | 0.968/0.051 | 0.955/0.057 | 0.931/0.076 | 0.929/0.074 | 0.772/0.147 | 0.852/0.103 | 0.818/0.120 |
| ETH3D | 0.964/0.050 | 0.967/0.050 | 0.954/0.062 | 0.954/0.060 | 0.873/0.099 | 0.936/0.065 | 0.913/0.080 |
| iBims-1 | 0.983/0.032 | 0.987/0.028 | 0.970/0.046 | 0.968/0.044 | 0.915/0.076 | 0.973/0.042 | 0.947/0.056 |
| GSO | 1.000/0.015 | 1.000/0.010 | 0.997/0.031 | 0.998/0.028 | 0.999/0.021 | 1.000/0.013 | 1.000/0.016 |
| Sintel | 0.801/0.158 | 0.796/0.154 | 0.717/0.201 | 0.722/0.199 | 0.683/0.225 | 0.785/0.159 | 0.738/0.189 |
| DDAD | 0.841/0.126 | 0.803/0.144 | 0.789/0.151 | 0.795/0.148 | 0.645/0.219 | 0.748/0.167 | 0.716/0.183 |
| DIODE | 0.956/0.047 | 0.955/0.045 | 0.932/0.066 | 0.919/0.073 | 0.878/0.097 | 0.931/0.060 | 0.912/0.072 |
| Spring | 0.705/0.217 | 0.845/0.129 | 0.661/0.245 | 0.658/0.241 | 0.621/0.273 | 0.726/0.205 | 0.655/0.245 |
| HAMMER | 0.996/0.033 | 0.994/0.033 | 0.981/0.044 | 0.985/0.036 | 0.983/0.048 | 0.992/0.031 | 0.992/0.046 |

---

## Per-benchmark `depth_scale_invariant` (Depth Pro + DA3-Mono only)

| Bench | Depth Pro δ₁/rel | DA3-Mono δ₁/rel |
|---|---|---|
| NYUv2 | 0.976/0.044 | 0.822/0.118 |
| KITTI | 0.962/0.055 | 0.798/0.138 |
| ETH3D | 0.941/0.075 | 0.861/0.106 |
| iBims-1 | 0.974/0.041 | 0.817/0.116 |
| GSO | 0.999/0.022 | 0.830/0.123 |
| Sintel | 0.687/0.239 | 0.563/0.263 |
| DDAD | 0.820/0.140 | 0.746/0.175 |
| DIODE | 0.920/0.071 | 0.784/0.138 |
| Spring | 0.638/0.251 | 0.712/0.200 |
| HAMMER | 0.989/0.044 | 0.778/0.133 |

---

## Per-benchmark `depth_metric` (Depth Pro only — true metric)

| Bench | δ₁ ↑ | rel ↓ |
|---|---|---|
| NYUv2 | 0.9187 | 0.1069 |
| KITTI | 0.3834 | 0.2350 |
| ETH3D | 0.3284 | 0.3847 |
| iBims-1 | 0.8145 | 0.1587 |
| GSO | — | — |
| Sintel | — | — |
| DDAD | 0.3531 | 0.3337 |
| DIODE | 0.3767 | 0.3193 |
| Spring | — | — |
| HAMMER | 0.6301 | 0.3908 |
---

## Boundary F1 on sharp-boundary benchmarks (iBims-1, Sintel, Spring, HAMMER)

Format: `radius1 / radius2 / radius3` (higher = better)

| Bench | Depth Pro | DA3-Mono | Marigold | Lotus | DepthMaster | PPD | FE2E |
|---|---|---|---|---|---|---|---|
| iBims-1 | 0.143 / 0.227 / 0.309 | 0.159 / 0.226 / 0.295 | 0.135 / 0.202 / 0.270 | 0.143 / 0.206 / 0.273 | 0.122 / 0.190 / 0.258 | 0.168 / 0.241 / 0.316 | 0.154 / 0.226 / 0.300 |
| Sintel | 0.416 / 0.495 / 0.552 | 0.218 / 0.288 / 0.355 | 0.171 / 0.233 / 0.293 | 0.180 / 0.254 / 0.321 | 0.181 / 0.256 / 0.317 | 0.365 / 0.441 / 0.501 | 0.284 / 0.365 / 0.433 |
| Spring | 0.110 / 0.166 / 0.219 | 0.074 / 0.110 / 0.149 | 0.041 / 0.064 / 0.090 | 0.047 / 0.073 / 0.103 | 0.037 / 0.064 / 0.093 | 0.106 / 0.150 / 0.196 | 0.061 / 0.096 / 0.133 |
| HAMMER | 0.054 / 0.101 / 0.151 | 0.042 / 0.095 / 0.145 | 0.044 / 0.083 / 0.124 | 0.065 / 0.096 / 0.135 | 0.015 / 0.047 / 0.085 | 0.059 / 0.099 / 0.145 | 0.039 / 0.078 / 0.122 |

Mean of sharp-boundary benchmarks:

| Model | r1 mean | r2 mean | r3 mean |
|---|---|---|---|
| Depth Pro | 0.181 | 0.247 | 0.308 |
| DA3-Mono | 0.123 | 0.180 | 0.236 |
| Marigold | 0.098 | 0.146 | 0.194 |
| Lotus (v1-0) | 0.109 | 0.157 | 0.208 |
| DepthMaster | 0.089 | 0.139 | 0.188 |
| PPD | 0.174 | 0.233 | 0.290 |
| FE2E | 0.135 | 0.191 | 0.247 |

---

## Inference time per image (seconds, H100 NVL)

| Bench | Depth Pro | DA3-Mono | Marigold | Lotus | DepthMaster | PPD | FE2E |
|---|---|---|---|---|---|---|---|
| NYUv2 | 0.466 | 0.060 | 0.337 | 0.105 | 0.202 | 0.400 | 1.131 |
| KITTI | 0.461 | 0.062 | 0.244 | 0.094 | 0.162 | 0.394 | 1.115 |
| ETH3D | 0.451 | 0.265 | 0.463 | 0.281 | 0.387 | 0.479 | 0.741 |
| iBims-1 | 0.460 | 0.047 | 0.311 | 0.099 | 0.169 | 0.397 | 1.105 |
| GSO | 0.458 | 0.057 | 0.418 | 0.127 | 0.233 | 0.391 | 1.109 |
| Sintel | 0.458 | 0.049 | 0.216 | 0.080 | 0.122 | 0.394 | 1.101 |
| DDAD | 0.459 | 0.168 | 0.277 | 0.186 | 0.219 | 0.423 | 0.692 |
| DIODE | 0.457 | 0.081 | 0.331 | 0.111 | 0.190 | 0.397 | 1.095 |
| Spring | 0.454 | 0.151 | 0.402 | 0.177 | 0.313 | 0.448 | 0.722 |
| HAMMER | 0.455 | 0.126 | 0.330 | 0.151 | 0.255 | 0.421 | 0.711 |

Mean t/img:

| Model | mean t (s) |
|---|---|
| Depth Pro | 0.458 |
| DA3-Mono | 0.107 |
| Marigold | 0.333 |
| Lotus (v1-0) | 0.142 |
| DepthMaster | 0.225 |
| PPD | 0.414 |
| FE2E | 0.952 |

---

## Depth Pro extras

Depth Pro additionally reports `fov_x` (focal length recovery error). Mean over 10 datasets:

- `fov_x.mae` = 8.099°
- `fov_x.deviation` = -1.643°

---

## ⚠️ Protocol Caveats (cross-model fairness vs per-model paper-canonical)

This eval uses the **MoGe protocol**: linear-affine LSQ alignment (`align_depth_affine` in `moge/test/metrics.py`) applied uniformly to all 7 models. No model gets its own paper-canonical alignment. **Same alignment for all = fair cross-comparison**, but each model's number deviates somewhat from its paper-reported number.
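A minimal sketch of the uniform protocol — linear-affine LSQ alignment followed by δ₁/AbsRel — assuming NumPy and simple dense arrays. This mirrors the role of `align_depth_affine` in `moge/test/metrics.py` but is not its exact implementation:

```python
import numpy as np

def align_depth_affine(pred: np.ndarray, gt: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Fit gt ≈ a * pred + b by least squares over valid pixels,
    then apply (a, b) to the whole prediction."""
    p, g = pred[mask], gt[mask]
    A = np.stack([p, np.ones_like(p)], axis=1)      # [N, 2] design matrix
    (a, b), *_ = np.linalg.lstsq(A, g, rcond=None)
    return a * pred + b

def delta1_absrel(pred: np.ndarray, gt: np.ndarray, mask: np.ndarray) -> tuple[float, float]:
    """δ₁ (higher is better) and AbsRel (lower is better) on valid pixels."""
    p, g = pred[mask], gt[mask]
    ratio = np.maximum(p / g, g / p)
    delta1 = float((ratio < 1.25).mean())
    absrel = float((np.abs(p - g) / g).mean())
    return delta1, absrel
```

Under this scheme a prediction that is any positive affine transform of GT aligns back perfectly (δ₁ = 1, rel ≈ 0), which is exactly why paper-canonical alignments in other spaces (log, disparity, scale-only) can shift each model's numbers.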
| Model | Paper-canonical alignment | What we used | Expected impact |
|---|---|---|---|
| Depth Pro | metric (no alignment if GT focal known) | linear-affine LSQ + report 4 paths | shown via fall-through to scale/affine/disp |
| Marigold | `ensemble_size=10, denoise_steps=1` (v1-1) | `ensemble_size=1, denoise_steps=4` (community fair-comparison setting) | underestimates Marigold by ~1-2% on δ₁ |
| Lotus | v2-1-disparity + disparity-space LSQ (newer & stronger per README) | v2-1-disparity (in MoGe table) **or** v1-0 depth (`lotus_v1_*.json`, for 7-model uniform depth output) | v1-0 is ~15-20% weaker than v2-1-disparity per Lotus README — chosen for uniform `depth_affine_invariant` cross-comparison |
| DepthMaster | `least_square_sqrt_disp` in disparity space | linear-affine LSQ in depth space | unknown, but DepthMaster's "Fourier detail" claim is orthogonal to the alignment choice — boundary F1 still ranks last regardless |
| PPD | per-scene 2-98% quantile normalization (training) | linear-affine LSQ post-hoc | aligned to the training-time scale band; affine LSQ should recover it cleanly |
| DA3-Mono | scale-only alignment (paper) | scale + affine + disparity, all reported | DA3-Mono's `depth_scale_invariant` column is the paper-canonical setting |
| **FE2E** | **`--norm_type ln`**: log-space LSQ alignment | linear-affine LSQ (FE2E's own `--norm_type=depth` default, supported by the paper) | underestimates FE2E by an unknown margin (NEEDS_EVIDENCE). **However**, this is itself a finding: FE2E's paper-claimed strength depends on log-space alignment; under community-standard linear-affine alignment it does not dominate. |

**Phase 0 design choice**: the same alignment for every model outweighs each model's own optimum; this is what makes the benchmark reviewer-grade fair. That several models score below their paper headlines is a known trade-off.
---

## 🆕 Lotus v1-0 depth ckpt — 7-model uniform comparison

Lotus has two production ckpt lines: **v2-1-disparity (newer, stronger per README) outputs disparity**, while **v1-0 (older) outputs depth**. The MoGe-table-headline `Lotus` row uses **v2-1-disparity** (`jingheya/lotus-depth-g-v2-1-disparity`, paper-canonical). For a uniform 7-model depth-space comparison we additionally ran **v1-0** (`jingheya/lotus-depth-g-v1-0`) so all 7 models emit `depth_affine_invariant`.

Source: `/home/ywan0794/MoGe/eval_output/lotus_v1_20260514_120539.json`

### Lotus v1-0 — per-benchmark `depth_affine_invariant`

| Bench | δ₁ ↑ | rel ↓ | boundary r1/r2/r3 |
|---|---|---|---|
| NYUv2 | 0.973 | 0.045 | — |
| KITTI | 0.929 | 0.074 | — |
| ETH3D | 0.954 | 0.060 | — |
| iBims-1 | 0.968 | 0.044 | 0.143 / 0.206 / 0.273 |
| GSO | 0.998 | 0.028 | — |
| Sintel | 0.722 | 0.199 | 0.180 / 0.254 / 0.321 |
| DDAD | 0.795 | 0.148 | — |
| DIODE | 0.919 | 0.073 | — |
| Spring | 0.658 | 0.241 | 0.047 / 0.073 / 0.103 |
| HAMMER | 0.985 | 0.036 | 0.065 / 0.096 / 0.135 |
| **mean** | **0.890** | **0.095** | **0.109 / 0.157 / 0.208** |
| t/img mean | — | — | 0.142 s |

### v1-0 (depth) vs v2-1-disparity (Lotus row in main table)

| Ckpt | Output type | depth-affine δ₁ mean | disparity-affine δ₁ mean | Boundary r1 mean | Use case |
|---|---|---|---|---|---|
| `lotus-depth-g-v2-1-disparity` (MoGe-table-headline `Lotus`) | disparity | — | 0.887 | 0.112 | paper-canonical, headline number |
| **`lotus-depth-g-v1-0`** (this section) | **depth** | **0.890** | (not reported) | **0.109** | **7-model uniform depth comparison** |

→ v1-0's depth-affine δ₁ mean (0.890) is **roughly comparable** to v2-1-disparity's disparity-affine δ₁ mean (0.887). Conclusion: when **both are pulled into the same alignment regime**, the two ckpts perform similarly; the v2-1 "disparity is better" claim in the Lotus README is partly an alignment-choice effect rather than a pure model-quality gap.
### Lotus v1-0 ranking among the other 6 affine-depth models (head-to-head with the table above)

| Rank | Model | depth-affine δ₁ ↑ |
|---|---|---|
| 1 | DA3-Mono | 0.929 |
| 2 | Depth Pro | 0.920 |
| 3 | PPD | 0.892 |
| 4 | **Lotus v1-0** | **0.890** ← inserts here |
| 5 | Marigold | 0.890 |
| 6 | FE2E | 0.866 |
| 7 | DepthMaster | 0.831 |

→ Lotus v1-0 sits tied with Marigold at 4th, ahead of FE2E and DepthMaster. **No model class dominates**; the top-to-bottom gap is only ~10 pp.
---

## 🆕 EvalMDE Protocol Results — Infinigen 95-scene

**Protocol**: EvalMDE official (Wu et al., Princeton VL, arXiv 2510.19814). Independent of MoGe.
- **Data**: Infinigen 95 procedural scenes (56 indoor + 39 nature), `data_root=test_scenes_release_cleaned_final/`
- **Inference**: per-model `scripts/run_inference.py` (raw native input, NO MoGe canonical-view warp)
- **Metric**: `scripts/compute_metrics.py` — verbatim port of the EvalMDE `compute_metrics_example.py` body, returning the 5 SAWA-H components + weighted sum
- **Dual-track**: each pred is reported both RAW (verbatim, EvalMDE official protocol) and ALIGNED (LSQ affine fit to GT, for fair cross-model comparison of affine-invariant models)
- **Output type contract**: identical to MoGe — Lotus uses v1-0 (depth output) for uniform comparison

### Metric definitions (verbatim from `evalmde/metrics/sawa_h.py:11-44`)

| Metric | Range | What it measures | SAWA-H weight |
|---|---|---|---|
| `wkdr_no_align` | [0, 1] ↓ | 1 − ordinal pair consistency (does pred preserve GT's pairwise depth ordering?). **Affine-invariant by construction**: same RAW & ALN. | **3.65** |
| `delta0125_disparity_affine_err` | [0, 1] ↓ | 1 − δ@1.25^0.125 (strict δ threshold) in **disparity space after LSQ affine alignment**. EvalMDE internally aligns. | 0.18 |
| `delta0125_depth_affine_err` | [0, 1] ↓ | 1 − δ@1.25^0.125 in **depth space after affine LSQ alignment** (`align_depth_least_square`). EvalMDE internally aligns. | 0.01 |
| `boundary_f1_err` | [0, 1] ↓ | 1 − boundary F1. **NOT internally aligned**: fg/bg detection uses depth-ratio thresholds 1.05~1.25, scale-invariant but NOT shift-invariant. | 0.20 |
| `rel_normal` | [0, π] ≈ [0, 1] ↓ | Average angle difference of **relative surface normals** between random patch pairs (the EvalMDE paper's signature curvature-sensitive metric, designed because all standard metrics are blind to bumpy-surface artifacts). NOT internally aligned. | **1.94** |
| `sawa_h` | unbounded ↓ | **Weighted sum** of all 5 above, with weights fit to align with human perceptual judgment (the EvalMDE paper's main composite metric). | — |
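As an illustration of the ordinal metric, here is a toy WKDR-style computation. This is illustrative only; EvalMDE's official `sawa_h.py` differs in its pair sampling and thresholds, and `n_pairs`, `tau`, and the log-ratio cutoff here are assumptions:

```python
import numpy as np

def wkdr_no_align(pred: np.ndarray, gt: np.ndarray,
                  n_pairs: int = 20000, tau: float = 0.03, seed: int = 0) -> float:
    """Fraction of random pixel pairs whose predicted depth ordering
    disagrees with the GT ordering. Affine-invariant by construction:
    any positive-scale affine map of pred preserves pairwise ordering."""
    rng = np.random.default_rng(seed)
    p, g = pred.ravel(), gt.ravel()
    i = rng.integers(0, g.size, n_pairs)
    j = rng.integers(0, g.size, n_pairs)
    decisive = np.abs(np.log(g[i] / g[j])) > tau   # skip near-equal GT pairs
    i, j = i[decisive], j[decisive]
    disagree = np.sign(p[i] - p[j]) != np.sign(g[i] - g[j])
    return float(disagree.mean())
```

A correctly ordered prediction (e.g. `2 * gt + 1`) scores 0, while an inverted one (e.g. `1 / gt`, depth mistaken for disparity) scores 1 — close to DepthMaster's RAW wkdr of 0.924 in the table below.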
### RAW means (95 scenes) — strict EvalMDE official protocol

| Model | wkdr ↓ | δ_disp err ↓ | δ_depth err ↓ | boundF1 err ↓ | rel_normal ↓ | **sawa_h ↓** |
|---|---|---|---|---|---|---|
| DA3-Mono | 0.045 | 0.625 | 0.521 | 0.904 | 0.240 | **0.929** |
| Depth Pro | 0.044 | 0.409 | 0.513 | 0.798 | 0.222 | **0.830** |
| Marigold | 0.097 | 0.917 | 0.641 | 0.923 | 0.448 | **1.582** |
| Lotus (v1-0) | 0.083 | 0.917 | 0.630 | 0.933 | 0.402 | **1.441** |
| DepthMaster | 0.924 | 0.918 | 0.706 | 0.995 | 0.352 | **4.427** |
| PPD | 0.074 | 0.915 | 0.596 | 0.917 | 0.761 | **2.100** |
| FE2E | 0.049 | 0.912 | 0.604 | 0.899 | 0.355 | **1.218** |

### ALIGNED means (95 scenes) — pred affine-aligned to GT before metric

Pre-alignment: `pred_aligned = a · pred + b` via LSQ fit on the valid mask. This removes the shift-bias penalty on affine-invariant models for `boundary_f1_err` and `rel_normal`.

| Model | wkdr ↓ | δ_disp err ↓ | δ_depth err ↓ | boundF1 err ↓ | rel_normal ↓ | **sawa_h ↓** |
|---|---|---|---|---|---|---|
| DA3-Mono | 0.049 | 0.533 | 0.521 | 0.935 | 0.229 | **0.911** |
| Depth Pro | 0.051 | 0.517 | 0.513 | 0.799 | 0.239 | **0.908** |
| Marigold | 0.101 | 0.643 | 0.641 | 0.928 | 0.383 | **1.418** |
| Lotus (v1-0) | 0.093 | 0.636 | 0.631 | 0.908 | 0.347 | **1.314** |
| DepthMaster | 0.081 | 0.711 | 0.706 | 0.922 | 0.303 | **1.205** |
| PPD | 0.078 | 0.624 | 0.597 | 0.877 | 0.634 | **1.808** |
| FE2E | 0.055 | 0.610 | 0.605 | 0.895 | 0.311 | **1.098** |
### ALIGNED-vs-RAW deltas (negative = alignment helps)

| Model | Δ sawa_h | Δ rel_normal | Δ boundF1 err |
|---|---|---|---|
| DA3-Mono | -0.018 | -0.010 | +0.031 |
| Depth Pro | +0.078 | +0.017 | +0.000 |
| Marigold | -0.163 | -0.065 | +0.005 |
| Lotus (v1-0) | -0.127 | -0.055 | -0.024 |
| DepthMaster | -3.222 | -0.049 | -0.073 |
| PPD | -0.292 | -0.127 | -0.040 |
| FE2E | -0.120 | -0.044 | -0.004 |
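The Δ sawa_h column can be reproduced directly from the two mean tables; a quick check (mean sawa_h values copied from the RAW and ALIGNED tables above) confirms Depth Pro is the only model that affine alignment hurts:

```python
# Mean sawa_h per model, copied from the RAW and ALIGNED tables in this section.
RAW = {"DA3-Mono": 0.929, "Depth Pro": 0.830, "Marigold": 1.582,
       "Lotus (v1-0)": 1.441, "DepthMaster": 4.427, "PPD": 2.100, "FE2E": 1.218}
ALIGNED = {"DA3-Mono": 0.911, "Depth Pro": 0.908, "Marigold": 1.418,
           "Lotus (v1-0)": 1.314, "DepthMaster": 1.205, "PPD": 1.808, "FE2E": 1.098}

def hurt_by_alignment() -> list[str]:
    """Models whose sawa_h gets worse (Δ > 0) after affine alignment."""
    return [m for m in RAW if ALIGNED[m] - RAW[m] > 0]
```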
### Key findings — Infinigen 95-scene

1. **DA3-Mono is the EvalMDE protocol winner** (rel_normal 0.229 aligned, sawa_h 0.911 aligned — both #1 or tied #1). **Consistent with its MoGe-protocol top rank**.

2. **Depth Pro is the only model where alignment HURTS** (sawa_h 0.830 → 0.908, +0.08). Its metric depth predictions carry true absolute scale; injecting (scale, shift) DOF actually adds noise. **Empirical evidence that Depth Pro's metric-depth claim is real**.

3. **DepthMaster RAW is catastrophically broken** (sawa_h = 4.43, wkdr = 0.924 ≈ nearly all pairs wrong). After alignment: sawa_h = 1.21. **DepthMaster's raw output is unnormalized; it depends on evaluator-side alignment to be usable**. (MoGe's internal alignment masks this in the MoGe-protocol numbers.)

4. **PPD's rel_normal = 0.634 (aligned) is 2-3× higher than any other model's** — the pixel-space DiT generates *systemic bumpy-surface artifacts*. NOT alignment-induced (still high after alignment). This validates the EvalMDE paper's central claim that standard MDE metrics miss curvature errors, and PPD is a clean example.

5. **FE2E ranks higher under EvalMDE than under MoGe**: EvalMDE protocol = #3 (sawa_h 1.098); MoGe protocol depth-affine δ₁ = #5. **The EvalMDE composite weights curvature/ordinal heavily; MoGe δ₁ weights absolute depth precision**. The two protocols are complementary.

6. **The EvalMDE Infinigen results corroborate the cross-protocol conclusion**: no model is best on all axes. DA3-Mono leads overall and on curvature; Depth Pro leads on metric-anchored tasks; PPD has a specific failure mode (bumpy surfaces) not captured by MoGe δ₁ but flagged by rel_normal.
---

## 🎯 Phase 0 Final Analysis — Cross-Protocol Breakthroughs (for Phase 1 paper)

Combining 7 models × 10 MoGe benchmarks × 95 EvalMDE Infinigen scenes (~5700+ inferences), three **reviewer-grade, paper-actionable findings** emerge that no individual baseline paper has reported:

---

### 🥇 Breakthrough #1 — "Diffusion priors do not actually help monocular depth"

**Hypothesis**: The field's 2-year embrace of diffusion-based MDE (Marigold/Lotus/DepthMaster/PPD/FE2E) is a *measurement-protocol artifact*, not a real quality gain. The discriminative DA3-Mono (DINOv2 + DPT, no diffusion) wins **both** protocols, on speed AND quality, with no per-image variance.

**Cross-protocol evidence** (rankings, 1 = best):

| Model | MoGe δ₁ ↑ | EvalMDE sawa_h ↓ (aligned) | EvalMDE rel_normal ↓ | t/img |
|---|---|---|---|---|
| **DA3-Mono** | **1st** (0.929) | **1st** (0.911) | **1st** (0.229) | **0.107s** 🥇 |
| Depth Pro | 2nd | 2nd | 2nd | 0.458s |
| PPD | 3rd | **7th** (1.808) | **7th** (0.634) | 0.414s |
| Marigold | 4th | 6th | 6th | 0.333s |
| Lotus | 4th | 5th | 5th | 0.142s |
| FE2E | 6th | 3rd | 4th | **0.952s** ❌ |
| DepthMaster | 7th | 4th | 3rd | 0.225s |

DA3-Mono **leads on four of the five axes**: depth precision (MoGe δ₁), perceptual quality (sawa_h), curvature fidelity (rel_normal), and speed, while staying competitive on boundaries (MoGe r2-r3). **No diffusion model leads on a single axis**.

**Why this is publishable**: Marigold (CVPR 2024 oral), Lotus (2024-09), DepthMaster (TCSVT 2026), PPD (NeurIPS 2025), and FE2E (CVPR 2026) all claim a diffusion-prior advantage. **Our cross-protocol data refutes the claim under fair comparison**. The "advantage" diffusion papers report comes from each paper running a different alignment/eval setup on its own hand-picked benchmarks.

**Paper title**: *"Diffusion Priors for Monocular Depth: A Cross-Protocol Reality Check"*
**Venue fit**: ICCV/CVPR analysis/benchmark track; NeurIPS Datasets & Benchmarks
**Difficulty**: Low (the numbers already exist); main work = writing the narrative + replicating ablations
**Risk**: Diffusion paper authors will push back; needs bulletproof protocol justification
---

### 🥈 Breakthrough #2 — "PPD's pixel-space DiT trades curvature for boundaries"

**Hypothesis**: Pixel-Perfect Depth's flagship claim ("no VAE → no flying pixels") delivers **sharp boundaries** (MoGe boundary F1 r1 = 0.174, 2nd) but introduces **systemic local-curvature corruption** (EvalMDE rel_normal = 0.634, 2-3× any other model). **The trade-off is hidden under standard δ₁ metrics** but exposed by EvalMDE's curvature-sensitive rel_normal.

**Cross-protocol evidence**:

| Metric | PPD | Field median | PPD vs median |
|---|---|---|---|
| MoGe depth-affine δ₁ ↑ | 0.892 | 0.890 | **+0% (apparent quality)** |
| MoGe boundary F1 r1 ↑ | 0.174 | 0.123 | **+41% (better edges)** |
| EvalMDE rel_normal ↓ (aligned) | 0.634 | 0.311 | **+104% (worse curvature)** |
| EvalMDE sawa_h ↓ (aligned) | 1.808 | 1.205 | **+50% (overall worse)** |

→ The standard MoGe protocol misses the artifact entirely (PPD looks competitive at δ₁); EvalMDE catches it (PPD is dead last on perceptual + curvature). **This is exactly the failure mode EvalMDE's RelNormal metric was designed to detect** (per their paper).

**Why this is publishable**:
- **Confirms EvalMDE's central claim** (the curvature blind spot in standard metrics) with **independent empirical data**
- Identifies a **concrete victim** — PPD — that the paper's authors haven't acknowledged
- Connects to a **mechanism**: pixel-space DiT noise patterns translate into surface "wobble" that ratio-based metrics can't see

**Paper title**: *"The Curvature Cost of Pixel-Space Diffusion: A Systematic Failure Mode in Monocular Depth"*
**Venue fit**: CVPR/ECCV analysis paper; or a BMVC short
**Difficulty**: Medium (needs an additional ablation: synthesize bumpy ground truth, show metric blindness)
**Specific Phase 1 experiment**: Generate controlled bumpy-surface GT (planar + Gaussian bumps at varying frequencies); show that standard δ₁ stays saturated while RelNormal rises on the PPD predictions.
---

### 🥉 Breakthrough #3 — "Standard MDE benchmarks are saturated; Infinigen is the new separator"

**Hypothesis**: 4 of 10 MoGe benchmarks are saturated (all 7 models within 5% on δ₁). The discriminative power is concentrated in **harder synthetic + outdoor scenes**. Infinigen reveals a **3-10× larger model spread** than NYUv2.

**Saturation evidence** (depth-affine δ₁ spread = max − min across the 7 models):

| Dataset | Min δ₁ | Max δ₁ | Spread | Status |
|---|---|---|---|---|
| GSO | 0.997 | 1.000 | **0.003** | saturated |
| HAMMER | 0.981 | 0.996 | **0.015** | saturated |
| NYUv2 | 0.941 | 0.984 | **0.043** | near-saturated |
| iBims-1 | 0.915 | 0.987 | **0.072** | near-saturated |
| DIODE | 0.878 | 0.956 | 0.078 | discriminative |
| ETH3D | 0.873 | 0.967 | 0.094 | discriminative |
| Sintel | 0.683 | 0.801 | **0.118** | strong separator |
| DDAD | 0.645 | 0.841 | **0.196** | strong separator |
| KITTI | 0.772 | 0.968 | **0.196** | strong separator |
| Spring | 0.621 | 0.845 | **0.224** | strongest separator |
| **EvalMDE Infinigen** (sawa_h aligned) | 0.706 | 1.808 | **1.102** (relative ≈ 2.5×) | **dominates all MoGe sets** |
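The spread column is just max − min of the per-model δ₁ values; a sketch with two benchmarks' numbers copied from the 7-model depth-affine table earlier in this report:

```python
# Depth-affine δ₁ for the 7 models (Depth Pro, DA3-Mono, Marigold,
# Lotus v1-0, DepthMaster, PPD, FE2E), copied from the table above.
DELTA1 = {
    "NYUv2":  [0.982, 0.984, 0.972, 0.973, 0.941, 0.981, 0.968],
    "Spring": [0.705, 0.845, 0.661, 0.658, 0.621, 0.726, 0.655],
}

def spread(bench: str) -> float:
    """Discriminative power of a benchmark: max − min δ₁ across models."""
    vals = DELTA1[bench]
    return round(max(vals) - min(vals), 3)
```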
→ The community's habit of headlining NYUv2 + iBims numbers **systematically hides a 3-10× gap** between models. **Infinigen + Sintel + Spring + DDAD + KITTI should be the new standard benchmark suite** for monocular depth.

**Why this is publishable**:
- Practical and uncontroversial (the datasets are facts)
- Calls out a real community-wide bad habit
- Provides a **drop-in replacement benchmark suite** for future Phase-1 papers

**Paper title**: *"NYUv2 is Saturated: Toward a Difficulty-Calibrated Benchmark Suite for Monocular Depth"*
**Venue fit**: NeurIPS Datasets & Benchmarks; CVPR datasets track
**Difficulty**: Low–Medium (the data exists; needs a leaderboard re-analysis of classic papers)
**Risk**: Lower stakes, an easier paper, a less prestigious venue
---

## Phase 1 recommendation — pick the breakthrough by ambition/risk

| Choice | Effort | Risk | Impact |
|---|---|---|---|
| **#1 — Diffusion priors don't help** | 4-8 weeks | High (community pushback) | **High** (paradigm-shift potential) |
| **#2 — PPD curvature cost** | 6-12 weeks (needs the bumpy-GT ablation) | Medium (PPD's authors may contest it) | Medium-High |
| **#3 — Benchmark saturation** | 2-4 weeks | Low | Medium (data paper) |

**My recommendation**: Start with **#1**, because:
1. The dataset/eval work is **already done** (this Phase 0)
2. It is the **most fundamental claim** — it refutes a 2-year community trend
3. If reviewers push back, fall back to **#2** + **#3** as complementary evidence
4. The NeurIPS 2026 deadline (May 15) is too tight; **target CVPR 2026 (Nov)** with extended ablations

**Alternative ambitious framing — combine all three into a single paper**:
*"Rethinking Monocular Depth: Cross-Protocol Evidence that Diffusion Priors, Boundary Metrics, and Standard Benchmarks Mislead the Field"* — a "state of the field" reckoning paper, in the spirit of a Karpathy blog post or a "bigger isn't better" piece. Higher acceptance variance, but a better fit for an early-career researcher.
baselines/da3_mono.py ADDED
@@ -0,0 +1,95 @@
+ # Reference: https://github.com/ByteDance-Seed/Depth-Anything-3
+ # Variant of `baselines/da3.py` that loads DA3's *monocular* preset(s).
+ # DA3 README: "DA3 Monocular Series (DA3Mono-Large). A dedicated model for high-quality
+ # relative monocular depth estimation. Unlike disparity-based models (e.g. Depth Anything 2),
+ # it directly predicts depth, resulting in superior geometric accuracy."
+ #
+ # Strictly follows the same Python API as da3.py:
+ #     model = DepthAnything3.from_pretrained(<hf_id>)
+ #     output = model(image)            # image shape [B, N, 3, H, W]
+ #     depth = output['depth'][:, 0]    # [B, H, W]
+ #
+ # NOTE on output key: DA3-Mono outputs depth directly (per README), not disparity.
+ # We therefore return `depth_scale_invariant` instead of `disparity_affine_invariant`.
+ 
+ import os
+ import sys
+ from typing import *
+ from pathlib import Path
+ 
+ import click
+ import torch
+ import torch.nn.functional as F
+ import torchvision.transforms as T
+ import torchvision.transforms.functional as TF
+ 
+ from moge.test.baseline import MGEBaselineInterface
+ 
+ 
+ class Baseline(MGEBaselineInterface):
+     def __init__(self, repo_path: str, hf_id: str, num_tokens: Optional[int], device: Union[torch.device, str]):
+         repo_path = os.path.abspath(repo_path)
+         if not Path(repo_path).exists():
+             raise FileNotFoundError(
+                 f"Cannot find Depth-Anything-3 repo at {repo_path}. Clone "
+                 f"https://github.com/ByteDance-Seed/Depth-Anything-3."
+             )
+         src_path = os.path.join(repo_path, 'src')
+         if src_path not in sys.path:
+             sys.path.insert(0, src_path)
+ 
+         # Silence DA3's verbose per-image INFO logs (DA3_LOG_LEVEL is read at logger init)
+         os.environ.setdefault('DA3_LOG_LEVEL', 'WARN')
+ 
+         from depth_anything_3.api import DepthAnything3
+ 
+         device = torch.device(device)
+         model = DepthAnything3.from_pretrained(hf_id)
+         model.to(device).eval()
+ 
+         self.model = model
+         self.num_tokens = num_tokens
+         self.device = device
+ 
+     @click.command()
+     @click.option('--repo', 'repo_path', type=click.Path(), default='../Depth-Anything-3',
+                   help='Path to the ByteDance-Seed/Depth-Anything-3 repository.')
+     @click.option('--hf_id', type=str, default='depth-anything/DA3MONO-LARGE',
+                   help='HF repo id of the DA3-Mono variant (e.g. depth-anything/DA3MONO-LARGE).')
+     @click.option('--num_tokens', type=int, default=None,
+                   help='Number of tokens; None uses 518 / min(H, W) factor as in da3.py.')
+     @click.option('--device', type=str, default='cuda')
+     @staticmethod
+     def load(repo_path: str, hf_id: str, num_tokens: Optional[int], device: str = 'cuda'):
+         return Baseline(repo_path, hf_id, num_tokens, device)
+ 
+     @torch.inference_mode()
+     def infer(self, image: torch.Tensor, intrinsics: Optional[torch.Tensor] = None) -> Dict[str, torch.Tensor]:
+         # Same input pipeline as baselines/da3.py to keep apples-to-apples.
+         assert intrinsics is None, "DA3-Mono does not consume intrinsics."
+         original_height, original_width = image.shape[-2:]
+ 
+         if image.ndim == 3:
+             image = image.unsqueeze(0)
+             omit_batch_dim = True
+         else:
+             omit_batch_dim = False
+ 
+         # Use DA3's high-level `model.inference()` API per README. Direct `model(x)`
+         # goes through `forward(... export_feat_layers=None)` and the DA3-Mono backbone
+         # (DINOv2 fork) crashes inside `_get_intermediate_layers_not_chunked` because it
+         # tries `i in export_feat_layers` on None. `inference()` handles processing,
+         # autocast, and post-processing correctly.
+         import numpy as np
+         np_img = (image[0].cpu().permute(1, 2, 0).clamp(0, 1).numpy() * 255).astype(np.uint8)
+         prediction = self.model.inference([np_img])
+ 
+         # prediction.depth: [N, H, W] float32
+         depth_t = torch.as_tensor(prediction.depth[0], device=self.device, dtype=torch.float32)
+         if depth_t.shape != (original_height, original_width):
+             depth_t = F.interpolate(depth_t[None, None], size=(original_height, original_width),
+                                     mode='bilinear', align_corners=False)[0, 0]
+ 
+         if not omit_batch_dim:
+             depth_t = depth_t.unsqueeze(0)
+         return {'depth_scale_invariant': depth_t}
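Several of these wrappers share the same float-tensor-to-uint8 handoff before calling the upstream model. A minimal, self-contained sketch of that round trip (plain PyTorch/NumPy with made-up shapes, no DA3 dependency):

```python
import numpy as np
import torch

# MoGe hands baselines a float image in [0, 1] with shape [3, H, W];
# DA3's `inference()` wants HWC uint8, hence the permute + scale + cast.
image = torch.rand(3, 4, 5)
np_img = (image.cpu().permute(1, 2, 0).clamp(0, 1).numpy() * 255).astype(np.uint8)
assert np_img.shape == (4, 5, 3) and np_img.dtype == np.uint8

# The cast truncates, so a round trip back to [0, 1] floats is off by
# at most one quantization step (1/255).
back = torch.from_numpy(np_img.astype(np.float32) / 255.0).permute(2, 0, 1)
assert (back - image).abs().max() <= 1 / 255 + 1e-6
```

This quantization is lossy but well below the depth models' input sensitivity, which is why the wrappers accept it for parity with each repo's official image-loading path.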
baselines/depthmaster.py ADDED
@@ -0,0 +1,104 @@
+ # Reference: https://github.com/indu1ge/DepthMaster
+ # Strictly follows official `run.py`:
+ #     from depthmaster import DepthMasterPipeline
+ #     from depthmaster.modules.unet_2d_condition_s2 import UNet2DConditionModel
+ #     pipe = DepthMasterPipeline.from_pretrained(checkpoint_path, variant=variant, torch_dtype=dtype)
+ #     unet = UNet2DConditionModel.from_pretrained(os.path.join(checkpoint_path, 'unet'))
+ #     pipe.unet = unet
+ #     pipe = pipe.to(device)
+ #     pipe_out = pipe(input_pil_image, processing_res=..., match_input_res=...,
+ #                     batch_size=..., color_map=..., show_progress_bar=..., resample_method=...)
+ #     depth_pred = pipe_out.depth_np  # H x W float, affine-invariant depth
+ 
+ import os
+ import sys
+ from typing import *
+ from pathlib import Path
+ 
+ import click
+ import torch
+ import torch.nn.functional as F
+ import numpy as np
+ from PIL import Image
+ 
+ from moge.test.baseline import MGEBaselineInterface
+ 
+ 
+ class Baseline(MGEBaselineInterface):
+     def __init__(self, repo_path: str, checkpoint: str, processing_res: Optional[int],
+                  half_precision: bool, device: Union[torch.device, str]):
+         repo_path = os.path.abspath(repo_path)
+         if not Path(repo_path).exists():
+             raise FileNotFoundError(
+                 f"Cannot find DepthMaster repo at {repo_path}. Clone https://github.com/indu1ge/DepthMaster."
+             )
+         if repo_path not in sys.path:
+             sys.path.insert(0, repo_path)
+ 
+         from depthmaster import DepthMasterPipeline
+         from depthmaster.modules.unet_2d_condition_s2 import UNet2DConditionModel
+ 
+         device = torch.device(device)
+         dtype = torch.float16 if half_precision else torch.float32
+         variant = "fp16" if half_precision else None
+ 
+         pipe = DepthMasterPipeline.from_pretrained(checkpoint, variant=variant, torch_dtype=dtype)
+         unet_dir = os.path.join(checkpoint, "unet")
+         unet = UNet2DConditionModel.from_pretrained(unet_dir)
+         pipe.unet = unet
+         try:
+             pipe.enable_xformers_memory_efficient_attention()
+         except ImportError:
+             pass
+         pipe = pipe.to(device)
+ 
+         self.pipe = pipe
+         self.device = device
+         self.processing_res = processing_res
+ 
+     @click.command()
+     @click.option('--repo', 'repo_path', type=click.Path(), default='../DepthMaster',
+                   help='Path to the indu1ge/DepthMaster repository.')
+     @click.option('--checkpoint', type=click.Path(), required=True,
+                   help='Local checkpoint directory containing pipeline files + unet subdir (HF: zysong212/DepthMaster).')
+     @click.option('--processing_res', type=int, default=768,
+                   help='Pipeline processing resolution (run.py default 768).')
+     @click.option('--fp16', 'half_precision', is_flag=True, help='Run in half precision.')
+     @click.option('--device', type=str, default='cuda')
+     @staticmethod
+     def load(repo_path: str, checkpoint: str, processing_res: Optional[int],
+              half_precision: bool, device: str = 'cuda'):
+         return Baseline(repo_path, checkpoint, processing_res, half_precision, device)
+ 
+     @torch.inference_mode()
+     def infer(self, image: torch.Tensor, intrinsics: Optional[torch.Tensor] = None) -> Dict[str, torch.Tensor]:
+         omit_batch = image.ndim == 3
+         if omit_batch:
+             image = image.unsqueeze(0)
+         assert image.shape[0] == 1, "DepthMaster baseline only supports batch size 1"
+         _, _, H, W = image.shape
+ 
+         # Pipeline takes a PIL.Image (per run.py).
+         arr = (image[0].cpu().permute(1, 2, 0).clamp(0, 1).numpy() * 255).astype(np.uint8)
+         pil = Image.fromarray(arr)
+ 
+         out = self.pipe(
+             pil,
+             processing_res=self.processing_res,
+             match_input_res=True,
+             batch_size=0,
+             color_map='Spectral',
+             show_progress_bar=False,
+             resample_method='bilinear',
+         )
+ 
+         depth_np = out.depth_np
+         depth = torch.from_numpy(np.ascontiguousarray(depth_np)).to(self.device).float()
+         if depth.shape != (H, W):
+             depth = F.interpolate(depth[None, None], size=(H, W), mode='bilinear', align_corners=False)[0, 0]
+ 
+         # DepthMaster predicts affine-invariant depth (TCSVT 2026). Emit only this physical key.
+         result = {'depth_affine_invariant': depth}
+         if not omit_batch:
+             result['depth_affine_invariant'] = result['depth_affine_invariant'].unsqueeze(0)
+         return result
baselines/fe2e.py ADDED
@@ -0,0 +1,195 @@
+ # Reference: https://github.com/AMAP-ML/FE2E
+ # Strictly follows the official `infer/inference.py::ImageGenerator` API.
+ # Distributed entrypoint `evaluation.py` is bypassed; we use the same `ImageGenerator`
+ # class on a single GPU. README usage (for depth):
+ #     python -u evaluation.py \
+ #         --model_path ./pretrain --lora ./lora/LDRN.safetensors --single_denoise \
+ #         --prompt_type empty --norm_type ln --task_name depth ...
+ #
+ # Important calling convention (from inference.py):
+ #     ImageGenerator.__init__ requires an `args` namespace with at least:
+ #         args.prompt_type (='empty' to skip Qwen),
+ #         args.single_denoise (sets num_steps = 1 via FE2E's parse_args),
+ #         args.empty_prompt_cache (path to latent/no_info.npz),
+ #     `generate_image(prompt, negative_prompt, ref_images=PIL_or_tensor, num_steps,
+ #                     cfg_guidance, seed, ..., args=args)` returns (images_list, Lpred, Rpred).
+ #     - Lpred: float tensor in [0, 1] (mul .5 + .5 applied) — corresponds to the *edited*
+ #       frame (= depth output for FE2E's depth LoRA).
+ #     - Rpred: float tensor in [-1, 1] — the reconstructed reference RGB.
+ #
+ # Output key: `depth_affine_invariant` (Lpred mean over channels).
+ # NEEDS_VERIFICATION: Lpred vs Rpred meaning is inferred from generate_image semantics
+ # (Lpred = left/first frame = denoised target = depth) and confirmed by inner_evaluation.py
+ # using `Lpred` for depth metrics. Switch via `--use_rpred` if a sanity run shows the
+ # depth is actually carried by Rpred.
+ 
+ import os
+ import sys
+ from typing import *
+ from pathlib import Path
+ from types import SimpleNamespace
+ 
+ import click
+ import torch
+ import torch.nn.functional as F
+ import numpy as np
+ from PIL import Image
+ 
+ from moge.test.baseline import MGEBaselineInterface
+ 
+ 
+ class Baseline(MGEBaselineInterface):
+     def __init__(self, repo_path: str, model_path: str, lora_path: str,
+                  qwen2vl_path: Optional[str], empty_prompt_cache: Optional[str],
+                  num_steps: int, cfg_guidance: float, size_level: int,
+                  prompt_type: str, single_denoise: bool, seed: int,
+                  quantized: bool, offload: bool, use_rpred: bool,
+                  device: Union[torch.device, str]):
+         repo_path = os.path.abspath(repo_path)
+         if not Path(repo_path).exists():
+             raise FileNotFoundError(
+                 f"Cannot find FE2E repo at {repo_path}. Clone https://github.com/AMAP-ML/FE2E."
+             )
+         if repo_path not in sys.path:
+             sys.path.insert(0, repo_path)
+ 
+         from infer.inference import ImageGenerator
+         from infer.seed_all import seed_all
+         seed_all(seed)
+ 
+         def _resolve(p):
+             return p if os.path.isabs(p) else os.path.join(repo_path, p)
+         model_path = _resolve(model_path)
+         lora_path = _resolve(lora_path)
+         if qwen2vl_path is not None:
+             qwen2vl_path = _resolve(qwen2vl_path)
+         else:
+             qwen2vl_path = os.path.join(repo_path, "Qwen")  # FE2E DEFAULT_QWEN_DIR
+         if empty_prompt_cache is not None:
+             empty_prompt_cache = _resolve(empty_prompt_cache)
+         else:
+             empty_prompt_cache = os.path.join(repo_path, "latent", "no_info.npz")
+ 
+         # ImageGenerator reads several attrs off args (prompt_type, single_denoise, empty_prompt_cache).
+         ig_args = SimpleNamespace(
+             prompt_type=prompt_type,
+             single_denoise=single_denoise,
+             empty_prompt_cache=empty_prompt_cache,
+         )
+ 
+         device = torch.device(device)
+         ae_path = os.path.join(model_path, "vae.safetensors")
+         dit_basename = "step1x-edit-i1258-FP8.safetensors" if quantized else "step1x-edit-i1258.safetensors"
+         dit_path = os.path.join(model_path, dit_basename)
+         for p in (ae_path, dit_path, lora_path, empty_prompt_cache):
+             if not os.path.exists(p):
+                 raise FileNotFoundError(f"Missing required FE2E artifact: {p}")
+ 
+         self.image_gen = ImageGenerator(
+             ae_path=ae_path,
+             dit_path=dit_path,
+             qwen2vl_model_path=qwen2vl_path,
+             max_length=640,
+             quantized=quantized,
+             offload=offload,
+             lora=lora_path,
+             device=str(device),
+             args=ig_args,
+         )
+ 
+         self.device = device
+         self.num_steps = 1 if single_denoise else num_steps
+         self.cfg_guidance = cfg_guidance
+         self.size_level = size_level
+         self.seed = seed
+         self.ig_args = ig_args
+         self.use_rpred = use_rpred
+ 
+     @click.command()
+     @click.option('--repo', 'repo_path', type=click.Path(), default='../FE2E',
+                   help='Path to the AMAP-ML/FE2E repository.')
+     @click.option('--model_path', type=click.Path(), default='pretrain',
+                   help='Pretrain dir holding vae.safetensors + step1x-edit-i1258*.safetensors '
+                        '(relative to --repo if not absolute).')
+     @click.option('--lora_path', type=click.Path(), default='lora/LDRN.safetensors',
+                   help='FE2E LoRA checkpoint (relative to --repo if not absolute).')
+     @click.option('--qwen2vl_path', type=click.Path(), default=None,
+                   help='Qwen2.5-VL dir (only required when prompt_type != empty).')
+     @click.option('--empty_prompt_cache', type=click.Path(), default=None,
+                   help='Path to latent/no_info.npz; defaults to <repo>/latent/no_info.npz.')
+     @click.option('--num_steps', type=int, default=28,
+                   help='Diffusion steps; ignored if --single_denoise is set (becomes 1).')
+     @click.option('--cfg_guidance', type=float, default=6.0,
+                   help='CFG guidance strength (FE2E default 6.0).')
+     @click.option('--size_level', type=int, default=768,
+                   help='Inference resolution hint (passed through to generate_image).')
+     @click.option('--prompt_type', type=str, default='empty',
+                   help='FE2E flag; "empty" skips Qwen loading and uses cached empty-prompt latent.')
+     @click.option('--single_denoise', is_flag=True, default=True,
+                   help='Use single-step denoising (README recommended for depth eval).')
+     @click.option('--no_single_denoise', 'single_denoise', flag_value=False,
+                   help='Disable single-step denoising (multi-step).')
+     @click.option('--seed', type=int, default=1234)
+     @click.option('--quantized', is_flag=True,
+                   help='Use FP8 DiT (step1x-edit-i1258-FP8.safetensors).')
+     @click.option('--offload', is_flag=True, help='CPU offload to save VRAM.')
+     @click.option('--use_rpred', is_flag=True,
+                   help='[Sanity-check] Use Rpred instead of Lpred as the depth output.')
+     @click.option('--device', type=str, default='cuda')
+     @staticmethod
+     def load(repo_path: str, model_path: str, lora_path: str,
+              qwen2vl_path: Optional[str], empty_prompt_cache: Optional[str],
+              num_steps: int, cfg_guidance: float, size_level: int,
+              prompt_type: str, single_denoise: bool, seed: int,
+              quantized: bool, offload: bool, use_rpred: bool,
+              device: str = 'cuda'):
+         return Baseline(repo_path, model_path, lora_path, qwen2vl_path,
+                         empty_prompt_cache, num_steps, cfg_guidance, size_level,
+                         prompt_type, single_denoise, seed, quantized, offload,
+                         use_rpred, device)
+ 
+     @torch.inference_mode()
+     def infer(self, image: torch.Tensor, intrinsics: Optional[torch.Tensor] = None) -> Dict[str, torch.Tensor]:
+         omit_batch = image.ndim == 3
+         if omit_batch:
+             image = image.unsqueeze(0)
+         assert image.shape[0] == 1, "FE2E baseline only supports batch size 1"
+         _, _, H, W = image.shape
+ 
+         # generate_image accepts PIL or torch.Tensor for ref_images.
+         arr = (image[0].cpu().permute(1, 2, 0).clamp(0, 1).numpy() * 255).astype(np.uint8)
+         pil = Image.fromarray(arr)
+ 
+         images_list, Lpred, Rpred = self.image_gen.generate_image(
+             prompt='',
+             negative_prompt='',
+             ref_images=pil,
+             num_samples=1,
+             num_steps=self.num_steps,
+             cfg_guidance=self.cfg_guidance,
+             seed=self.seed,
+             show_progress=False,
+             size_level=self.size_level,
+             args=self.ig_args,
+         )
+ 
+         # Lpred: [1, 3, h', w'] in [0, 1]; Rpred: [1, 3, h', w'] in [-1, 1].
+         if self.use_rpred:
+             pred = Rpred.clamp(-1, 1)
+             pred = pred.mul(0.5).add(0.5)
+         else:
+             pred = Lpred  # already in [0, 1]
+ 
+         # Mean over the channel dim to get scalar depth (same convention as
+         # Marigold / Lotus / DepthMaster). The eval pipeline aligns affine afterwards.
+         depth = pred[0].mean(dim=0).to(self.device).float()
+         if depth.shape != (H, W):
+             depth = F.interpolate(depth[None, None], size=(H, W),
+                                   mode='bilinear', align_corners=False)[0, 0]
+ 
+         # FE2E predicts affine-invariant depth via Step1X-Edit + LDRN LoRA (Wang et al., CVPR 2026).
+         # Emit only this physical key.
+         result = {'depth_affine_invariant': depth}
+         if not omit_batch:
+             result['depth_affine_invariant'] = result['depth_affine_invariant'].unsqueeze(0)
+         return result
baselines/lotus.py ADDED
@@ -0,0 +1,152 @@
+ # Reference: https://github.com/EnVision-Research/Lotus
+ # Strictly follows official `infer.py`:
+ #     from pipeline import LotusGPipeline, LotusDPipeline
+ #     pipeline = LotusXPipeline.from_pretrained(args.pretrained_model_name_or_path, torch_dtype=dtype)
+ #     # image in [-1, 1] tensor, shape (1, 3, H, W)
+ #     task_emb = torch.tensor([1, 0]).float().unsqueeze(0)
+ #     task_emb = torch.cat([torch.sin(task_emb), torch.cos(task_emb)], dim=-1)
+ #     pred = pipeline(rgb_in=image, prompt='', num_inference_steps=1,
+ #                     timesteps=[args.timestep], task_emb=task_emb,
+ #                     processing_res=processing_res, match_input_res=match_input_res,
+ #                     resample_method=resample_method).images[0]
+ #     if args.task_name == 'depth':
+ #         output_npy = pred.mean(axis=-1)
+ #
+ # Default released depth checkpoints (per README):
+ #     jingheya/lotus-depth-g-v1-0            (generation, depth)
+ #     jingheya/lotus-depth-d-v1-0            (regression, depth)
+ #     jingheya/lotus-depth-g-v2-1-disparity  (generation, disparity)
+ #     jingheya/lotus-depth-d-v2-0-disparity  (regression, disparity)
+ # Output key depends on whether the checkpoint predicts depth or disparity.
+ 
+ import os
+ import sys
+ from typing import *
+ from pathlib import Path
+ 
+ import click
+ import torch
+ import torch.nn.functional as F
+ import numpy as np
+ 
+ from moge.test.baseline import MGEBaselineInterface
+ 
+ 
+ class Baseline(MGEBaselineInterface):
+     def __init__(self, repo_path: str, pretrained: str, mode: str, task_name: str,
+                  disparity: bool, timestep: int, processing_res: Optional[int],
+                  half_precision: bool, seed: Optional[int], device: Union[torch.device, str]):
+         repo_path = os.path.abspath(repo_path)
+         if not Path(repo_path).exists():
+             raise FileNotFoundError(
+                 f"Cannot find Lotus repo at {repo_path}. Clone https://github.com/EnVision-Research/Lotus."
+             )
+         # Lotus' pipeline / utils packages are at the repo root.
+         if repo_path not in sys.path:
+             sys.path.insert(0, repo_path)
+         # MoGe's dataloader imports a different top-level package also named `pipeline`
+         # (from EasternJournalist/pipeline). It is already cached in sys.modules by the
+         # time we reach here, so `from pipeline import LotusGPipeline` would resolve to
+         # the wrong module. Evict the cached entry so Python re-resolves against
+         # Lotus' repo (which is first on sys.path).
+         sys.modules.pop('pipeline', None)
+         from pipeline import LotusGPipeline, LotusDPipeline
+ 
+         device = torch.device(device)
+         dtype = torch.float16 if half_precision else torch.float32
+ 
+         if mode == 'generation':
+             pipeline = LotusGPipeline.from_pretrained(pretrained, torch_dtype=dtype)
+         elif mode == 'regression':
+             pipeline = LotusDPipeline.from_pretrained(pretrained, torch_dtype=dtype)
+         else:
+             raise ValueError(f"Invalid mode: {mode}")
+         pipeline = pipeline.to(device)
+         pipeline.set_progress_bar_config(disable=True)
+ 
+         self.pipeline = pipeline
+         self.device = device
+         self.dtype = dtype
+         self.mode = mode
+         self.task_name = task_name
+         self.disparity = disparity
+         self.timestep = timestep
+         self.processing_res = processing_res
+         self.generator = torch.Generator(device=device).manual_seed(seed) if seed is not None else None
+ 
+     @click.command()
+     @click.option('--repo', 'repo_path', type=click.Path(), default='../Lotus',
+                   help='Path to the EnVision-Research/Lotus repository.')
+     @click.option('--pretrained', type=str, default='jingheya/lotus-depth-d-v2-0-disparity',
+                   help='HF checkpoint name or local dir. README default disparity v2 is recommended.')
+     @click.option('--mode', type=click.Choice(['generation', 'regression']), default='regression',
+                   help='Which Lotus pipeline (G/generation or D/regression).')
+     @click.option('--task_name', type=click.Choice(['depth', 'normal']), default='depth')
+     @click.option('--disparity', is_flag=True,
+                   help='Set if the checkpoint predicts disparity (e.g. *-disparity ckpts).')
+     @click.option('--timestep', type=int, default=999)
+     @click.option('--processing_res', type=int, default=None,
+                   help='Pipeline processing resolution. None uses default in checkpoint.')
+     @click.option('--fp16', 'half_precision', is_flag=True, help='Run in half precision.')
+     @click.option('--seed', type=int, default=None, help='Reproducibility seed (Lotus eval.sh uses 42).')
+     @click.option('--device', type=str, default='cuda')
+     @staticmethod
+     def load(repo_path: str, pretrained: str, mode: str, task_name: str, disparity: bool,
+              timestep: int, processing_res: Optional[int], half_precision: bool,
+              seed: Optional[int], device: str = 'cuda'):
+         return Baseline(repo_path, pretrained, mode, task_name, disparity, timestep,
+                         processing_res, half_precision, seed, device)
+ 
+     @torch.inference_mode()
+     def infer(self, image: torch.Tensor, intrinsics: Optional[torch.Tensor] = None) -> Dict[str, torch.Tensor]:
+         omit_batch = image.ndim == 3
+         if omit_batch:
+             image = image.unsqueeze(0)
+         assert image.shape[0] == 1, "Lotus baseline only supports batch size 1"
+         _, _, H, W = image.shape
+ 
+         # infer.py converts uint8 [0,255] to [-1,1] via `/127.5 - 1.0`. MoGe gives [0,1] floats,
+         # so the equivalent normalization is `image * 2 - 1`.
+         rgb_in = (image.to(self.device, dtype=self.dtype) * 2.0 - 1.0)
+ 
+         task_emb = torch.tensor([1, 0], device=self.device, dtype=self.dtype).unsqueeze(0)
+         task_emb = torch.cat([torch.sin(task_emb), torch.cos(task_emb)], dim=-1)
+ 
+         pred = self.pipeline(
+             rgb_in=rgb_in,
+             prompt='',
+             num_inference_steps=1,
+             generator=self.generator,
+             output_type='np',
+             timesteps=[self.timestep],
+             task_emb=task_emb,
+             processing_res=self.processing_res,
+             match_input_res=True,
+             resample_method='bilinear',
+         ).images[0]
+ 
+         # Per infer.py: depth uses mean over channels; pred is HxWx3 in [0, 1].
+         if self.task_name == 'depth':
+             arr = pred.mean(axis=-1)
+         else:
+             raise NotImplementedError("Normal task is not exposed by this baseline.")
+         depth_or_disp = torch.from_numpy(np.ascontiguousarray(arr)).to(self.device).float()
+         if depth_or_disp.shape != (H, W):
+             depth_or_disp = F.interpolate(depth_or_disp[None, None], size=(H, W),
+                                           mode='bilinear', align_corners=False)[0, 0]
+ 
+         # Lotus disparity ckpts: the model physically predicts disparity in [0, 1]. Emit
+         # ONLY `disparity_affine_invariant`. We previously synthesized `depth_affine_invariant`
+         # via 1/disp, but this is numerically unstable near disparity=0 — the resulting
+         # depth-space affine alignment is dominated by inverted-small-disparity outliers,
+         # not by the model's actual depth quality. Cross-comparison with depth-emitting
+         # models happens via MoGe's fall-through to `disparity_affine_invariant` (1/depth),
+         # which IS numerically stable.
+         if self.disparity:
+             result = {'disparity_affine_invariant': depth_or_disp}
+         else:
+             # Lotus depth ckpt: directly affine-invariant depth.
+             result = {'depth_affine_invariant': depth_or_disp}
+         if not omit_batch:
+             for k in result:
+                 result[k] = result[k].unsqueeze(0)
+         return result
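The instability argued in the comment above can be demonstrated with a toy least-squares affine alignment (hypothetical numbers, not from the Lotus repo): one near-zero disparity pixel explodes under inversion and dominates the depth-space fit, while the disparity-space fit stays well conditioned.

```python
import numpy as np

# Hypothetical pixels: three well-predicted, plus one far-away pixel with disparity ~ 0.
disp = np.array([0.5, 0.25, 0.125, 1e-6])
gt_depth = np.array([2.0, 4.0, 8.0, 50.0])

depth = 1.0 / disp  # the last entry explodes to 1e6
# Least-squares affine alignment (scale, shift) in depth space:
A = np.stack([depth, np.ones_like(depth)], axis=1)
(scale, shift), *_ = np.linalg.lstsq(A, gt_depth, rcond=None)
aligned = scale * depth + shift
# The outlier dominates the fit; the three good pixels end up badly misaligned.
assert np.abs(aligned[:3] - gt_depth[:3]).max() > 1.0

# Aligning in disparity space (against 1/gt_depth) instead is well conditioned:
gt_disp = 1.0 / gt_depth
A2 = np.stack([disp, np.ones_like(disp)], axis=1)
(s2, b2), *_ = np.linalg.lstsq(A2, gt_disp, rcond=None)
assert np.abs(s2 * disp + b2 - gt_disp).max() < 0.05
```

This is why the wrapper emits `disparity_affine_invariant` for the disparity checkpoints and lets the evaluation harness handle any cross-space comparison.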
baselines/ppd.py ADDED
@@ -0,0 +1,105 @@
+ # Reference: https://github.com/gangweiX/Pixel-Perfect-Depth
+ # Strictly follows official `run.py`:
+ #     from ppd.models.ppd import PixelPerfectDepth
+ #     model = PixelPerfectDepth(semantics_model='DA2', semantics_pth='checkpoints/depth_anything_v2_vitl.pth',
+ #                               sampling_steps=4)
+ #     model.load_state_dict(torch.load(model_pth, map_location='cpu'), strict=False)
+ #     model = model.to(DEVICE).eval()
+ #     image = cv2.imread(filename)          # BGR uint8 numpy
+ #     H, W = image.shape[:2]
+ #     depth, _ = model.infer_image(image)   # torch.Tensor, may be (1, 1, h, w)
+ #     depth = F.interpolate(depth, size=(H, W), mode='bilinear', align_corners=False)[0, 0]
+ 
+ import os
+ import sys
+ from typing import *
+ from pathlib import Path
+ 
+ import click
+ import torch
+ import torch.nn.functional as F
+ import numpy as np
+ 
+ from moge.test.baseline import MGEBaselineInterface
+ 
+ 
+ class Baseline(MGEBaselineInterface):
+     def __init__(self, repo_path: str, semantics_model: str, semantics_pth: str,
+                  model_pth: str, sampling_steps: int, device: Union[torch.device, str]):
+         repo_path = os.path.abspath(repo_path)
+         if not Path(repo_path).exists():
+             raise FileNotFoundError(
+                 f"Cannot find PPD repo at {repo_path}. Clone https://github.com/gangweiX/Pixel-Perfect-Depth."
+             )
+         if repo_path not in sys.path:
+             sys.path.insert(0, repo_path)
+ 
+         from ppd.models.ppd import PixelPerfectDepth
+         from ppd.utils.set_seed import set_seed
+         set_seed(666)  # mirror run.py
+ 
+         # Allow relative paths against repo root (mirror run.py expectations).
+         if not os.path.isabs(semantics_pth):
+             semantics_pth = os.path.join(repo_path, semantics_pth)
+         if not os.path.isabs(model_pth):
+             model_pth = os.path.join(repo_path, model_pth)
+         if not os.path.exists(semantics_pth):
+             raise FileNotFoundError(f"Cannot find PPD semantics checkpoint at {semantics_pth}.")
+         if not os.path.exists(model_pth):
+             raise FileNotFoundError(f"Cannot find PPD model checkpoint at {model_pth}.")
+ 
+         device = torch.device(device)
+         model = PixelPerfectDepth(
+             semantics_model=semantics_model,
+             semantics_pth=semantics_pth,
+             sampling_steps=sampling_steps,
+         )
+         model.load_state_dict(torch.load(model_pth, map_location='cpu'), strict=False)
+         model = model.to(device).eval()
+ 
+         self.model = model
+         self.device = device
+ 
+     @click.command()
+     @click.option('--repo', 'repo_path', type=click.Path(), default='../Pixel-Perfect-Depth',
+                   help='Path to the gangweiX/Pixel-Perfect-Depth repository.')
+     @click.option('--semantics_model', type=click.Choice(['DA2', 'MoGe2']), default='DA2',
+                   help='Semantics encoder used by PPD (run.py default DA2).')
+     @click.option('--semantics_pth', type=click.Path(),
+                   default='checkpoints/depth_anything_v2_vitl.pth',
+                   help='Semantics encoder ckpt path (relative to --repo if not absolute).')
+     @click.option('--model_pth', type=click.Path(), default='checkpoints/ppd.pth',
+                   help='PPD model ckpt path (relative to --repo if not absolute).')
+     @click.option('--sampling_steps', type=int, default=4,
+                   help='Number of DiT sampling steps (run.py default 4).')
+     @click.option('--device', type=str, default='cuda')
+     @staticmethod
+     def load(repo_path: str, semantics_model: str, semantics_pth: str,
+              model_pth: str, sampling_steps: int, device: str = 'cuda'):
+         return Baseline(repo_path, semantics_model, semantics_pth, model_pth, sampling_steps, device)
+ 
+     @torch.inference_mode()
+     def infer(self, image: torch.Tensor, intrinsics: Optional[torch.Tensor] = None) -> Dict[str, torch.Tensor]:
+         omit_batch = image.ndim == 3
+         if omit_batch:
+             image = image.unsqueeze(0)
+         assert image.shape[0] == 1, "PPD baseline only supports batch size 1"
+         _, _, H, W = image.shape
+ 
+         # run.py calls cv2.imread which returns BGR uint8 numpy (H, W, 3).
+         rgb_uint8 = (image[0].cpu().permute(1, 2, 0).clamp(0, 1).numpy() * 255).astype(np.uint8)
+         bgr_uint8 = rgb_uint8[..., ::-1].copy()  # BGR for cv2 parity
+ 
+         depth, _ = self.model.infer_image(bgr_uint8)
+         # run.py: depth = F.interpolate(depth, size=(H, W), ...)[0, 0]; so depth here is 4D.
+         if depth.ndim == 4:
+             depth = F.interpolate(depth, size=(H, W), mode='bilinear', align_corners=False)[0, 0]
+         elif depth.ndim == 2 and depth.shape != (H, W):
+             depth = F.interpolate(depth[None, None], size=(H, W), mode='bilinear', align_corners=False)[0, 0]
+         depth = depth.to(self.device).float()
+ 
+         # PPD predicts affine-invariant depth (Xu et al., 2025). Emit only this physical key.
+         result = {'depth_affine_invariant': depth}
+         if not omit_batch:
+             result['depth_affine_invariant'] = result['depth_affine_invariant'].unsqueeze(0)
+         return result
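The BGR flip used above mirrors `cv2.imread`'s channel order without importing cv2. A quick self-contained sanity check of the flip and the `.copy()` (which restores C-contiguity after the negative stride):

```python
import numpy as np

# Toy RGB image; flipping the last axis swaps the R and B channels (BGR).
rgb = np.arange(24, dtype=np.uint8).reshape(2, 4, 3)
bgr = rgb[..., ::-1].copy()  # .copy() makes the view contiguous again
assert (bgr[..., 0] == rgb[..., 2]).all()  # B channel came from R position
assert (bgr[..., 2] == rgb[..., 0]).all()  # R channel came from B position
assert bgr.flags['C_CONTIGUOUS']
```

Without the `.copy()`, the reversed view has negative strides, which some downstream consumers (e.g. `torch.from_numpy`) reject.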
docs/eval.md ADDED
@@ -0,0 +1,77 @@
1
+ # Evaluation
2
+
3
+ We provide a unified evaluation script that runs baselines on multiple benchmarks. It takes a baseline model and evaluation configurations, evaluates on-the-fly, and reports results instantly in a JSON file.
4
+
5
+ ## Benchmarks
6
+
7
+ Donwload the processed datasets from [Huggingface Datasets](https://huggingface.co/datasets/Ruicheng/monocular-geometry-evaluation) and put them in the `data/eval` directory, using `huggingface-cli`:
8
+
9
+ ```bash
10
+ mkdir -p data/eval
11
+ huggingface-cli download Ruicheng/monocular-geometry-evaluation --repo-type dataset --local-dir data/eval --local-dir-use-symlinks False
12
+ ```
13
+
14
+ Then unzip the downloaded files:
15
+
16
+ ```bash
17
+ cd data/eval
18
+ unzip '*.zip'
19
+ # rm *.zip # if you don't keep the zip files
20
+ ```
21
+
22
+ ## Configuration
23
+
24
+ See [`configs/eval/all_benchmarks.json`](../configs/eval/all_benchmarks.json) for an example of evaluation configurations on all benchmarks. You can modify this file to evaluate on different benchmarks or different baselines.
25
+
26
+ ## Baseline
27
+
28
+ Some examples of baselines are provided in [`baselines/`](../baselines/). Pass the path to the baseline model python code to the `--baseline` argument of the evaluation script.
29
+
30
+ ## Run Evaluation
31
+
32
+ Run the script [`moge/scripts/eval_baseline.py`](../moge/scripts/eval_baseline.py).
33
+ For example,
34
+
35
+ ```bash
36
+ # Evaluate MoGe on the 10 benchmarks
37
+ python moge/scripts/eval_baseline.py --baseline baselines/moge.py --config configs/eval/all_benchmarks.json --output eval_output/moge.json --pretrained Ruicheng/moge-vitl --resolution_level 9
38
+
39
+ # Evaluate Depth Anything V2 on the 10 benchmarks. (NOTE: affine disparity)
40
+ python moge/scripts/eval_baseline.py --baseline baselines/da_v2.py --config configs/eval/all_benchmarks.json --output eval_output/da_v2.json
41
+ ```
42
+
43
+ The `--baseline`, `--config`, and `--output` arguments are consumed by the evaluation script itself. The remaining arguments, e.g. `--pretrained` and `--resolution_level`, are custom arguments for loading the baseline model.
44
+
45
+ Details of the arguments:
46
+
47
+ ```
48
+ Usage: eval_baseline.py [OPTIONS]
49
+
50
+ Evaluation script.
51
+
52
+ Options:
53
+ --baseline PATH Path to the baseline model python code.
54
+ --config PATH Path to the evaluation configurations. Defaults to
55
+ "configs/eval/all_benchmarks.json".
56
+ --output PATH Path to the output json file.
57
+ --oracle Use oracle mode for evaluation, i.e., use the GT intrinsics
58
+ input.
59
+ --dump_pred Dump prediction results.
60
+ --dump_gt Dump ground truth.
61
+ --help Show this message and exit.
62
+ ```
63
+
64
+
65
+
66
+ ## Wrap a Customized Baseline
67
+
68
+ Wrap any baseline method with [`moge.test.baseline.MGEBaselineInterface`](../moge/test/baseline.py).
69
+ See [`baselines/`](../baselines/) for more examples.
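
As a toy illustration, a minimal wrapper could look like the sketch below. The `load`/`infer` method names, the numpy arrays, and the `ConstantDepthBaseline` class are all hypothetical; consult `moge/test/baseline.py` for the authoritative interface. Note that the bundled baselines exchange torch tensors and report predictions under physical-quantity keys such as `depth_affine_invariant`.

```python
import numpy as np

# Hypothetical toy baseline; method names and array types are illustrative
# only -- see moge/test/baseline.py for the real MGEBaselineInterface.
class ConstantDepthBaseline:
    """Predicts a constant affine-invariant depth map for any input image."""

    @classmethod
    def load(cls, **kwargs):
        # Real baselines would load pretrained weights here.
        return cls()

    def infer(self, image: np.ndarray) -> dict:
        # image: (H, W, 3) array. Real baselines receive torch tensors.
        h, w = image.shape[:2]
        depth = np.ones((h, w), dtype=np.float32)
        # Report the prediction under a physical-quantity key so the
        # evaluator knows how to align it with ground truth.
        return {'depth_affine_invariant': depth}
```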
70
+
71
+ It is a good idea to check the correctness of the baseline implementation by running inference on a small set of images via [`moge/scripts/infer_baselines.py`](../moge/scripts/infer_baselines.py):
72
+
73
+ ```bash
74
+ python moge/scripts/infer_baselines.py --baseline baselines/moge.py --input example_images/ --output infer_output/moge --pretrained Ruicheng/moge-vitl --maps --ply
75
+ ```
76
+
77
+
docs/normal.md ADDED
@@ -0,0 +1,16 @@
1
+ # MoGe-2 Normal Estimation
2
+
3
+ <img src="../assets/normal_comaprison.jpg">
4
+ <div align="center">
5
+ <p style="text-align:center;">Qualitative comparison of normal estimation with <a href="https://github.com/prs-eth/marigold">Marigold</a> and <a href="https://github.com/YvanYin/Metric3D">Metric3D V2</a></p>
6
+ </div>
7
+
8
+ > NOTE: Normal estimation was implemented after the submission of the MoGe-2 paper and is therefore not included in the original publication. This feature required minimal additional effort, and we do not claim any novel technical contribution.
9
+
10
+ We added a lightweight convolutional head and trained the normal output using a squared angular loss:
11
+
12
+ $$
13
+ \mathcal L_{\rm normal} = {1\over |\mathcal M|}\sum_{i\in\mathcal M} \angle (\hat{\mathbf n}_i,\mathbf n_i)^2
14
+ $$
15
+
16
+ where $\hat{\mathbf{n}}_i$ is the predicted normal, $\mathbf{n}_i$ is the ground-truth normal, and $\mathcal{M}$ denotes the set of valid pixels. For convenience, we did not collect ground-truth normal maps for training. Instead, we derived surface normals from the depth map and camera intrinsics. The resulting estimates are visually and numerically satisfactory.
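
The loss above can be sketched numerically as follows (numpy for brevity; the training code itself is torch-based, and this sketch is not the authors' implementation):

```python
import numpy as np

def squared_angular_loss(pred, gt, mask):
    # pred, gt: (H, W, 3) normal maps; mask: (H, W) boolean validity mask.
    pred = pred / np.linalg.norm(pred, axis=-1, keepdims=True)
    gt = gt / np.linalg.norm(gt, axis=-1, keepdims=True)
    # Angle between predicted and ground-truth normals at each pixel.
    cos = np.clip((pred * gt).sum(axis=-1), -1.0, 1.0)
    angle = np.arccos(cos)
    # Mean squared angular error over valid pixels.
    return float((angle[mask] ** 2).mean())
```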
docs/onnx.md ADDED
@@ -0,0 +1,89 @@
1
+ # MoGe ONNX Support
2
+
3
+ MoGe-2 is compatible with the ONNX format (opset version ≥ 14). We have exported several models for use in ONNXRuntime or deployment on other compatible inference engines.
4
+
5
+ > **Important Note:** The `.infer()` method in our PyTorch code includes some post-processing logic (e.g., recovering the focal length and depth shift, and reprojection) that cannot be exported to ONNX. The ONNX model only includes the raw `forward()` pass, which outputs intermediate predictions (affine point map, normal map, floating-point mask, metric scale). You will need to implement any required post-processing separately if replicating the full inference pipeline.
6
+
7
+ The exported models are in **FP32** precision, with **dynamic input resolution** and **variable-length** token support. You can further optimize these models based on your target deployment platform.
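
For intuition on how `num_tokens` relates to resolution: with a ViT patch size of 14 (an assumption based on the DINOv2 backbone), the patch grid of an H×W input yields the token count below. The model may additionally resize inputs internally to hit a requested token budget, so treat this only as a rough guide.

```python
def patch_token_count(height: int, width: int, patch_size: int = 14) -> int:
    # Number of ViT patch tokens for an input whose sides are (rounded
    # down to) multiples of the patch size.
    return (height // patch_size) * (width // patch_size)

# A 518x518 input gives a 37x37 patch grid:
print(patch_token_count(518, 518))  # 1369
```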
8
+
9
+ <table>
10
+ <thead>
11
+ <tr>
12
+ <th>Version</th>
13
+ <th>Hugging Face Model</th>
14
+ </tr>
15
+ </thead>
16
+ <tbody>
17
+ <tr>
18
+ <td rowspan="3">MoGe-2</td>
19
+ <td><a href="https://huggingface.co/Ruicheng/moge-2-vitl-normal-onnx" target="_blank"><code>Ruicheng/moge-2-vitl-normal-onnx</code></a></td>
20
+ </tr>
21
+ <tr>
22
+ <td><a href="https://huggingface.co/Ruicheng/moge-2-vitb-normal-onnx" target="_blank"><code>Ruicheng/moge-2-vitb-normal-onnx</code></a></td>
23
+ </tr>
24
+ <tr>
25
+ <td><a href="https://huggingface.co/Ruicheng/moge-2-vits-normal-onnx" target="_blank"><code>Ruicheng/moge-2-vits-normal-onnx</code></a></td>
26
+ </tr>
27
+ </tbody>
28
+ </table>
29
+
30
+ ## Customized Export
31
+
32
+ ### Dynamic Shape & Variable Number of Tokens
33
+ ```python
34
+ import os
35
+ os.environ['XFORMERS_DISABLED'] = '1' # Disable xformers
36
+ import numpy as np
37
+ import torch
38
+ from moge.model.v2 import MoGeModel
39
+
40
+ PRETRAINED_MODEL = 'Ruicheng/moge-2-vits-normal.pt'
41
+ ONNX_FILE = 'moge-2-vits-normal.onnx'
42
+
43
+ model = MoGeModel.from_pretrained(PRETRAINED_MODEL)
44
+ model.onnx_compatible_mode = True # Enable ONNX compatible mode
45
+
46
+ torch.onnx.export(
47
+ model,
48
+ (torch.rand(1, 3, 518, 518), torch.tensor(1800)),
49
+ ONNX_FILE,
50
+ input_names=['image', 'num_tokens'],
51
+ output_names=['points', 'normal', 'mask', 'metric_scale'],
52
+ dynamic_axes={
53
+ 'image': {0: 'batch_size', 2: 'height', 3: 'width'},
54
+ },
55
+ opset_version=14
56
+ )
57
+ ```
58
+
59
+ ### Static Shape & Fixed Number of Tokens
60
+
61
+ ```python
62
+ import os
63
+ os.environ['XFORMERS_DISABLED'] = '1' # Disable xformers
64
+ import numpy as np
65
+ import torch
66
+ from moge.model.v2 import MoGeModel
67
+
68
+ class MoGeStatic(MoGeModel):
69
+ def forward(self, image: torch.Tensor):
70
+ return super().forward(image, NUM_TOKENS)
71
+
72
+ NUM_TOKENS = 1800
73
+ FIXED_IMAGE_INPUT = torch.rand(1, 3, 518, 518)
74
+ PRETRAINED_MODEL = 'Ruicheng/moge-2-vits-normal.pt'
75
+ ONNX_FILE = 'moge-2-vits-normal.onnx'
76
+
77
+ model = MoGeStatic.from_pretrained(PRETRAINED_MODEL)
78
+ model.onnx_compatible_mode = True # Enable ONNX compatible mode
79
+
80
+ torch.onnx.export(
81
+ model,
82
+ (FIXED_IMAGE_INPUT,),
83
+ ONNX_FILE,
84
+ input_names=['image'],
85
+ output_names=['points', 'normal', 'mask', 'metric_scale'],
86
+ dynamic_axes=None,
87
+ opset_version=14
88
+ )
89
+ ```
docs/train.md ADDED
@@ -0,0 +1,181 @@
1
+
2
+ # Training
3
+
4
+ This document provides instructions for training and finetuning the MoGe model.
5
+
6
+ ## Additional Requirements
7
+
8
+ In addition to the packages listed in [`pyproject.toml`](../pyproject.toml), the following are required for training and finetuning the MoGe model:
9
+
10
+ ```
11
+ accelerate
12
+ sympy
13
+ mlflow
14
+ ```
15
+
16
+ ## Data preparation
17
+
18
+ ### Dataset format
19
+
20
+ Each dataset should be organized as follows:
21
+
22
+ ```
23
+ somedataset
24
+ ├── .index.txt # A list of instance paths
25
+ ├── folder1
26
+ │ ├── instance1 # Each instance is in a folder
27
+ │ │ ├── image.jpg # RGB image.
28
+ │ │ ├── depth.png # 16-bit depth. See moge/utils/io.py for details
29
+ │ │ ├── meta.json # Stores "intrinsics" as a 3x3 matrix
30
+ │ │ └── ... # Other components such as segmentation mask, normal map, etc.
31
+ ...
32
+ ```
33
+
34
+ * `.index.txt` is placed at the top directory and stores a list of instance paths in this dataset. The dataloader will look for instances in this list. You may also use custom splits, e.g. `.train.txt` and `.val.txt`, and specify them in the configuration file.
35
+
36
+ * For depth images, it is recommended to use `read_depth()` and `write_depth()` in [`moge/utils/io.py`](../moge/utils/io.py). The depth is stored in logarithmic scale in 16-bit PNG format, offering a better balance of precision, dynamic range, and compression ratio than 16-bit/32-bit EXR and linear depth formats. It also encodes `NaN` and `Inf` values for invalid depth pixels.
37
+
38
+ * The `meta.json` should be a dictionary containing the key `intrinsics`, which holds the **normalized** camera intrinsics. You may add more metadata.
39
+
40
+ * We also support reading and storing segmentation masks for evaluation data (see the local-points evaluation in the paper), which are saved in PNG format with semantic labels stored in the PNG metadata as JSON strings. See `read_segmentation()` and `write_segmentation()` in [`moge/utils/io.py`](../moge/utils/io.py) for details.
41
+
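
To make the log-scale depth storage concrete, here is one plausible 16-bit encoding. The constants (`D_MIN`, `D_MAX`, the reserved invalid code) are assumptions for illustration; the authoritative transform is `read_depth()`/`write_depth()` in `moge/utils/io.py`.

```python
import numpy as np

# Hypothetical log-scale 16-bit depth encoding for illustration only; the
# authoritative transform is read_depth()/write_depth() in moge/utils/io.py.
D_MIN, D_MAX = 1e-2, 1e3     # assumed representable depth range
INVALID = np.uint16(65535)   # reserved code for NaN/Inf/non-positive depth

def encode_depth(depth: np.ndarray) -> np.ndarray:
    invalid = ~np.isfinite(depth) | (depth <= 0)
    d = np.clip(np.where(invalid, D_MIN, depth), D_MIN, D_MAX)
    # Map log(depth) linearly onto [0, 65534].
    code = np.round(np.log(d / D_MIN) / np.log(D_MAX / D_MIN) * 65534)
    code = code.astype(np.uint16)
    code[invalid] = INVALID
    return code

def decode_depth(code: np.ndarray) -> np.ndarray:
    depth = D_MIN * np.exp(code.astype(np.float64) / 65534 * np.log(D_MAX / D_MIN))
    depth[code == INVALID] = np.nan
    return depth
```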
42
+
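
To illustrate what **normalized** intrinsics could mean: a common convention (an assumption here, not pulled from the MoGe code) divides the pixel-space intrinsics by the image width and height so the matrix becomes resolution-independent.

```python
import numpy as np

# Assumed convention: divide pixel intrinsics by image size so that
# focal lengths and principal point are expressed in [0, 1] coordinates.
def normalize_intrinsics(K: np.ndarray, width: int, height: int) -> np.ndarray:
    K = K.astype(np.float64).copy()
    K[0, :] /= width    # fx, skew, cx
    K[1, :] /= height   # fy, cy
    return K

K_pixels = np.array([[500.0, 0.0, 320.0],
                     [0.0, 500.0, 240.0],
                     [0.0, 0.0, 1.0]])
K_norm = normalize_intrinsics(K_pixels, 640, 480)
# cx and cy both become 0.5 for a centered principal point.
```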
43
+ ### Visual inspection
44
+
45
+ We provide a script to visualize the data and check data quality. It exports each instance as a PLY file for point-cloud visualization.
46
+
47
+ ```bash
48
+ python moge/scripts/vis_data.py PATH_TO_INSTANCE --ply [-o SOMEWHERE_ELSE_TO_SAVE_VIS]
49
+ ```
50
+
51
+ ### DataLoader
52
+
53
+ Our training dataloader is customized to handle data loading, perspective cropping, and augmentation in a multithreaded pipeline. Please refer to [`moge/train/dataloader.py`](../moge/train/dataloader.py) for details.
54
+
55
+
56
+ ## Configuration
57
+
58
+ See [`configs/train/v1.json`](../configs/train/v1.json) for an example configuration file. The configuration file defines the hyperparameters for training the MoGe model.
59
+ Here is a commented configuration for reference:
60
+
61
+ ```json
62
+ {
63
+ "data": {
64
+ "aspect_ratio_range": [0.5, 2.0], # Range of aspect ratio of sampled images
65
+ "area_range": [250000, 1000000], # Range of sampled image area in pixels
66
+ "clamp_max_depth": 1000.0, # Maximum far/near
67
+ "center_augmentation": 0.5, # Ratio of center crop augmentation
68
+ "fov_range_absolute": [1, 179], # Absolute range of FOV in degrees
69
+ "fov_range_relative": [0.01, 1.0], # Relative range of FOV to the original FOV
70
+ "image_augmentation": ["jittering", "jpeg_loss", "blurring"], # List of image augmentation techniques
71
+ "datasets": [
72
+ {
73
+ "name": "TartanAir", # Name of the dataset. Name it as you like.
74
+ "path": "data/TartanAir", # Path to the dataset
75
+ "label_type": "synthetic", # Label type for this dataset. Losses will be applied accordingly. see "loss" config
76
+ "weight": 4.8, # Probability of sampling this dataset
77
+ "index": ".index.txt", # File name of the index file. Defaults to .index.txt
78
+ "depth": "depth.png", # File name of depth images. Defaults to depth.png
79
+ "center_augmentation": 0.25, # Below are dataset-specific hyperparameters. Overriding the global ones above.
80
+ "fov_range_absolute": [30, 150],
81
+ "fov_range_relative": [0.5, 1.0],
82
+ "image_augmentation": ["jittering", "jpeg_loss", "blurring", "shot_noise"]
83
+ }
84
+ ]
85
+ },
86
+ "model_version": "v1", # Model version. If you have multiple model variants, you can use this to switch between them.
87
+ "model": { # Model hyperparameters. Will be passed to Model __init__() as kwargs.
88
+ "encoder": "dinov2_vitl14",
89
+ "remap_output": "exp",
90
+ "intermediate_layers": 4,
91
+ "dim_upsample": [256, 128, 64],
92
+ "dim_times_res_block_hidden": 2,
93
+ "num_res_blocks": 2,
94
+ "num_tokens_range": [1200, 2500],
95
+ "last_conv_channels": 32,
96
+ "last_conv_size": 1
97
+ },
98
+ "optimizer": { # Reflection-like optimizer configurations. See moge.train.utils.py build_optimizer() for details.
99
+ "type": "AdamW",
100
+ "params": [
101
+ {"params": {"include": ["*"], "exclude": ["*backbone.*"]}, "lr": 1e-4},
102
+ {"params": {"include": ["*backbone.*"]}, "lr": 1e-5}
103
+ ]
104
+ },
105
+ "lr_scheduler": { # Reflection-like lr_scheduler configurations. See moge.train.utils.py build_lr_scheduler() for details.
106
+ "type": "SequentialLR",
107
+ "params": {
108
+ "schedulers": [
109
+ {"type": "LambdaLR", "params": {"lr_lambda": ["1.0", "max(0.0, min(1.0, (epoch - 1000) / 1000))"]}},
110
+ {"type": "StepLR", "params": {"step_size": 25000, "gamma": 0.5}}
111
+ ],
112
+ "milestones": [2000]
113
+ }
114
+ },
115
+ "low_resolution_training_steps": 50000, # Total number of low-resolution training steps. This makes early-stage training faster; later-stage training on variable-size images is slower.
116
+ "loss": {
117
+ "invalid": {}, # invalid instance due to runtime error when loading data
118
+ "synthetic": { # Below are loss hyperparameters
119
+ "global": {"function": "affine_invariant_global_loss", "weight": 1.0, "params": {"align_resolution": 32}},
120
+ "patch_4": {"function": "affine_invariant_local_loss", "weight": 1.0, "params": {"level": 4, "align_resolution": 16, "num_patches": 16}},
121
+ "patch_16": {"function": "affine_invariant_local_loss", "weight": 1.0, "params": {"level": 16, "align_resolution": 8, "num_patches": 256}},
122
+ "patch_64": {"function": "affine_invariant_local_loss", "weight": 1.0, "params": {"level": 64, "align_resolution": 4, "num_patches": 4096}},
123
+ "normal": {"function": "normal_loss", "weight": 1.0},
124
+ "mask": {"function": "mask_l2_loss", "weight": 1.0}
125
+ },
126
+ "sfm": {
127
+ "global": {"function": "affine_invariant_global_loss", "weight": 1.0, "params": {"align_resolution": 32}},
128
+ "patch_4": {"function": "affine_invariant_local_loss", "weight": 1.0, "params": {"level": 4, "align_resolution": 16, "num_patches": 16}},
129
+ "patch_16": {"function": "affine_invariant_local_loss", "weight": 1.0, "params": {"level": 16, "align_resolution": 8, "num_patches": 256}},
130
+ "mask": {"function": "mask_l2_loss", "weight": 1.0}
131
+ },
132
+ "lidar": {
133
+ "global": {"function": "affine_invariant_global_loss", "weight": 1.0, "params": {"align_resolution": 32}},
134
+ "patch_4": {"function": "affine_invariant_local_loss", "weight": 1.0, "params": {"level": 4, "align_resolution": 16, "num_patches": 16}},
135
+ "mask": {"function": "mask_l2_loss", "weight": 1.0}
136
+ }
137
+ }
138
+ }
139
+ ```
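
The `affine_invariant_*` losses configured above share one core idea: align the prediction to the ground truth with a least-squares scale and shift before measuring error. A minimal sketch of that alignment step follows (not the authors' loss code, which additionally handles patches, masking, and robustness):

```python
import numpy as np

def affine_align(pred: np.ndarray, gt: np.ndarray, mask: np.ndarray) -> np.ndarray:
    # Solve min_{s,t} || s * pred + t - gt ||^2 over valid pixels,
    # then apply the recovered scale/shift to the full prediction.
    p, g = pred[mask], gt[mask]
    A = np.stack([p, np.ones_like(p)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, g, rcond=None)
    return s * pred + t
```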
140
+
141
+ ## Run Training
142
+
143
+ Launch the training script [`moge/scripts/train.py`](../moge/scripts/train.py). Note that we use [`accelerate`](https://github.com/huggingface/accelerate) for distributed training.
144
+
145
+ ```bash
146
+ accelerate launch \
147
+ --num_processes 8 \
148
+ moge/scripts/train.py \
149
+ --config configs/train/v1.json \
150
+ --workspace workspace/debug \
151
+ --gradient_accumulation_steps 2 \
152
+ --batch_size_forward 2 \
153
+ --checkpoint latest \
154
+ --enable_gradient_checkpointing True \
155
+ --vis_every 1000 \
156
+ --enable_mlflow True
157
+ ```
158
+
159
+
160
+ ## Finetuning
161
+
162
+ To finetune the pre-trained MoGe model, download the model checkpoint and put it in a local directory, e.g. `pretrained/moge-vitl.pt`.
163
+
164
+ > NOTE: When finetuning the pretrained MoGe model, a much lower learning rate is required.
165
+ The suggested learning rate for finetuning is not greater than 1e-5 for the head and 1e-6 for the backbone.
166
+ A batch size of at least 32 is recommended.
167
+ The settings in the default configuration are not optimal for specific datasets and may require further tuning.
168
+
169
+ ```bash
170
+ accelerate launch \
171
+ --num_processes 8 \
172
+ moge/scripts/train.py \
173
+ --config configs/train/v1.json \
174
+ --workspace workspace/debug \
175
+ --gradient_accumulation_steps 2 \
176
+ --batch_size_forward 2 \
177
+ --checkpoint pretrained/moge-vitl.pt \
178
+ --enable_gradient_checkpointing True \
179
+ --vis_every 1000 \
180
+ --enable_mlflow True
181
+ ```
eval_all_12111.log ADDED
The diff for this file is too large to render. See raw diff
 
eval_output/_eval_all_20260514_010406.summary.txt ADDED
@@ -0,0 +1,2 @@
1
+ [OK] da3_mono -> eval_output/da3_mono_20260514_010406.json (5861 bytes) at Thu May 14 01:23:28 AM AEST 2026
2
+ [OK] depth_pro -> eval_output/depth_pro_20260514_010406.json (8029 bytes) at Thu May 14 02:25:48 AM AEST 2026
eval_output/_eval_all_20260514_045817.summary.txt ADDED
File without changes
eval_output/_eval_all_20260514_051015.summary.txt ADDED
@@ -0,0 +1,5 @@
1
+ [OK] marigold -> eval_output/marigold_20260514_051015.json (3009 bytes) at Thu May 14 05:55:14 AM AEST 2026
2
+ [OK] lotus -> eval_output/lotus_20260514_051015.json (3056 bytes) at Thu May 14 06:17:39 AM AEST 2026
3
+ [OK] depthmaster -> eval_output/depthmaster_20260514_051015.json (3015 bytes) at Thu May 14 06:50:08 AM AEST 2026
4
+ [OK] ppd -> eval_output/ppd_20260514_051015.json (3008 bytes) at Thu May 14 07:46:42 AM AEST 2026
5
+ [OK] fe2e -> eval_output/fe2e_20260514_051015.json (3005 bytes) at Thu May 14 09:50:45 AM AEST 2026
eval_output/da2_dpt_vitb_20260114_134612.json ADDED
@@ -0,0 +1,104 @@
1
+ {
2
+ "NYUv2": {
3
+ "disparity_affine_invariant": {
4
+ "delta1": 0.974833564565087,
5
+ "rel": 0.05241212553119669
6
+ },
7
+ "inference_time": 0.057907959736815284
8
+ },
9
+ "KITTI": {
10
+ "disparity_affine_invariant": {
11
+ "delta1": 0.9280443067199613,
12
+ "rel": 0.07622533171020586
13
+ },
14
+ "inference_time": 0.09258947745422644
15
+ },
16
+ "ETH3D": {
17
+ "disparity_affine_invariant": {
18
+ "delta1": 0.9677766373110238,
19
+ "rel": 0.057696183788536796
20
+ },
21
+ "inference_time": 0.08330194540485937
22
+ },
23
+ "iBims-1": {
24
+ "boundary": {
25
+ "radius1_f1": 0.14090071999175213,
26
+ "radius2_f1": 0.20667106920581957,
27
+ "radius3_f1": 0.27633790267740344
28
+ },
29
+ "disparity_affine_invariant": {
30
+ "delta1": 0.9757371485233307,
31
+ "rel": 0.04696349518373608
32
+ },
33
+ "inference_time": 0.057772650718688964
34
+ },
35
+ "GSO": {
36
+ "disparity_affine_invariant": {
37
+ "delta1": 0.9998600973666293,
38
+ "rel": 0.01525101375933375
39
+ },
40
+ "inference_time": 0.035497472818615365
41
+ },
42
+ "Sintel": {
43
+ "boundary": {
44
+ "radius1_f1": 0.28685013216669697,
45
+ "radius2_f1": 0.3630884739812729,
46
+ "radius3_f1": 0.42659517666386715
47
+ },
48
+ "disparity_affine_invariant": {
49
+ "delta1": 0.7117475779610904,
50
+ "rel": 0.2291253272713603
51
+ },
52
+ "inference_time": 0.09121118138607283
53
+ },
54
+ "DDAD": {
55
+ "disparity_affine_invariant": {
56
+ "delta1": 0.8224954205751419,
57
+ "rel": 0.1397515681795776
58
+ },
59
+ "inference_time": 0.10340560173988342
60
+ },
61
+ "DIODE": {
62
+ "disparity_affine_invariant": {
63
+ "delta1": 0.945637157551664,
64
+ "rel": 0.06310866091910136
65
+ },
66
+ "inference_time": 0.06685620430378598
67
+ },
68
+ "Spring": {
69
+ "boundary": {
70
+ "radius1_f1": 0.07660989755992553,
71
+ "radius2_f1": 0.11204444241417982,
72
+ "radius3_f1": 0.14977762569691006
73
+ },
74
+ "disparity_affine_invariant": {
75
+ "delta1": 0.6547722291359168,
76
+ "rel": 0.27800550251826645
77
+ },
78
+ "inference_time": 0.07754106879234314
79
+ },
80
+ "HAMMER": {
81
+ "boundary": {
82
+ "radius1_f1": 0.052440993974481265,
83
+ "radius2_f1": 0.09403067850916828,
84
+ "radius3_f1": 0.13893322044758782
85
+ },
86
+ "disparity_affine_invariant": {
87
+ "delta1": 0.9832104132252355,
88
+ "rel": 0.054671637585326546
89
+ },
90
+ "inference_time": 0.09214629327097247
91
+ },
92
+ "mean": {
93
+ "boundary": {
94
+ "radius1_f1": 0.13920043592321396,
95
+ "radius2_f1": 0.19395866602761014,
96
+ "radius3_f1": 0.2479109813714421
97
+ },
98
+ "disparity_affine_invariant": {
99
+ "delta1": 0.896411455293508,
100
+ "rel": 0.10132108464466413
101
+ },
102
+ "inference_time": 0.07582298556262633
103
+ }
104
+ }
eval_output/da2_public_vitl_subset_20260512_180834.json ADDED
@@ -0,0 +1,40 @@
1
+ {
2
+ "NYUv2": {
3
+ "disparity_affine_invariant": {
4
+ "delta1": 0.982713047517549,
5
+ "rel": 0.04140635070587517
6
+ },
7
+ "inference_time": 0.06813371546042439
8
+ },
9
+ "KITTI": {
10
+ "disparity_affine_invariant": {
11
+ "delta1": 0.9671859955129448,
12
+ "rel": 0.056054825357452855
13
+ },
14
+ "inference_time": 0.10651463708994578
15
+ },
16
+ "iBims-1": {
17
+ "boundary": {
18
+ "radius1_f1": 0.15566007409454363,
19
+ "radius2_f1": 0.23137078445345238,
20
+ "radius3_f1": 0.30988303023981895
21
+ },
22
+ "disparity_affine_invariant": {
23
+ "delta1": 0.9849210739135742,
24
+ "rel": 0.03475647780811414
25
+ },
26
+ "inference_time": 0.06300068140029907
27
+ },
28
+ "mean": {
29
+ "boundary": {
30
+ "radius1_f1": 0.15566007409454363,
31
+ "radius2_f1": 0.23137078445345238,
32
+ "radius3_f1": 0.30988303023981895
33
+ },
34
+ "disparity_affine_invariant": {
35
+ "delta1": 0.9782733723146894,
36
+ "rel": 0.04407255129048072
37
+ },
38
+ "inference_time": 0.07921634465022308
39
+ }
40
+ }
eval_output/da2_sdt_vitb_20260114_161729.json ADDED
@@ -0,0 +1,104 @@
1
+ {
2
+ "NYUv2": {
3
+ "disparity_affine_invariant": {
4
+ "delta1": 0.9780740512802696,
5
+ "rel": 0.04871464073259258
6
+ },
7
+ "inference_time": 0.04975540083847279
8
+ },
9
+ "KITTI": {
10
+ "disparity_affine_invariant": {
11
+ "delta1": 0.9416156054640109,
12
+ "rel": 0.06891980266449915
13
+ },
14
+ "inference_time": 0.0665209209260765
15
+ },
16
+ "ETH3D": {
17
+ "disparity_affine_invariant": {
18
+ "delta1": 0.9745341998233669,
19
+ "rel": 0.05069851055782337
20
+ },
21
+ "inference_time": 0.07734930410259096
22
+ },
23
+ "iBims-1": {
24
+ "boundary": {
25
+ "radius1_f1": 0.14264323215795868,
26
+ "radius2_f1": 0.20852055913595816,
27
+ "radius3_f1": 0.27894027142037453
28
+ },
29
+ "disparity_affine_invariant": {
30
+ "delta1": 0.9792008411884308,
31
+ "rel": 0.04372428907314316
32
+ },
33
+ "inference_time": 0.033993468284606934
34
+ },
35
+ "GSO": {
36
+ "disparity_affine_invariant": {
37
+ "delta1": 0.9998605856617677,
38
+ "rel": 0.0149807150021322
39
+ },
40
+ "inference_time": 0.033762804049890016
41
+ },
42
+ "Sintel": {
43
+ "boundary": {
44
+ "radius1_f1": 0.31112807628301553,
45
+ "radius2_f1": 0.388943978400922,
46
+ "radius3_f1": 0.45282715250076555
47
+ },
48
+ "disparity_affine_invariant": {
49
+ "delta1": 0.7144647518455446,
50
+ "rel": 0.22181324218224763
51
+ },
52
+ "inference_time": 0.0446703142689583
53
+ },
54
+ "DDAD": {
55
+ "disparity_affine_invariant": {
56
+ "delta1": 0.833798732072115,
57
+ "rel": 0.13449553920701146
58
+ },
59
+ "inference_time": 0.08101802349090576
60
+ },
61
+ "DIODE": {
62
+ "disparity_affine_invariant": {
63
+ "delta1": 0.9507016482752121,
64
+ "rel": 0.05917388091380823
65
+ },
66
+ "inference_time": 0.061501885821074055
67
+ },
68
+ "Spring": {
69
+ "boundary": {
70
+ "radius1_f1": 0.08642734505734705,
71
+ "radius2_f1": 0.12273531602168489,
72
+ "radius3_f1": 0.161491237510747
73
+ },
74
+ "disparity_affine_invariant": {
75
+ "delta1": 0.6414222037201044,
76
+ "rel": 0.29055015282519164
77
+ },
78
+ "inference_time": 0.11224358367919922
79
+ },
80
+ "HAMMER": {
81
+ "boundary": {
82
+ "radius1_f1": 0.06043827055101478,
83
+ "radius2_f1": 0.10466954807030585,
84
+ "radius3_f1": 0.15060183037786806
85
+ },
86
+ "disparity_affine_invariant": {
87
+ "delta1": 0.9857739838477104,
88
+ "rel": 0.05342892872770467
89
+ },
90
+ "inference_time": 0.10319059464239305
91
+ },
92
+ "mean": {
93
+ "boundary": {
94
+ "radius1_f1": 0.15015923101233403,
95
+ "radius2_f1": 0.2062173504072177,
96
+ "radius3_f1": 0.2609651229524388
97
+ },
98
+ "disparity_affine_invariant": {
99
+ "delta1": 0.8999446603178531,
100
+ "rel": 0.09864997018861542
101
+ },
102
+ "inference_time": 0.06640063001041677
103
+ }
104
+ }
eval_output/da3_dpt_20260114_145611.json ADDED
@@ -0,0 +1,104 @@
1
+ {
2
+ "NYUv2": {
3
+ "disparity_affine_invariant": {
4
+ "delta1": 0.9829550308007349,
5
+ "rel": 0.04023517982390587
6
+ },
7
+ "inference_time": 0.07876714180004342
8
+ },
9
+ "KITTI": {
10
+ "disparity_affine_invariant": {
11
+ "delta1": 0.9571779830141302,
12
+ "rel": 0.059076304853168185
13
+ },
14
+ "inference_time": 0.11558162727238942
15
+ },
16
+ "ETH3D": {
17
+ "disparity_affine_invariant": {
18
+ "delta1": 0.9850672782780315,
19
+ "rel": 0.03896157294081007
20
+ },
21
+ "inference_time": 0.12514383309738225
22
+ },
23
+ "iBims-1": {
24
+ "boundary": {
25
+ "radius1_f1": 0.149422133289895,
26
+ "radius2_f1": 0.22019007211929753,
27
+ "radius3_f1": 0.2951995424284133
28
+ },
29
+ "disparity_affine_invariant": {
30
+ "delta1": 0.9831844353675843,
31
+ "rel": 0.034640795181621796
32
+ },
33
+ "inference_time": 0.07676945447921753
34
+ },
35
+ "GSO": {
36
+ "disparity_affine_invariant": {
37
+ "delta1": 0.9998718873968402,
38
+ "rel": 0.01098453963415028
39
+ },
40
+ "inference_time": 0.053403127540662454
41
+ },
42
+ "Sintel": {
43
+ "boundary": {
44
+ "radius1_f1": 0.33310246305710756,
45
+ "radius2_f1": 0.41483866276085685,
46
+ "radius3_f1": 0.48112728723152426
47
+ },
48
+ "disparity_affine_invariant": {
49
+ "delta1": 0.7802026772611649,
50
+ "rel": 0.18971728492017023
51
+ },
52
+ "inference_time": 0.1108211516437674
53
+ },
54
+ "DDAD": {
55
+ "disparity_affine_invariant": {
56
+ "delta1": 0.8573276385962963,
57
+ "rel": 0.12203943925723433
58
+ },
59
+ "inference_time": 0.13747373247146608
60
+ },
61
+ "DIODE": {
62
+ "disparity_affine_invariant": {
63
+ "delta1": 0.9630032265812506,
64
+ "rel": 0.04880422090352954
65
+ },
66
+ "inference_time": 0.08083388839405026
67
+ },
68
+ "Spring": {
69
+ "boundary": {
70
+ "radius1_f1": 0.09896901534586253,
71
+ "radius2_f1": 0.14254710535432905,
72
+ "radius3_f1": 0.19018157983297923
73
+ },
74
+ "disparity_affine_invariant": {
75
+ "delta1": 0.7790197404697538,
76
+ "rel": 0.19714444087632
77
+ },
78
+ "inference_time": 0.09614678049087524
79
+ },
80
+ "HAMMER": {
81
+ "boundary": {
82
+ "radius1_f1": 0.05755111022815684,
83
+ "radius2_f1": 0.10357544522171157,
84
+ "radius3_f1": 0.1520155659322366
85
+ },
86
+ "disparity_affine_invariant": {
87
+ "delta1": 0.9927957400198906,
88
+ "rel": 0.03768224354713194
89
+ },
90
+ "inference_time": 0.11005015034829417
91
+ },
92
+ "mean": {
93
+ "boundary": {
94
+ "radius1_f1": 0.15976118048025548,
95
+ "radius2_f1": 0.22028782136404876,
96
+ "radius3_f1": 0.2796309938562883
97
+ },
98
+ "disparity_affine_invariant": {
99
+ "delta1": 0.9280605637785676,
100
+ "rel": 0.07792860219380422
101
+ },
102
+ "inference_time": 0.09849908875381483
103
+ }
104
+ }
eval_output/da3_dualdpt_20260114_145615.json ADDED
@@ -0,0 +1,104 @@
1
+ {
2
+ "NYUv2": {
3
+ "disparity_affine_invariant": {
4
+ "delta1": 0.9833382554010514,
5
+ "rel": 0.039658584344323254
6
+ },
7
+ "inference_time": 0.08444627308335144
8
+ },
9
+ "KITTI": {
10
+ "disparity_affine_invariant": {
11
+ "delta1": 0.9559426244599688,
12
+ "rel": 0.057898180216459406
13
+ },
14
+ "inference_time": 0.11981674096335662
15
+ },
16
+ "ETH3D": {
17
+ "disparity_affine_invariant": {
18
+ "delta1": 0.9840508551061942,
19
+ "rel": 0.039160544363502824
20
+ },
21
+ "inference_time": 0.1598292226833394
22
+ },
23
+ "iBims-1": {
24
+ "boundary": {
25
+ "radius1_f1": 0.14581032437983824,
26
+ "radius2_f1": 0.216919460644202,
27
+ "radius3_f1": 0.29227499703758414
28
+ },
29
+ "disparity_affine_invariant": {
30
+ "delta1": 0.9829059141874313,
31
+ "rel": 0.03432164325669874
32
+ },
33
+ "inference_time": 0.08566819667816163
34
+ },
35
+ "GSO": {
36
+ "disparity_affine_invariant": {
37
+ "delta1": 0.9998750793702395,
38
+ "rel": 0.01068191551015649
39
+ },
40
+ "inference_time": 0.06294074127975019
41
+ },
42
+ "Sintel": {
43
+ "boundary": {
44
+ "radius1_f1": 0.3334429234927617,
45
+ "radius2_f1": 0.4179714630496576,
46
+ "radius3_f1": 0.4861680043262974
47
+ },
48
+ "disparity_affine_invariant": {
49
+ "delta1": 0.7806656713951099,
50
+ "rel": 0.19088833483594253
51
+ },
52
+ "inference_time": 0.11909952782150499
53
+ },
54
+ "DDAD": {
55
+ "disparity_affine_invariant": {
56
+ "delta1": 0.8588688754439354,
57
+ "rel": 0.12065724640525878
58
+ },
59
+ "inference_time": 0.16672814559936525
60
+ },
61
+ "DIODE": {
62
+ "disparity_affine_invariant": {
63
+ "delta1": 0.9626566964055469,
64
+ "rel": 0.04872392241458919
65
+ },
66
+ "inference_time": 0.09152144587921261
67
+ },
68
+ "Spring": {
69
+ "boundary": {
70
+ "radius1_f1": 0.0957834383384807,
71
+ "radius2_f1": 0.1383020748565513,
72
+ "radius3_f1": 0.1847438205311289
73
+ },
74
+ "disparity_affine_invariant": {
75
+ "delta1": 0.7878975656181574,
76
+ "rel": 0.19268939194642007
77
+ },
78
+ "inference_time": 0.10515354180335998
79
+ },
80
+ "HAMMER": {
81
+ "boundary": {
82
+ "radius1_f1": 0.05564962070024008,
83
+ "radius2_f1": 0.10173857789898587,
84
+ "radius3_f1": 0.15079821900176413
85
+ },
86
+ "disparity_affine_invariant": {
87
+ "delta1": 0.994742604378731,
88
+ "rel": 0.032439182523277495
89
+ },
90
+ "inference_time": 0.12041027715129236
91
+ },
92
+ "mean": {
93
+ "boundary": {
94
+ "radius1_f1": 0.15767157672783016,
95
+ "radius2_f1": 0.2187328941123492,
96
+ "radius3_f1": 0.27849626022419366
97
+ },
98
+ "disparity_affine_invariant": {
99
+ "delta1": 0.9290944141766365,
100
+ "rel": 0.07671189458166287
101
+ },
102
+ "inference_time": 0.11156141129426944
103
+ }
104
+ }
eval_output/da3_mono_20260514_010406.json ADDED
@@ -0,0 +1,192 @@
+ {
+ "NYUv2": {
+ "depth_affine_invariant": {
+ "delta1": 0.9842329727035779,
+ "rel": 0.03365745910070267
+ },
+ "depth_scale_invariant": {
+ "delta1": 0.8215298035184907,
+ "rel": 0.11821608891602411
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.9529819012781896,
+ "rel": 0.070600181768167
+ },
+ "inference_time": 0.06012226317636099
+ },
+ "KITTI": {
+ "depth_affine_invariant": {
+ "delta1": 0.9548418280537143,
+ "rel": 0.05723362605150309
+ },
+ "depth_scale_invariant": {
+ "delta1": 0.7984360658957914,
+ "rel": 0.138132755118965
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.8756626969649016,
+ "rel": 0.10437376339822733
+ },
+ "inference_time": 0.06163968959469005
+ },
+ "ETH3D": {
+ "depth_affine_invariant": {
+ "delta1": 0.9667722892327981,
+ "rel": 0.04980472763815673
+ },
+ "depth_scale_invariant": {
+ "delta1": 0.8612703849789215,
+ "rel": 0.10587181859867163
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.937759244034994,
+ "rel": 0.07689275385889877
+ },
+ "inference_time": 0.26546877858922346
+ },
+ "iBims-1": {
+ "boundary": {
+ "radius1_f1": 0.1586292494149643,
+ "radius2_f1": 0.22572735569984356,
+ "radius3_f1": 0.2948077012193139
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.9873817068338394,
+ "rel": 0.027764218405354767
+ },
+ "depth_scale_invariant": {
+ "delta1": 0.8168159851431847,
+ "rel": 0.11600593734532595
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.9482934600114823,
+ "rel": 0.06542401853716001
+ },
+ "inference_time": 0.04701090574264526
+ },
+ "GSO": {
+ "depth_affine_invariant": {
+ "delta1": 0.9998982329391739,
+ "rel": 0.010024072977971936
+ },
+ "depth_scale_invariant": {
+ "delta1": 0.8301085555148356,
+ "rel": 0.12271900717349886
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.9998944644789094,
+ "rel": 0.0176014133629579
+ },
+ "inference_time": 0.05656705597071972
+ },
+ "Sintel": {
+ "boundary": {
+ "radius1_f1": 0.21803660339317893,
+ "radius2_f1": 0.28839443461901604,
+ "radius3_f1": 0.35456692266761547
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.7964421554391545,
+ "rel": 0.15418921665575608
+ },
+ "depth_scale_invariant": {
+ "delta1": 0.5632054926057283,
+ "rel": 0.2631002835075098
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.7373875420692874,
+ "rel": 0.19922825927439994
+ },
+ "inference_time": 0.04934158719571909
+ },
+ "DDAD": {
+ "depth_affine_invariant": {
+ "delta1": 0.8031078740507365,
+ "rel": 0.1441272779572755
+ },
+ "depth_scale_invariant": {
+ "delta1": 0.7456203748509288,
+ "rel": 0.1753367228731513
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.7518493929207325,
+ "rel": 0.17297881967574358
+ },
+ "inference_time": 0.16839831614494324
+ },
+ "DIODE": {
+ "depth_affine_invariant": {
+ "delta1": 0.9545116628171095,
+ "rel": 0.045432011499015275
+ },
+ "depth_scale_invariant": {
+ "delta1": 0.7835725228662751,
+ "rel": 0.13822900867705679
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.9287748250979858,
+ "rel": 0.07804938058559717
+ },
+ "inference_time": 0.08143034333848767
+ },
+ "Spring": {
+ "boundary": {
+ "radius1_f1": 0.07447604309345533,
+ "radius2_f1": 0.10999641445550992,
+ "radius3_f1": 0.14886539831311954
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.8448349607139826,
+ "rel": 0.12854314330220223
+ },
+ "depth_scale_invariant": {
+ "delta1": 0.7121130783446133,
+ "rel": 0.20040130407735707
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.6952347421050071,
+ "rel": 0.2119988034758717
+ },
+ "inference_time": 0.1508751003742218
+ },
+ "HAMMER": {
+ "boundary": {
+ "radius1_f1": 0.042047389827641196,
+ "radius2_f1": 0.09450678144285812,
+ "radius3_f1": 0.1454696488162449
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.994214277959639,
+ "rel": 0.032994967244204976
+ },
+ "depth_scale_invariant": {
+ "delta1": 0.7778351359598098,
+ "rel": 0.13338640418985198
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.9933144579395171,
+ "rel": 0.05207115354917703
+ },
+ "inference_time": 0.12588356141121157
+ },
+ "mean": {
+ "boundary": {
+ "radius1_f1": 0.12329732143230995,
+ "radius2_f1": 0.1796562465543069,
+ "radius3_f1": 0.23592741775407344
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.9286237960743726,
+ "rel": 0.06837707208321434
+ },
+ "depth_scale_invariant": {
+ "delta1": 0.771050739967858,
+ "rel": 0.15113993304774126
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.8821152726901007,
+ "rel": 0.10492185474862006
+ },
+ "inference_time": 0.10667376015382228
+ }
+ }
eval_output/da3_sdt_20260114_151926.json ADDED
@@ -0,0 +1,104 @@
+ {
+ "NYUv2": {
+ "disparity_affine_invariant": {
+ "delta1": 0.9822690458654992,
+ "rel": 0.041082192084179556
+ },
+ "inference_time": 0.0956408521815542
+ },
+ "KITTI": {
+ "disparity_affine_invariant": {
+ "delta1": 0.9569755639035278,
+ "rel": 0.05847841100138755
+ },
+ "inference_time": 0.13918838947097217
+ },
+ "ETH3D": {
+ "disparity_affine_invariant": {
+ "delta1": 0.9851875457469588,
+ "rel": 0.03769550322645266
+ },
+ "inference_time": 0.15542739130851982
+ },
+ "iBims-1": {
+ "boundary": {
+ "radius1_f1": 0.15659404660254053,
+ "radius2_f1": 0.22854945173450625,
+ "radius3_f1": 0.30424493579629347
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.9836022299528122,
+ "rel": 0.033844739213818684
+ },
+ "inference_time": 0.09014980792999268
+ },
+ "GSO": {
+ "disparity_affine_invariant": {
+ "delta1": 0.9998617868979001,
+ "rel": 0.010903270197444533
+ },
+ "inference_time": 0.07218850191357067
+ },
+ "Sintel": {
+ "boundary": {
+ "radius1_f1": 0.36659458087731406,
+ "radius2_f1": 0.44666311786357743,
+ "radius3_f1": 0.5097793149911657
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.7844882020654588,
+ "rel": 0.18823953198191984
+ },
+ "inference_time": 0.12452855078797591
+ },
+ "DDAD": {
+ "disparity_affine_invariant": {
+ "delta1": 0.8587977066338063,
+ "rel": 0.12046779834106565
+ },
+ "inference_time": 0.16769639205932618
+ },
+ "DIODE": {
+ "disparity_affine_invariant": {
+ "delta1": 0.9638507951403715,
+ "rel": 0.048385331335316875
+ },
+ "inference_time": 0.0943773640114355
+ },
+ "Spring": {
+ "boundary": {
+ "radius1_f1": 0.10996974621165327,
+ "radius2_f1": 0.15424793398832415,
+ "radius3_f1": 0.20164641573881628
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.7786549809873105,
+ "rel": 0.19937820520531385
+ },
+ "inference_time": 0.1383480498790741
+ },
+ "HAMMER": {
+ "boundary": {
+ "radius1_f1": 0.06227834643178843,
+ "radius2_f1": 0.1090190175484027,
+ "radius3_f1": 0.15767184642353438
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.9936724638938904,
+ "rel": 0.036636853568976925
+ },
+ "inference_time": 0.14718871239692935
+ },
+ "mean": {
+ "boundary": {
+ "radius1_f1": 0.17385918003082407,
+ "radius2_f1": 0.23461988028370265,
+ "radius3_f1": 0.2933356282374524
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.9287360321087537,
+ "rel": 0.07751118361558762
+ },
+ "inference_time": 0.12247340119393504
+ }
+ }
eval_output/depth_pro_20260514_010406.json ADDED
@@ -0,0 +1,268 @@
+ {
+ "NYUv2": {
+ "depth_affine_invariant": {
+ "delta1": 0.9820506870746613,
+ "rel": 0.03673129855293428
+ },
+ "depth_metric": {
+ "delta1": 0.9187179086975531,
+ "rel": 0.10690058745865337
+ },
+ "depth_scale_invariant": {
+ "delta1": 0.9764589867825173,
+ "rel": 0.044194771778635934
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.9810435123490996,
+ "rel": 0.04209366659505652
+ },
+ "fov_x": {
+ "deviation": -2.1936934096343523,
+ "mae": 2.2372255268672605
+ },
+ "inference_time": 0.46555119151369145
+ },
+ "KITTI": {
+ "depth_affine_invariant": {
+ "delta1": 0.9675852536606643,
+ "rel": 0.05117697268153078
+ },
+ "depth_metric": {
+ "delta1": 0.3833901699530755,
+ "rel": 0.23499852794285384
+ },
+ "depth_scale_invariant": {
+ "delta1": 0.9616910178968512,
+ "rel": 0.05472872901330788
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.9703659451812323,
+ "rel": 0.05102517709501682
+ },
+ "fov_x": {
+ "deviation": 12.289889667434561,
+ "mae": 12.399165404034903
+ },
+ "inference_time": 0.4610693688772939
+ },
+ "ETH3D": {
+ "depth_affine_invariant": {
+ "delta1": 0.9643279904179636,
+ "rel": 0.04972138342197312
+ },
+ "depth_metric": {
+ "delta1": 0.32841569779867913,
+ "rel": 0.3846802060504645
+ },
+ "depth_scale_invariant": {
+ "delta1": 0.9413498657265195,
+ "rel": 0.0753911737227476
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.9671646553842507,
+ "rel": 0.0493954380529026
+ },
+ "fov_x": {
+ "deviation": -2.352314861372343,
+ "mae": 7.772609947704698
+ },
+ "inference_time": 0.4510512887643823
+ },
+ "iBims-1": {
+ "boundary": {
+ "radius1_f1": 0.1430420886560772,
+ "radius2_f1": 0.22661668153598225,
+ "radius3_f1": 0.30913188893016413
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.9832902586460114,
+ "rel": 0.03225100587820634
+ },
+ "depth_metric": {
+ "delta1": 0.8145079022071877,
+ "rel": 0.15870255175977946
+ },
+ "depth_scale_invariant": {
+ "delta1": 0.9739851438999176,
+ "rel": 0.04130583364283666
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.9821313440799713,
+ "rel": 0.03742910316446796
+ },
+ "fov_x": {
+ "deviation": 0.28815779231488703,
+ "mae": 4.241044307723642
+ },
+ "inference_time": 0.459551522731781
+ },
+ "GSO": {
+ "depth_affine_invariant": {
+ "delta1": 0.9998734251388068,
+ "rel": 0.01455708016885571
+ },
+ "depth_scale_invariant": {
+ "delta1": 0.9992615321886192,
+ "rel": 0.02179838776547751
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.9999304252920799,
+ "rel": 0.014888137305670788
+ },
+ "fov_x": {
+ "deviation": -11.242877931375672,
+ "mae": 12.318885509185131
+ },
+ "inference_time": 0.45823356341389776
+ },
+ "Sintel": {
+ "boundary": {
+ "radius1_f1": 0.41575817371244517,
+ "radius2_f1": 0.49540026435293644,
+ "radius3_f1": 0.5517328723128564
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.8007735163183477,
+ "rel": 0.1581102736159473
+ },
+ "depth_scale_invariant": {
+ "delta1": 0.6870164791786741,
+ "rel": 0.23876511045430499
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.7913509803032852,
+ "rel": 0.1741711959606947
+ },
+ "fov_x": {
+ "deviation": -6.365435130782146,
+ "mae": 12.132162683363303
+ },
+ "inference_time": 0.45765884209396246
+ },
+ "DDAD": {
+ "depth_affine_invariant": {
+ "delta1": 0.8406942666694522,
+ "rel": 0.12618421931378543
+ },
+ "depth_metric": {
+ "delta1": 0.35307908895720264,
+ "rel": 0.3336838957555592
+ },
+ "depth_scale_invariant": {
+ "delta1": 0.8202730208984139,
+ "rel": 0.1398495964985341
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.8705490458011628,
+ "rel": 0.11721267482824624
+ },
+ "fov_x": {
+ "deviation": 0.4932797067114152,
+ "mae": 6.587841512862127
+ },
+ "inference_time": 0.4594235451221466
+ },
+ "DIODE": {
+ "depth_affine_invariant": {
+ "delta1": 0.9560058739340723,
+ "rel": 0.046575217746649945
+ },
+ "depth_metric": {
+ "delta1": 0.37672498267577753,
+ "rel": 0.31926306681260563
+ },
+ "depth_scale_invariant": {
+ "delta1": 0.9198972772729858,
+ "rel": 0.07051707196910655
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.9640359624679141,
+ "rel": 0.0483729427767345
+ },
+ "fov_x": {
+ "deviation": 2.127677210767291,
+ "mae": 4.202043486418427
+ },
+ "inference_time": 0.4569794463740249
+ },
+ "Spring": {
+ "boundary": {
+ "radius1_f1": 0.11046955339914415,
+ "radius2_f1": 0.16598339846546073,
+ "radius3_f1": 0.2192520383951068
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.70457039347291,
+ "rel": 0.2174642860358581
+ },
+ "depth_scale_invariant": {
+ "delta1": 0.6377551994208861,
+ "rel": 0.2508939392492175
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.6454287670111116,
+ "rel": 0.275487666961737
+ },
+ "fov_x": {
+ "deviation": -7.68183114505373,
+ "mae": 12.20425734708272
+ },
+ "inference_time": 0.4539069445133209
+ },
+ "HAMMER": {
+ "boundary": {
+ "radius1_f1": 0.05357744083974888,
+ "radius2_f1": 0.10073427351413128,
+ "radius3_f1": 0.15120729159126864
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.9955395747769263,
+ "rel": 0.0329740911956516
+ },
+ "depth_metric": {
+ "delta1": 0.630067166679836,
+ "rel": 0.3908365362822529
+ },
+ "depth_scale_invariant": {
+ "delta1": 0.9891820696092422,
+ "rel": 0.04356253144962172
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.9964827492929274,
+ "rel": 0.03307592345702071
+ },
+ "fov_x": {
+ "deviation": -1.791645229581383,
+ "mae": 6.895299943663901
+ },
+ "inference_time": 0.45521240603539254
+ },
+ "mean": {
+ "boundary": {
+ "radius1_f1": 0.18071181415185386,
+ "radius2_f1": 0.2471836544671277,
+ "radius3_f1": 0.307831022807349
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.9194711240109814,
+ "rel": 0.07657458286113926
+ },
+ "depth_metric": {
+ "delta1": 0.5435575595670445,
+ "rel": 0.27558076743745274
+ },
+ "depth_scale_invariant": {
+ "delta1": 0.8906870592874627,
+ "rel": 0.09810071455437903
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.9168483387163034,
+ "rel": 0.08431519261975479
+ },
+ "fov_x": {
+ "deviation": -1.6428793330571474,
+ "mae": 8.09905356689061
+ },
+ "inference_time": 0.4578638119439894
+ }
+ }
eval_output/depthmaster_20260514_051015.json ADDED
@@ -0,0 +1,104 @@
+ {
+ "NYUv2": {
+ "depth_affine_invariant": {
+ "delta1": 0.9410472330879364,
+ "rel": 0.07134324018003288
+ },
+ "inference_time": 0.2020905291268585
+ },
+ "KITTI": {
+ "depth_affine_invariant": {
+ "delta1": 0.7719076957347935,
+ "rel": 0.147030111985064
+ },
+ "inference_time": 0.16154385234680643
+ },
+ "ETH3D": {
+ "depth_affine_invariant": {
+ "delta1": 0.8731387665552715,
+ "rel": 0.09873372884795463
+ },
+ "inference_time": 0.386834826763506
+ },
+ "iBims-1": {
+ "boundary": {
+ "radius1_f1": 0.12171566219419683,
+ "radius2_f1": 0.19005310827006372,
+ "radius3_f1": 0.25777294203615136
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.9152990913391114,
+ "rel": 0.07639978838153183
+ },
+ "inference_time": 0.16892967700958253
+ },
+ "GSO": {
+ "depth_affine_invariant": {
+ "delta1": 0.9994433435421546,
+ "rel": 0.020532162512372276
+ },
+ "inference_time": 0.23299153939034176
+ },
+ "Sintel": {
+ "boundary": {
+ "radius1_f1": 0.18143639975690923,
+ "radius2_f1": 0.256254147811002,
+ "radius3_f1": 0.31712573788647364
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.6833189267216992,
+ "rel": 0.2247265458380089
+ },
+ "inference_time": 0.1221340639250619
+ },
+ "DDAD": {
+ "depth_affine_invariant": {
+ "delta1": 0.6454631027728319,
+ "rel": 0.21893314714357257
+ },
+ "inference_time": 0.21915656995773317
+ },
+ "DIODE": {
+ "depth_affine_invariant": {
+ "delta1": 0.8783703333614865,
+ "rel": 0.09743429349443057
+ },
+ "inference_time": 0.19047864803853512
+ },
+ "Spring": {
+ "boundary": {
+ "radius1_f1": 0.03722794386439897,
+ "radius2_f1": 0.06379663391575882,
+ "radius3_f1": 0.09305325565979894
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.6205925907492638,
+ "rel": 0.2726356995031238
+ },
+ "inference_time": 0.3132643463611603
+ },
+ "HAMMER": {
+ "boundary": {
+ "radius1_f1": 0.015410055047567622,
+ "radius2_f1": 0.046602117536647704,
+ "radius3_f1": 0.08473205124883053
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.9825814608604677,
+ "rel": 0.04832167550320587
+ },
+ "inference_time": 0.2550306458627024
+ },
+ "mean": {
+ "boundary": {
+ "radius1_f1": 0.08894751521576816,
+ "radius2_f1": 0.13917650188336805,
+ "radius3_f1": 0.18817099670781362
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.8311162544725018,
+ "rel": 0.12760903933892973
+ },
+ "inference_time": 0.22524546987822883
+ }
+ }
eval_output/fe2e_20260514_051015.json ADDED
@@ -0,0 +1,104 @@
+ {
+ "NYUv2": {
+ "depth_affine_invariant": {
+ "delta1": 0.9677707036153986,
+ "rel": 0.054971146056946904
+ },
+ "inference_time": 1.1307842341402619
+ },
+ "KITTI": {
+ "depth_affine_invariant": {
+ "delta1": 0.8177384435216342,
+ "rel": 0.12016221975187766
+ },
+ "inference_time": 1.1146951014278856
+ },
+ "ETH3D": {
+ "depth_affine_invariant": {
+ "delta1": 0.9126887560087679,
+ "rel": 0.07963626108251283
+ },
+ "inference_time": 0.7408855166204176
+ },
+ "iBims-1": {
+ "boundary": {
+ "radius1_f1": 0.15363897436436408,
+ "radius2_f1": 0.226419974321106,
+ "radius3_f1": 0.3003574887749092
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.9467064309120178,
+ "rel": 0.056087284786626695
+ },
+ "inference_time": 1.1054088115692138
+ },
+ "GSO": {
+ "depth_affine_invariant": {
+ "delta1": 0.9997914641227537,
+ "rel": 0.015539401213783156
+ },
+ "inference_time": 1.1094348037127153
+ },
+ "Sintel": {
+ "boundary": {
+ "radius1_f1": 0.28425223319177123,
+ "radius2_f1": 0.3649714318466705,
+ "radius3_f1": 0.4327620468719381
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.7383855917347469,
+ "rel": 0.18880830971608148
+ },
+ "inference_time": 1.101097762808764
+ },
+ "DDAD": {
+ "depth_affine_invariant": {
+ "delta1": 0.7156870959103108,
+ "rel": 0.18291499907523392
+ },
+ "inference_time": 0.6922513723373414
+ },
+ "DIODE": {
+ "depth_affine_invariant": {
+ "delta1": 0.9124656943399,
+ "rel": 0.07244570567798057
+ },
+ "inference_time": 1.0948944332998551
+ },
+ "Spring": {
+ "boundary": {
+ "radius1_f1": 0.06142362604878909,
+ "radius2_f1": 0.09597285666206465,
+ "radius3_f1": 0.13305035221522823
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.6554687293097377,
+ "rel": 0.24526748671010137
+ },
+ "inference_time": 0.7223747565746307
+ },
+ "HAMMER": {
+ "boundary": {
+ "radius1_f1": 0.0391168802405047,
+ "radius2_f1": 0.07804784666809074,
+ "radius3_f1": 0.12191997364270456
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.991651911504807,
+ "rel": 0.04615749980052632
+ },
+ "inference_time": 0.7110666305788101
+ },
+ "mean": {
+ "boundary": {
+ "radius1_f1": 0.13460792846135727,
+ "radius2_f1": 0.19135302737448298,
+ "radius3_f1": 0.24702246537619502
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.8658354820980076,
+ "rel": 0.10619903138716709
+ },
+ "inference_time": 0.9522893423069896
+ }
+ }
eval_output/lotus_20260514_051015.json ADDED
@@ -0,0 +1,104 @@
+ {
+ "NYUv2": {
+ "disparity_affine_invariant": {
+ "delta1": 0.9749006935215871,
+ "rel": 0.04913044609955373
+ },
+ "inference_time": 0.10546215229442725
+ },
+ "KITTI": {
+ "disparity_affine_invariant": {
+ "delta1": 0.9427100896286819,
+ "rel": 0.07087309694237595
+ },
+ "inference_time": 0.09383802143342655
+ },
+ "ETH3D": {
+ "disparity_affine_invariant": {
+ "delta1": 0.9558068188604805,
+ "rel": 0.06377930769549132
+ },
+ "inference_time": 0.2809025248766996
+ },
+ "iBims-1": {
+ "boundary": {
+ "radius1_f1": 0.1475422538588284,
+ "radius2_f1": 0.21174579607168767,
+ "radius3_f1": 0.2804507080025928
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.9657111984491348,
+ "rel": 0.04997633630875498
+ },
+ "inference_time": 0.09900510549545288
+ },
+ "GSO": {
+ "disparity_affine_invariant": {
+ "delta1": 0.9980975888307813,
+ "rel": 0.02760566871124998
+ },
+ "inference_time": 0.12673539050574442
+ },
+ "Sintel": {
+ "boundary": {
+ "radius1_f1": 0.19605536526576497,
+ "radius2_f1": 0.2704869553706472,
+ "radius3_f1": 0.3372580615703242
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.6580032162365343,
+ "rel": 0.25588860660721374
+ },
+ "inference_time": 0.07990925495785878
+ },
+ "DDAD": {
+ "disparity_affine_invariant": {
+ "delta1": 0.8151900426447392,
+ "rel": 0.1427371341045946
+ },
+ "inference_time": 0.18567005157470703
+ },
+ "DIODE": {
+ "disparity_affine_invariant": {
+ "delta1": 0.9299540326202194,
+ "rel": 0.07333235301369688
+ },
+ "inference_time": 0.11087898799881397
+ },
+ "Spring": {
+ "boundary": {
+ "radius1_f1": 0.05017785269721411,
+ "radius2_f1": 0.07653649999731367,
+ "radius3_f1": 0.10636493496044333
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.6364590004757047,
+ "rel": 0.29262385263852775
+ },
+ "inference_time": 0.17655703115463256
+ },
+ "HAMMER": {
+ "boundary": {
+ "radius1_f1": 0.05569910806618575,
+ "radius2_f1": 0.08328994911102691,
+ "radius3_f1": 0.11831711329281387
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.9880546555980559,
+ "rel": 0.039042790372886
+ },
+ "inference_time": 0.15112584421711583
+ },
+ "mean": {
+ "boundary": {
+ "radius1_f1": 0.1123686449719983,
+ "radius2_f1": 0.1605148001376689,
+ "radius3_f1": 0.21059770445654355
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.886488733686592,
+ "rel": 0.10649895924943449
+ },
+ "inference_time": 0.1410084364508879
+ }
+ }
eval_output/lotus_v1_20260514_120539.json ADDED
@@ -0,0 +1,104 @@
+ {
+ "NYUv2": {
+ "depth_affine_invariant": {
+ "delta1": 0.9733936901030555,
+ "rel": 0.044766311923936236
+ },
+ "inference_time": 0.1054389622597884
+ },
+ "KITTI": {
+ "depth_affine_invariant": {
+ "delta1": 0.9284938770386338,
+ "rel": 0.07409823287644086
+ },
+ "inference_time": 0.09428072231678875
+ },
+ "ETH3D": {
+ "depth_affine_invariant": {
+ "delta1": 0.9544449227365628,
+ "rel": 0.06034333982693631
+ },
+ "inference_time": 0.28604294024900195
+ },
+ "iBims-1": {
+ "boundary": {
+ "radius1_f1": 0.14307057104446227,
+ "radius2_f1": 0.20551033810971794,
+ "radius3_f1": 0.272971766481636
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.9676709264516831,
+ "rel": 0.043627550932578744
+ },
+ "inference_time": 0.09930837392807007
+ },
+ "GSO": {
+ "depth_affine_invariant": {
+ "delta1": 0.9974662307975362,
+ "rel": 0.02767030863516322
+ },
+ "inference_time": 0.12980866177568157
+ },
+ "Sintel": {
+ "boundary": {
+ "radius1_f1": 0.18026835551212994,
+ "radius2_f1": 0.2535372614583094,
+ "radius3_f1": 0.3207645095218857
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.7216849523715507,
+ "rel": 0.1991910549139786
+ },
+ "inference_time": 0.0799197256565094
+ },
+ "DDAD": {
+ "depth_affine_invariant": {
+ "delta1": 0.7946241353303194,
+ "rel": 0.14810539987683297
+ },
+ "inference_time": 0.18895407605171205
+ },
+ "DIODE": {
+ "depth_affine_invariant": {
+ "delta1": 0.9193339022420027,
+ "rel": 0.07296753192656273
+ },
+ "inference_time": 0.1117919224245539
+ },
+ "Spring": {
+ "boundary": {
+ "radius1_f1": 0.04726484672877697,
+ "radius2_f1": 0.07295104788954535,
+ "radius3_f1": 0.10267077996234343
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.6583185461126267,
+ "rel": 0.24082366870343686
+ },
+ "inference_time": 0.17776878333091736
+ },
+ "HAMMER": {
+ "boundary": {
+ "radius1_f1": 0.06471224833261445,
+ "radius2_f1": 0.09622541742791559,
+ "radius3_f1": 0.13533487191322333
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.9845967973432234,
+ "rel": 0.03617122369968603
+ },
+ "inference_time": 0.15023052553976735
+ },
+ "mean": {
+ "boundary": {
+ "radius1_f1": 0.10882900540449592,
+ "radius2_f1": 0.15705601622137205,
+ "radius3_f1": 0.2079354819697721
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.8900027980527195,
+ "rel": 0.09477646233155526
+ },
+ "inference_time": 0.14235446935327906
+ }
+ }
eval_output/marigold_20260514_051015.json ADDED
@@ -0,0 +1,104 @@
+ {
+ "NYUv2": {
+ "depth_affine_invariant": {
+ "delta1": 0.971532667266484,
+ "rel": 0.04832171756255836
+ },
+ "inference_time": 0.33676495829124337
+ },
+ "KITTI": {
+ "depth_affine_invariant": {
+ "delta1": 0.930904457814123,
+ "rel": 0.07571738764666523
+ },
+ "inference_time": 0.2440401326659267
+ },
+ "ETH3D": {
+ "depth_affine_invariant": {
+ "delta1": 0.9536551170806002,
+ "rel": 0.06209876361028148
+ },
+ "inference_time": 0.4632430207886885
+ },
+ "iBims-1": {
+ "boundary": {
+ "radius1_f1": 0.13460271172428273,
+ "radius2_f1": 0.20192436862226398,
+ "radius3_f1": 0.27046073919107794
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.9701894813776016,
+ "rel": 0.045621545240283015
+ },
+ "inference_time": 0.31143521070480346
+ },
+ "GSO": {
+ "depth_affine_invariant": {
+ "delta1": 0.9972509773032179,
+ "rel": 0.031107243107071202
+ },
+ "inference_time": 0.41820460986165164
+ },
+ "Sintel": {
+ "boundary": {
+ "radius1_f1": 0.1710469558910909,
+ "radius2_f1": 0.23344429123245902,
+ "radius3_f1": 0.2929726391371156
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.717429626326924,
+ "rel": 0.2006812098670639
+ },
+ "inference_time": 0.2158550817267339
+ },
+ "DDAD": {
+ "depth_affine_invariant": {
+ "delta1": 0.7889951583892107,
+ "rel": 0.15136717859841883
+ },
+ "inference_time": 0.27667188334465026
+ },
+ "DIODE": {
+ "depth_affine_invariant": {
+ "delta1": 0.9315281444931,
+ "rel": 0.06614937942629696
+ },
+ "inference_time": 0.33078888108912763
+ },
+ "Spring": {
+ "boundary": {
+ "radius1_f1": 0.04095831230637608,
+ "radius2_f1": 0.06423464202011386,
+ "radius3_f1": 0.08963440167647983
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.661320127248764,
+ "rel": 0.2447980516143143
+ },
+ "inference_time": 0.4016411719322205
+ },
+ "HAMMER": {
+ "boundary": {
+ "radius1_f1": 0.044406097398162196,
+ "radius2_f1": 0.08303353511900266,
+ "radius3_f1": 0.12434305877673818
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.9814413397542892,
+ "rel": 0.044319415512464704
+ },
+ "inference_time": 0.3298594471716112
+ },
+ "mean": {
+ "boundary": {
+ "radius1_f1": 0.09775351932997797,
+ "radius2_f1": 0.14565920924845988,
+ "radius3_f1": 0.1943527096953529
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.8904247097054316,
+ "rel": 0.0970181892185418
+ },
+ "inference_time": 0.3328504397576657
+ }
+ }
eval_output/ppd_20260514_051015.json ADDED
@@ -0,0 +1,104 @@
+ {
+ "NYUv2": {
+ "depth_affine_invariant": {
+ "delta1": 0.9812857898732573,
+ "rel": 0.04137342460225664
+ },
+ "inference_time": 0.4002604375191785
+ },
+ "KITTI": {
+ "depth_affine_invariant": {
+ "delta1": 0.8515733158661544,
+ "rel": 0.10252801829126258
+ },
+ "inference_time": 0.3943723410916475
+ },
+ "ETH3D": {
+ "depth_affine_invariant": {
+ "delta1": 0.9357321992647806,
+ "rel": 0.0652333986822368
+ },
+ "inference_time": 0.4787121302230768
+ },
+ "iBims-1": {
+ "boundary": {
+ "radius1_f1": 0.16781530802169173,
+ "radius2_f1": 0.24097226774641198,
+ "radius3_f1": 0.31577993435756385
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.9734847605228424,
+ "rel": 0.042361440965905786
+ },
+ "inference_time": 0.3966550397872925
+ },
+ "GSO": {
+ "depth_affine_invariant": {
+ "delta1": 0.9998904481674861,
+ "rel": 0.012752604447033944
+ },
+ "inference_time": 0.3909058658822069
+ },
+ "Sintel": {
+ "boundary": {
+ "radius1_f1": 0.3650577504387784,
+ "radius2_f1": 0.44092156355483897,
+ "radius3_f1": 0.5014479743589317
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.7851335345452329,
+ "rel": 0.15865576951817043
+ },
+ "inference_time": 0.3938561131183366
+ },
+ "DDAD": {
+ "depth_affine_invariant": {
+ "delta1": 0.7481259640455246,
+ "rel": 0.1668533209078014
+ },
+ "inference_time": 0.4226221845149994
+ },
+ "DIODE": {
+ "depth_affine_invariant": {
+ "delta1": 0.9305983974988513,
+ "rel": 0.05958025794972734
+ },
+ "inference_time": 0.396921829800043
+ },
+ "Spring": {
+ "boundary": {
+ "radius1_f1": 0.10583043540885449,
+ "radius2_f1": 0.14972916171857392,
+ "radius3_f1": 0.1958421938696795
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.7259635306224227,
+ "rel": 0.20493754935264588
+ },
+ "inference_time": 0.4484560537338257
+ },
+ "HAMMER": {
+ "boundary": {
+ "radius1_f1": 0.05855603994965662,
+ "radius2_f1": 0.099414525620475,
+ "radius3_f1": 0.14502044984553233
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.9924079025945356,
+ "rel": 0.03098387817401559
+ },
+ "inference_time": 0.4213014704181302
+ },
+ "mean": {
+ "boundary": {
+ "radius1_f1": 0.17431488345474533,
+ "radius2_f1": 0.23275937966007498,
+ "radius3_f1": 0.28952263810792683
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.8924195843001088,
+ "rel": 0.08852596628910563
+ },
+ "inference_time": 0.41440634660887365
+ }
+ }
eval_output/vggt_dpt_20260114_154929.json ADDED
@@ -0,0 +1,104 @@
+ {
+ "NYUv2": {
+ "disparity_affine_invariant": {
+ "delta1": 0.9806799777421747,
+ "rel": 0.0476413179724406
+ },
+ "inference_time": 0.47169270792503243
+ },
+ "KITTI": {
+ "disparity_affine_invariant": {
+ "delta1": 0.9156499569949929,
+ "rel": 0.07997325157026755
+ },
+ "inference_time": 0.7560202680482455
+ },
+ "ETH3D": {
+ "disparity_affine_invariant": {
+ "delta1": 0.9564565215998284,
+ "rel": 0.06715929369946802
+ },
+ "inference_time": 0.5564538051378359
+ },
+ "iBims-1": {
+ "boundary": {
+ "radius1_f1": 0.0005542017921009516,
+ "radius2_f1": 0.003332474394986895,
+ "radius3_f1": 0.010182788110247911
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.9633645766973495,
+ "rel": 0.05318130692932755
+ },
+ "inference_time": 0.4688900685310364
+ },
+ "GSO": {
+ "disparity_affine_invariant": {
+ "delta1": 0.9997842591943092,
+ "rel": 0.016423891990635434
+ },
+ "inference_time": 0.3582769579100377
+ },
+ "Sintel": {
+ "boundary": {
+ "radius1_f1": 0.007949931417903353,
+ "radius2_f1": 0.03125265156925875,
+ "radius3_f1": 0.06713876517006416
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.6578696623647627,
+ "rel": 0.2656546090922101
+ },
+ "inference_time": 0.7516956127675852
+ },
+ "DDAD": {
+ "disparity_affine_invariant": {
+ "delta1": 0.7671807115674019,
+ "rel": 0.16585037663578986
+ },
+ "inference_time": 0.7553685846328735
+ },
+ "DIODE": {
+ "disparity_affine_invariant": {
+ "delta1": 0.9265085798189768,
+ "rel": 0.07706502821156332
+ },
+ "inference_time": 0.46985552870655184
+ },
+ "Spring": {
+ "boundary": {
+ "radius1_f1": 0.0011245331893366326,
+ "radius2_f1": 0.002786080915791482,
+ "radius3_f1": 0.006333296643748226
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.7057920791767538,
+ "rel": 0.2851711636632681
+ },
+ "inference_time": 0.6627011473178863
+ },
+ "HAMMER": {
+ "boundary": {
+ "radius1_f1": 1.8473324344170764e-05,
+ "radius2_f1": 5.771559171829026e-05,
+ "radius3_f1": 0.0004300530470218907
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.935330055375253,
+ "rel": 0.08727417080152419
+ },
+ "inference_time": 0.7497821134136569
+ },
+ "mean": {
+ "boundary": {
+ "radius1_f1": 0.002411784930921277,
+ "radius2_f1": 0.009357230617938854,
+ "radius3_f1": 0.021021225742770544
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.8808616380531804,
+ "rel": 0.11453944105664948
+ },
+ "inference_time": 0.6000736794390742
+ }
+ }
eval_output/vggt_dpt_metric_20260115_225801.json ADDED
@@ -0,0 +1,169 @@
+ {
+ "NYUv2": {
+ "depth_affine_invariant": {
+ "delta1": 0.9801867154942375,
+ "rel": 0.04453852714327042
+ },
+ "depth_metric": {
+ "delta1": 0.005133989981584815,
+ "rel": 0.6327294991308943
+ },
+ "depth_scale_invariant": {
+ "delta1": 0.9760367622433817,
+ "rel": 0.05197374952461468
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.9806800083647446,
+ "rel": 0.04764372832355423
+ },
+ "inference_time": 0.4678649461232923
+ },
+ "KITTI": {
+ "depth_affine_invariant": {
+ "delta1": 0.9109959826330466,
+ "rel": 0.07803288246419259
+ },
+ "depth_metric": {
+ "delta1": 0.0,
+ "rel": 0.9325332351805974
+ },
+ "depth_scale_invariant": {
+ "delta1": 0.9064337772094399,
+ "rel": 0.08303775403511487
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.9156189503296752,
+ "rel": 0.07998403359741613
+ },
+ "inference_time": 0.75479214469348
+ },
+ "ETH3D": {
+ "depth_affine_invariant": {
+ "delta1": 0.957338886639095,
+ "rel": 0.061563854038075336
+ },
+ "depth_metric": {
+ "delta1": 4.5257584571620884e-05,
+ "rel": 0.7910198320507478
+ },
+ "depth_scale_invariant": {
+ "delta1": 0.9399738760771731,
+ "rel": 0.07615267370913653
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.9564208270826003,
+ "rel": 0.06718054266096737
+ },
+ "inference_time": 0.552636383388536
+ },
+ "iBims-1": {
+ "boundary": {
+ "radius1_f1": 0.0003677637735036598,
+ "radius2_f1": 0.002355446255774016,
+ "radius3_f1": 0.0074868454392673645
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.9665557831525803,
+ "rel": 0.04901858689263463
+ },
+ "depth_metric": {
+ "delta1": 0.009229352576726342,
+ "rel": 0.669943470954895
+ },
+ "depth_scale_invariant": {
+ "delta1": 0.955943250656128,
+ "rel": 0.0585870449244976
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.9633621990680694,
+ "rel": 0.05318264343310147
+ },
+ "inference_time": 0.4680902934074402
+ },
+ "DDAD": {
+ "depth_affine_invariant": {
+ "delta1": 0.7552772147655487,
+ "rel": 0.17108516378700733
+ },
+ "depth_metric": {
+ "delta1": 1.2105843257813832e-05,
+ "rel": 0.9307275167703628
+ },
+ "depth_scale_invariant": {
+ "delta1": 0.7081841564327478,
+ "rel": 0.189800496019423
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.7671690853908658,
+ "rel": 0.1658587598465383
+ },
+ "inference_time": 0.754448080778122
+ },
+ "DIODE": {
+ "depth_affine_invariant": {
+ "delta1": 0.922776798175312,
+ "rel": 0.07412641178381699
+ },
+ "depth_metric": {
+ "delta1": 0.01297228950502193,
+ "rel": 0.8070648021358229
+ },
+ "depth_scale_invariant": {
+ "delta1": 0.8836784098634373,
+ "rel": 0.09425512023134841
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.9265068600587624,
+ "rel": 0.07706254333663544
+ },
+ "inference_time": 0.4683012115043116
+ },
+ "HAMMER": {
+ "boundary": {
+ "radius1_f1": 1.7918760934802236e-05,
+ "radius2_f1": 6.109787954819177e-05,
+ "radius3_f1": 0.0003014896747057525
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.9251455970733397,
+ "rel": 0.08684614974885217
+ },
+ "depth_metric": {
+ "delta1": 0.4813746749437072,
+ "rel": 0.31272971090289853
+ },
+ "depth_scale_invariant": {
+ "delta1": 0.8988004839035773,
+ "rel": 0.10022151506716205
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.9353428303810858,
+ "rel": 0.08727108752535236
+ },
+ "inference_time": 0.7437116389120779
+ },
+ "mean": {
+ "boundary": {
+ "radius1_f1": 0.000192841267219231,
+ "radius2_f1": 0.0012082720676611038,
+ "radius3_f1": 0.0038941675569865585
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.9168967111333085,
+ "rel": 0.08074451083683563
+ },
+ "depth_metric": {
+ "delta1": 0.07268109577640995,
+ "rel": 0.7252497238751741
+ },
+ "depth_scale_invariant": {
+ "delta1": 0.8955786737694121,
+ "rel": 0.0934326219301853
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.9207286800965434,
+ "rel": 0.0825976198176522
+ },
+ "inference_time": 0.6014063855438944
+ }
+ }
eval_output/vggt_sdt_20260114_154947.json ADDED
@@ -0,0 +1,104 @@
+ {
+ "NYUv2": {
+ "disparity_affine_invariant": {
+ "delta1": 0.9817055438272085,
+ "rel": 0.04900401620769628
+ },
+ "inference_time": 0.12931323671195122
+ },
+ "KITTI": {
+ "disparity_affine_invariant": {
+ "delta1": 0.8170368472811269,
+ "rel": 0.13179129633789688
+ },
+ "inference_time": 0.17483019316854653
+ },
+ "ETH3D": {
+ "disparity_affine_invariant": {
+ "delta1": 0.9124035924049464,
+ "rel": 0.09146354772716694
+ },
+ "inference_time": 0.30268856239738967
+ },
+ "iBims-1": {
+ "boundary": {
+ "radius1_f1": 0.009337046273421435,
+ "radius2_f1": 0.026981654716551454,
+ "radius3_f1": 0.05216983346152288
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.9522965063154697,
+ "rel": 0.062013580692000685
+ },
+ "inference_time": 0.12733901500701905
+ },
+ "GSO": {
+ "disparity_affine_invariant": {
+ "delta1": 0.9997740259448301,
+ "rel": 0.01676916247866686
+ },
+ "inference_time": 0.10789706822737907
+ },
+ "Sintel": {
+ "boundary": {
+ "radius1_f1": 0.04033814826066349,
+ "radius2_f1": 0.07757145020648981,
+ "radius3_f1": 0.1201923520131971
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.6501715715954626,
+ "rel": 0.2835116838455144
+ },
+ "inference_time": 0.17543501244451767
+ },
+ "DDAD": {
+ "disparity_affine_invariant": {
+ "delta1": 0.44051058538258075,
+ "rel": 0.3715273478627205
+ },
+ "inference_time": 0.2224211790561676
+ },
+ "DIODE": {
+ "disparity_affine_invariant": {
+ "delta1": 0.8814418018274928,
+ "rel": 0.10526727002848486
+ },
+ "inference_time": 0.13551315661379645
+ },
+ "Spring": {
+ "boundary": {
+ "radius1_f1": 0.005658537997772949,
+ "radius2_f1": 0.013620454949140175,
+ "radius3_f1": 0.023404030065890324
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.5216043787677744,
+ "rel": 0.3801015362627804
+ },
+ "inference_time": 0.2178178503513336
+ },
+ "HAMMER": {
+ "boundary": {
+ "radius1_f1": 0.00037577785167723226,
+ "radius2_f1": 0.002590506793077842,
+ "radius3_f1": 0.008014710682494383
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.9146521884779776,
+ "rel": 0.09994742711224863
+ },
+ "inference_time": 0.20828244516926428
+ },
+ "mean": {
+ "boundary": {
+ "radius1_f1": 0.013927377595883776,
+ "radius2_f1": 0.03019101666631482,
+ "radius3_f1": 0.05094523155577617
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.807159704182487,
+ "rel": 0.15913968685551766
+ },
+ "inference_time": 0.1801537719147365
+ }
+ }
eval_output/vggt_sdt_metric_20260115_235001.json ADDED
@@ -0,0 +1,169 @@
+ {
+ "NYUv2": {
+ "depth_affine_invariant": {
+ "delta1": 0.9808023873636846,
+ "rel": 0.046157878837818765
+ },
+ "depth_metric": {
+ "delta1": 0.006306150596156737,
+ "rel": 0.6252215197850987
+ },
+ "depth_scale_invariant": {
+ "delta1": 0.9769755343960695,
+ "rel": 0.052342915129975985
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.9815074402257937,
+ "rel": 0.049038132305174426
+ },
+ "inference_time": 0.5853344825429654
+ },
+ "KITTI": {
+ "depth_affine_invariant": {
+ "delta1": 0.9153403988553702,
+ "rel": 0.08019378747322534
+ },
+ "depth_metric": {
+ "delta1": 0.0,
+ "rel": 0.9333650501776327
+ },
+ "depth_scale_invariant": {
+ "delta1": 0.9017889578741021,
+ "rel": 0.08654069212305308
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.8180910662456524,
+ "rel": 0.13129532042365133
+ },
+ "inference_time": 0.9374275920581232
+ },
+ "ETH3D": {
+ "depth_affine_invariant": {
+ "delta1": 0.9623385925781359,
+ "rel": 0.06296132472733504
+ },
+ "depth_metric": {
+ "delta1": 9.598338831774817e-05,
+ "rel": 0.7884494281550336
+ },
+ "depth_scale_invariant": {
+ "delta1": 0.9449081049682285,
+ "rel": 0.0764451494382998
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.9126489337221904,
+ "rel": 0.09150957926032892
+ },
+ "inference_time": 0.6859911155070503
+ },
+ "iBims-1": {
+ "boundary": {
+ "radius1_f1": 0.0077513542684103775,
+ "radius2_f1": 0.02277467589519884,
+ "radius3_f1": 0.04447932022293009
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.9679432511329651,
+ "rel": 0.05319356375839561
+ },
+ "depth_metric": {
+ "delta1": 0.02450272503105907,
+ "rel": 0.654370816797018
+ },
+ "depth_scale_invariant": {
+ "delta1": 0.9498512089252472,
+ "rel": 0.0657089563459158
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.9521907129883767,
+ "rel": 0.06197955245617777
+ },
+ "inference_time": 0.5828405976295471
+ },
+ "DDAD": {
+ "depth_affine_invariant": {
+ "delta1": 0.7503258799090982,
+ "rel": 0.18454354463145137
+ },
+ "depth_metric": {
+ "delta1": 0.00012723483115405543,
+ "rel": 0.927197467148304
+ },
+ "depth_scale_invariant": {
+ "delta1": 0.6921712415292859,
+ "rel": 0.20418074756488205
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.4409225285537541,
+ "rel": 0.3713432611823082
+ },
+ "inference_time": 0.9352728707790374
+ },
+ "DIODE": {
+ "depth_affine_invariant": {
+ "delta1": 0.9360030998805617,
+ "rel": 0.0741081772282739
+ },
+ "depth_metric": {
+ "delta1": 0.014003635961455072,
+ "rel": 0.8007672338199291
+ },
+ "depth_scale_invariant": {
+ "delta1": 0.8891483539311957,
+ "rel": 0.09678002011064735
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.8812503358530395,
+ "rel": 0.10531159690985922
+ },
+ "inference_time": 0.5874425545431451
+ },
+ "HAMMER": {
+ "boundary": {
+ "radius1_f1": 0.0007055392913307495,
+ "radius2_f1": 0.003957263036205352,
+ "radius3_f1": 0.010570287316098716
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.9112414946479183,
+ "rel": 0.09712361636421372
+ },
+ "depth_metric": {
+ "delta1": 0.4833832547895312,
+ "rel": 0.32821473934477374
+ },
+ "depth_scale_invariant": {
+ "delta1": 0.8822336696809338,
+ "rel": 0.11034731717359635
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.9148368316696536,
+ "rel": 0.09986887558333335
+ },
+ "inference_time": 0.9270938959429341
+ },
+ "mean": {
+ "boundary": {
+ "radius1_f1": 0.004228446779870563,
+ "radius2_f1": 0.013365969465702097,
+ "radius3_f1": 0.0275248037695144
+ },
+ "depth_affine_invariant": {
+ "delta1": 0.9177135863382478,
+ "rel": 0.08546884186010197
+ },
+ "depth_metric": {
+ "delta1": 0.07548842637109628,
+ "rel": 0.7225123221753985
+ },
+ "depth_scale_invariant": {
+ "delta1": 0.8910110101864376,
+ "rel": 0.09890654255519578
+ },
+ "disparity_affine_invariant": {
+ "delta1": 0.8430639784654943,
+ "rel": 0.13004947401726188
+ },
+ "inference_time": 0.748771872714686
+ }
+ }
eval_scripts/eval_all_slurm.sh ADDED
@@ -0,0 +1,149 @@
+ #!/bin/bash
+ #SBATCH --job-name=eval-all
+ #SBATCH --output=/home/ywan0794/MoGe/eval_all_%j.log
+ #SBATCH --error=/home/ywan0794/MoGe/eval_all_%j.log
+ #SBATCH --open-mode=append
+ #SBATCH --ntasks=1
+ #SBATCH --cpus-per-task=4
+ #SBATCH --gres=gpu:H100:1
+ #SBATCH --time=0-12:00:00
+ #SBATCH --mem=80G
+ #SBATCH --nodelist=erinyes
+ # Single sbatch — production run for 7 models on all 10 MoGe benchmarks, serial,
+ # one H100 held the whole time. Failures don't abort; we log & continue.
+ # Model order: cheap → expensive (FE2E last so it doesn't block others if it crashes).
+
+ export PYTHONUNBUFFERED=1
+ cd /home/ywan0794/MoGe
+
+ source /home/ywan0794/miniconda3/etc/profile.d/conda.sh
+
+ TIMESTAMP=$(date +"%Y%m%d_%H%M%S")
+ CONFIG=/home/ywan0794/MoGe/configs/eval/all_benchmarks.json
+ CONFIG_FE2E=/home/ywan0794/MoGe/configs/eval/fe2e_all_benchmarks.json
+ OUT_DIR=eval_output
+ mkdir -p $OUT_DIR
+
+ SUMMARY=$OUT_DIR/_eval_all_${TIMESTAMP}.summary.txt
+ : > $SUMMARY
+
+ echo "============================================"
+ echo "eval-all started at $(date)"
+ echo "Config (main): $CONFIG"
+ echo "Config (fe2e): $CONFIG_FE2E"
+ echo "TIMESTAMP: $TIMESTAMP"
+ echo "Summary file: $SUMMARY"
+ echo "============================================"
+ nvidia-smi
+
+ run_model() {
+ # Usage: run_model <label> <env> <config> <python invocation ...>
+ local label=$1 env=$2 cfg=$3
+ shift 3
+ echo
+ echo "============================================"
+ echo "[$label] starting at $(date) (conda env: $env)"
+ echo "============================================"
+ conda deactivate 2>/dev/null || true
+ conda activate $env
+ echo "Active env: $CONDA_DEFAULT_ENV"
+ export PYTHONPATH=$PYTHONPATH:$(pwd)
+ python -c "import torch; print('CUDA:', torch.cuda.is_available(), torch.cuda.get_device_name(0) if torch.cuda.is_available() else '')"
+
+ local OUTFILE=$OUT_DIR/${label}_${TIMESTAMP}.json
+
+ if "$@" \
+ --baseline baselines/${label}.py \
+ --config $cfg \
+ --output $OUTFILE; then
+ if [ -f $OUTFILE ]; then
+ local SIZE=$(stat -c%s $OUTFILE 2>/dev/null)
+ echo "[OK] $label -> $OUTFILE (${SIZE} bytes) at $(date)" | tee -a $SUMMARY
+ else
+ echo "[NO-OUTPUT] $label (exited 0 but no JSON) at $(date)" | tee -a $SUMMARY
+ fi
+ else
+ rc=$?
+ echo "[FAIL rc=$rc] $label at $(date)" | tee -a $SUMMARY
+ fi
+ }
+
+ # ============================================
+ # 1) DA3-Mono — SKIPPED, already done in eval_output/da3_mono_20260514_010406.json
+ # ============================================
+ # REPO=/home/ywan0794/EvalMDE/Depth-Anything-3
+ # HF_ID=depth-anything/DA3MONO-LARGE
+ # run_model da3_mono da3 $CONFIG \
+ # python moge/scripts/eval_baseline.py \
+ # --repo $REPO --hf_id $HF_ID
+
+ # ============================================
+ # 2) Depth Pro — SKIPPED, already done in eval_output/depth_pro_20260514_010406.json
+ # ============================================
+ # REPO=/home/ywan0794/EvalMDE/ml-depth-pro
+ # CKPT=$REPO/checkpoints/depth_pro.pt
+ # run_model depth_pro depth-pro $CONFIG \
+ # python moge/scripts/eval_baseline.py \
+ # --repo $REPO --checkpoint $CKPT --precision fp32
+
+ # ============================================
+ # 3) Marigold v1.1 (env: marigold) — paper-canonical via
+ # `script/depth/eval/11_infer_nyu.sh`: v1-1 + denoise=1 + ensemble=10 + seed=1234.
+ # v1-1 retrained to match v1-0's denoise=50 quality at denoise=1.
+ # ============================================
+ REPO=/home/ywan0794/EvalMDE/Marigold
+ CHECKPOINT=prs-eth/marigold-depth-v1-1
+ run_model marigold marigold $CONFIG \
+ python moge/scripts/eval_baseline.py \
+ --repo $REPO --checkpoint $CHECKPOINT \
+ --denoise_steps 4 --ensemble_size 1
+
+ # ============================================
+ # 4) Lotus (env: lotus) — paper-canonical eval.sh:
+ # generative v2-1-disparity + half_precision + seed=42.
+ # ============================================
+ REPO=/home/ywan0794/EvalMDE/Lotus
+ PRETRAINED=jingheya/lotus-depth-g-v2-1-disparity
+ run_model lotus lotus $CONFIG \
+ python moge/scripts/eval_baseline.py \
+ --repo $REPO --pretrained $PRETRAINED --mode generation \
+ --task_name depth --disparity --timestep 999 --fp16 --seed 42
+
+ # ============================================
+ # 5) DepthMaster (env: depthmaster)
+ # ============================================
+ REPO=/home/ywan0794/EvalMDE/DepthMaster
+ CKPT=$REPO/ckpt/eval
+ run_model depthmaster depthmaster $CONFIG \
+ python moge/scripts/eval_baseline.py \
+ --repo $REPO --checkpoint $CKPT --processing_res 768
+
+ # ============================================
+ # 6) PPD (env: ppd) — needs DA2 vitl semantics
+ # ============================================
+ REPO=/home/ywan0794/EvalMDE/Pixel-Perfect-Depth
+ # Paper-canonical eval.yaml: semantics=MoGe2, ppd_moge.pth, sampling_steps=4
+ run_model ppd ppd $CONFIG \
+ python moge/scripts/eval_baseline.py \
+ --repo $REPO --semantics_model MoGe2 \
+ --semantics_pth checkpoints/moge2.pt \
+ --model_pth checkpoints/ppd_moge.pth --sampling_steps 4
+
+ # ============================================
+ # 7) FE2E (env: fe2e) — slowest, last
+ # ============================================
+ REPO=/home/ywan0794/EvalMDE/FE2E
+ MODEL_PATH=$REPO/pretrain
+ LORA_PATH=$REPO/lora/LDRN.safetensors
+ run_model fe2e fe2e $CONFIG_FE2E \
+ python moge/scripts/eval_baseline.py \
+ --repo $REPO --model_path $MODEL_PATH --lora_path $LORA_PATH \
+ --prompt_type empty --single_denoise --cfg_guidance 6.0 --size_level 768
+
+ # ============================================
+ echo
+ echo "============================================"
+ echo "eval-all finished at $(date)"
+ echo "============================================"
+ echo "=== Summary ==="
+ cat $SUMMARY
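All of the result files committed above share one schema: benchmark name → metric group (`boundary`, `depth_affine_invariant`, `depth_metric`, ...) → `delta1`/`rel` values, plus a scalar `inference_time`. As a hedged sketch (not part of the repo's tooling), a few lines of Python can tabulate any of these JSONs for a quick side-by-side look; the `summarize` helper and the sample dict below are illustrative, not code from this commit:

```python
import json

def summarize(results: dict) -> list[str]:
    """Flatten a MoGe-style eval JSON (benchmark -> metric group -> values)
    into readable rows, one per (benchmark, metric group) pair."""
    rows = []
    for bench, metrics in results.items():
        for group, values in metrics.items():
            # skip scalar entries such as "inference_time"
            if isinstance(values, dict) and "delta1" in values:
                rows.append(f"{bench:10s} {group:28s} "
                            f"delta1={values['delta1']:.4f} rel={values['rel']:.4f}")
    return rows

# Example with the schema used by the files in this commit; to read a real
# file: summarize(json.load(open("eval_output/<name>.json")))
sample = {
    "NYUv2": {
        "disparity_affine_invariant": {"delta1": 0.9807, "rel": 0.0476},
        "inference_time": 0.47,
    }
}
print("\n".join(summarize(sample)))
```

The `isinstance` check is what lets the same reader handle both the 104-line (single metric group) and 169-line (multi-group metric variants) files without special-casing.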
eval_scripts/eval_da2_dpt_slurm.sh ADDED
@@ -0,0 +1,85 @@
+ #!/bin/bash
+ #SBATCH --job-name=moge-da2-dpt
+ #SBATCH --output=moge_da2_dpt_%j.log
+ #SBATCH --error=moge_da2_dpt_%j.log
+ #SBATCH --open-mode=append
+ #SBATCH --ntasks=1
+ #SBATCH --cpus-per-task=4
+ #SBATCH --gres=gpu:1
+ #SBATCH --time=1-00:00:00
+ #SBATCH --mem=40G
+ #SBATCH --nodelist=hades
+
+ # Disable Python output buffering
+ export PYTHONUNBUFFERED=1
+
+ # Enter the MoGe directory
+ cd /home/ywan0794/MoGe
+
+ # Initialize conda
+ source /home/ywan0794/miniconda3/etc/profile.d/conda.sh
+
+ # Activate the environment
+ conda activate da2
+
+ # Set CUDA environment variables
+ export CUDA_HOME=$CONDA_PREFIX
+ export PATH=$CUDA_HOME/bin:$PATH
+ export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
+ export LD_LIBRARY_PATH=$CONDA_PREFIX/lib/python3.10/site-packages/torch/lib:$LD_LIBRARY_PATH
+
+ # Set the Python path
+ export PYTHONPATH=$PYTHONPATH:$(pwd)
+
+ # Confirm the environment
+ echo "============================================"
+ echo "Activated conda environment: $CONDA_DEFAULT_ENV"
+ echo "CUDA_HOME: $CUDA_HOME"
+ echo "============================================"
+
+ # Show GPU info
+ echo "=== GPU Info ==="
+ nvidia-smi
+
+ # Check that CUDA is available
+ python -c "import torch; print('CUDA available:', torch.cuda.is_available()); print('GPU count:', torch.cuda.device_count()); print('GPU name:', torch.cuda.get_device_name(0))"
+
+ # ============================================
+ # Evaluation configuration
+ # ============================================
+ TIMESTAMP=$(date +"%Y%m%d_%H%M%S")
+
+ # Model configuration
+ ENCODER=vitb
+ DECODER=dpt
+
+ # Checkpoint path
+ CKPT="/home/ywan0794/Depth-Anything-V2/training/exp/dpt_vitb_both/epoch_007.pth"
+
+ # Output directory
+ OUT_DIR="eval_output"
+ mkdir -p $OUT_DIR
+
+ echo "============================================"
+ echo "Starting MoGe Evaluation for DA2+DPT at $(date)"
+ echo "Encoder: $ENCODER"
+ echo "Decoder: $DECODER"
+ echo "Checkpoint: $CKPT"
+ echo "Output: ${OUT_DIR}/da2_dpt_${ENCODER}_${TIMESTAMP}.json"
+ echo "============================================"
+
+ # ============================================
+ # Run the evaluation
+ # ============================================
+ python moge/scripts/eval_baseline.py \
+ --baseline baselines/da2_custom.py \
+ --config /home/ywan0794/datasets/eval/moge_style_eval/all_benchmarks.json \
+ --output ${OUT_DIR}/da2_dpt_${ENCODER}_${TIMESTAMP}.json \
+ --repo /home/ywan0794/Depth-Anything-V2 \
+ --checkpoint "$CKPT" \
+ --encoder $ENCODER \
+ --decoder $DECODER
+
+ echo "============================================"
+ echo "Evaluation completed at $(date)"
+ echo "============================================"
eval_scripts/eval_da2_sdt_slurm.sh ADDED
@@ -0,0 +1,85 @@
+ #!/bin/bash
+ #SBATCH --job-name=moge-da2-sdt
+ #SBATCH --output=moge_da2_sdt_%j.log
+ #SBATCH --error=moge_da2_sdt_%j.log
+ #SBATCH --open-mode=append
+ #SBATCH --ntasks=1
+ #SBATCH --cpus-per-task=4
+ #SBATCH --gres=gpu:1
+ #SBATCH --time=1-00:00:00
+ #SBATCH --mem=40G
+ #SBATCH --nodelist=erinyes
+
+ # Disable Python output buffering
+ export PYTHONUNBUFFERED=1
+
+ # Enter the MoGe directory
+ cd /home/ywan0794/MoGe
+
+ # Initialize conda
+ source /home/ywan0794/miniconda3/etc/profile.d/conda.sh
+
+ # Activate the environment
+ conda activate da2
+
+ # Set CUDA environment variables
+ export CUDA_HOME=$CONDA_PREFIX
+ export PATH=$CUDA_HOME/bin:$PATH
+ export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
+ export LD_LIBRARY_PATH=$CONDA_PREFIX/lib/python3.10/site-packages/torch/lib:$LD_LIBRARY_PATH
+
+ # Set the Python path
+ export PYTHONPATH=$PYTHONPATH:$(pwd)
+
+ # Confirm the environment
+ echo "============================================"
+ echo "Activated conda environment: $CONDA_DEFAULT_ENV"
+ echo "CUDA_HOME: $CUDA_HOME"
+ echo "============================================"
+
+ # Show GPU info
+ echo "=== GPU Info ==="
+ nvidia-smi
+
+ # Check that CUDA is available
+ python -c "import torch; print('CUDA available:', torch.cuda.is_available()); print('GPU count:', torch.cuda.device_count()); print('GPU name:', torch.cuda.get_device_name(0))"
+
+ # ============================================
+ # Evaluation configuration
+ # ============================================
+ TIMESTAMP=$(date +"%Y%m%d_%H%M%S")
+
+ # Model configuration
+ ENCODER=vitb
+ DECODER=sdt
+
+ # Checkpoint path
+ CKPT="/home/ywan0794/Depth-Anything-V2/training/exp/sdt_vitb_both/epoch_008.pth"
+
+ # Output directory
+ OUT_DIR="eval_output"
+ mkdir -p $OUT_DIR
+
+ echo "============================================"
+ echo "Starting MoGe Evaluation for DA2+SDT at $(date)"
+ echo "Encoder: $ENCODER"
+ echo "Decoder: $DECODER"
+ echo "Checkpoint: $CKPT"
+ echo "Output: ${OUT_DIR}/da2_sdt_${ENCODER}_${TIMESTAMP}.json"
+ echo "============================================"
+
+ # ============================================
+ # Run the evaluation
+ # ============================================
+ python moge/scripts/eval_baseline.py \
+ --baseline baselines/da2_custom.py \
+ --config /home/ywan0794/datasets/eval/moge_style_eval/all_benchmarks.json \
+ --output ${OUT_DIR}/da2_sdt_${ENCODER}_${TIMESTAMP}.json \
+ --repo /home/ywan0794/Depth-Anything-V2 \
+ --checkpoint "$CKPT" \
+ --encoder $ENCODER \
+ --decoder $DECODER
+
+ echo "============================================"
+ echo "Evaluation completed at $(date)"
+ echo "============================================"
eval_scripts/eval_da2_slurm.sh ADDED
@@ -0,0 +1,77 @@
+ #!/bin/bash
+ #SBATCH --job-name=moge-eval-da2
+ #SBATCH --output=moge_eval_da2_%j.log
+ #SBATCH --error=moge_eval_da2_%j.log
+ #SBATCH --open-mode=append
+ #SBATCH --ntasks=1
+ #SBATCH --cpus-per-task=4
+ #SBATCH --gres=gpu:1
+ #SBATCH --time=1-00:00:00
+ #SBATCH --mem=40G
+ #SBATCH --nodelist=hades
+
+ # Disable Python output buffering
+ export PYTHONUNBUFFERED=1
+
+ # Enter the MoGe directory
+ cd /home/ywan0794/MoGe
+
+ # Initialize conda
+ source /home/ywan0794/miniconda3/etc/profile.d/conda.sh
+
+ # Activate the environment
+ conda activate da2
+
+ # Set CUDA environment variables
+ export CUDA_HOME=$CONDA_PREFIX
+ export PATH=$CUDA_HOME/bin:$PATH
+ export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
+ export LD_LIBRARY_PATH=$CONDA_PREFIX/lib/python3.10/site-packages/torch/lib:$LD_LIBRARY_PATH
+
+ # Set the Python path
+ export PYTHONPATH=$PYTHONPATH:$(pwd)
+
+ # Confirm the environment
+ echo "============================================"
+ echo "Activated conda environment: $CONDA_DEFAULT_ENV"
+ echo "CUDA_HOME: $CUDA_HOME"
+ echo "============================================"
+
+ # Show GPU info
+ echo "=== GPU Info ==="
+ nvidia-smi
+
+ # Check that CUDA is available
+ python -c "import torch; print('CUDA available:', torch.cuda.is_available()); print('GPU count:', torch.cuda.device_count()); print('GPU name:', torch.cuda.get_device_name(0))"
+
+ # ============================================
+ # Evaluation configuration
+ # ============================================
+ TIMESTAMP=$(date +"%Y%m%d_%H%M%S")
+
+ # Model configuration
+ BACKBONE=vitl # vits, vitb, vitl
+
+ # Output directory
+ OUT_DIR="eval_output"
+ mkdir -p $OUT_DIR
+
+ echo "============================================"
+ echo "Starting MoGe Evaluation for DA2 at $(date)"
+ echo "Backbone: $BACKBONE"
+ echo "Output: ${OUT_DIR}/da2_${BACKBONE}_${TIMESTAMP}.json"
+ echo "============================================"
+
+ # ============================================
+ # Run the evaluation
+ # ============================================
+ python moge/scripts/eval_baseline.py \
+ --baseline baselines/da_v2.py \
+ --config /home/ywan0794/datasets/eval/moge_style_eval/all_benchmarks.json \
+ --output ${OUT_DIR}/da2_${BACKBONE}_${TIMESTAMP}.json \
+ --repo /home/ywan0794/Depth-Anything-V2 \
+ --backbone $BACKBONE
+
+ echo "============================================"
+ echo "Evaluation completed at $(date)"
+ echo "============================================"
eval_scripts/eval_da3_dpt_slurm.sh ADDED
@@ -0,0 +1,82 @@
+ #!/bin/bash
+ #SBATCH --job-name=moge-da3-dpt
+ #SBATCH --output=moge_da3_dpt_%j.log
+ #SBATCH --error=moge_da3_dpt_%j.log
+ #SBATCH --open-mode=append
+ #SBATCH --ntasks=1
+ #SBATCH --cpus-per-task=4
+ #SBATCH --gres=gpu:1
+ #SBATCH --time=1-00:00:00
+ #SBATCH --mem=60G
+ #SBATCH --nodelist=hades
+
+ # Disable Python output buffering
+ export PYTHONUNBUFFERED=1
+
+ # Enter the MoGe directory
+ cd /home/ywan0794/MoGe
+
+ # Initialize conda
+ source /home/ywan0794/miniconda3/etc/profile.d/conda.sh
+
+ # Activate the environment
+ conda activate da3
+
+ # Set CUDA environment variables
+ export CUDA_HOME=$CONDA_PREFIX
+ export PATH=$CUDA_HOME/bin:$PATH
+ export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
+ export LD_LIBRARY_PATH=$CONDA_PREFIX/lib/python3.10/site-packages/torch/lib:$LD_LIBRARY_PATH
+
+ # Set the Python path
+ export PYTHONPATH=$PYTHONPATH:$(pwd)
+
+ # Confirm the environment
+ echo "============================================"
+ echo "Activated conda environment: $CONDA_DEFAULT_ENV"
+ echo "CUDA_HOME: $CUDA_HOME"
+ echo "============================================"
+
+ # Show GPU info
+ echo "=== GPU Info ==="
+ nvidia-smi
+
+ # Check that CUDA is available
+ python -c "import torch; print('CUDA available:', torch.cuda.is_available()); print('GPU count:', torch.cuda.device_count()); print('GPU name:', torch.cuda.get_device_name(0))"
+
+ # ============================================
+ # Evaluation configuration
+ # ============================================
+ TIMESTAMP=$(date +"%Y%m%d_%H%M%S")
+
+ # Model configuration
+ DECODER=dpt
+
+ # Checkpoint path
+ CKPT="/home/ywan0794/Depth-Anything-3/training/exp/da3_dpt_vitl_both/epoch_010.pth"
+
+ # Output directory
+ OUT_DIR="eval_output"
+ mkdir -p $OUT_DIR
+
+ echo "============================================"
+ echo "Starting MoGe Evaluation for DA3+DPT at $(date)"
+ echo "Decoder: $DECODER"
+ echo "Checkpoint: $CKPT"
+ echo "Output: ${OUT_DIR}/da3_dpt_${TIMESTAMP}.json"
+ echo "============================================"
+
+ # ============================================
+ # Run the evaluation
+ # ============================================
+ python moge/scripts/eval_baseline.py \
+ --baseline baselines/da3_custom.py \
+ --config /home/ywan0794/datasets/eval/moge_style_eval/all_benchmarks.json \
+ --output ${OUT_DIR}/da3_dpt_${TIMESTAMP}.json \
+ --repo /home/ywan0794/Depth-Anything-3 \
+ --checkpoint "$CKPT" \
+ --decoder $DECODER
+
+ echo "============================================"
+ echo "Evaluation completed at $(date)"
+ echo "============================================"
eval_scripts/eval_da3_dualdpt_slurm.sh ADDED
@@ -0,0 +1,82 @@
+ #!/bin/bash
+ #SBATCH --job-name=moge-da3-dualdpt
+ #SBATCH --output=moge_da3_dualdpt_%j.log
+ #SBATCH --error=moge_da3_dualdpt_%j.log
+ #SBATCH --open-mode=append
+ #SBATCH --ntasks=1
+ #SBATCH --cpus-per-task=4
+ #SBATCH --gres=gpu:1
+ #SBATCH --time=1-00:00:00
+ #SBATCH --mem=60G
+ #SBATCH --nodelist=hades
+
+ # Disable Python output buffering
+ export PYTHONUNBUFFERED=1
+
+ # Enter the MoGe directory
+ cd /home/ywan0794/MoGe
+
+ # Initialize conda
+ source /home/ywan0794/miniconda3/etc/profile.d/conda.sh
+
+ # Activate the environment
+ conda activate da3
+
+ # Set CUDA environment variables
+ export CUDA_HOME=$CONDA_PREFIX
+ export PATH=$CUDA_HOME/bin:$PATH
+ export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
+ export LD_LIBRARY_PATH=$CONDA_PREFIX/lib/python3.10/site-packages/torch/lib:$LD_LIBRARY_PATH
+
+ # Set the Python path
+ export PYTHONPATH=$PYTHONPATH:$(pwd)
+
+ # Confirm the environment
+ echo "============================================"
+ echo "Activated conda environment: $CONDA_DEFAULT_ENV"
+ echo "CUDA_HOME: $CUDA_HOME"
+ echo "============================================"
+
+ # Show GPU info
+ echo "=== GPU Info ==="
+ nvidia-smi
+
+ # Check that CUDA is available
+ python -c "import torch; print('CUDA available:', torch.cuda.is_available()); print('GPU count:', torch.cuda.device_count()); print('GPU name:', torch.cuda.get_device_name(0))"
+
+ # ============================================
+ # Evaluation configuration
+ # ============================================
+ TIMESTAMP=$(date +"%Y%m%d_%H%M%S")
+
+ # Model configuration
+ DECODER=dualdpt
+
+ # Checkpoint path
+ CKPT="/home/ywan0794/Depth-Anything-3/training/exp/da3_dualdpt_vitl_both/epoch_010.pth"
+
+ # Output directory
+ OUT_DIR="eval_output"
+ mkdir -p $OUT_DIR
+
+ echo "============================================"
+ echo "Starting MoGe Evaluation for DA3+DualDPT at $(date)"
+ echo "Decoder: $DECODER"
+ echo "Checkpoint: $CKPT"
+ echo "Output: ${OUT_DIR}/da3_dualdpt_${TIMESTAMP}.json"
+ echo "============================================"
+
+ # ============================================
+ # Run evaluation
+ # ============================================
+ python moge/scripts/eval_baseline.py \
+ --baseline baselines/da3_custom.py \
+ --config /home/ywan0794/datasets/eval/moge_style_eval/all_benchmarks.json \
+ --output ${OUT_DIR}/da3_dualdpt_${TIMESTAMP}.json \
+ --repo /home/ywan0794/Depth-Anything-3 \
+ --checkpoint "$CKPT" \
+ --decoder $DECODER
+
+ echo "============================================"
+ echo "Evaluation completed at $(date)"
+ echo "============================================"
eval_scripts/eval_da3_mono_slurm.sh ADDED
@@ -0,0 +1,59 @@
+ #!/bin/bash
+ #SBATCH --job-name=moge-da3-mono
+ #SBATCH --output=/home/ywan0794/MoGe/moge_da3_mono_%j.log
+ #SBATCH --error=/home/ywan0794/MoGe/moge_da3_mono_%j.log
+ #SBATCH --open-mode=append
+ #SBATCH --ntasks=1
+ #SBATCH --cpus-per-task=4
+ #SBATCH --gres=gpu:H100:1
+ #SBATCH --time=0-04:00:00
+ #SBATCH --mem=40G
+ #SBATCH --nodelist=erinyes
+
+ export PYTHONUNBUFFERED=1
+
+ cd /home/ywan0794/MoGe
+
+ source /home/ywan0794/miniconda3/etc/profile.d/conda.sh
+ conda activate da3
+
+ export CUDA_HOME=$CONDA_PREFIX
+ export PATH=$CUDA_HOME/bin:$PATH
+ export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
+ export LD_LIBRARY_PATH=$CONDA_PREFIX/lib/python3.10/site-packages/torch/lib:$LD_LIBRARY_PATH
+ export PYTHONPATH=$PYTHONPATH:$(pwd)
+
+ echo "============================================"
+ echo "Activated conda environment: $CONDA_DEFAULT_ENV"
+ echo "CUDA_HOME: $CUDA_HOME"
+ echo "============================================"
+
+ echo "=== GPU Info ==="
+ nvidia-smi
+
+ python -c "import torch; print('CUDA:', torch.cuda.is_available(), torch.cuda.get_device_name(0) if torch.cuda.is_available() else '')"
+
+ TIMESTAMP=$(date +"%Y%m%d_%H%M%S")
+ REPO=/home/ywan0794/EvalMDE/Depth-Anything-3
+ HF_ID=depth-anything/DA3MONO-LARGE
+ CONFIG=/home/ywan0794/MoGe/configs/eval/all_benchmarks.json
+ OUT_DIR=eval_output
+ mkdir -p $OUT_DIR
+
+ echo "============================================"
+ echo "Starting MoGe Eval for DA3-Mono at $(date)"
+ echo "Repo: $REPO"
+ echo "HF id: $HF_ID"
+ echo "Config: $CONFIG"
+ echo "============================================"
+
+ python moge/scripts/eval_baseline.py \
+ --baseline baselines/da3_mono.py \
+ --config $CONFIG \
+ --output ${OUT_DIR}/da3_mono_${TIMESTAMP}.json \
+ --repo $REPO \
+ --hf_id $HF_ID
+
+ echo "============================================"
+ echo "Evaluation completed at $(date)"
+ echo "============================================"
eval_scripts/eval_da3_sdt_slurm.sh ADDED
@@ -0,0 +1,82 @@
+ #!/bin/bash
+ #SBATCH --job-name=moge-da3-sdt
+ #SBATCH --output=moge_da3_sdt_%j.log
+ #SBATCH --error=moge_da3_sdt_%j.log
+ #SBATCH --open-mode=append
+ #SBATCH --ntasks=1
+ #SBATCH --cpus-per-task=4
+ #SBATCH --gres=gpu:1
+ #SBATCH --time=1-00:00:00
+ #SBATCH --mem=60G
+ #SBATCH --nodelist=hades
+
+ # Disable Python output buffering
+ export PYTHONUNBUFFERED=1
+
+ # Enter the MoGe directory
+ cd /home/ywan0794/MoGe
+
+ # Initialize conda
+ source /home/ywan0794/miniconda3/etc/profile.d/conda.sh
+
+ # Activate the environment
+ conda activate da3
+
+ # Set CUDA environment variables
+ export CUDA_HOME=$CONDA_PREFIX
+ export PATH=$CUDA_HOME/bin:$PATH
+ export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
+ export LD_LIBRARY_PATH=$CONDA_PREFIX/lib/python3.10/site-packages/torch/lib:$LD_LIBRARY_PATH
+
+ # Set the Python path
+ export PYTHONPATH=$PYTHONPATH:$(pwd)
+
+ # Confirm the environment
+ echo "============================================"
+ echo "Activated conda environment: $CONDA_DEFAULT_ENV"
+ echo "CUDA_HOME: $CUDA_HOME"
+ echo "============================================"
+
+ # Show GPU info
+ echo "=== GPU Info ==="
+ nvidia-smi
+
+ # Check that CUDA is available
+ python -c "import torch; print('CUDA available:', torch.cuda.is_available()); print('GPU count:', torch.cuda.device_count()); print('GPU name:', torch.cuda.get_device_name(0))"
+
+ # ============================================
+ # Evaluation configuration
+ # ============================================
+ TIMESTAMP=$(date +"%Y%m%d_%H%M%S")
+
+ # Model configuration
+ DECODER=sdt
+
+ # Checkpoint path
+ CKPT="/home/ywan0794/Depth-Anything-3/training/exp/da3_sdt_vitl_both/epoch_010.pth"
+
+ # Output directory
+ OUT_DIR="eval_output"
+ mkdir -p $OUT_DIR
+
+ echo "============================================"
+ echo "Starting MoGe Evaluation for DA3+SDT at $(date)"
+ echo "Decoder: $DECODER"
+ echo "Checkpoint: $CKPT"
+ echo "Output: ${OUT_DIR}/da3_sdt_${TIMESTAMP}.json"
+ echo "============================================"
+
+ # ============================================
+ # Run evaluation
+ # ============================================
+ python moge/scripts/eval_baseline.py \
+ --baseline baselines/da3_custom.py \
+ --config /home/ywan0794/datasets/eval/moge_style_eval/all_benchmarks.json \
+ --output ${OUT_DIR}/da3_sdt_${TIMESTAMP}.json \
+ --repo /home/ywan0794/Depth-Anything-3 \
+ --checkpoint "$CKPT" \
+ --decoder $DECODER
+
+ echo "============================================"
+ echo "Evaluation completed at $(date)"
+ echo "============================================"
eval_scripts/eval_da3_slurm.sh ADDED
@@ -0,0 +1,77 @@
+ #!/bin/bash
+ #SBATCH --job-name=moge-eval-da3
+ #SBATCH --output=moge_eval_da3_%j.log
+ #SBATCH --error=moge_eval_da3_%j.log
+ #SBATCH --open-mode=append
+ #SBATCH --ntasks=1
+ #SBATCH --cpus-per-task=4
+ #SBATCH --gres=gpu:1
+ #SBATCH --time=1-00:00:00
+ #SBATCH --mem=60G
+ #SBATCH --nodelist=hades
+
+ # Disable Python output buffering
+ export PYTHONUNBUFFERED=1
+
+ # Enter the MoGe directory
+ cd /home/ywan0794/MoGe
+
+ # Initialize conda
+ source /home/ywan0794/miniconda3/etc/profile.d/conda.sh
+
+ # Activate the environment
+ conda activate da3
+
+ # Set CUDA environment variables
+ export CUDA_HOME=$CONDA_PREFIX
+ export PATH=$CUDA_HOME/bin:$PATH
+ export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
+ export LD_LIBRARY_PATH=$CONDA_PREFIX/lib/python3.10/site-packages/torch/lib:$LD_LIBRARY_PATH
+
+ # Set the Python path
+ export PYTHONPATH=$PYTHONPATH:$(pwd)
+
+ # Confirm the environment
+ echo "============================================"
+ echo "Activated conda environment: $CONDA_DEFAULT_ENV"
+ echo "CUDA_HOME: $CUDA_HOME"
+ echo "============================================"
+
+ # Show GPU info
+ echo "=== GPU Info ==="
+ nvidia-smi
+
+ # Check that CUDA is available
+ python -c "import torch; print('CUDA available:', torch.cuda.is_available()); print('GPU count:', torch.cuda.device_count()); print('GPU name:', torch.cuda.get_device_name(0))"
+
+ # ============================================
+ # Evaluation configuration
+ # ============================================
+ TIMESTAMP=$(date +"%Y%m%d_%H%M%S")
+
+ # Model configuration
+ MODEL_NAME=da3-large # da3-base, da3-large, da3-giant
+
+ # Output directory
+ OUT_DIR="eval_output"
+ mkdir -p $OUT_DIR
+
+ echo "============================================"
+ echo "Starting MoGe Evaluation for DA3 at $(date)"
+ echo "Model: $MODEL_NAME"
+ echo "Output: ${OUT_DIR}/${MODEL_NAME}_${TIMESTAMP}.json"
+ echo "============================================"
+
+ # ============================================
+ # Run evaluation
+ # ============================================
+ python moge/scripts/eval_baseline.py \
+ --baseline baselines/da3.py \
+ --config /home/ywan0794/datasets/eval/moge_style_eval/all_benchmarks.json \
+ --output ${OUT_DIR}/${MODEL_NAME}_${TIMESTAMP}.json \
+ --repo /home/ywan0794/Depth-Anything-3 \
+ --model_name $MODEL_NAME
+
+ echo "============================================"
+ echo "Evaluation completed at $(date)"
+ echo "============================================"
eval_scripts/eval_depth_pro_slurm.sh ADDED
@@ -0,0 +1,60 @@
+ #!/bin/bash
+ #SBATCH --job-name=moge-depth-pro
+ #SBATCH --output=/home/ywan0794/MoGe/moge_depth_pro_%j.log
+ #SBATCH --error=/home/ywan0794/MoGe/moge_depth_pro_%j.log
+ #SBATCH --open-mode=append
+ #SBATCH --ntasks=1
+ #SBATCH --cpus-per-task=4
+ #SBATCH --gres=gpu:H100:1
+ #SBATCH --time=0-04:00:00
+ #SBATCH --mem=40G
+ #SBATCH --nodelist=erinyes
+
+ export PYTHONUNBUFFERED=1
+
+ cd /home/ywan0794/MoGe
+
+ source /home/ywan0794/miniconda3/etc/profile.d/conda.sh
+ conda activate depth-pro
+
+ export CUDA_HOME=$CONDA_PREFIX
+ export PATH=$CUDA_HOME/bin:$PATH
+ export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
+ export LD_LIBRARY_PATH=$CONDA_PREFIX/lib/python3.9/site-packages/torch/lib:$LD_LIBRARY_PATH
+ export PYTHONPATH=$PYTHONPATH:$(pwd)
+
+ echo "============================================"
+ echo "Activated conda environment: $CONDA_DEFAULT_ENV"
+ echo "CUDA_HOME: $CUDA_HOME"
+ echo "============================================"
+
+ echo "=== GPU Info ==="
+ nvidia-smi
+
+ python -c "import torch; print('CUDA:', torch.cuda.is_available(), torch.cuda.get_device_name(0) if torch.cuda.is_available() else '')"
+
+ TIMESTAMP=$(date +"%Y%m%d_%H%M%S")
+ REPO=/home/ywan0794/EvalMDE/ml-depth-pro
+ CKPT=$REPO/checkpoints/depth_pro.pt
+ CONFIG=/home/ywan0794/MoGe/configs/eval/all_benchmarks.json
+ OUT_DIR=eval_output
+ mkdir -p $OUT_DIR
+
+ echo "============================================"
+ echo "Starting MoGe Eval for Depth Pro at $(date)"
+ echo "Repo: $REPO"
+ echo "Checkpoint: $CKPT"
+ echo "Config: $CONFIG"
+ echo "============================================"
+
+ python moge/scripts/eval_baseline.py \
+ --baseline baselines/depth_pro.py \
+ --config $CONFIG \
+ --output ${OUT_DIR}/depth_pro_${TIMESTAMP}.json \
+ --repo $REPO \
+ --checkpoint $CKPT \
+ --precision fp32
+
+ echo "============================================"
+ echo "Evaluation completed at $(date)"
+ echo "============================================"
eval_scripts/eval_depthmaster_slurm.sh ADDED
@@ -0,0 +1,62 @@
+ #!/bin/bash
+ #SBATCH --job-name=moge-depthmaster
+ #SBATCH --output=/home/ywan0794/MoGe/moge_depthmaster_%j.log
+ #SBATCH --error=/home/ywan0794/MoGe/moge_depthmaster_%j.log
+ #SBATCH --open-mode=append
+ #SBATCH --ntasks=1
+ #SBATCH --cpus-per-task=4
+ #SBATCH --gres=gpu:H100:1
+ #SBATCH --time=0-04:00:00
+ #SBATCH --mem=40G
+ #SBATCH --nodelist=erinyes
+
+ export PYTHONUNBUFFERED=1
+
+ cd /home/ywan0794/MoGe
+
+ source /home/ywan0794/miniconda3/etc/profile.d/conda.sh
+ conda activate depthmaster
+
+ export CUDA_HOME=$CONDA_PREFIX
+ export PATH=$CUDA_HOME/bin:$PATH
+ export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
+ export LD_LIBRARY_PATH=$CONDA_PREFIX/lib/python3.10/site-packages/torch/lib:$LD_LIBRARY_PATH
+ export PYTHONPATH=$PYTHONPATH:$(pwd)
+
+ echo "============================================"
+ echo "Activated conda environment: $CONDA_DEFAULT_ENV"
+ echo "CUDA_HOME: $CUDA_HOME"
+ echo "============================================"
+
+ echo "=== GPU Info ==="
+ nvidia-smi
+
+ python -c "import torch; print('CUDA:', torch.cuda.is_available(), torch.cuda.get_device_name(0) if torch.cuda.is_available() else '')"
+
+ TIMESTAMP=$(date +"%Y%m%d_%H%M%S")
+ REPO=/home/ywan0794/EvalMDE/DepthMaster
+ # README: download HF zysong212/DepthMaster into ckpt/eval (relative to repo)
+ CKPT=$REPO/ckpt/eval
+ CONFIG=/home/ywan0794/MoGe/configs/eval/all_benchmarks.json
+ OUT_DIR=eval_output
+ mkdir -p $OUT_DIR
+
+ echo "============================================"
+ echo "Starting MoGe Eval for DepthMaster at $(date)"
+ echo "Repo: $REPO"
+ echo "Checkpoint dir: $CKPT"
+ echo "Config: $CONFIG"
+ echo "============================================"
+
+ # DepthMaster infer.sh default: --processing_res 768
+ python moge/scripts/eval_baseline.py \
+ --baseline baselines/depthmaster.py \
+ --config $CONFIG \
+ --output ${OUT_DIR}/depthmaster_${TIMESTAMP}.json \
+ --repo $REPO \
+ --checkpoint $CKPT \
+ --processing_res 768
+
+ echo "============================================"
+ echo "Evaluation completed at $(date)"
+ echo "============================================"
eval_scripts/eval_fe2e_slurm.sh ADDED
@@ -0,0 +1,70 @@
+ #!/bin/bash
+ #SBATCH --job-name=moge-fe2e
+ #SBATCH --output=/home/ywan0794/MoGe/moge_fe2e_%j.log
+ #SBATCH --error=/home/ywan0794/MoGe/moge_fe2e_%j.log
+ #SBATCH --open-mode=append
+ #SBATCH --ntasks=1
+ #SBATCH --cpus-per-task=4
+ #SBATCH --gres=gpu:H100:1
+ #SBATCH --time=1-00:00:00
+ #SBATCH --mem=80G
+ #SBATCH --nodelist=erinyes
+
+ export PYTHONUNBUFFERED=1
+
+ cd /home/ywan0794/MoGe
+
+ source /home/ywan0794/miniconda3/etc/profile.d/conda.sh
+ conda activate fe2e
+
+ export CUDA_HOME=$CONDA_PREFIX
+ export PATH=$CUDA_HOME/bin:$PATH
+ export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
+ # FE2E pins torch 2.6 / diffusers 0.32.2 fork in its own env.
+ export LD_LIBRARY_PATH=$CONDA_PREFIX/lib/python3.10/site-packages/torch/lib:$LD_LIBRARY_PATH
+ export PYTHONPATH=$PYTHONPATH:$(pwd)
+
+ echo "============================================"
+ echo "Activated conda environment: $CONDA_DEFAULT_ENV"
+ echo "CUDA_HOME: $CUDA_HOME"
+ echo "============================================"
+
+ echo "=== GPU Info ==="
+ nvidia-smi
+
+ python -c "import torch; print('CUDA:', torch.cuda.is_available(), torch.cuda.get_device_name(0) if torch.cuda.is_available() else '')"
+
+ TIMESTAMP=$(date +"%Y%m%d_%H%M%S")
+ REPO=/home/ywan0794/EvalMDE/FE2E
+ MODEL_PATH=$REPO/pretrain
+ LORA_PATH=$REPO/lora/LDRN.safetensors
+ CONFIG=/home/ywan0794/MoGe/configs/eval/fe2e_all_benchmarks.json
+ OUT_DIR=eval_output
+ mkdir -p $OUT_DIR
+
+ echo "============================================"
+ echo "Starting MoGe Eval for FE2E at $(date)"
+ echo "Repo: $REPO"
+ echo "model_path: $MODEL_PATH"
+ echo "lora_path: $LORA_PATH"
+ echo "Config: $CONFIG"
+ echo "============================================"
+
+ # Mirror README depth-eval call:
+ #   --single_denoise --prompt_type empty --task_name depth --cfg_guidance 6.0
+ # Default size_level matches README 768.
+ python moge/scripts/eval_baseline.py \
+ --baseline baselines/fe2e.py \
+ --config $CONFIG \
+ --output ${OUT_DIR}/fe2e_${TIMESTAMP}.json \
+ --repo $REPO \
+ --model_path $MODEL_PATH \
+ --lora_path $LORA_PATH \
+ --prompt_type empty \
+ --single_denoise \
+ --cfg_guidance 6.0 \
+ --size_level 768
+
+ echo "============================================"
+ echo "Evaluation completed at $(date)"
+ echo "============================================"
eval_scripts/eval_lotus_slurm.sh ADDED
@@ -0,0 +1,65 @@
+ #!/bin/bash
+ #SBATCH --job-name=moge-lotus
+ #SBATCH --output=/home/ywan0794/MoGe/moge_lotus_%j.log
+ #SBATCH --error=/home/ywan0794/MoGe/moge_lotus_%j.log
+ #SBATCH --open-mode=append
+ #SBATCH --ntasks=1
+ #SBATCH --cpus-per-task=4
+ #SBATCH --gres=gpu:H100:1
+ #SBATCH --time=0-08:00:00
+ #SBATCH --mem=40G
+ #SBATCH --nodelist=erinyes
+
+ export PYTHONUNBUFFERED=1
+
+ cd /home/ywan0794/MoGe
+
+ source /home/ywan0794/miniconda3/etc/profile.d/conda.sh
+ conda activate lotus
+
+ export CUDA_HOME=$CONDA_PREFIX
+ export PATH=$CUDA_HOME/bin:$PATH
+ export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
+ export LD_LIBRARY_PATH=$CONDA_PREFIX/lib/python3.10/site-packages/torch/lib:$LD_LIBRARY_PATH
+ export PYTHONPATH=$PYTHONPATH:$(pwd)
+
+ echo "============================================"
+ echo "Activated conda environment: $CONDA_DEFAULT_ENV"
+ echo "CUDA_HOME: $CUDA_HOME"
+ echo "============================================"
+
+ echo "=== GPU Info ==="
+ nvidia-smi
+
+ python -c "import torch; print('CUDA:', torch.cuda.is_available(), torch.cuda.get_device_name(0) if torch.cuda.is_available() else '')"
+
+ TIMESTAMP=$(date +"%Y%m%d_%H%M%S")
+ REPO=/home/ywan0794/EvalMDE/Lotus
+ PRETRAINED=jingheya/lotus-depth-d-v2-0-disparity
+ CONFIG=/home/ywan0794/MoGe/configs/eval/all_benchmarks.json
+ OUT_DIR=eval_output
+ mkdir -p $OUT_DIR
+
+ echo "============================================"
+ echo "Starting MoGe Eval for Lotus at $(date)"
+ echo "Repo: $REPO"
+ echo "Checkpoint: $PRETRAINED"
+ echo "Config: $CONFIG"
+ echo "============================================"
+
+ # Lotus disparity v2 regression: --disparity flag tells the wrapper to emit
+ # `disparity_affine_invariant`. For depth ckpts (e.g. lotus-depth-d-v1-0), drop --disparity.
+ python moge/scripts/eval_baseline.py \
+ --baseline baselines/lotus.py \
+ --config $CONFIG \
+ --output ${OUT_DIR}/lotus_${TIMESTAMP}.json \
+ --repo $REPO \
+ --pretrained $PRETRAINED \
+ --mode regression \
+ --task_name depth \
+ --disparity \
+ --timestep 999
+
+ echo "============================================"
+ echo "Evaluation completed at $(date)"
+ echo "============================================"
eval_scripts/eval_lotus_v1_slurm.sh ADDED
@@ -0,0 +1,69 @@
+ #!/bin/bash
+ #SBATCH --job-name=moge-lotus-v1
+ #SBATCH --output=/home/ywan0794/MoGe/moge_lotus_v1_%j.log
+ #SBATCH --error=/home/ywan0794/MoGe/moge_lotus_v1_%j.log
+ #SBATCH --open-mode=append
+ #SBATCH --ntasks=1
+ #SBATCH --cpus-per-task=4
+ #SBATCH --gres=gpu:H100:1
+ #SBATCH --time=0-04:00:00
+ #SBATCH --mem=40G
+ #SBATCH --nodelist=erinyes
+ # MoGe protocol full eval on 10 benchmarks with Lotus depth ckpt v1-0 (depth output).
+ # Chosen over v2-1-disparity for uniform `depth_affine_invariant` output across 7 models.
+ # v1-0 is the original Lotus depth ckpt (Lotus paper, 2024-09); v2-1-disparity (2024-11)
+ # achieves better numbers per README, but emits disparity_affine_invariant — not directly
+ # comparable in depth space to the other 6 models.
+
+ export PYTHONUNBUFFERED=1
+ cd /home/ywan0794/MoGe
+
+ source /home/ywan0794/miniconda3/etc/profile.d/conda.sh
+ conda activate lotus
+
+ export CUDA_HOME=$CONDA_PREFIX
+ export PATH=$CUDA_HOME/bin:$PATH
+ export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
+ export LD_LIBRARY_PATH=$CONDA_PREFIX/lib/python3.10/site-packages/torch/lib:$LD_LIBRARY_PATH
+ export PYTHONPATH=${PYTHONPATH:-}:$(pwd)
+
+ echo "============================================"
+ echo "Activated conda environment: $CONDA_DEFAULT_ENV"
+ echo "Ckpt: jingheya/lotus-depth-g-v1-0 (depth output, generation mode)"
+ echo "============================================"
+
+ nvidia-smi
+ python -c "import torch; print('CUDA:', torch.cuda.is_available(), torch.cuda.get_device_name(0) if torch.cuda.is_available() else '')"
+
+ TIMESTAMP=$(date +"%Y%m%d_%H%M%S")
+ REPO=/home/ywan0794/EvalMDE/Lotus
+ PRETRAINED=jingheya/lotus-depth-g-v1-0
+ CONFIG=/home/ywan0794/MoGe/configs/eval/all_benchmarks.json
+ OUT_DIR=eval_output
+ mkdir -p $OUT_DIR
+
+ echo "============================================"
+ echo "Starting MoGe Eval for Lotus v1-0 (depth ckpt) at $(date)"
+ echo "Repo: $REPO"
+ echo "Checkpoint: $PRETRAINED"
+ echo "Config: $CONFIG"
+ echo "============================================"
+
+ # Paper-canonical from Lotus eval.sh: generation mode, fp16, seed=42, timestep=999.
+ # NO --disparity flag (v1-0 outputs depth, not disparity).
+ # Wrapper auto-emits `depth_affine_invariant` when --disparity is absent.
+ python moge/scripts/eval_baseline.py \
+ --baseline baselines/lotus.py \
+ --config $CONFIG \
+ --output ${OUT_DIR}/lotus_v1_${TIMESTAMP}.json \
+ --repo $REPO \
+ --pretrained $PRETRAINED \
+ --mode generation \
+ --task_name depth \
+ --timestep 999 \
+ --fp16 \
+ --seed 42
+
+ echo "============================================"
+ echo "Evaluation completed at $(date)"
+ echo "============================================"
eval_scripts/eval_marigold_slurm.sh ADDED
@@ -0,0 +1,63 @@
+ #!/bin/bash
+ #SBATCH --job-name=moge-marigold
+ #SBATCH --output=/home/ywan0794/MoGe/moge_marigold_%j.log
+ #SBATCH --error=/home/ywan0794/MoGe/moge_marigold_%j.log
+ #SBATCH --open-mode=append
+ #SBATCH --ntasks=1
+ #SBATCH --cpus-per-task=4
+ #SBATCH --gres=gpu:H100:1
+ #SBATCH --time=0-08:00:00
+ #SBATCH --mem=40G
+ #SBATCH --nodelist=erinyes
+
+ export PYTHONUNBUFFERED=1
+
+ cd /home/ywan0794/MoGe
+
+ source /home/ywan0794/miniconda3/etc/profile.d/conda.sh
+ conda activate marigold
+
+ export CUDA_HOME=$CONDA_PREFIX
+ export PATH=$CUDA_HOME/bin:$PATH
+ export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
+ export LD_LIBRARY_PATH=$CONDA_PREFIX/lib/python3.10/site-packages/torch/lib:$LD_LIBRARY_PATH
+ export PYTHONPATH=$PYTHONPATH:$(pwd)
+
+ echo "============================================"
+ echo "Activated conda environment: $CONDA_DEFAULT_ENV"
+ echo "CUDA_HOME: $CUDA_HOME"
+ echo "============================================"
+
+ echo "=== GPU Info ==="
+ nvidia-smi
+
+ python -c "import torch; print('CUDA:', torch.cuda.is_available(), torch.cuda.get_device_name(0) if torch.cuda.is_available() else '')"
+
+ TIMESTAMP=$(date +"%Y%m%d_%H%M%S")
+ REPO=/home/ywan0794/EvalMDE/Marigold
+ CHECKPOINT=prs-eth/marigold-depth-v1-1
+ CONFIG=/home/ywan0794/MoGe/configs/eval/all_benchmarks.json
+ OUT_DIR=eval_output
+ mkdir -p $OUT_DIR
+
+ echo "============================================"
+ echo "Starting MoGe Eval for Marigold v1.1 at $(date)"
+ echo "Repo: $REPO"
+ echo "Checkpoint: $CHECKPOINT"
+ echo "Config: $CONFIG"
+ echo "============================================"
+
+ # Marigold defaults from official run.py: ensemble_size=1, denoise_steps=None (use checkpoint default),
+ # processing_res=None (use checkpoint default), fp16 ON for speed/VRAM (per run.py example).
+ python moge/scripts/eval_baseline.py \
+ --baseline baselines/marigold.py \
+ --config $CONFIG \
+ --output ${OUT_DIR}/marigold_${TIMESTAMP}.json \
+ --repo $REPO \
+ --checkpoint $CHECKPOINT \
+ --ensemble_size 1 \
+ --fp16
+
+ echo "============================================"
+ echo "Evaluation completed at $(date)"
+ echo "============================================"
eval_scripts/eval_ppd_slurm.sh ADDED
@@ -0,0 +1,62 @@
+ #!/bin/bash
+ #SBATCH --job-name=moge-ppd
+ #SBATCH --output=/home/ywan0794/MoGe/moge_ppd_%j.log
+ #SBATCH --error=/home/ywan0794/MoGe/moge_ppd_%j.log
+ #SBATCH --open-mode=append
+ #SBATCH --ntasks=1
+ #SBATCH --cpus-per-task=4
+ #SBATCH --gres=gpu:H100:1
+ #SBATCH --time=0-08:00:00
+ #SBATCH --mem=40G
+ #SBATCH --nodelist=erinyes
+
+ export PYTHONUNBUFFERED=1
+
+ cd /home/ywan0794/MoGe
+
+ source /home/ywan0794/miniconda3/etc/profile.d/conda.sh
+ conda activate ppd
+
+ export CUDA_HOME=$CONDA_PREFIX
+ export PATH=$CUDA_HOME/bin:$PATH
+ export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
+ export LD_LIBRARY_PATH=$CONDA_PREFIX/lib/python3.10/site-packages/torch/lib:$LD_LIBRARY_PATH
+ export PYTHONPATH=$PYTHONPATH:$(pwd)
+
+ echo "============================================"
+ echo "Activated conda environment: $CONDA_DEFAULT_ENV"
+ echo "CUDA_HOME: $CUDA_HOME"
+ echo "============================================"
+
+ echo "=== GPU Info ==="
+ nvidia-smi
+
+ python -c "import torch; print('CUDA:', torch.cuda.is_available(), torch.cuda.get_device_name(0) if torch.cuda.is_available() else '')"
+
+ TIMESTAMP=$(date +"%Y%m%d_%H%M%S")
+ REPO=/home/ywan0794/EvalMDE/Pixel-Perfect-Depth
+ CONFIG=/home/ywan0794/MoGe/configs/eval/all_benchmarks.json
+ OUT_DIR=eval_output
+ mkdir -p $OUT_DIR
+
+ echo "============================================"
+ echo "Starting MoGe Eval for Pixel-Perfect-Depth at $(date)"
+ echo "Repo: $REPO"
+ echo "Config: $CONFIG"
+ echo "============================================"
+
+ # PPD README: ppd.pth + depth_anything_v2_vitl.pth under <repo>/checkpoints/.
+ # sampling_steps=4 is run.py default.
+ python moge/scripts/eval_baseline.py \
+ --baseline baselines/ppd.py \
+ --config $CONFIG \
+ --output ${OUT_DIR}/ppd_${TIMESTAMP}.json \
+ --repo $REPO \
+ --semantics_model DA2 \
+ --semantics_pth checkpoints/depth_anything_v2_vitl.pth \
+ --model_pth checkpoints/ppd.pth \
+ --sampling_steps 4
+
+ echo "============================================"
+ echo "Evaluation completed at $(date)"
+ echo "============================================"
eval_scripts/eval_vggt_dpt_metric_slurm.sh ADDED
@@ -0,0 +1,84 @@
+ #!/bin/bash
+ #SBATCH --job-name=vggt-dpt-metric
+ #SBATCH --output=vggt_dpt_metric_%j.log
+ #SBATCH --error=vggt_dpt_metric_%j.log
+ #SBATCH --open-mode=append
+ #SBATCH --ntasks=1
+ #SBATCH --cpus-per-task=4
+ #SBATCH --gres=gpu:1
+ #SBATCH --time=1-00:00:00
+ #SBATCH --mem=80G
+ #SBATCH --nodelist=hades
+
+ # Disable Python output buffering
+ export PYTHONUNBUFFERED=1
+
+ # Enter the MoGe directory
+ cd /home/ywan0794/MoGe
+
+ # Initialize conda
+ source /home/ywan0794/miniconda3/etc/profile.d/conda.sh
+
+ # Activate the environment
+ conda activate vggt
+
+ # Set CUDA environment variables
+ export CUDA_HOME=$CONDA_PREFIX
+ export PATH=$CUDA_HOME/bin:$PATH
+ export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
+ export LD_LIBRARY_PATH=$CONDA_PREFIX/lib/python3.10/site-packages/torch/lib:$LD_LIBRARY_PATH
+
+ # Set the Python path
+ export PYTHONPATH=$PYTHONPATH:$(pwd)
+
+ # Confirm the environment
+ echo "============================================"
+ echo "Activated conda environment: $CONDA_DEFAULT_ENV"
+ echo "CUDA_HOME: $CUDA_HOME"
+ echo "============================================"
+
+ # Show GPU info
+ echo "=== GPU Info ==="
+ nvidia-smi
+
+ # Check that CUDA is available
+ python -c "import torch; print('CUDA available:', torch.cuda.is_available()); print('GPU count:', torch.cuda.device_count()); print('GPU name:', torch.cuda.get_device_name(0))"
+
+ # ============================================
+ # Evaluation configuration
+ # ============================================
+ TIMESTAMP=$(date +"%Y%m%d_%H%M%S")
+
+ # Model configuration
+ DECODER=dpt
+
+ # Checkpoint path
+ CKPT="/home/ywan0794/vggt/training/logs/dpt_metric_lora/ckpts/checkpoint_4.pt"
+
+ # Output directory
+ OUT_DIR="eval_output"
+ mkdir -p $OUT_DIR
+
+ echo "============================================"
+ echo "Starting MoGe Evaluation for VGGT+DPT (Metric) at $(date)"
+ echo "Decoder: $DECODER"
+ echo "Checkpoint: $CKPT"
+ echo "Output: ${OUT_DIR}/vggt_dpt_metric_${TIMESTAMP}.json"
+ echo "============================================"
+
+ # ============================================
+ # Run evaluation (uses the vggt_metric.py baseline and metric_benchmarks.json)
+ # ============================================
+ python moge/scripts/eval_baseline.py \
+ --baseline baselines/vggt_metric.py \
+ --config /home/ywan0794/datasets/eval/moge_style_eval/metric_benchmarks.json \
+ --output ${OUT_DIR}/vggt_dpt_metric_${TIMESTAMP}.json \
+ --repo /home/ywan0794/vggt \
+ --checkpoint "$CKPT" \
+ --decoder $DECODER \
+ --lora_rank 8 \
+ --lora_alpha 16
+
+ echo "============================================"
+ echo "Evaluation completed at $(date)"
+ echo "============================================"
eval_scripts/eval_vggt_dpt_slurm.sh ADDED
@@ -0,0 +1,84 @@
+ #!/bin/bash
+ #SBATCH --job-name=moge-vggt-dpt
+ #SBATCH --output=moge_vggt_dpt_%j.log
+ #SBATCH --error=moge_vggt_dpt_%j.log
+ #SBATCH --open-mode=append
+ #SBATCH --ntasks=1
+ #SBATCH --cpus-per-task=4
+ #SBATCH --gres=gpu:1
+ #SBATCH --time=1-00:00:00
+ #SBATCH --mem=80G
+ #SBATCH --nodelist=hades
+
+ # Disable Python output buffering
+ export PYTHONUNBUFFERED=1
+
+ # Enter the MoGe directory
+ cd /home/ywan0794/MoGe
+
+ # Initialize conda
+ source /home/ywan0794/miniconda3/etc/profile.d/conda.sh
+
+ # Activate the environment
+ conda activate vggt
+
+ # Set CUDA environment variables
+ export CUDA_HOME=$CONDA_PREFIX
+ export PATH=$CUDA_HOME/bin:$PATH
+ export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
+ export LD_LIBRARY_PATH=$CONDA_PREFIX/lib/python3.10/site-packages/torch/lib:$LD_LIBRARY_PATH
+
+ # Set the Python path
+ export PYTHONPATH=$PYTHONPATH:$(pwd)
+
+ # Confirm the environment
+ echo "============================================"
+ echo "Activated conda environment: $CONDA_DEFAULT_ENV"
+ echo "CUDA_HOME: $CUDA_HOME"
+ echo "============================================"
+
+ # Show GPU info
+ echo "=== GPU Info ==="
+ nvidia-smi
+
+ # Check that CUDA is available
+ python -c "import torch; print('CUDA available:', torch.cuda.is_available()); print('GPU count:', torch.cuda.device_count()); print('GPU name:', torch.cuda.get_device_name(0))"
+
+ # ============================================
+ # Evaluation configuration
+ # ============================================
+ TIMESTAMP=$(date +"%Y%m%d_%H%M%S")
+
+ # Model configuration
+ DECODER=dpt
+
+ # Checkpoint path
+ CKPT="/home/ywan0794/vggt/training/logs/dpt_metric_lora/ckpts/checkpoint_4.pt"
+
+ # Output directory
+ OUT_DIR="eval_output"
+ mkdir -p $OUT_DIR
+
+ echo "============================================"
+ echo "Starting MoGe Evaluation for VGGT+DPT at $(date)"
+ echo "Decoder: $DECODER"
+ echo "Checkpoint: $CKPT"
+ echo "Output: ${OUT_DIR}/vggt_dpt_${TIMESTAMP}.json"
+ echo "============================================"
+
+ # ============================================
+ # Run evaluation
+ # ============================================
+ python moge/scripts/eval_baseline.py \
+ --baseline baselines/vggt_custom.py \
+ --config /home/ywan0794/datasets/eval/moge_style_eval/all_benchmarks.json \
+ --output ${OUT_DIR}/vggt_dpt_${TIMESTAMP}.json \
+ --repo /home/ywan0794/vggt \
+ --checkpoint "$CKPT" \
+ --decoder $DECODER \
+ --lora_rank 8 \
+ --lora_alpha 16
+
+ echo "============================================"
+ echo "Evaluation completed at $(date)"
+ echo "============================================"