Commit da3ed5b (verified, parent 914c8a5) by zeyuren2002: Add files using upload-large-folder tool
.gitignore ADDED
## Ignore Visual Studio temporary files, build results, and
## files generated by popular Visual Studio add-ons.
##
## Get latest from https://github.com/github/gitignore/blob/main/VisualStudio.gitignore

# User-specific files
*.rsuser
*.suo
*.user
*.userosscache
*.sln.docstates

# User-specific files (MonoDevelop/Xamarin Studio)
*.userprefs

# Mono auto generated files
mono_crash.*

# Build results
[Dd]ebug/
[Dd]ebugPublic/
[Rr]elease/
[Rr]eleases/
x64/
x86/
[Ww][Ii][Nn]32/
[Aa][Rr][Mm]/
[Aa][Rr][Mm]64/
bld/
[Bb]in/
[Oo]bj/
[Ll]og/
[Ll]ogs/

# Visual Studio 2015/2017 cache/options directory
.vs/
# Uncomment if you have tasks that create the project's static files in wwwroot
#wwwroot/

# Visual Studio 2017 auto generated files
Generated\ Files/

# MSTest test Results
[Tt]est[Rr]esult*/
[Bb]uild[Ll]og.*

# NUnit
*.VisualState.xml
TestResult.xml
nunit-*.xml

# Build Results of an ATL Project
[Dd]ebugPS/
[Rr]eleasePS/
dlldata.c

# Benchmark Results
BenchmarkDotNet.Artifacts/

# .NET Core
project.lock.json
project.fragment.lock.json
artifacts/

# ASP.NET Scaffolding
ScaffoldingReadMe.txt

# StyleCop
StyleCopReport.xml

# Files built by Visual Studio
*_i.c
*_p.c
*_h.h
*.ilk
*.meta
*.obj
*.iobj
*.pch
*.pdb
*.ipdb
*.pgc
*.pgd
*.rsp
*.sbr
*.tlb
*.tli
*.tlh
*.tmp
*.tmp_proj
*_wpftmp.csproj
*.log
*.tlog
*.vspscc
*.vssscc
.builds
*.pidb
*.svclog
*.scc

# Chutzpah Test files
_Chutzpah*

# Visual C++ cache files
ipch/
*.aps
*.ncb
*.opendb
*.opensdf
*.sdf
*.cachefile
*.VC.db
*.VC.VC.opendb

# Visual Studio profiler
*.psess
*.vsp
*.vspx
*.sap

# Visual Studio Trace Files
*.e2e

# TFS 2012 Local Workspace
$tf/

# Guidance Automation Toolkit
*.gpState

# ReSharper is a .NET coding add-in
_ReSharper*/
*.[Rr]e[Ss]harper
*.DotSettings.user

# TeamCity is a build add-in
_TeamCity*

# DotCover is a Code Coverage Tool
*.dotCover

# AxoCover is a Code Coverage Tool
.axoCover/*
!.axoCover/settings.json

# Coverlet is a free, cross platform Code Coverage Tool
coverage*.json
coverage*.xml
coverage*.info

# Visual Studio code coverage results
*.coverage
*.coveragexml

# NCrunch
_NCrunch_*
.*crunch*.local.xml
nCrunchTemp_*

# MightyMoose
*.mm.*
AutoTest.Net/

# Web workbench (sass)
.sass-cache/

# Installshield output folder
[Ee]xpress/

# DocProject is a documentation generator add-in
DocProject/buildhelp/
DocProject/Help/*.HxT
DocProject/Help/*.HxC
DocProject/Help/*.hhc
DocProject/Help/*.hhk
DocProject/Help/*.hhp
DocProject/Help/Html2
DocProject/Help/html

# Click-Once directory
publish/

# Publish Web Output
*.[Pp]ublish.xml
*.azurePubxml
# Note: Comment the next line if you want to checkin your web deploy settings,
# but database connection strings (with potential passwords) will be unencrypted
*.pubxml
*.publishproj

# Microsoft Azure Web App publish settings. Comment the next line if you want to
# checkin your Azure Web App publish settings, but sensitive information contained
# in these scripts will be unencrypted
PublishScripts/

# NuGet Packages
*.nupkg
# NuGet Symbol Packages
*.snupkg
# The packages folder can be ignored because of Package Restore
**/[Pp]ackages/*
# except build/, which is used as an MSBuild target.
!**/[Pp]ackages/build/
# Uncomment if necessary however generally it will be regenerated when needed
#!**/[Pp]ackages/repositories.config
# NuGet v3's project.json files produces more ignorable files
*.nuget.props
*.nuget.targets

# Microsoft Azure Build Output
csx/
*.build.csdef

# Microsoft Azure Emulator
ecf/
rcf/

# Windows Store app package directories and files
AppPackages/
BundleArtifacts/
Package.StoreAssociation.xml
_pkginfo.txt
*.appx
*.appxbundle
*.appxupload

# Visual Studio cache files
# files ending in .cache can be ignored
*.[Cc]ache
# but keep track of directories ending in .cache
!?*.[Cc]ache/

# Others
ClientBin/
~$*
*~
*.dbmdl
*.dbproj.schemaview
*.jfm
*.pfx
*.publishsettings
orleans.codegen.cs

# Including strong name files can present a security risk
# (https://github.com/github/gitignore/pull/2483#issue-259490424)
#*.snk

# Since there are multiple workflows, uncomment next line to ignore bower_components
# (https://github.com/github/gitignore/pull/1529#issuecomment-104372622)
#bower_components/

# RIA/Silverlight projects
Generated_Code/

# Backup & report files from converting an old project file
# to a newer Visual Studio version. Backup files are not needed,
# because we have git ;-)
_UpgradeReport_Files/
Backup*/
UpgradeLog*.XML
UpgradeLog*.htm
ServiceFabricBackup/
*.rptproj.bak

# SQL Server files
*.mdf
*.ldf
*.ndf

# Business Intelligence projects
*.rdl.data
*.bim.layout
*.bim_*.settings
*.rptproj.rsuser
*- [Bb]ackup.rdl
*- [Bb]ackup ([0-9]).rdl
*- [Bb]ackup ([0-9][0-9]).rdl

# Microsoft Fakes
FakesAssemblies/

# GhostDoc plugin setting file
*.GhostDoc.xml

# Node.js Tools for Visual Studio
.ntvs_analysis.dat
node_modules/

# Visual Studio 6 build log
*.plg

# Visual Studio 6 workspace options file
*.opt

# Visual Studio 6 auto-generated workspace file (contains which files were open etc.)
*.vbw

# Visual Studio 6 auto-generated project file (contains which files were open etc.)
*.vbp

# Visual Studio 6 workspace and project file (working project files containing files to include in project)
*.dsw
*.dsp

# Visual Studio 6 technical files
*.ncb
*.aps

# Visual Studio LightSwitch build output
**/*.HTMLClient/GeneratedArtifacts
**/*.DesktopClient/GeneratedArtifacts
**/*.DesktopClient/ModelManifest.xml
**/*.Server/GeneratedArtifacts
**/*.Server/ModelManifest.xml
_Pvt_Extensions

# Paket dependency manager
.paket/paket.exe
paket-files/

# FAKE - F# Make
.fake/

# CodeRush personal settings
.cr/personal

# Python Tools for Visual Studio (PTVS)
__pycache__/
*.pyc

# Cake - Uncomment if you are using it
# tools/**
# !tools/packages.config

# Tabs Studio
*.tss

# Telerik's JustMock configuration file
*.jmconfig

# BizTalk build output
*.btp.cs
*.btm.cs
*.odx.cs
*.xsd.cs

# OpenCover UI analysis results
OpenCover/

# Azure Stream Analytics local run output
ASALocalRun/

# MSBuild Binary and Structured Log
*.binlog

# NVidia Nsight GPU debugger configuration file
*.nvuser

# MFractors (Xamarin productivity tool) working folder
.mfractor/

# Local History for Visual Studio
.localhistory/

# Visual Studio History (VSHistory) files
.vshistory/

# BeatPulse healthcheck temp database
healthchecksdb

# Backup folder for Package Reference Convert tool in Visual Studio 2017
MigrationBackup/

# Ionide (cross platform F# VS Code tools) working folder
.ionide/

# Fody - auto-generated XML schema
FodyWeavers.xsd

# VS Code files for those working on multiple tools
.vscode/*
!.vscode/settings.json
!.vscode/tasks.json
!.vscode/launch.json
!.vscode/extensions.json
*.code-workspace

# Local History for Visual Studio Code
.history/

# Windows Installer files from build outputs
*.cab
*.msi
*.msix
*.msm
*.msp

# JetBrains Rider
*.sln.iml

# Python
*.egg-info/
/build

# MoGe
/data*
/download
/extract
/debug
/workspace
/mlruns
/infer_output
/video_output
/eval_output
/.blobcache
/test_images
/test_videos
/vis
/videos
/blobmnt
/eval_dump
/pretrained
/.gradio
/tmp
CHANGELOG.md ADDED
## 2024-11-28
### Added
- Supported user-provided camera FOV. See [scripts/infer.py](scripts/infer.py) `--fov_x`.
  - Related issues: [#25](https://github.com/microsoft/MoGe/issues/25) and [#24](https://github.com/microsoft/MoGe/issues/24).
- Added inference scripts for panorama images. See [scripts/infer_panorama.py](scripts/infer_panorama.py).
  - Related issue: [#19](https://github.com/microsoft/MoGe/issues/19).

### Fixed
- Suppressed unnecessary numpy runtime warnings.
- Specified recommended versions of requirements.
  - Related issue: [#21](https://github.com/microsoft/MoGe/issues/21).

### Changed
- Moved `app.py` and `infer.py` to [scripts/](scripts/).
- Improved edge removal.

## 2025-03-18
### Added
- Training and evaluation code. See [docs/train.md](docs/train.md) and [docs/eval.md](docs/eval.md).
- Supported installation via pip. Thanks to @fabiencastan and @jgoueslard for their commits in [#47](https://github.com/microsoft/MoGe/pull/47).
- Supported command-line usage when installed.

### Changed
- Moved `scripts/` into `moge/` for package installation and command-line usage.
- Renamed `moge.model.moge_model` to `moge.model.v1` for version management. Now you can import the model class through `from moge.model.v1 import MoGeModel` or `from moge.model import import_model_class_by_version; MoGeModel = import_model_class_by_version('v1')`.
- Exposed the `num_tokens` parameter in the MoGe model.

## 2025-06-10
### Added
- Released MoGe-2.

## 2025-10-16
### Added
- Updated training code for MoGe-2.

### Changed
- Refactored training dataloader code for better readability.
- Removed Git LFS for convenience.
CODE_OF_CONDUCT.md ADDED
# Microsoft Open Source Code of Conduct

This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).

Resources:

- [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/)
- [Microsoft Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
- Contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with questions or concerns
LICENSE ADDED
MIT License

Copyright (c) Microsoft Corporation.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.

"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:

(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.

You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.

Copyright [yyyy] [name of copyright owner]

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
README.md ADDED
@@ -0,0 +1,295 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # MoGe: Accurate Monocular Geometry Estimation
2
+
3
+ MoGe is a powerful model for recovering 3D geometry from monocular open-domain images, including metric point maps, metric depth maps, normal maps and camera FOV. ***Check our websites ([MoGe-1](https://wangrc.site/MoGePage), [MoGe-2](https://wangrc.site/MoGe2Page)) for videos and interactive results!***
4
+
5
+ ## 📖 Publications
6
+
7
+ ### MoGe-2: Accurate Monocular Geometry with Metric Scale and Sharp Details
8
+
9
+ <div align="center">
10
+ <a href="https://arxiv.org/abs/2507.02546"><img src='https://img.shields.io/badge/arXiv-Paper-red?logo=arxiv&logoColor=white' alt='arXiv'></a>
11
+ <a href='https://wangrc.site/MoGe2Page/'><img src='https://img.shields.io/badge/Project_Page-Website-green?logo=googlechrome&logoColor=white' alt='Project Page'></a>
12
+ <a href='https://huggingface.co/spaces/Ruicheng/MoGe-2'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Demo_(MoGe_v2)-blue'></a>
13
+
14
+ https://github.com/user-attachments/assets/8f9ae680-659d-4f7f-82e2-b9ed9d6b988a
15
+
16
+ </div>
17
+
18
+ ### MoGe: Unlocking Accurate Monocular Geometry Estimation for Open-Domain Images with Optimal Training Supervision
19
+
20
+ <div align="center">
21
+ <a href="https://arxiv.org/abs/2410.19115"><img src='https://img.shields.io/badge/arXiv-Paper-red?logo=arxiv&logoColor=white' alt='arXiv'></a>
22
+ <a href='https://wangrc.site/MoGePage/'><img src='https://img.shields.io/badge/Project_Page-Website-green?logo=googlechrome&logoColor=white' alt='Project Page'></a>
23
+ <a href='https://huggingface.co/spaces/Ruicheng/MoGe'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Demo_(MoGe_v1)-blue'></a>
24
+ </div>
25
+
26
+ <img src="./assets/overview_simplified.png" width="100%" alt="Method overview" align="center">
27
+
28
+
29
+ ## 🌟 Features
30
+
31
+ * **Accurate 3D geometry estimation**: Estimate point maps & depth maps & [normal maps](docs/normal.md) from open-domain single images with high precision -- all capabilities in one model, one forward pass.
32
+ * **Optional ground-truth FOV input**: Enhance model accuracy further by providing the true field of view.
33
+ * **Flexible resolution support**: Works seamlessly with various resolutions and aspect ratios, from 2:1 to 1:2.
34
+ * **Optimized for speed**: Achieves 60ms latency per image (A100 or RTX3090, FP16, ViT-L). Adjustable inference resolution for even faster speed.
35
+
36
+ ## ✨ News
37
+
38
+ ***(2025-10-16)***
39
+ * Updated training code for MoGe-2.
40
+
41
+ ***(2025-06-10)***
42
+
43
+ * ❗**Released MoGe-2**, a state-of-the-art model for monocular geometry, with these new capabilities in one unified model:
44
+ * point map prediction in **metric scale**;
45
+ * comparable and even better performance over MoGe-1;
46
+ * significant improvement of **visual sharpness**;
47
+ * high-quality [**normal map** estimation](docs/normal.md);
48
+ * lower inference latency.
49
+
50
+ ## 📦 Installation
51
+
52
+ ### Install via pip
53
+
54
+ ```bash
55
+ pip install git+https://github.com/microsoft/MoGe.git
56
+ ```
57
+
58
+ ### Or clone this repository
59
+
60
+ ```bash
61
+ git clone https://github.com/microsoft/MoGe.git
62
+ cd MoGe
63
+ pip install -r requirements.txt # install the requirements
64
+ ```
65
+
66
+ Note: MoGe should be compatible with most requirements versions. Please check the `requirements.txt` for more details if you encounter any dependency issues.
67
+
68
+ ## 🤗 Pretrained Models
69
+
70
+ Our pretrained models are available on the huggingface hub:
71
+
72
+ <table>
73
+ <thead>
74
+ <tr>
75
+ <th>Version</th>
76
+ <th>Hugging Face Model</th>
77
+ <th>Metric scale</th>
78
+ <th>Normal</th>
79
+ <th>#Params</th>
80
+ </tr>
81
+ </thead>
82
+ <tbody>
83
+ <tr>
84
+ <td>MoGe-1</td>
85
+ <td><a href="https://huggingface.co/Ruicheng/moge-vitl" target="_blank"><code>Ruicheng/moge-vitl</code><a></td>
86
+ <td>-</td>
87
+ <td>-</td>
88
+ <td>314M</td>
89
+ </tr>
90
+ <tr>
91
+ <td rowspan="4">MoGe-2</td>
92
+ <td><a href="https://huggingface.co/Ruicheng/moge-2-vitl" target="_blank"><code>Ruicheng/moge-2-vitl</code></a></td>
93
+ <td>✅</td>
94
+ <td>-</td>
95
+ <td>326M</td>
96
+ </tr>
97
+ <tr>
98
+ <td><a href="https://huggingface.co/Ruicheng/moge-2-vitl-normal" target="_blank"><code>Ruicheng/moge-2-vitl-normal</code></a></td>
99
+ <td>✅</td>
100
+ <td>✅</td>
101
+ <td>331M</td>
102
+ </tr>
103
+ <tr>
104
+ <td><a href="https://huggingface.co/Ruicheng/moge-2-vitb-normal" target="_blank"><code>Ruicheng/moge-2-vitb-normal</code></a></td>
105
+ <td>✅</td>
106
+ <td>✅</td>
107
+ <td>104M</td>
108
+ </tr>
109
+ <tr>
110
+ <td><a href="https://huggingface.co/Ruicheng/moge-2-vits-normal" target="_blank"><code>Ruicheng/moge-2-vits-normal</code></a></td>
111
+ <td>✅</td>
112
+ <td>✅</td>
113
+ <td>35M</td>
114
+ </tr>
115
+ </tbody>
116
+ </table>
117
+
118
+
119
+ > NOTE: `moge-2-vitl-normal` has full capabilities, with almost the same level of performance as `moge-2-vitl` plus extra normal map estimation.
120
+
121
+ You may import the `MoGeModel` class of the matched version, then load the pretrained weights via `MoGeModel.from_pretrained("HUGGING_FACE_MODEL_REPO_NAME")` with automatic downloading.
122
+ If loading a local checkpoint, replace the model name with the local path.
123
+
124
+ For ONNX support, please refer to [docs/onnx.md](docs/onnx.md).
125
+
126
+ ## 💡 Minimal Code Example
127
+
128
+ Here is a minimal example for loading the model and inferring on a single image.
129
+
130
+ ```python
131
+ import cv2
132
+ import torch
133
+ # from moge.model.v1 import MoGeModel
134
+ from moge.model.v2 import MoGeModel # Let's try MoGe-2
135
+
136
+ device = torch.device("cuda")
137
+
138
+ # Load the model from huggingface hub (or load from local).
139
+ model = MoGeModel.from_pretrained("Ruicheng/moge-2-vitl-normal").to(device)
140
+
141
+ # Read the input image and convert to tensor (3, H, W) with RGB values normalized to [0, 1]
142
+ input_image = cv2.cvtColor(cv2.imread("PATH_TO_IMAGE.jpg"), cv2.COLOR_BGR2RGB)
143
+ input_image = torch.tensor(input_image / 255, dtype=torch.float32, device=device).permute(2, 0, 1)
144
+
145
+ # Infer
146
+ output = model.infer(input_image)
147
+ """
148
+ `output` has keys "points", "depth", "mask", "normal" (optional) and "intrinsics".
149
+ All maps have the same size as the input image.
150
+ {
151
+ "points": (H, W, 3), # point map in OpenCV camera coordinate system (x right, y down, z forward). For MoGe-2, the point map is in metric scale.
152
+ "depth": (H, W), # depth map
153
+     "normal": (H, W, 3), # normal map in OpenCV camera coordinate system (available for MoGe-2-normal)
154
+ "mask": (H, W), # a binary mask for valid pixels.
155
+ "intrinsics": (3, 3), # normalized camera intrinsics
156
+ }
157
+ """
158
+ ```
159
+ For more usage details, see the `MoGeModel.infer()` docstring.
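Since the returned `intrinsics` are normalized by the image size, converting them to pixel units is a one-liner. A minimal sketch (the helper name is ours, not part of the MoGe API; it assumes the first row is normalized by the width and the second by the height):

```python
import numpy as np

def denormalize_intrinsics(K_norm: np.ndarray, height: int, width: int) -> np.ndarray:
    """Scale a normalized 3x3 intrinsics matrix to pixel units."""
    K = K_norm.copy()
    K[0, :] *= width   # fx, cx are normalized by the image width
    K[1, :] *= height  # fy, cy are normalized by the image height
    return K

# Example: hypothetical normalized intrinsics for a 640x480 image with the
# principal point at the center.
K_norm = np.array([
    [1.2, 0.0, 0.5],
    [0.0, 1.6, 0.5],
    [0.0, 0.0, 1.0],
])
K_px = denormalize_intrinsics(K_norm, height=480, width=640)
print(K_px[0, 0], K_px[1, 1], K_px[0, 2], K_px[1, 2])  # 768.0 768.0 320.0 240.0
```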
160
+
161
+ ## 💡 Usage
162
+
163
+ ### Gradio demo | `moge app`
164
+
165
+ > The demo for MoGe-1 is also available at our [Hugging Face Space](https://huggingface.co/spaces/Ruicheng/MoGe).
166
+
167
+ ```bash
168
+ # Using the command line tool
169
+ moge app # will run MoGe-2 demo by default.
170
+
171
+ # In this repo
172
+ python moge/scripts/app.py # --share for Gradio public sharing
173
+ ```
174
+
175
+ See also [`moge/scripts/app.py`](moge/scripts/app.py)
176
+
177
+
178
+ ### Inference | `moge infer`
179
+
180
+ Run the script `moge/scripts/infer.py` via the following command:
181
+
182
+ ```bash
183
+ # Save the output [maps], [glb] and [ply] files
184
+ moge infer -i IMAGES_FOLDER_OR_IMAGE_PATH -o OUTPUT_FOLDER --maps --glb --ply
185
+
186
+ # Show the result in a window (requires pyglet < 2.0, e.g. pip install pyglet==1.5.29)
187
+ moge infer -i IMAGES_FOLDER_OR_IMAGE_PATH -o OUTPUT_FOLDER --show
188
+ ```
189
+
190
+ For detailed options, run `moge infer --help`:
191
+
192
+ ```
193
+ Usage: moge infer [OPTIONS]
194
+
195
+ Inference script
196
+
197
+ Options:
198
+ -i, --input PATH Input image or folder path. "jpg" and "png" are
199
+ supported.
200
+ --fov_x FLOAT If camera parameters are known, set the
201
+ horizontal field of view in degrees. Otherwise,
202
+ MoGe will estimate it.
203
+ -o, --output PATH Output folder path
204
+ --pretrained TEXT Pretrained model name or path. If not provided,
205
+ the corresponding default model will be chosen.
206
+ --version [v1|v2] Model version. Defaults to "v2"
207
+ --device TEXT Device name (e.g. "cuda", "cuda:0", "cpu").
208
+ Defaults to "cuda"
209
+ --fp16 Use fp16 precision for much faster inference.
210
+ --resize INTEGER Resize the image(s) & output maps to a specific
211
+ size. Defaults to None (no resizing).
212
+ --resolution_level INTEGER An integer [0-9] for the resolution level for
213
+ inference. Higher value means more tokens and
214
+ the finer details will be captured, but
215
+ inference can be slower. Defaults to 9. Note
216
+ that it is irrelevant to the output size, which
217
+ is always the same as the input size.
218
+ `resolution_level` actually controls
219
+ `num_tokens`. See `num_tokens` for more details.
220
+   --num_tokens INTEGER         Number of tokens used for inference. An integer
221
+ in the (suggested) range of `[1200, 2500]`.
222
+ `resolution_level` will be ignored if
223
+ `num_tokens` is provided. Default: None
224
+ --threshold FLOAT Threshold for removing edges. Defaults to 0.01.
225
+ Smaller value removes more edges. "inf" means no
226
+ thresholding.
227
+ --maps Whether to save the output maps (image, point
228
+ map, depth map, normal map, mask) and fov.
229
+   --glb                        Whether to save the output as a .glb file. The
230
+ color will be saved as a texture.
231
+   --ply                        Whether to save the output as a .ply file. The
232
+ color will be saved as vertex colors.
233
+   --show                       Whether to show the output in a window. Note that
234
+ this requires pyglet<2 installed as required by
235
+ trimesh.
236
+ --help Show this message and exit.
237
+ ```
238
+
239
+ See also [`moge/scripts/infer.py`](moge/scripts/infer.py)
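To build intuition for `--num_tokens`: for a ViT with 14-pixel patches, a token budget can be mapped to an aspect-ratio-preserving inference resolution roughly as below (an illustrative sketch; the exact rounding inside MoGe may differ):

```python
def token_budget_to_resolution(height: int, width: int, num_tokens: int, patch_size: int = 14):
    """Distribute `num_tokens` ViT patches over a grid that preserves the
    input aspect ratio, and return the (height, width) to resize to."""
    aspect_ratio = width / height
    tokens_cols = round((num_tokens * aspect_ratio) ** 0.5)  # patches along the width
    tokens_rows = round((num_tokens / aspect_ratio) ** 0.5)  # patches along the height
    return tokens_rows * patch_size, tokens_cols * patch_size

# A 1920x1080 input with a budget of 1800 tokens is processed at roughly:
h, w = token_budget_to_resolution(1080, 1920, 1800)
print(h, w)  # 448 798
```

Note that the output maps are still resized back to the input resolution; the token budget only controls the internal processing resolution.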
240
+
241
+ ### 360° panorama images | `moge infer_panorama`
242
+
243
+ > *NOTE: This is an experimental extension of MoGe.*
244
+
245
+ The script will split the 360-degree panorama image into multiple perspective views and infer on each view separately.
246
+ The output maps will be combined to produce a panorama depth map and point map.
247
+
248
+ Note that the panorama image must have spherical parameterization (e.g., environment maps or equirectangular images). Other formats must be converted to spherical format before using this script. Run `moge infer_panorama --help` for detailed options.
249
+
250
+
251
+ <div align="center">
252
+ <img src="./assets/panorama_pipeline.png" width="80%">
253
+
254
+ The photo is from [this URL](https://commons.wikimedia.org/wiki/Category:360%C2%B0_panoramas_with_equirectangular_projection#/media/File:Braunschweig_Sankt-%C3%84gidien_Panorama_02.jpg)
255
+ </div>
256
+
257
+ See also [`moge/scripts/infer_panorama.py`](moge/scripts/infer_panorama.py)
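The split-and-merge strategy works because every pixel of an equirectangular panorama corresponds to a unique viewing direction on the sphere. A minimal sketch of that mapping (illustrative; not the exact convention used by `infer_panorama.py`):

```python
import numpy as np

def equirect_directions(height: int, width: int) -> np.ndarray:
    """Per-pixel unit ray directions for an equirectangular panorama.

    Longitude spans [-pi, pi] across the width, latitude [-pi/2, pi/2]
    across the height. Returns an (H, W, 3) array in a y-up world frame.
    """
    # Angles at pixel centers
    lon = (np.arange(width) + 0.5) / width * 2 * np.pi - np.pi
    lat = np.pi / 2 - (np.arange(height) + 0.5) / height * np.pi
    lon, lat = np.meshgrid(lon, lat)
    # Spherical-to-Cartesian conversion; each direction has unit length
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1)

dirs = equirect_directions(512, 1024)
print(dirs.shape)  # (512, 1024, 3)
```

Each perspective view covers a cone of these directions, and the per-view predictions are merged back onto the panorama grid along them.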
258
+
259
+ ## 🏋️‍♂️ Training & Finetuning
260
+
261
+ See [docs/train.md](docs/train.md)
262
+
263
+ ## 🧪 Evaluation
264
+
265
+ See [docs/eval.md](docs/eval.md)
266
+
267
+ ## ⚖️ License
268
+
269
+ MoGe code is released under the MIT license, except for DINOv2 code in `moge/model/dinov2` which is released by Meta AI under the Apache 2.0 license.
270
+ See [LICENSE](LICENSE) for more details.
271
+
272
+
273
+ ## 📜 Citation
274
+
275
+ If you find our work useful in your research, please consider citing our papers:
276
+
277
+ ```
278
+ @inproceedings{wang2025moge,
279
+   title={MoGe: Unlocking accurate monocular geometry estimation for open-domain images with optimal training supervision},
280
+ author={Wang, Ruicheng and Xu, Sicheng and Dai, Cassie and Xiang, Jianfeng and Deng, Yu and Tong, Xin and Yang, Jiaolong},
281
+ booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
282
+ pages={5261--5271},
283
+ year={2025}
284
+ }
285
+
286
+ @misc{wang2025moge2,
287
+ title={MoGe-2: Accurate Monocular Geometry with Metric Scale and Sharp Details},
288
+ author={Ruicheng Wang and Sicheng Xu and Yue Dong and Yu Deng and Jianfeng Xiang and Zelong Lv and Guangzhong Sun and Xin Tong and Jiaolong Yang},
289
+ year={2025},
290
+ eprint={2507.02546},
291
+ archivePrefix={arXiv},
292
+ primaryClass={cs.CV},
293
+ url={https://arxiv.org/abs/2507.02546},
294
+ }
295
+ ```
SECURITY.md ADDED
@@ -0,0 +1,41 @@
1
+ <!-- BEGIN MICROSOFT SECURITY.MD V0.0.9 BLOCK -->
2
+
3
+ ## Security
4
+
5
+ Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/Microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet) and [Xamarin](https://github.com/xamarin).
6
+
7
+ If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://aka.ms/security.md/definition), please report it to us as described below.
8
+
9
+ ## Reporting Security Issues
10
+
11
+ **Please do not report security vulnerabilities through public GitHub issues.**
12
+
13
+ Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://aka.ms/security.md/msrc/create-report).
14
+
15
+ If you prefer to submit without logging in, send email to [secure@microsoft.com](mailto:secure@microsoft.com). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://aka.ms/security.md/msrc/pgp).
16
+
17
+ You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://www.microsoft.com/msrc).
18
+
19
+ Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue:
20
+
21
+ * Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.)
22
+ * Full paths of source file(s) related to the manifestation of the issue
23
+ * The location of the affected source code (tag/branch/commit or direct URL)
24
+ * Any special configuration required to reproduce the issue
25
+ * Step-by-step instructions to reproduce the issue
26
+ * Proof-of-concept or exploit code (if possible)
27
+ * Impact of the issue, including how an attacker might exploit the issue
28
+
29
+ This information will help us triage your report more quickly.
30
+
31
+ If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://aka.ms/security.md/msrc/bounty) page for more details about our active programs.
32
+
33
+ ## Preferred Languages
34
+
35
+ We prefer all communications to be in English.
36
+
37
+ ## Policy
38
+
39
+ Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://aka.ms/security.md/cvd).
40
+
41
+ <!-- END MICROSOFT SECURITY.MD BLOCK -->
SUPPORT.md ADDED
@@ -0,0 +1,25 @@
1
+ # TODO: The maintainer of this repo has not yet edited this file
2
+
3
+ **REPO OWNER**: Do you want Customer Service & Support (CSS) support for this product/project?
4
+
5
+ - **No CSS support:** Fill out this template with information about how to file issues and get help.
6
+ - **Yes CSS support:** Fill out an intake form at [aka.ms/onboardsupport](https://aka.ms/onboardsupport). CSS will work with/help you to determine next steps.
7
+ - **Not sure?** Fill out an intake as though the answer were "Yes". CSS will help you decide.
8
+
9
+ *Then remove this first heading from this SUPPORT.MD file before publishing your repo.*
10
+
11
+ # Support
12
+
13
+ ## How to file issues and get help
14
+
15
+ This project uses GitHub Issues to track bugs and feature requests. Please search the existing
16
+ issues before filing new issues to avoid duplicates. For new issues, file your bug or
17
+ feature request as a new Issue.
18
+
19
+ For help and questions about using this project, please **REPO MAINTAINER: INSERT INSTRUCTIONS HERE
20
+ FOR HOW TO ENGAGE REPO OWNERS OR COMMUNITY FOR HELP. COULD BE A STACK OVERFLOW TAG OR OTHER
21
+ CHANNEL. WHERE WILL YOU HELP PEOPLE?**.
22
+
23
+ ## Microsoft Support Policy
24
+
25
+ Support for this **PROJECT or PRODUCT** is limited to the resources listed above.
baselines/da2_custom.py ADDED
@@ -0,0 +1,125 @@
1
+ # DAv2 with custom trained DPT/SDT checkpoint
2
+ import os
3
+ import sys
4
+ from typing import *
5
+ from pathlib import Path
6
+
7
+ import click
8
+ import torch
9
+ import torch.nn.functional as F
10
+ import torchvision.transforms as T
11
+ import torchvision.transforms.functional as TF
12
+
13
+ from moge.test.baseline import MGEBaselineInterface
14
+
15
+
16
+ class Baseline(MGEBaselineInterface):
17
+ def __init__(self, repo_path: str, checkpoint: str, encoder: str, decoder: str, num_tokens: int, device: Union[torch.device, str]):
18
+ # Create from repo
19
+ repo_path = os.path.abspath(repo_path)
20
+ training_path = os.path.join(repo_path, 'training')
21
+ # Add both repo root (for depth_anything_v2) and training (for sdt)
22
+ if repo_path not in sys.path:
23
+ sys.path.insert(0, repo_path)
24
+ if training_path not in sys.path:
25
+ sys.path.insert(0, training_path)
26
+ if not Path(repo_path).exists():
27
+ raise FileNotFoundError(f'Cannot find the Depth-Anything-V2 repository at {repo_path}.')
28
+
29
+ device = torch.device(device)
30
+
31
+ # Model configurations (same as training)
32
+ model_configs = {
33
+ 'vits': {'encoder': 'vits', 'features': 64, 'out_channels': [48, 96, 192, 384]},
34
+ 'vitb': {'encoder': 'vitb', 'features': 128, 'out_channels': [96, 192, 384, 768]},
35
+ 'vitl': {'encoder': 'vitl', 'features': 256, 'out_channels': [256, 512, 1024, 1024]},
36
+ 'vitg': {'encoder': 'vitg', 'features': 384, 'out_channels': [1536, 1536, 1536, 1536]}
37
+ }
38
+
39
+ # Build model based on decoder type
40
+ if decoder == 'dpt':
41
+ from depth_anything_v2.dpt import DepthAnythingV2
42
+ model = DepthAnythingV2(**model_configs[encoder])
43
+ elif decoder == 'sdt':
44
+ from depth_anything_v2.sdt import DepthAnythingV2SDT
45
+ model = DepthAnythingV2SDT(
46
+ encoder=encoder,
47
+ features=model_configs[encoder]['features'],
48
+ out_channels=model_configs[encoder]['out_channels'],
49
+ use_clstoken=True,
50
+ upsampler='dysample'
51
+ )
52
+ else:
53
+ raise ValueError(f"Unknown decoder: {decoder}")
54
+
55
+ # Load checkpoint
56
+ if not os.path.exists(checkpoint):
57
+ raise FileNotFoundError(f'Cannot find checkpoint at {checkpoint}')
58
+
59
+ ckpt = torch.load(checkpoint, map_location='cpu')
60
+ if 'model' in ckpt:
61
+ state_dict = ckpt['model']
62
+ else:
63
+ state_dict = ckpt
64
+
65
+ # Remove 'module.' prefix if present
66
+ state_dict = {k.replace('module.', ''): v for k, v in state_dict.items()}
67
+
68
+ missing, unexpected = model.load_state_dict(state_dict, strict=False)
69
+ print(f"Loaded checkpoint from {checkpoint}")
70
+ if missing:
71
+ print(f"Missing keys: {len(missing)}")
72
+ if unexpected:
73
+ print(f"Unexpected keys: {len(unexpected)}")
74
+
75
+ model.to(device).eval()
76
+ self.model = model
77
+ self.num_tokens = num_tokens
78
+ self.device = device
79
+
80
+ @click.command()
81
+ @click.option('--repo', 'repo_path', type=click.Path(), default='/home/ywan0794/Depth-Anything-V2', help='Path to the Depth-Anything-V2 repository.')
82
+ @click.option('--checkpoint', type=click.Path(), required=True, help='Path to trained checkpoint.')
83
+ @click.option('--encoder', type=click.Choice(['vits', 'vitb', 'vitl']), default='vitb', help='Encoder architecture.')
84
+ @click.option('--decoder', type=click.Choice(['dpt', 'sdt']), default='dpt', help='Decoder type.')
85
+ @click.option('--num_tokens', type=int, default=None, help='Number of tokens to use for the input image.')
86
+ @click.option('--device', type=str, default='cuda', help='Device to use for inference.')
87
+ @staticmethod
88
+ def load(repo_path: str, checkpoint: str, encoder: str, decoder: str, num_tokens: int, device: torch.device = 'cuda'):
89
+ return Baseline(repo_path, checkpoint, encoder, decoder, num_tokens, device)
90
+
91
+ @torch.inference_mode()
92
+ def infer(self, image: torch.Tensor, intrinsics: Optional[torch.Tensor] = None) -> Dict[str, torch.Tensor]:
93
+ original_height, original_width = image.shape[-2:]
94
+
95
+ if image.ndim == 3:
96
+ image = image.unsqueeze(0)
97
+ omit_batch_dim = True
98
+ else:
99
+ omit_batch_dim = False
100
+
101
+ if self.num_tokens is None:
102
+ resize_factor = 518 / min(original_height, original_width)
103
+ expected_width = round(original_width * resize_factor / 14) * 14
104
+ expected_height = round(original_height * resize_factor / 14) * 14
105
+ else:
106
+ aspect_ratio = original_width / original_height
107
+             tokens_rows = round((self.num_tokens / aspect_ratio) ** 0.5)
108
+             tokens_cols = round((self.num_tokens * aspect_ratio) ** 0.5)
109
+ expected_width = tokens_cols * 14
110
+ expected_height = tokens_rows * 14
111
+
112
+ image = TF.resize(image, (expected_height, expected_width), interpolation=T.InterpolationMode.BICUBIC, antialias=True)
113
+ image = TF.normalize(image, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
114
+ image = image.to(self.device)
115
+
116
+ disparity = self.model(image)
117
+
118
+ disparity = F.interpolate(disparity[:, None], size=(original_height, original_width), mode='bilinear', align_corners=False, antialias=False)[:, 0]
119
+
120
+ if omit_batch_dim:
121
+ disparity = disparity.squeeze(0)
122
+
123
+ return {
124
+ 'disparity_affine_invariant': disparity
125
+ }
baselines/da3.py ADDED
@@ -0,0 +1,92 @@
1
+ # Reference: https://github.com/ByteDance-Seed/Depth-Anything-3
2
+ import os
3
+ import sys
4
+ from typing import *
5
+ from pathlib import Path
6
+
7
+ import click
8
+ import torch
9
+ import torch.nn.functional as F
10
+ import torchvision.transforms as T
11
+ import torchvision.transforms.functional as TF
12
+
13
+ from moge.test.baseline import MGEBaselineInterface
14
+
15
+
16
+ class Baseline(MGEBaselineInterface):
17
+ def __init__(self, repo_path: str, model_name: str, num_tokens: int, device: Union[torch.device, str]):
18
+ # Create from repo
19
+ repo_path = os.path.abspath(repo_path)
20
+ if repo_path not in sys.path:
21
+ sys.path.insert(0, os.path.join(repo_path, 'src'))
22
+ if not Path(repo_path).exists():
23
+ raise FileNotFoundError(f'Cannot find the Depth-Anything-3 repository at {repo_path}. Please clone the repository and provide the path to it using the --repo option.')
24
+
25
+ from depth_anything_3.api import DepthAnything3
26
+
27
+ device = torch.device(device)
28
+
29
+ # Instantiate model
30
+ model = DepthAnything3.from_pretrained(f"ByteDance-Seed/{model_name}")
31
+
32
+ model.to(device).eval()
33
+ self.model = model
34
+ self.num_tokens = num_tokens
35
+ self.device = device
36
+
37
+ @click.command()
38
+ @click.option('--repo', 'repo_path', type=click.Path(), default='../Depth-Anything-3', help='Path to the Depth-Anything-3 repository.')
39
+ @click.option('--model_name', type=click.Choice(['da3-base', 'da3-large', 'da3-giant']), default='da3-large', help='Model name.')
40
+ @click.option('--num_tokens', type=int, default=None, help='Number of tokens to use for the input image.')
41
+ @click.option('--device', type=str, default='cuda', help='Device to use for inference.')
42
+ @staticmethod
43
+ def load(repo_path: str, model_name: str, num_tokens: int, device: torch.device = 'cuda'):
44
+ return Baseline(repo_path, model_name, num_tokens, device)
45
+
46
+ @torch.inference_mode()
47
+ def infer(self, image: torch.Tensor, intrinsics: Optional[torch.Tensor] = None) -> Dict[str, torch.Tensor]:
48
+ original_height, original_width = image.shape[-2:]
49
+
50
+ assert intrinsics is None, "Depth-Anything-3 does not support camera intrinsics input in this baseline"
51
+
52
+ if image.ndim == 3:
53
+ image = image.unsqueeze(0)
54
+ omit_batch_dim = True
55
+ else:
56
+ omit_batch_dim = False
57
+
58
+ if self.num_tokens is None:
59
+ resize_factor = 518 / min(original_height, original_width)
60
+ expected_width = round(original_width * resize_factor / 14) * 14
61
+ expected_height = round(original_height * resize_factor / 14) * 14
62
+ else:
63
+ aspect_ratio = original_width / original_height
64
+             tokens_rows = round((self.num_tokens / aspect_ratio) ** 0.5)
65
+             tokens_cols = round((self.num_tokens * aspect_ratio) ** 0.5)
66
+ expected_width = tokens_cols * 14
67
+ expected_height = tokens_rows * 14
68
+
69
+ image = TF.resize(image, (expected_height, expected_width), interpolation=T.InterpolationMode.BICUBIC, antialias=True)
70
+ image = TF.normalize(image, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
+         image = image.to(self.device)
71
+
72
+ # DA3 expects [B, N, 3, H, W] where N is number of views
73
+ image = image.unsqueeze(1) # [B, 1, 3, H, W]
74
+
75
+ # Forward pass
76
+ output = self.model(image)
77
+
78
+ # Extract depth prediction
79
+ # Output shape: [B, N, H, W]
80
+ depth = output['depth'][:, 0] # [B, H, W]
81
+
82
+ # Convert depth to disparity (inverse depth)
83
+ disparity = 1.0 / (depth + 1e-6)
84
+
85
+ disparity = F.interpolate(disparity[:, None], size=(original_height, original_width), mode='bilinear', align_corners=False, antialias=False)[:, 0]
86
+
87
+ if omit_batch_dim:
88
+ disparity = disparity.squeeze(0)
89
+
90
+ return {
91
+ 'disparity_affine_invariant': disparity
92
+ }
baselines/da3_custom.py ADDED
@@ -0,0 +1,137 @@
1
+ # DA3 with custom trained DPT/DualDPT/SDT checkpoint
2
+ import os
3
+ import sys
4
+ from typing import *
5
+ from pathlib import Path
6
+
7
+ import click
8
+ import torch
9
+ import torch.nn.functional as F
10
+ import torchvision.transforms as T
11
+ import torchvision.transforms.functional as TF
12
+
13
+ from moge.test.baseline import MGEBaselineInterface
14
+
15
+
16
+ # DA3 Wrapper (same as training)
17
+ class DA3Wrapper(torch.nn.Module):
18
+ def __init__(self, model):
19
+ super().__init__()
20
+ self.model = model
21
+
22
+ def forward(self, x):
23
+ # x: [B, 3, H, W]
24
+ # DA3 expects [B, N, 3, H, W] where N is number of views
25
+ x = x.unsqueeze(1) # [B, 1, 3, H, W]
26
+ output = self.model(x)
27
+ # output.depth shape: [B, 1, H, W]
28
+ depth = output.depth.squeeze(1) # [B, H, W]
29
+ return depth
30
+
31
+
32
+ class Baseline(MGEBaselineInterface):
33
+ def __init__(self, repo_path: str, checkpoint: str, decoder: str, num_tokens: int, device: Union[torch.device, str]):
34
+ # Create from repo
35
+ repo_path = os.path.abspath(repo_path)
36
+ src_path = os.path.join(repo_path, 'src')
37
+ training_path = os.path.join(repo_path, 'training')
38
+ # Add src path for depth_anything_3
39
+ if src_path not in sys.path:
40
+ sys.path.insert(0, src_path)
41
+ if training_path not in sys.path:
42
+ sys.path.insert(0, training_path)
43
+ if not Path(repo_path).exists():
44
+ raise FileNotFoundError(f'Cannot find the Depth-Anything-3 repository at {repo_path}.')
45
+
46
+ device = torch.device(device)
47
+
48
+ # Config paths
49
+ config_dir = os.path.join(repo_path, 'src', 'depth_anything_3', 'configs')
50
+ if decoder == 'dpt':
51
+ config_path = os.path.join(config_dir, 'da3dpt-large.yaml')
52
+ elif decoder == 'dualdpt':
53
+ config_path = os.path.join(config_dir, 'da3dualdpt-large.yaml')
54
+ elif decoder == 'sdt':
55
+ config_path = os.path.join(config_dir, 'da3sdt-large.yaml')
56
+ else:
57
+ raise ValueError(f"Unknown decoder: {decoder}")
58
+
59
+ from depth_anything_3.cfg import load_config, create_object
60
+
61
+ # Build model
62
+ cfg = load_config(config_path)
63
+ base_model = create_object(cfg)
64
+ model = DA3Wrapper(base_model)
65
+
66
+ # Load checkpoint
67
+ if not os.path.exists(checkpoint):
68
+ raise FileNotFoundError(f'Cannot find checkpoint at {checkpoint}')
69
+
70
+ ckpt = torch.load(checkpoint, map_location='cpu')
71
+ if 'model' in ckpt:
72
+ state_dict = ckpt['model']
73
+ else:
74
+ state_dict = ckpt
75
+
76
+ # Remove 'module.' prefix if present
77
+ state_dict = {k.replace('module.', ''): v for k, v in state_dict.items()}
78
+
79
+ missing, unexpected = model.load_state_dict(state_dict, strict=False)
80
+ print(f"Loaded checkpoint from {checkpoint}")
81
+ if missing:
82
+ print(f"Missing keys: {len(missing)}")
83
+ if unexpected:
84
+ print(f"Unexpected keys: {len(unexpected)}")
85
+
86
+ model.to(device).eval()
87
+ self.model = model
88
+ self.num_tokens = num_tokens
89
+ self.device = device
90
+
91
+ @click.command()
92
+ @click.option('--repo', 'repo_path', type=click.Path(), default='/home/ywan0794/Depth-Anything-3', help='Path to the Depth-Anything-3 repository.')
93
+ @click.option('--checkpoint', type=click.Path(), required=True, help='Path to trained checkpoint.')
94
+ @click.option('--decoder', type=click.Choice(['dpt', 'dualdpt', 'sdt']), default='dpt', help='Decoder type.')
95
+ @click.option('--num_tokens', type=int, default=None, help='Number of tokens to use for the input image.')
96
+ @click.option('--device', type=str, default='cuda', help='Device to use for inference.')
97
+ @staticmethod
98
+ def load(repo_path: str, checkpoint: str, decoder: str, num_tokens: int, device: torch.device = 'cuda'):
99
+ return Baseline(repo_path, checkpoint, decoder, num_tokens, device)
100
+
101
+ @torch.inference_mode()
102
+ def infer(self, image: torch.Tensor, intrinsics: Optional[torch.Tensor] = None) -> Dict[str, torch.Tensor]:
103
+ original_height, original_width = image.shape[-2:]
104
+
105
+ if image.ndim == 3:
106
+ image = image.unsqueeze(0)
107
+ omit_batch_dim = True
108
+ else:
109
+ omit_batch_dim = False
110
+
111
+ if self.num_tokens is None:
112
+ resize_factor = 518 / min(original_height, original_width)
113
+ expected_width = round(original_width * resize_factor / 14) * 14
114
+ expected_height = round(original_height * resize_factor / 14) * 14
115
+ else:
116
+ aspect_ratio = original_width / original_height
117
+             tokens_rows = round((self.num_tokens / aspect_ratio) ** 0.5)
118
+             tokens_cols = round((self.num_tokens * aspect_ratio) ** 0.5)
119
+ expected_width = tokens_cols * 14
120
+ expected_height = tokens_rows * 14
121
+
122
+ image = TF.resize(image, (expected_height, expected_width), interpolation=T.InterpolationMode.BICUBIC, antialias=True)
123
+ image = TF.normalize(image, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
124
+ image = image.to(self.device)
125
+
126
+ # DA3 model forward - outputs normalized disparity (NOT depth!)
127
+ with torch.cuda.amp.autocast(dtype=torch.bfloat16):
128
+ disparity = self.model(image)
129
+
130
+ disparity = F.interpolate(disparity[:, None], size=(original_height, original_width), mode='bilinear', align_corners=False, antialias=False)[:, 0]
131
+
132
+ if omit_batch_dim:
133
+ disparity = disparity.squeeze(0)
134
+
135
+ return {
136
+ 'disparity_affine_invariant': disparity
137
+ }
baselines/da_v2.py ADDED
@@ -0,0 +1,88 @@
1
+ # Reference: https://github.com/DepthAnything/Depth-Anything-V2
2
+ import os
3
+ import sys
4
+ from typing import *
5
+ from pathlib import Path
6
+
7
+ import click
8
+ import torch
9
+ import torch.nn.functional as F
10
+ import torchvision.transforms as T
11
+ import torchvision.transforms.functional as TF
12
+
13
+ from moge.test.baseline import MGEBaselineInterface
14
+
15
+
16
+ class Baseline(MGEBaselineInterface):
17
+ def __init__(self, repo_path: str, backbone: str, num_tokens: int, device: Union[torch.device, str]):
18
+ # Create from repo
19
+ repo_path = os.path.abspath(repo_path)
20
+ if repo_path not in sys.path:
21
+ sys.path.append(repo_path)
22
+ if not Path(repo_path).exists():
23
+ raise FileNotFoundError(f'Cannot find the Depth-Anything repository at {repo_path}. Please clone the repository and provide the path to it using the --repo option.')
24
+ from depth_anything_v2.dpt import DepthAnythingV2
25
+
26
+ device = torch.device(device)
27
+
28
+ # Instantiate model
29
+ model = DepthAnythingV2(encoder=backbone, features=256, out_channels=[256, 512, 1024, 1024])
30
+
31
+ # Load checkpoint
32
+ checkpoint_path = os.path.join(repo_path, f'checkpoints/depth_anything_v2_{backbone}.pth')
33
+ if not os.path.exists(checkpoint_path):
34
+ raise FileNotFoundError(f'Cannot find the checkpoint file at {checkpoint_path}. Please download the checkpoint file and place it in the checkpoints directory.')
35
+ checkpoint = torch.load(checkpoint_path, map_location='cpu', weights_only=True)
36
+ model.load_state_dict(checkpoint)
37
+
38
+ model.to(device).eval()
39
+ self.model = model
40
+ self.num_tokens = num_tokens
41
+ self.device = device
42
+
43
+ @click.command()
44
+ @click.option('--repo', 'repo_path', type=click.Path(), default='../Depth-Anything-V2', help='Path to the Depth-Anything repository.')
45
+ @click.option('--backbone', type=click.Choice(['vits', 'vitb', 'vitl']), default='vitl', help='Encoder architecture.')
46
+ @click.option('--num_tokens', type=int, default=None, help='Number of tokens to use for the input image.')
47
+ @click.option('--device', type=str, default='cuda', help='Device to use for inference.')
48
+ @staticmethod
49
+ def load(repo_path: str, backbone, num_tokens: int, device: torch.device = 'cuda'):
50
+ return Baseline(repo_path, backbone, num_tokens, device)
51
+
52
+ @torch.inference_mode()
53
+ def infer(self, image: torch.Tensor, intrinsics: Optional[torch.Tensor] = None) -> Dict[str, torch.Tensor]:
54
+ original_height, original_width = image.shape[-2:]
55
+
56
+ assert intrinsics is None, "Depth-Anything-V2 does not support camera intrinsics input"
57
+
58
+ if image.ndim == 3:
59
+ image = image.unsqueeze(0)
60
+ omit_batch_dim = True
61
+ else:
62
+ omit_batch_dim = False
63
+
64
+ if self.num_tokens is None:
65
+ resize_factor = 518 / min(original_height, original_width)
66
+ expected_width = round(original_width * resize_factor / 14) * 14
67
+ expected_height = round(original_height * resize_factor / 14) * 14
68
+ else:
69
+ aspect_ratio = original_width / original_height
70
+             tokens_rows = round((self.num_tokens / aspect_ratio) ** 0.5)
71
+             tokens_cols = round((self.num_tokens * aspect_ratio) ** 0.5)
72
+ expected_width = tokens_cols * 14
73
+ expected_height = tokens_rows * 14
74
+ image = TF.resize(image, (expected_height, expected_width), interpolation=T.InterpolationMode.BICUBIC, antialias=True)
75
+
76
+ image = TF.normalize(image, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
+         image = image.to(self.device)
77
+
78
+ disparity = self.model(image)
79
+
80
+ disparity = F.interpolate(disparity[:, None], size=(original_height, original_width), mode='bilinear', align_corners=False, antialias=False)[:, 0]
81
+
82
+ if omit_batch_dim:
83
+ disparity = disparity.squeeze(0)
84
+
85
+ return {
86
+ 'disparity_affine_invariant': disparity
87
+ }
88
+
baselines/da_v2_metric.py ADDED
@@ -0,0 +1,99 @@
+ # Reference https://github.com/DepthAnything/Depth-Anything-V2/metric_depth
+ import os
+ import sys
+ from typing import *
+ from pathlib import Path
+
+ import click
+ import torch
+ import torch.nn.functional as F
+ import torchvision.transforms as T
+ import torchvision.transforms.functional as TF
+ import cv2
+
+ from moge.test.baseline import MGEBaselineInterface
+
+
+ class Baseline(MGEBaselineInterface):
+
+     def __init__(self, repo_path: str, backbone: str, domain: str, num_tokens: int, device: str):
+         device = torch.device(device)
+         repo_path = os.path.abspath(repo_path)
+         if not Path(repo_path).exists():
+             raise FileNotFoundError(f'Cannot find the Depth-Anything repository at {repo_path}. Please clone the repository and provide the path to it using the --repo option.')
+         sys.path.append(os.path.join(repo_path, 'metric_depth'))
+         from depth_anything_v2.dpt import DepthAnythingV2
+
+         model_configs = {
+             'vits': {'encoder': 'vits', 'features': 64, 'out_channels': [48, 96, 192, 384]},
+             'vitb': {'encoder': 'vitb', 'features': 128, 'out_channels': [96, 192, 384, 768]},
+             'vitl': {'encoder': 'vitl', 'features': 256, 'out_channels': [256, 512, 1024, 1024]}
+         }
+
+         if domain == 'indoor':
+             dataset = 'hypersim'
+             max_depth = 20
+         elif domain == 'outdoor':
+             dataset = 'vkitti'
+             max_depth = 80
+         else:
+             raise ValueError(f"Invalid domain: {domain}")
+
+         model = DepthAnythingV2(**model_configs[backbone], max_depth=max_depth)
+         checkpoint_path = os.path.join(repo_path, f'checkpoints/depth_anything_v2_metric_{dataset}_{backbone}.pth')
+         if not os.path.exists(checkpoint_path):
+             raise FileNotFoundError(f'Cannot find the checkpoint file at {checkpoint_path}. Please download the checkpoint file and place it in the checkpoints directory.')
+         model.load_state_dict(torch.load(checkpoint_path, map_location='cpu', weights_only=True))
+         model.eval().to(device)
+
+         self.model = model
+         self.num_tokens = num_tokens
+         self.device = device
+
+     @click.command()
+     @click.option('--repo', 'repo_path', type=click.Path(), default='../Depth-Anything-V2', help='Path to the Depth-Anything repository.')
+     @click.option('--backbone', type=click.Choice(['vits', 'vitb', 'vitl']), default='vitl', help='Backbone architecture.')
+     @click.option('--domain', type=click.Choice(['indoor', 'outdoor']), help='Domain of the dataset.')
+     @click.option('--num_tokens', type=int, default=None, help='Number of tokens for the ViT model.')
+     @click.option('--device', type=str, default='cuda', help='Device to use for inference.')
+     @staticmethod
+     def load(repo_path: str, backbone: str, domain: str, num_tokens: int, device: str):
+         return Baseline(repo_path, backbone, domain, num_tokens, device)
+
+     @torch.inference_mode()
+     def infer(self, image: torch.Tensor, intrinsics: Optional[torch.Tensor] = None) -> Dict[str, torch.Tensor]:
+         original_height, original_width = image.shape[-2:]
+
+         assert intrinsics is None, "Depth-Anything-V2 does not support camera intrinsics input"
+
+         if image.ndim == 3:
+             image = image.unsqueeze(0)
+             omit_batch_dim = True
+         else:
+             omit_batch_dim = False
+
+         if self.num_tokens is None:
+             resize_factor = 518 / min(original_height, original_width)
+             expected_width = round(original_width * resize_factor / 14) * 14
+             expected_height = round(original_height * resize_factor / 14) * 14
+         else:
+             aspect_ratio = original_width / original_height
+             tokens_rows = round((self.num_tokens / aspect_ratio) ** 0.5)
+             tokens_cols = round((self.num_tokens * aspect_ratio) ** 0.5)
+             expected_width = tokens_cols * 14
+             expected_height = tokens_rows * 14
+         image = TF.resize(image, (expected_height, expected_width), interpolation=T.InterpolationMode.BICUBIC, antialias=True)
+
+         image = TF.normalize(image, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
+
+         depth = self.model(image)
+
+         depth = F.interpolate(depth[:, None], size=(original_height, original_width), mode='bilinear', align_corners=False, antialias=False)[:, 0]
+
+         if omit_batch_dim:
+             depth = depth.squeeze(0)
+
+         return {
+             'depth_metric': depth
+         }
+
baselines/depth_pro.py ADDED
@@ -0,0 +1,115 @@
+ # Reference: https://github.com/apple/ml-depth-pro
+ # Strictly follows official README Python API:
+ #   model, transform = depth_pro.create_model_and_transforms()
+ #   prediction = model.infer(image, f_px=f_px)
+ #   depth = prediction["depth"]  # in [m]
+ #   focallength_px = prediction["focallength_px"]
+ #
+ # Depth Pro outputs *metric* depth. Returns key `depth_metric` plus `intrinsics`
+ # when focal length is recovered, so MoGe's compute_metrics can use the metric path.
+
+ import os
+ import sys
+ from typing import *
+ from pathlib import Path
+
+ import click
+ import torch
+ import torch.nn.functional as F
+
+ from moge.test.baseline import MGEBaselineInterface
+
+
+ class Baseline(MGEBaselineInterface):
+     def __init__(self, repo_path: str, checkpoint_path: str, precision: str, device: Union[torch.device, str]):
+         repo_path = os.path.abspath(repo_path)
+         if not Path(repo_path).exists():
+             raise FileNotFoundError(
+                 f"Cannot find Depth Pro repo at {repo_path}. Clone https://github.com/apple/ml-depth-pro "
+                 f"and pass --repo <path>."
+             )
+         src_path = os.path.join(repo_path, "src")
+         if src_path not in sys.path:
+             sys.path.insert(0, src_path)
+
+         import depth_pro
+         from depth_pro.depth_pro import DepthProConfig, DEFAULT_MONODEPTH_CONFIG_DICT
+
+         if not os.path.isabs(checkpoint_path):
+             checkpoint_path = os.path.join(repo_path, checkpoint_path)
+         if not os.path.exists(checkpoint_path):
+             raise FileNotFoundError(
+                 f"Cannot find Depth Pro checkpoint at {checkpoint_path}. "
+                 f"Run `source get_pretrained_models.sh` inside the ml-depth-pro repo to download it."
+             )
+
+         device = torch.device(device)
+         precision_dtype = {"fp32": torch.float32, "fp16": torch.float16}[precision]
+
+         config = DepthProConfig(
+             patch_encoder_preset=DEFAULT_MONODEPTH_CONFIG_DICT.patch_encoder_preset,
+             image_encoder_preset=DEFAULT_MONODEPTH_CONFIG_DICT.image_encoder_preset,
+             decoder_features=DEFAULT_MONODEPTH_CONFIG_DICT.decoder_features,
+             checkpoint_uri=checkpoint_path,
+             use_fov_head=DEFAULT_MONODEPTH_CONFIG_DICT.use_fov_head,
+             fov_encoder_preset=DEFAULT_MONODEPTH_CONFIG_DICT.fov_encoder_preset,
+         )
+         model, _ = depth_pro.create_model_and_transforms(config=config, device=device, precision=precision_dtype)
+         model.eval()
+
+         self.model = model
+         self.device = device
+         self.precision_dtype = precision_dtype
+
+     @click.command()
+     @click.option('--repo', 'repo_path', type=click.Path(), default='../ml-depth-pro',
+                   help='Path to the apple/ml-depth-pro repository.')
+     @click.option('--checkpoint', 'checkpoint_path', type=click.Path(),
+                   default='checkpoints/depth_pro.pt',
+                   help='Checkpoint path; relative paths are resolved against --repo.')
+     @click.option('--precision', type=click.Choice(['fp32', 'fp16']), default='fp32')
+     @click.option('--device', type=str, default='cuda')
+     @staticmethod
+     def load(repo_path: str, checkpoint_path: str, precision: str, device: str = 'cuda'):
+         return Baseline(repo_path, checkpoint_path, precision, device)
+
+     @torch.inference_mode()
+     def infer(self, image: torch.Tensor, intrinsics: Optional[torch.Tensor] = None) -> Dict[str, torch.Tensor]:
+         omit_batch = image.ndim == 3
+         if omit_batch:
+             image = image.unsqueeze(0)
+         assert image.shape[0] == 1, "Depth Pro baseline only supports batch size 1"
+         _, _, H, W = image.shape
+
+         # Depth Pro transform: torchvision.Normalize([0.5]*3, [0.5]*3) maps [0, 1] -> [-1, 1].
+         x = (image.to(self.device, dtype=self.precision_dtype) - 0.5) / 0.5
+
+         # Convert normalized intrinsics (fx, fy in image-relative units) to pixel focal length if provided.
+         f_px = None
+         if intrinsics is not None:
+             intr = intrinsics.to(self.device)
+             if intr.ndim == 3:
+                 intr = intr[0]
+             f_px = intr[0, 0] * W  # MoGe normalized intrinsics: fx in [0, 1] of width
+
+         prediction = self.model.infer(x, f_px=f_px)
+         depth = prediction["depth"]  # [H, W] in meters (squeezed by Depth Pro)
+         focallength_px = prediction["focallength_px"]  # scalar tensor (pixels)
+
+         out: Dict[str, torch.Tensor] = {"depth_metric": depth}
+
+         # Build normalized intrinsics (fx, fy as fractions of image width / height).
+         fx_norm = (focallength_px / W).reshape(())
+         fy_norm = (focallength_px / H).reshape(())
+         K = torch.eye(3, device=depth.device, dtype=depth.dtype)
+         K[0, 0] = fx_norm
+         K[1, 1] = fy_norm
+         K[0, 2] = 0.5
+         K[1, 2] = 0.5
+         out["intrinsics"] = K
+
+         if not omit_batch:
+             out["depth_metric"] = out["depth_metric"].unsqueeze(0)
+             out["intrinsics"] = out["intrinsics"].unsqueeze(0)
+
+         return out
baselines/marigold.py ADDED
@@ -0,0 +1,119 @@
+ # Reference: https://github.com/prs-eth/Marigold
+ # Strictly follows official `script/depth/run.py`:
+ #   from marigold import MarigoldDepthPipeline
+ #   pipe = MarigoldDepthPipeline.from_pretrained(checkpoint, torch_dtype=dtype)
+ #   pipe_out = pipe(input_pil_image, denoise_steps, ensemble_size, processing_res,
+ #                   match_input_res, batch_size, resample_method, ...)
+ #   depth_pred: np.ndarray = pipe_out.depth_np  # normalized affine-invariant depth
+ #
+ # Marigold reports its outputs as affine-invariant depth (Marigold paper, CVPR 2024).
+ # Returns key `depth_affine_invariant`.
+
+ import os
+ import sys
+ from typing import *
+ from pathlib import Path
+
+ import click
+ import torch
+ import torch.nn.functional as F
+ import numpy as np
+ from PIL import Image
+
+ from moge.test.baseline import MGEBaselineInterface
+
+
+ class Baseline(MGEBaselineInterface):
+     def __init__(self, repo_path: str, checkpoint: str, denoise_steps: Optional[int],
+                  ensemble_size: int, processing_res: Optional[int], half_precision: bool,
+                  device: Union[torch.device, str]):
+         repo_path = os.path.abspath(repo_path)
+         if not Path(repo_path).exists():
+             raise FileNotFoundError(
+                 f"Cannot find Marigold repo at {repo_path}. Clone https://github.com/prs-eth/Marigold."
+             )
+         if repo_path not in sys.path:
+             sys.path.insert(0, repo_path)
+
+         from marigold import MarigoldDepthPipeline
+
+         device = torch.device(device)
+         dtype = torch.float16 if half_precision else torch.float32
+         variant = "fp16" if half_precision else None
+
+         pipe = MarigoldDepthPipeline.from_pretrained(checkpoint, variant=variant, torch_dtype=dtype)
+         try:
+             pipe.enable_xformers_memory_efficient_attention()
+         except ImportError:
+             pass
+         pipe = pipe.to(device)
+         pipe.set_progress_bar_config(disable=True)
+
+         self.pipe = pipe
+         self.device = device
+         self.denoise_steps = denoise_steps
+         self.ensemble_size = ensemble_size
+         self.processing_res = processing_res
+
+     @click.command()
+     @click.option('--repo', 'repo_path', type=click.Path(), default='../Marigold',
+                   help='Path to the prs-eth/Marigold repository.')
+     @click.option('--checkpoint', type=str, default='prs-eth/marigold-depth-v1-1',
+                   help='HuggingFace ckpt name or local dir (run.py default).')
+     @click.option('--denoise_steps', type=int, default=None,
+                   help='Diffusion denoising steps. None -> default in ckpt.')
+     @click.option('--ensemble_size', type=int, default=1,
+                   help='Ensemble size. run.py default = 1.')
+     @click.option('--processing_res', type=int, default=None,
+                   help='Processing resolution. None -> default in ckpt.')
+     @click.option('--fp16', 'half_precision', is_flag=True, help='Run in half precision.')
+     @click.option('--device', type=str, default='cuda')
+     @staticmethod
+     def load(repo_path: str, checkpoint: str, denoise_steps: Optional[int],
+              ensemble_size: int, processing_res: Optional[int], half_precision: bool,
+              device: str = 'cuda'):
+         return Baseline(repo_path, checkpoint, denoise_steps, ensemble_size,
+                         processing_res, half_precision, device)
+
+     @torch.inference_mode()
+     def infer(self, image: torch.Tensor, intrinsics: Optional[torch.Tensor] = None) -> Dict[str, torch.Tensor]:
+         # Marigold does not consume intrinsics; the argument is ignored.
+         omit_batch = image.ndim == 3
+         if omit_batch:
+             image = image.unsqueeze(0)
+         assert image.shape[0] == 1, "Marigold baseline only supports batch size 1"
+         _, _, H, W = image.shape
+
+         # MoGe pipeline supplies image as float tensor in [0, 1]. Marigold pipe takes PIL.Image (run.py uses PIL).
+         arr = (image[0].cpu().permute(1, 2, 0).clamp(0, 1).numpy() * 255).astype(np.uint8)
+         pil = Image.fromarray(arr)
+
+         kwargs: Dict[str, Any] = dict(
+             ensemble_size=self.ensemble_size,
+             match_input_res=True,
+             batch_size=0,
+             resample_method='bilinear',
+             show_progress_bar=False,
+         )
+         if self.denoise_steps is not None:
+             kwargs['denoising_steps'] = self.denoise_steps  # pipeline kwarg is "denoising_steps"
+         if self.processing_res is not None:
+             kwargs['processing_res'] = self.processing_res
+
+         out = self.pipe(pil, **kwargs)
+
+         # MarigoldDepthOutput.depth_np: HxW np.float32 in [0, 1]. Marigold paper:
+         # affine-invariant depth (linear monotone with true depth, scale+shift free).
+         depth_np = out.depth_np
+         depth = torch.from_numpy(np.ascontiguousarray(depth_np)).to(self.device).float()
+
+         # Resize back if the pipeline yielded a different size (it shouldn't with match_input_res=True).
+         if depth.shape[-2:] != (H, W):
+             depth = F.interpolate(depth[None, None], size=(H, W), mode='bilinear', align_corners=False)[0, 0]
+
+         # Marigold predicts affine-invariant depth (Marigold paper, CVPR 2024). Emit only
+         # this physical key. MoGe compute_metrics reports the `depth_affine_invariant` metric.
+         result = {'depth_affine_invariant': depth}
+         if not omit_batch:
+             result['depth_affine_invariant'] = result['depth_affine_invariant'].unsqueeze(0)
+         return result
baselines/metric3d_v2.py ADDED
@@ -0,0 +1,117 @@
+ # Reference: https://github.com/YvanYin/Metric3D
+ import os
+ import sys
+ from typing import *
+
+ import click
+ import torch
+ import torch.nn.functional as F
+ import cv2
+
+ from moge.test.baseline import MGEBaselineInterface
+
+
+ class Baseline(MGEBaselineInterface):
+     def __init__(self, backbone: Literal['vits', 'vitl', 'vitg'], device):
+         backbone_map = {
+             'vits': 'metric3d_vit_small',
+             'vitl': 'metric3d_vit_large',
+             'vitg': 'metric3d_vit_giant2'
+         }
+
+         device = torch.device(device)
+         model = torch.hub.load('yvanyin/metric3d', backbone_map[backbone], pretrain=True)
+         model.to(device).eval()
+
+         self.model = model
+         self.device = device
+
+     @click.command()
+     @click.option('--backbone', type=click.Choice(['vits', 'vitl', 'vitg']), default='vitl', help='Encoder architecture.')
+     @click.option('--device', type=str, default='cuda', help='Device to use.')
+     @staticmethod
+     def load(backbone: str = 'vitl', device: torch.device = 'cuda'):
+         return Baseline(backbone, device)
+
+     @torch.inference_mode()
+     def inference_one_image(self, image: torch.Tensor, intrinsics: torch.Tensor = None):
+         # Reference: https://github.com/YvanYin/Metric3D/blob/main/mono/utils/do_test.py
+
+         # rgb_origin: RGB, 0-255, float
+         rgb_origin = image.cpu().numpy().transpose((1, 2, 0)) * 255
+
+         # keep-ratio resize
+         input_size = (616, 1064)  # for ViT models
+         h, w = rgb_origin.shape[:2]
+         scale = min(input_size[0] / h, input_size[1] / w)
+         rgb = cv2.resize(rgb_origin, (int(w * scale), int(h * scale)), interpolation=cv2.INTER_LINEAR)
+         if intrinsics is not None:
+             focal = intrinsics[0, 0] * int(w * scale)
+
+         # pad to input_size
+         padding = [123.675, 116.28, 103.53]
+         h, w = rgb.shape[:2]
+         pad_h = input_size[0] - h
+         pad_w = input_size[1] - w
+         pad_h_half = pad_h // 2
+         pad_w_half = pad_w // 2
+         rgb = cv2.copyMakeBorder(rgb, pad_h_half, pad_h - pad_h_half, pad_w_half, pad_w - pad_w_half, cv2.BORDER_CONSTANT, value=padding)
+         pad_info = [pad_h_half, pad_h - pad_h_half, pad_w_half, pad_w - pad_w_half]
+
+         # normalize rgb
+         mean = torch.tensor([123.675, 116.28, 103.53]).float()[:, None, None]
+         std = torch.tensor([58.395, 57.12, 57.375]).float()[:, None, None]
+         rgb = torch.from_numpy(rgb.transpose((2, 0, 1))).float()
+         rgb = torch.div((rgb - mean), std)
+         rgb = rgb[None, :, :, :].to(self.device)
+
+         # inference
+         pred_depth, confidence, output_dict = self.model.inference({'input': rgb})
+
+         # unpad
+         pred_depth = pred_depth.squeeze()
+         pred_depth = pred_depth[pad_info[0] : pred_depth.shape[0] - pad_info[1], pad_info[2] : pred_depth.shape[1] - pad_info[3]]
+         pred_depth = pred_depth.clamp_min(0.5)  # clamp to 0.5 m, since Metric3D can yield very small depth values, which would crash the scale-shift alignment.
+
+         # upsample to original size
+         pred_depth = F.interpolate(pred_depth[None, None, :, :], image.shape[-2:], mode='bilinear').squeeze()
+
+         if intrinsics is not None:
+             # de-canonical transform
+             canonical_to_real_scale = focal / 1000.0  # 1000.0 is the focal length of the canonical camera
+             pred_depth = pred_depth * canonical_to_real_scale  # now the depth is metric
+             pred_depth = torch.clamp(pred_depth, 0, 300)
+
+         pred_normal, normal_confidence = output_dict['prediction_normal'].split([3, 1], dim=1)  # see https://arxiv.org/abs/2109.09881 for details
+
+         # unpad and resize back to the original size
+         pred_normal = pred_normal.squeeze(0)
+         pred_normal = pred_normal[:, pad_info[0] : pred_normal.shape[1] - pad_info[1], pad_info[2] : pred_normal.shape[2] - pad_info[3]]
+
+         pred_normal = F.interpolate(pred_normal[None, :, :, :], image.shape[-2:], mode='bilinear').squeeze(0)
+         pred_normal = F.normalize(pred_normal, p=2, dim=0)
+
+         return pred_depth, pred_normal.permute(1, 2, 0)
+
+     @torch.inference_mode()
+     def infer(self, image: torch.Tensor, intrinsics: torch.Tensor = None):
+         # image: (3, H, W) or (B, 3, H, W)
+         if image.ndim == 3:
+             pred_depth, pred_normal = self.inference_one_image(image, intrinsics)
+         else:
+             pred_depth, pred_normal = [], []
+             for i in range(image.shape[0]):
+                 pred_depth_i, pred_normal_i = self.inference_one_image(image[i], intrinsics[i] if intrinsics is not None else None)
+                 pred_depth.append(pred_depth_i)
+                 pred_normal.append(pred_normal_i)
+             pred_depth = torch.stack(pred_depth, dim=0)
+             pred_normal = torch.stack(pred_normal, dim=0)
+
+         if intrinsics is not None:
+             return {
+                 "depth_metric": pred_depth,
+             }
+         else:
+             return {
+                 "depth_scale_invariant": pred_depth,
+             }
baselines/moge.py ADDED
@@ -0,0 +1,83 @@
+ import os
+ import sys
+ from typing import *
+ import importlib
+
+ import click
+ import torch
+ import utils3d
+
+ from moge.test.baseline import MGEBaselineInterface
+
+
+ class Baseline(MGEBaselineInterface):
+
+     def __init__(self, num_tokens: int, resolution_level: int, pretrained_model_name_or_path: str, use_fp16: bool, device: str = 'cuda:0', version: str = 'v1'):
+         super().__init__()
+         from moge.model import import_model_class_by_version
+         MoGeModel = import_model_class_by_version(version)
+         self.version = version
+
+         self.model = MoGeModel.from_pretrained(pretrained_model_name_or_path).to(device).eval()
+
+         self.device = torch.device(device)
+         self.num_tokens = num_tokens
+         self.resolution_level = resolution_level
+         self.use_fp16 = use_fp16
+
+     @click.command()
+     @click.option('--num_tokens', type=int, default=None)
+     @click.option('--resolution_level', type=int, default=9)
+     @click.option('--pretrained', 'pretrained_model_name_or_path', type=str, default='Ruicheng/moge-vitl')
+     @click.option('--fp16', 'use_fp16', is_flag=True)
+     @click.option('--device', type=str, default='cuda:0')
+     @click.option('--version', type=str, default='v1')
+     @staticmethod
+     def load(num_tokens: int, resolution_level: int, pretrained_model_name_or_path: str, use_fp16: bool, device: str = 'cuda:0', version: str = 'v1'):
+         return Baseline(num_tokens, resolution_level, pretrained_model_name_or_path, use_fp16, device, version)
+
+     # Implementation for inference
+     @torch.inference_mode()
+     def infer(self, image: torch.FloatTensor, intrinsics: Optional[torch.FloatTensor] = None):
+         if intrinsics is not None:
+             fov_x, _ = utils3d.pt.intrinsics_to_fov(intrinsics)
+             fov_x = torch.rad2deg(fov_x)
+         else:
+             fov_x = None
+         output = self.model.infer(image, fov_x=fov_x, apply_mask=True, num_tokens=self.num_tokens)
+
+         if self.version == 'v1':
+             return {
+                 'points_scale_invariant': output['points'],
+                 'depth_scale_invariant': output['depth'],
+                 'intrinsics': output['intrinsics'],
+             }
+         else:
+             return {
+                 'points_metric': output['points'],
+                 'depth_metric': output['depth'],
+                 'intrinsics': output['intrinsics'],
+             }
+
+     @torch.inference_mode()
+     def infer_for_evaluation(self, image: torch.FloatTensor, intrinsics: torch.FloatTensor = None):
+         if intrinsics is not None:
+             fov_x, _ = utils3d.pt.intrinsics_to_fov(intrinsics)
+             fov_x = torch.rad2deg(fov_x)
+         else:
+             fov_x = None
+         output = self.model.infer(image, fov_x=fov_x, apply_mask=False, num_tokens=self.num_tokens, use_fp16=self.use_fp16)
+
+         if self.version == 'v1':
+             return {
+                 'points_scale_invariant': output['points'],
+                 'depth_scale_invariant': output['depth'],
+                 'intrinsics': output['intrinsics'],
+             }
+         else:
+             return {
+                 'points_metric': output['points'],
+                 'depth_metric': output['depth'],
+                 'intrinsics': output['intrinsics'],
+             }
+
baselines/rae_depth.py ADDED
@@ -0,0 +1,157 @@
+ import os
+ import sys
+ from typing import *
+ import math
+
+ import click
+ import torch
+ import torch.nn.functional as F
+
+ from moge.test.baseline import MGEBaselineInterface
+
+
+ class Baseline(MGEBaselineInterface):
+
+     def __init__(self, repo_path: str, config_path: str, checkpoint_path: str,
+                  image_size: int, num_steps: int, use_fp16: bool, device: str = 'cuda:0'):
+         super().__init__()
+         repo_path = os.path.abspath(repo_path)
+         src_path = os.path.join(repo_path, 'src')
+         if src_path not in sys.path:
+             sys.path.insert(0, src_path)
+
+         from omegaconf import OmegaConf
+         from stage2.transport import create_transport, Sampler
+         from utils.model_utils import instantiate_from_config
+
+         # Load config
+         full_cfg = OmegaConf.load(config_path)
+         rae_config = full_cfg.get("stage_1", None)
+         model_config = full_cfg.get("stage_2", None)
+         transport_config = full_cfg.get("transport", {})
+         sampler_config = full_cfg.get("sampler", {})
+         misc_config = full_cfg.get("misc", {})
+
+         transport_cfg = OmegaConf.to_container(transport_config, resolve=True) if transport_config else {}
+         sampler_cfg = OmegaConf.to_container(sampler_config, resolve=True) if sampler_config else {}
+         misc = OmegaConf.to_container(misc_config, resolve=True) if misc_config else {}
+
+         latent_size = tuple(int(dim) for dim in misc.get("latent_size", (768, 32, 32)))
+         shift_dim = misc.get("time_dist_shift_dim", math.prod(latent_size))
+         shift_base = misc.get("time_dist_shift_base", 4096)
+         time_dist_shift = math.sqrt(shift_dim / shift_base)
+
+         # Load RAE (DepthRAE)
+         rae = instantiate_from_config(rae_config).to(device)
+         rae.eval()
+
+         # Load Stage-2 model
+         model = instantiate_from_config(model_config).to(device)
+
+         # Load checkpoint
+         checkpoint = torch.load(checkpoint_path, map_location='cpu', weights_only=False)
+         if 'ema' in checkpoint:
+             state_dict = checkpoint['ema']
+         elif 'model' in checkpoint:
+             state_dict = checkpoint['model']
+         else:
+             state_dict = checkpoint
+         model.load_state_dict(state_dict)
+         model.eval()
+
+         # Create transport sampler
+         transport_params = dict(transport_cfg.get("params", {}))
+         transport_params.pop("time_dist_shift", None)
+         transport = create_transport(**transport_params, time_dist_shift=time_dist_shift)
+         transport_sampler = Sampler(transport)
+
+         sampler_mode = sampler_cfg.get("mode", "ODE").upper()
+         sampler_params = dict(sampler_cfg.get("params", {}))
+         sampler_params['num_steps'] = num_steps
+
+         if sampler_mode == "ODE":
+             eval_sampler = transport_sampler.sample_ode(**sampler_params)
+         else:
+             eval_sampler = transport_sampler.sample_sde(**sampler_params)
+
+         self.rae = rae
+         self.model = model
+         self.eval_sampler = eval_sampler
+         self.latent_size = latent_size
+         self.image_size = image_size
+         self.device = torch.device(device)
+         self.use_fp16 = use_fp16
+
+     @click.command()
+     @click.option('--repo', 'repo_path', type=str, default='/home/ywan0794/RAE')
+     @click.option('--rae_config', 'config_path', type=str, required=True)
+     @click.option('--checkpoint', 'checkpoint_path', type=str, required=True)
+     @click.option('--image_size', type=int, default=512)
+     @click.option('--num_steps', type=int, default=2)
+     @click.option('--fp16', 'use_fp16', is_flag=True)
+     @click.option('--device', type=str, default='cuda:0')
+     @staticmethod
+     def load(repo_path, config_path, checkpoint_path, image_size, num_steps, use_fp16, device):
+         return Baseline(repo_path, config_path, checkpoint_path, image_size, num_steps, use_fp16, device)
+
+     def _predict_depth(self, image: torch.FloatTensor):
+         original_height, original_width = image.shape[-2:]
+
+         if image.ndim == 3:
+             image = image.unsqueeze(0)
+             omit_batch_dim = True
+         else:
+             omit_batch_dim = False
+
+         b = image.shape[0]
+
+         # Resize to model input size
+         image_resized = F.interpolate(
+             image, size=(self.image_size, self.image_size),
+             mode='bilinear', align_corners=False, antialias=True,
+         )
+
+         # Encode RGB
+         z_rgb = self.rae.encode(image_resized)
+
+         # Sample depth from noise
+         z_noise = torch.randn(b, *self.latent_size, device=self.device)
+         y = torch.zeros(b, dtype=torch.long, device=self.device)
+
+         # Marigold-style: concat z_rgb with xt before passing to the model
+         def model_fn(xt, t, y):
+             x_input = torch.cat([xt, z_rgb], dim=1)
+             return self.model(x_input, t, y)
+
+         # Run diffusion sampling
+         z_pred = self.eval_sampler(z_noise, model_fn, y=y)[-1]
+
+         # Decode to depth (pass z_rgb for conditioning)
+         depth_pred = self.rae.decode(z_pred.float(), z_rgb)
+         depth_pred = depth_pred.mean(dim=1)  # (B, H, W)
+
+         # Resize back to the original size
+         depth_pred = F.interpolate(
+             depth_pred.unsqueeze(1), size=(original_height, original_width),
+             mode='bilinear', align_corners=False,
+         )[:, 0]
+
+         if omit_batch_dim:
+             depth_pred = depth_pred.squeeze(0)
+
+         return depth_pred
+
+     @torch.inference_mode()
+     def infer(self, image: torch.FloatTensor, intrinsics: Optional[torch.FloatTensor] = None):
+         depth_pred = self._predict_depth(image)
+         return {
+             'depth_affine_invariant': depth_pred,
+         }
+
+     @torch.inference_mode()
+     def infer_for_evaluation(self, image: torch.FloatTensor, intrinsics: torch.FloatTensor = None):
+         with torch.cuda.amp.autocast(enabled=self.use_fp16, dtype=torch.float16):
+             depth_pred = self._predict_depth(image)
+         return {
+             'depth_affine_invariant': depth_pred,
+         }
baselines/vggt_custom.py ADDED
@@ -0,0 +1,139 @@
+ # VGGT with custom trained DPT/SDT checkpoint (LoRA)
+ import os
+ import sys
+ from typing import *
+ from pathlib import Path
+
+ import click
+ import torch
+ import torch.nn.functional as F
+ import torchvision.transforms as T
+ import torchvision.transforms.functional as TF
+
+ from moge.test.baseline import MGEBaselineInterface
+
+
+ class Baseline(MGEBaselineInterface):
+     def __init__(self, repo_path: str, checkpoint: str, decoder: str, lora_rank: int, lora_alpha: int, num_tokens: int, device: Union[torch.device, str]):
+         # Create from repo
+         repo_path = os.path.abspath(repo_path)
+         if not Path(repo_path).exists():
+             raise FileNotFoundError(f'Cannot find the VGGT repository at {repo_path}.')
+         training_path = os.path.join(repo_path, 'training')
+         if training_path not in sys.path:
+             sys.path.insert(0, training_path)
+         if repo_path not in sys.path:
+             sys.path.insert(0, repo_path)
+
+         device = torch.device(device)
+
+         # Build model based on decoder type
+         if decoder == 'dpt':
+             from vggt.models.vggt import VGGT
+             model = VGGT(
+                 enable_camera=True,
+                 enable_depth=True,
+                 enable_point=False,
+                 enable_track=False,
+             )
+         elif decoder == 'sdt':
+             from vggt.models.vggt_sdt import VGGT_SDT
+             model = VGGT_SDT(
+                 enable_camera=True,
+                 enable_depth=True,
+                 enable_point=False,
+                 enable_track=False,
+             )
+         else:
+             raise ValueError(f"Unknown decoder: {decoder}")
+
+         # Apply LoRA
+         from lora import apply_lora
+         model = apply_lora(model, rank=lora_rank, alpha=lora_alpha)
+         print(f"Applied LoRA (rank={lora_rank}, alpha={lora_alpha})")
+
+         # Load checkpoint
+         if not os.path.exists(checkpoint):
+             raise FileNotFoundError(f'Cannot find checkpoint at {checkpoint}')
+
+         ckpt = torch.load(checkpoint, map_location='cpu')
+         if 'model' in ckpt:
+             state_dict = ckpt['model']
+         else:
+             state_dict = ckpt
+
+         # Remove 'module.' prefix if present
+         state_dict = {k.replace('module.', ''): v for k, v in state_dict.items()}
+
+         missing, unexpected = model.load_state_dict(state_dict, strict=False)
+         print(f"Loaded checkpoint from {checkpoint}")
+         if missing:
+             print(f"Missing keys: {len(missing)}")
+         if unexpected:
+             print(f"Unexpected keys: {len(unexpected)}")
+
+         model.to(device).eval()
+         self.model = model
+         self.num_tokens = num_tokens
+         self.device = device
+
+     @click.command()
+     @click.option('--repo', 'repo_path', type=click.Path(), default='/home/ywan0794/vggt', help='Path to the VGGT repository.')
+     @click.option('--checkpoint', type=click.Path(), required=True, help='Path to trained checkpoint.')
+     @click.option('--decoder', type=click.Choice(['dpt', 'sdt']), default='dpt', help='Decoder type.')
+     @click.option('--lora_rank', type=int, default=8, help='LoRA rank.')
+     @click.option('--lora_alpha', type=int, default=16, help='LoRA alpha.')
+     @click.option('--num_tokens', type=int, default=None, help='Number of tokens to use for the input image.')
+     @click.option('--device', type=str, default='cuda', help='Device to use for inference.')
+     @staticmethod
+     def load(repo_path: str, checkpoint: str, decoder: str, lora_rank: int, lora_alpha: int, num_tokens: int, device: torch.device = 'cuda'):
+         return Baseline(repo_path, checkpoint, decoder, lora_rank, lora_alpha, num_tokens, device)
+
+     @torch.inference_mode()
+     def infer(self, image: torch.Tensor, intrinsics: Optional[torch.Tensor] = None) -> Dict[str, torch.Tensor]:
+         original_height, original_width = image.shape[-2:]
+
+         if image.ndim == 3:
+             image = image.unsqueeze(0)
+             omit_batch_dim = True
+         else:
+             omit_batch_dim = False
+
+         if self.num_tokens is None:
+             resize_factor = 518 / min(original_height, original_width)
+             expected_width = round(original_width * resize_factor / 14) * 14
+             expected_height = round(original_height * resize_factor / 14) * 14
+         else:
+             aspect_ratio = original_width / original_height
+             tokens_rows = round((self.num_tokens / aspect_ratio) ** 0.5)
+             tokens_cols = round((self.num_tokens * aspect_ratio) ** 0.5)
+             expected_width = tokens_cols * 14
+             expected_height = tokens_rows * 14
+
+         image = TF.resize(image, (expected_height, expected_width), interpolation=T.InterpolationMode.BICUBIC, antialias=True)
+
+         # VGGT expects [0, 1] range, not ImageNet-normalized
+         image = image.to(self.device)
+
+         # VGGT expects a sequence of images: [B, S, 3, H, W]
+         rgb_seq = image.unsqueeze(1).repeat(1, 2, 1, 1, 1)
+
+         # Forward pass
+         with torch.cuda.amp.autocast(dtype=torch.bfloat16):
+             output = self.model(images=rgb_seq)
+
+         # Extract depth from prediction
+         # pred["depth"] shape: [B, S, H, W, 1]
+         depth = output["depth"][0, 0, :, :, 0]
+
+         # Convert depth to disparity
+         disparity = 1.0 / (depth + 1e-6)
+
+         disparity = F.interpolate(disparity[None, None], size=(original_height, original_width), mode='bilinear', align_corners=False, antialias=False)[0, 0]
133
+
134
+ if omit_batch_dim:
135
+ pass # already squeezed
136
+
137
+ return {
138
+ 'disparity_affine_invariant': disparity
139
+ }
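The token-budget resize in `infer` is easy to get wrong: for rows × cols ≈ num_tokens with cols / rows ≈ W / H, columns must scale with sqrt(aspect_ratio) and rows with 1/sqrt(aspect_ratio). A standalone sketch of both sizing branches, with `vggt_input_size` as a hypothetical helper name (not part of the repo):

```python
def vggt_input_size(height, width, num_tokens=None):
    """Model input size logic from the baseline's `infer` (hypothetical helper).

    Returns (expected_height, expected_width), both multiples of the 14 px
    ViT patch size. With num_tokens=None the short side is scaled to 518 px;
    otherwise the token budget is hit while preserving aspect ratio.
    """
    if num_tokens is None:
        s = 518 / min(height, width)
        return round(height * s / 14) * 14, round(width * s / 14) * 14
    ar = width / height
    cols = round((num_tokens * ar) ** 0.5)   # columns scale with sqrt(ar)
    rows = round((num_tokens / ar) ** 0.5)   # rows scale with 1/sqrt(ar)
    return rows * 14, cols * 14
```

For example, a 480×640 image with no token budget maps to 518×686, while a 480×960 image under a 1800-token budget maps to 420×840 (30 × 60 = 1800 patches).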
baselines/vggt_metric.py ADDED
@@ -0,0 +1,137 @@
+ # VGGT with custom trained DPT/SDT checkpoint (LoRA) - Metric Depth Output
+ import os
+ import sys
+ from typing import Dict, Optional, Union
+ from pathlib import Path
+
+ import click
+ import torch
+ import torch.nn.functional as F
+ import torchvision.transforms as T
+ import torchvision.transforms.functional as TF
+
+ from moge.test.baseline import MGEBaselineInterface
+
+
+ class Baseline(MGEBaselineInterface):
+     def __init__(self, repo_path: str, checkpoint: str, decoder: str, lora_rank: int, lora_alpha: int, num_tokens: int, device: Union[torch.device, str]):
+         # Make the VGGT repository importable
+         repo_path = os.path.abspath(repo_path)
+         if not Path(repo_path).exists():
+             raise FileNotFoundError(f'Cannot find the VGGT repository at {repo_path}.')
+         training_path = os.path.join(repo_path, 'training')
+         if training_path not in sys.path:
+             sys.path.insert(0, training_path)
+         if repo_path not in sys.path:
+             sys.path.insert(0, repo_path)
+
+         device = torch.device(device)
+
+         # Build model based on decoder type
+         if decoder == 'dpt':
+             from vggt.models.vggt import VGGT
+             model = VGGT(
+                 enable_camera=True,
+                 enable_depth=True,
+                 enable_point=False,
+                 enable_track=False,
+             )
+         elif decoder == 'sdt':
+             from vggt.models.vggt_sdt import VGGT_SDT
+             model = VGGT_SDT(
+                 enable_camera=True,
+                 enable_depth=True,
+                 enable_point=False,
+                 enable_track=False,
+             )
+         else:
+             raise ValueError(f"Unknown decoder: {decoder}")
+
+         # Apply LoRA
+         from lora import apply_lora
+         model = apply_lora(model, rank=lora_rank, alpha=lora_alpha)
+         print(f"Applied LoRA (rank={lora_rank}, alpha={lora_alpha})")
+
+         # Load checkpoint
+         if not os.path.exists(checkpoint):
+             raise FileNotFoundError(f'Cannot find checkpoint at {checkpoint}')
+
+         ckpt = torch.load(checkpoint, map_location='cpu')
+         state_dict = ckpt['model'] if 'model' in ckpt else ckpt
+
+         # Remove the DDP 'module.' prefix if present
+         state_dict = {k.replace('module.', ''): v for k, v in state_dict.items()}
+
+         missing, unexpected = model.load_state_dict(state_dict, strict=False)
+         print(f"Loaded checkpoint from {checkpoint}")
+         if missing:
+             print(f"Missing keys: {len(missing)}")
+         if unexpected:
+             print(f"Unexpected keys: {len(unexpected)}")
+
+         model.to(device).eval()
+         self.model = model
+         self.num_tokens = num_tokens
+         self.device = device
+
+     @click.command()
+     @click.option('--repo', 'repo_path', type=click.Path(), default='/home/ywan0794/vggt', help='Path to the VGGT repository.')
+     @click.option('--checkpoint', type=click.Path(), required=True, help='Path to trained checkpoint.')
+     @click.option('--decoder', type=click.Choice(['dpt', 'sdt']), default='dpt', help='Decoder type.')
+     @click.option('--lora_rank', type=int, default=8, help='LoRA rank.')
+     @click.option('--lora_alpha', type=int, default=16, help='LoRA alpha.')
+     @click.option('--num_tokens', type=int, default=None, help='Number of tokens to use for the input image.')
+     @click.option('--device', type=str, default='cuda', help='Device to use for inference.')
+     @staticmethod
+     def load(repo_path: str, checkpoint: str, decoder: str, lora_rank: int, lora_alpha: int, num_tokens: int, device: Union[torch.device, str] = 'cuda'):
+         return Baseline(repo_path, checkpoint, decoder, lora_rank, lora_alpha, num_tokens, device)
+
+     @torch.inference_mode()
+     def infer(self, image: torch.Tensor, intrinsics: Optional[torch.Tensor] = None) -> Dict[str, torch.Tensor]:
+         original_height, original_width = image.shape[-2:]
+
+         if image.ndim == 3:
+             image = image.unsqueeze(0)
+
+         if self.num_tokens is None:
+             # Scale the short side to 518 px, snapping to multiples of the 14 px patch size
+             resize_factor = 518 / min(original_height, original_width)
+             expected_width = round(original_width * resize_factor / 14) * 14
+             expected_height = round(original_height * resize_factor / 14) * 14
+         else:
+             # Hit the target token budget while preserving aspect ratio:
+             # columns scale with sqrt(aspect_ratio), rows with 1/sqrt(aspect_ratio)
+             aspect_ratio = original_width / original_height
+             tokens_cols = round((self.num_tokens * aspect_ratio) ** 0.5)
+             tokens_rows = round((self.num_tokens / aspect_ratio) ** 0.5)
+             expected_width = tokens_cols * 14
+             expected_height = tokens_rows * 14
+
+         image = TF.resize(image, (expected_height, expected_width), interpolation=T.InterpolationMode.BICUBIC, antialias=True)
+
+         # VGGT expects [0, 1] range, not ImageNet normalized
+         image = image.to(self.device)
+
+         # VGGT expects a sequence of images: [B, S, 3, H, W]
+         rgb_seq = image.unsqueeze(1).repeat(1, 2, 1, 1, 1)
+
+         # Forward pass
+         with torch.cuda.amp.autocast(dtype=torch.bfloat16):
+             output = self.model(images=rgb_seq)
+
+         # Extract depth from the prediction; output["depth"] shape: [B, S, H, W, 1]
+         # (indexing [0, 0] also drops the batch dimension added above)
+         depth = output["depth"][0, 0, :, :, 0]
+
+         # Output metric depth directly (no 1/depth conversion)
+         depth = F.interpolate(depth[None, None], size=(original_height, original_width), mode='bilinear', align_corners=False, antialias=False)[0, 0]
+
+         return {
+             'depth_metric': depth
+         }
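Both baselines strip the DistributedDataParallel prefix with `k.replace('module.', '')`, which would also mangle any key that happens to contain `module.` in its middle. A slightly safer sketch (`strip_module_prefix` is a hypothetical helper, not in the repo) uses `str.removeprefix`, which only touches the start of the key:

```python
def strip_module_prefix(state_dict):
    # DistributedDataParallel checkpoints store every parameter under a
    # leading 'module.' prefix; strip it so the dict loads into a bare
    # (non-DDP) model. removeprefix (Python 3.9+) leaves keys that merely
    # contain 'module.' somewhere in the middle untouched.
    return {k.removeprefix('module.'): v for k, v in state_dict.items()}
```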
eval_all_12108.log ADDED
The diff for this file is too large to render. See raw diff
 
eval_all_12110.log ADDED
@@ -0,0 +1,154 @@
+ ============================================
+ eval-all started at Thu May 14 04:58:17 AM AEST 2026
+ Config (main): /home/ywan0794/MoGe/configs/eval/all_benchmarks.json
+ Config (fe2e): /home/ywan0794/MoGe/configs/eval/fe2e_all_benchmarks.json
+ TIMESTAMP: 20260514_045817
+ Summary file: eval_output/_eval_all_20260514_045817.summary.txt
+ ============================================
+ Thu May 14 04:58:17 2026
+ +-----------------------------------------------------------------------------------------+
+ | NVIDIA-SMI 550.163.01 Driver Version: 550.163.01 CUDA Version: 12.4 |
+ |-----------------------------------------+------------------------+----------------------+
+ | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
+ | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
+ | | | MIG M. |
+ |=========================================+========================+======================|
+ | 0 NVIDIA H100 NVL Off | 00000000:E1:00.0 Off | 0 |
+ | N/A 35C P0 60W / 400W | 14MiB / 95830MiB | 0% Default |
+ | | | Disabled |
+ +-----------------------------------------+------------------------+----------------------+
+
+ +-----------------------------------------------------------------------------------------+
+ | Processes: |
+ | GPU GI CI PID Type Process name GPU Memory |
+ | ID ID Usage |
+ |=========================================================================================|
+ | 0 N/A N/A 4274 G /usr/lib/xorg/Xorg 4MiB |
+ +-----------------------------------------------------------------------------------------+
+
+ ============================================
+ [marigold] starting at Thu May 14 04:58:17 AM AEST 2026 (conda env: marigold)
+ ============================================
+ Active env: marigold
+ CUDA: True NVIDIA H100 NVL
+ The config attributes {'prediction_type': 'depth'} were passed to MarigoldDepthPipeline, but are not expected and will be ignored. Please verify your model_index.json configuration file.
+ Keyword arguments {'prediction_type': 'depth'} are not expected by MarigoldDepthPipeline and will be ignored.
moge_da2_dpt_subset_12087.log ADDED
@@ -0,0 +1,127 @@
+ ============================================
+ Activated conda environment: da2
+ CUDA_HOME: /home/ywan0794/miniconda3/envs/da2
+ ============================================
+ === GPU Info ===
+ Tue May 12 18:06:35 2026
+ +-----------------------------------------------------------------------------------------+
+ | NVIDIA-SMI 550.163.01 Driver Version: 550.163.01 CUDA Version: 12.4 |
+ |-----------------------------------------+------------------------+----------------------+
+ | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
+ | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
+ | | | MIG M. |
+ |=========================================+========================+======================|
+ | 0 NVIDIA H100 NVL Off | 00000000:E1:00.0 Off | 0 |
+ | N/A 36C P0 60W / 400W | 14MiB / 95830MiB | 0% Default |
+ | | | Disabled |
+ +-----------------------------------------+------------------------+----------------------+
+
+ +-----------------------------------------------------------------------------------------+
+ | Processes: |
+ | GPU GI CI PID Type Process name GPU Memory |
+ | ID ID Usage |
+ |=========================================================================================|
+ | 0 N/A N/A 4274 G /usr/lib/xorg/Xorg 4MiB |
+ +-----------------------------------------------------------------------------------------+
+ CUDA available: True
+ GPU count: 1
+ GPU name: NVIDIA H100 NVL
+ ============================================
+ Starting MoGe Subset Sanity Eval (DA2 public vitb) at Tue May 12 06:06:55 PM AEST 2026
+ Config: /home/ywan0794/MoGe/configs/eval/subset_benchmarks.json
+ Output: eval_output/da2_public_vitb_subset_20260512_180655.json
+ ============================================
+ xFormers not available
+ xFormers not available
+ Traceback (most recent call last):
+   File "/home/ywan0794/MoGe/moge/scripts/eval_baseline.py", line 165, in <module>
+     main()
+   File "/home/ywan0794/miniconda3/envs/da2/lib/python3.10/site-packages/click/core.py", line 1485, in __call__
+     return self.main(*args, **kwargs)
+   File "/home/ywan0794/miniconda3/envs/da2/lib/python3.10/site-packages/click/core.py", line 1406, in main
+     rv = self.invoke(ctx)
+   File "/home/ywan0794/miniconda3/envs/da2/lib/python3.10/site-packages/click/core.py", line 1269, in invoke
+     return ctx.invoke(self.callback, **ctx.params)
+   File "/home/ywan0794/miniconda3/envs/da2/lib/python3.10/site-packages/click/core.py", line 824, in invoke
+     return callback(*args, **kwargs)
+   File "/home/ywan0794/miniconda3/envs/da2/lib/python3.10/site-packages/click/decorators.py", line 34, in new_func
+     return f(get_current_context(), *args, **kwargs)
+   File "/home/ywan0794/MoGe/moge/scripts/eval_baseline.py", line 42, in main
+     baseline : MGEBaselineInterface = baseline_cls.load.main(ctx.args, standalone_mode=False)
+   File "/home/ywan0794/miniconda3/envs/da2/lib/python3.10/site-packages/click/core.py", line 1406, in main
+     rv = self.invoke(ctx)
+   File "/home/ywan0794/miniconda3/envs/da2/lib/python3.10/site-packages/click/core.py", line 1269, in invoke
+     return ctx.invoke(self.callback, **ctx.params)
+   File "/home/ywan0794/miniconda3/envs/da2/lib/python3.10/site-packages/click/core.py", line 824, in invoke
+     return callback(*args, **kwargs)
+   File "/home/ywan0794/MoGe/baselines/da_v2.py", line 50, in load
+     return Baseline(repo_path, backbone, num_tokens, device)
+   File "/home/ywan0794/MoGe/baselines/da_v2.py", line 36, in __init__
+     model.load_state_dict(checkpoint)
+   File "/home/ywan0794/miniconda3/envs/da2/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2629, in load_state_dict
+     raise RuntimeError(
+ RuntimeError: Error(s) in loading state_dict for DepthAnythingV2:
+ 	size mismatch for depth_head.projects.0.weight: copying a param with shape torch.Size([96, 768, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 768, 1, 1]).
+ 	size mismatch for depth_head.projects.0.bias: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([256]).
+ 	size mismatch for depth_head.projects.1.weight: copying a param with shape torch.Size([192, 768, 1, 1]) from checkpoint, the shape in current model is torch.Size([512, 768, 1, 1]).
+ 	size mismatch for depth_head.projects.1.bias: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([512]).
+ 	size mismatch for depth_head.projects.2.weight: copying a param with shape torch.Size([384, 768, 1, 1]) from checkpoint, the shape in current model is torch.Size([1024, 768, 1, 1]).
+ 	size mismatch for depth_head.projects.2.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([1024]).
+ 	size mismatch for depth_head.projects.3.weight: copying a param with shape torch.Size([768, 768, 1, 1]) from checkpoint, the shape in current model is torch.Size([1024, 768, 1, 1]).
+ 	size mismatch for depth_head.projects.3.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
+ 	size mismatch for depth_head.resize_layers.0.weight: copying a param with shape torch.Size([96, 96, 4, 4]) from checkpoint, the shape in current model is torch.Size([256, 256, 4, 4]).
+ 	size mismatch for depth_head.resize_layers.0.bias: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([256]).
+ 	size mismatch for depth_head.resize_layers.1.weight: copying a param with shape torch.Size([192, 192, 2, 2]) from checkpoint, the shape in current model is torch.Size([512, 512, 2, 2]).
+ 	size mismatch for depth_head.resize_layers.1.bias: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([512]).
+ 	size mismatch for depth_head.resize_layers.3.weight: copying a param with shape torch.Size([768, 768, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 1024, 3, 3]).
+ 	size mismatch for depth_head.resize_layers.3.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
+ 	size mismatch for depth_head.scratch.layer1_rn.weight: copying a param with shape torch.Size([128, 96, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
+ 	size mismatch for depth_head.scratch.layer2_rn.weight: copying a param with shape torch.Size([128, 192, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 512, 3, 3]).
+ 	size mismatch for depth_head.scratch.layer3_rn.weight: copying a param with shape torch.Size([128, 384, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 1024, 3, 3]).
+ 	size mismatch for depth_head.scratch.layer4_rn.weight: copying a param with shape torch.Size([128, 768, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 1024, 3, 3]).
+ 	size mismatch for depth_head.scratch.refinenet1.out_conv.weight: copying a param with shape torch.Size([128, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 1, 1]).
+ 	size mismatch for depth_head.scratch.refinenet1.out_conv.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
+ 	size mismatch for depth_head.scratch.refinenet1.resConfUnit1.conv1.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
+ 	size mismatch for depth_head.scratch.refinenet1.resConfUnit1.conv1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
+ 	size mismatch for depth_head.scratch.refinenet1.resConfUnit1.conv2.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
+ 	size mismatch for depth_head.scratch.refinenet1.resConfUnit1.conv2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
+ 	size mismatch for depth_head.scratch.refinenet1.resConfUnit2.conv1.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
+ 	size mismatch for depth_head.scratch.refinenet1.resConfUnit2.conv1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
+ 	size mismatch for depth_head.scratch.refinenet1.resConfUnit2.conv2.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
+ 	size mismatch for depth_head.scratch.refinenet1.resConfUnit2.conv2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
+ 	size mismatch for depth_head.scratch.refinenet2.out_conv.weight: copying a param with shape torch.Size([128, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 1, 1]).
+ 	size mismatch for depth_head.scratch.refinenet2.out_conv.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
+ 	size mismatch for depth_head.scratch.refinenet2.resConfUnit1.conv1.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
+ 	size mismatch for depth_head.scratch.refinenet2.resConfUnit1.conv1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
+ 	size mismatch for depth_head.scratch.refinenet2.resConfUnit1.conv2.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
+ 	size mismatch for depth_head.scratch.refinenet2.resConfUnit1.conv2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
+ 	size mismatch for depth_head.scratch.refinenet2.resConfUnit2.conv1.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
+ 	size mismatch for depth_head.scratch.refinenet2.resConfUnit2.conv1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
+ 	size mismatch for depth_head.scratch.refinenet2.resConfUnit2.conv2.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
+ 	size mismatch for depth_head.scratch.refinenet2.resConfUnit2.conv2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
+ 	size mismatch for depth_head.scratch.refinenet3.out_conv.weight: copying a param with shape torch.Size([128, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 1, 1]).
+ 	size mismatch for depth_head.scratch.refinenet3.out_conv.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
+ 	size mismatch for depth_head.scratch.refinenet3.resConfUnit1.conv1.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
+ 	size mismatch for depth_head.scratch.refinenet3.resConfUnit1.conv1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
+ 	size mismatch for depth_head.scratch.refinenet3.resConfUnit1.conv2.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
+ 	size mismatch for depth_head.scratch.refinenet3.resConfUnit1.conv2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
+ 	size mismatch for depth_head.scratch.refinenet3.resConfUnit2.conv1.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
+ 	size mismatch for depth_head.scratch.refinenet3.resConfUnit2.conv1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
+ 	size mismatch for depth_head.scratch.refinenet3.resConfUnit2.conv2.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
+ 	size mismatch for depth_head.scratch.refinenet3.resConfUnit2.conv2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
+ 	size mismatch for depth_head.scratch.refinenet4.out_conv.weight: copying a param with shape torch.Size([128, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 1, 1]).
+ 	size mismatch for depth_head.scratch.refinenet4.out_conv.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
+ 	size mismatch for depth_head.scratch.refinenet4.resConfUnit1.conv1.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
+ 	size mismatch for depth_head.scratch.refinenet4.resConfUnit1.conv1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
+ 	size mismatch for depth_head.scratch.refinenet4.resConfUnit1.conv2.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
+ 	size mismatch for depth_head.scratch.refinenet4.resConfUnit1.conv2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
+ 	size mismatch for depth_head.scratch.refinenet4.resConfUnit2.conv1.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
+ 	size mismatch for depth_head.scratch.refinenet4.resConfUnit2.conv1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
+ 	size mismatch for depth_head.scratch.refinenet4.resConfUnit2.conv2.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
+ 	size mismatch for depth_head.scratch.refinenet4.resConfUnit2.conv2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
+ 	size mismatch for depth_head.scratch.output_conv1.weight: copying a param with shape torch.Size([64, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 256, 3, 3]).
+ 	size mismatch for depth_head.scratch.output_conv1.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
+ 	size mismatch for depth_head.scratch.output_conv2.0.weight: copying a param with shape torch.Size([32, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 3, 3]).
+ ============================================
+ Evaluation completed at Tue May 12 06:07:22 PM AEST 2026
+ ============================================
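The RuntimeError in this log is torch refusing to copy parameters whose shapes disagree: the checkpoint's DPT head was trained with narrower channel widths (projects starting at 96) than the freshly built model expects (starting at 256), which points to a backbone/head configuration mismatch when constructing the model. A pure-Python sketch of the comparison `load_state_dict` performs (`find_shape_mismatches` is illustrative, not a torch API):

```python
def find_shape_mismatches(model_shapes, ckpt_shapes):
    # Sketch of the shape check torch's load_state_dict performs: collect
    # every parameter whose checkpoint shape differs from the shape the
    # freshly built model expects. Inputs are {param_name: shape_tuple}.
    return [
        (name, ckpt_shape, model_shapes[name])
        for name, ckpt_shape in ckpt_shapes.items()
        if name in model_shapes and model_shapes[name] != ckpt_shape
    ]

# Shapes taken from the first mismatch reported in the log above:
ckpt = {'depth_head.projects.0.weight': (96, 768, 1, 1)}
model = {'depth_head.projects.0.weight': (256, 768, 1, 1)}
```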
moge_da2_dpt_subset_12088.log ADDED
@@ -0,0 +1,1075 @@
+ ============================================
2
+ Activated conda environment: da2
3
+ CUDA_HOME: /home/ywan0794/miniconda3/envs/da2
4
+ ============================================
5
+ === GPU Info ===
6
+ Tue May 12 18:08:32 2026
7
+ +-----------------------------------------------------------------------------------------+
8
+ | NVIDIA-SMI 550.163.01 Driver Version: 550.163.01 CUDA Version: 12.4 |
9
+ |-----------------------------------------+------------------------+----------------------+
10
+ | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
11
+ | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
12
+ | | | MIG M. |
13
+ |=========================================+========================+======================|
14
+ | 0 NVIDIA H100 NVL Off | 00000000:E1:00.0 Off | 0 |
15
+ | N/A 35C P0 60W / 400W | 14MiB / 95830MiB | 0% Default |
16
+ | | | Disabled |
17
+ +-----------------------------------------+------------------------+----------------------+
18
+
19
+ +-----------------------------------------------------------------------------------------+
20
+ | Processes: |
21
+ | GPU GI CI PID Type Process name GPU Memory |
22
+ | ID ID Usage |
23
+ |=========================================================================================|
24
+ | 0 N/A N/A 4274 G /usr/lib/xorg/Xorg 4MiB |
25
+ +-----------------------------------------------------------------------------------------+
26
+ CUDA available: True
27
+ GPU count: 1
28
+ GPU name: NVIDIA H100 NVL
29
+ ============================================
30
+ Starting MoGe Subset Sanity Eval (DA2 public vitl) at Tue May 12 06:08:34 PM AEST 2026
31
+ Config: /home/ywan0794/MoGe/configs/eval/subset_benchmarks.json
32
+ Output: eval_output/da2_public_vitl_subset_20260512_180834.json
33
+ ============================================
34
+ xFormers not available
35
+ xFormers not available
+ ============================================
+ Evaluation completed at Tue May 12 06:11:17 PM AEST 2026
+ ============================================
pyproject.toml ADDED
@@ -0,0 +1,36 @@
+ [build-system]
+ requires = ["setuptools>=61.0", "wheel"]
+ build-backend = "setuptools.build_meta"
+
+ [project]
+ name = "moge"
+ version = "2.0.0"
+ description = "MoGe: Unlocking Accurate Monocular Geometry Estimation for Open-Domain Images with Optimal Training Supervision"
+ readme = "README.md"
+ license = {text = "MIT"}
+ dependencies = [
+ "click",
+ "opencv-python",
+ "scipy",
+ "matplotlib",
+ "trimesh",
+ "pillow",
+ "huggingface_hub",
+ "numpy",
+ "torch>=2.0.0",
+ "torchvision",
+ "gradio",
+ "utils3d @ git+https://github.com/EasternJournalist/utils3d.git@3fab839f0be9931dac7c8488eb0e1600c236e183",
+ "pipeline @ git+https://github.com/EasternJournalist/pipeline.git@866f059d2a05cde05e4a52211ec5051fd5f276d6"
+ ]
+ requires-python = ">=3.9"
+
+ [project.urls]
+ Homepage = "https://github.com/microsoft/MoGe"
+
+ [tool.setuptools.packages.find]
+ where = ["."]
+ include = ["moge*"]
+
+ [project.scripts]
+ moge = "moge.scripts.cli:main"
pyrightconfig.json ADDED
@@ -0,0 +1,10 @@
+ {
+ "include": [
+ "moge",
+ "scripts",
+ "baselines"
+ ],
+ "ignore": [
+ "**"
+ ]
+ }
requirements.txt ADDED
@@ -0,0 +1,14 @@
+ # The versions are not specified since MoGe should be compatible with most versions of the packages.
+ # If incompatibilities are found, consider upgrading to latest versions or installing the following recommended version of the package.
+ torch # >= 2.0.0
+ torchvision
+ gradio # ==2.8.13
+ click # ==8.1.7
+ opencv-python # ==4.10.0.84
+ scipy # ==1.14.1
+ matplotlib # ==3.9.2
+ trimesh # ==4.5.1
+ pillow # ==10.4.0
+ huggingface_hub # ==0.25.2
+ git+https://github.com/EasternJournalist/utils3d.git@3fab839f0be9931dac7c8488eb0e1600c236e183
+ git+https://github.com/EasternJournalist/pipeline.git@866f059d2a05cde05e4a52211ec5051fd5f276d6
sanity_all_12094.log ADDED
@@ -0,0 +1,328 @@
+ ============================================
+ sanity-all started at Wed May 13 02:31:53 AM AEST 2026
+ Config: /home/ywan0794/MoGe/configs/eval/sanity_benchmarks.json
+ TIMESTAMP: 20260513_023153
+ Summary file: sanity_output/_sanity_all_20260513_023153.summary.txt
+ ============================================
+ Wed May 13 02:31:53 2026
+ +-----------------------------------------------------------------------------------------+
+ | NVIDIA-SMI 550.163.01 Driver Version: 550.163.01 CUDA Version: 12.4 |
+ |-----------------------------------------+------------------------+----------------------+
+ | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
+ | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
+ | | | MIG M. |
+ |=========================================+========================+======================|
+ | 0 NVIDIA H100 NVL Off | 00000000:E1:00.0 Off | 0 |
+ | N/A 35C P0 60W / 400W | 14MiB / 95830MiB | 0% Default |
+ | | | Disabled |
+ +-----------------------------------------+------------------------+----------------------+
+
+ +-----------------------------------------------------------------------------------------+
+ | Processes: |
+ | GPU GI CI PID Type Process name GPU Memory |
+ | ID ID Usage |
+ |=========================================================================================|
+ | 0 N/A N/A 4274 G /usr/lib/xorg/Xorg 4MiB |
+ +-----------------------------------------------------------------------------------------+
+
+ ============================================
+ [marigold] starting at Wed May 13 02:31:53 AM AEST 2026 (conda env: marigold)
+ ============================================
+ Active env: marigold
+ CUDA: True NVIDIA H100 NVL
+ Traceback (most recent call last):
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/diffusers/utils/import_utils.py", line 1015, in _get_module
+ return importlib.import_module("." + module_name, self.__name__)
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/importlib/__init__.py", line 126, in import_module
+ return _bootstrap._gcd_import(name[level:], package, level)
+ File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
+ File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
+ File "<frozen importlib._bootstrap>", line 992, in _find_and_load_unlocked
+ File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
+ File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
+ File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
+ File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
+ File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
+ File "<frozen importlib._bootstrap_external>", line 883, in exec_module
+ File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/diffusers/models/autoencoders/__init__.py", line 1, in <module>
+ from .autoencoder_asym_kl import AsymmetricAutoencoderKL
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/diffusers/models/autoencoders/autoencoder_asym_kl.py", line 21, in <module>
+ from .vae import AutoencoderMixin, DecoderOutput, DiagonalGaussianDistribution, Encoder, MaskConditionDecoder
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/diffusers/models/autoencoders/vae.py", line 24, in <module>
+ from ..unets.unet_2d_blocks import (
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/diffusers/models/unets/__init__.py", line 6, in <module>
+ from .unet_2d import UNet2DModel
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/diffusers/models/unets/unet_2d.py", line 23, in <module>
+ from .unet_2d_blocks import UNetMidBlock2D, get_down_block, get_up_block
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/diffusers/models/unets/unet_2d_blocks.py", line 36, in <module>
+ from ..transformers.dual_transformer_2d import DualTransformer2DModel
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/diffusers/models/transformers/__init__.py", line 5, in <module>
+ from .ace_step_transformer import AceStepTransformer1DModel
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/diffusers/models/transformers/ace_step_transformer.py", line 26, in <module>
+ from ..attention_dispatch import (
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/diffusers/models/attention_dispatch.py", line 740, in <module>
+ def _wrapped_flash_attn_3(
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/torch/_library/custom_ops.py", line 119, in inner
+ schema_str = torch._custom_op.impl.infer_schema(fn, mutates_args)
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/torch/_library/infer_schema.py", line 42, in infer_schema
+ error_fn(
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/torch/_library/infer_schema.py", line 21, in error_fn
+ raise ValueError(
+ ValueError: infer_schema(func): Parameter q has unsupported type torch.Tensor. The valid types are: dict_keys([<class 'torch.Tensor'>, typing.Optional[torch.Tensor], typing.Sequence[torch.Tensor], typing.List[torch.Tensor], typing.Sequence[typing.Optional[torch.Tensor]], typing.List[typing.Optional[torch.Tensor]], <class 'int'>, typing.Optional[int], typing.Sequence[int], typing.List[int], typing.Optional[typing.Sequence[int]], typing.Optional[typing.List[int]], <class 'float'>, typing.Optional[float], typing.Sequence[float], typing.List[float], typing.Optional[typing.Sequence[float]], typing.Optional[typing.List[float]], <class 'bool'>, typing.Optional[bool], typing.Sequence[bool], typing.List[bool], typing.Optional[typing.Sequence[bool]], typing.Optional[typing.List[bool]], <class 'str'>, typing.Optional[str], typing.Union[int, float, bool], typing.Union[int, float, bool, NoneType], typing.Sequence[typing.Union[int, float, bool]], typing.List[typing.Union[int, float, bool]], <class 'torch.dtype'>, typing.Optional[torch.dtype], <class 'torch.device'>, typing.Optional[torch.device]]). Got func with signature (q: 'torch.Tensor', k: 'torch.Tensor', v: 'torch.Tensor', softmax_scale: 'float | None' = None, causal: 'bool' = False, qv: 'torch.Tensor | None' = None, q_descale: 'torch.Tensor | None' = None, k_descale: 'torch.Tensor | None' = None, v_descale: 'torch.Tensor | None' = None, attention_chunk: 'int' = 0, softcap: 'float' = 0.0, num_splits: 'int' = 1, pack_gqa: 'bool | None' = None, deterministic: 'bool' = False, sm_margin: 'int' = 0) -> 'tuple[torch.Tensor, torch.Tensor]')
+
+ The above exception was the direct cause of the following exception:
+
+ Traceback (most recent call last):
+ File "/home/ywan0794/MoGe/moge/scripts/eval_baseline.py", line 165, in <module>
+ main()
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/click/core.py", line 1514, in __call__
+ return self.main(*args, **kwargs)
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/click/core.py", line 1435, in main
+ rv = self.invoke(ctx)
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/click/core.py", line 1298, in invoke
+ return ctx.invoke(self.callback, **ctx.params)
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/click/core.py", line 853, in invoke
+ return callback(*args, **kwargs)
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/click/decorators.py", line 34, in new_func
+ return f(get_current_context(), *args, **kwargs)
+ File "/home/ywan0794/MoGe/moge/scripts/eval_baseline.py", line 42, in main
+ baseline : MGEBaselineInterface = baseline_cls.load.main(ctx.args, standalone_mode=False)
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/click/core.py", line 1435, in main
+ rv = self.invoke(ctx)
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/click/core.py", line 1298, in invoke
+ return ctx.invoke(self.callback, **ctx.params)
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/click/core.py", line 853, in invoke
+ return callback(*args, **kwargs)
+ File "/home/ywan0794/MoGe/baselines/marigold.py", line 75, in load
+ return Baseline(repo_path, checkpoint, denoise_steps, ensemble_size,
+ File "/home/ywan0794/MoGe/baselines/marigold.py", line 38, in __init__
+ from marigold import MarigoldDepthPipeline
+ File "/home/ywan0794/EvalMDE/Marigold/marigold/__init__.py", line 31, in <module>
+ from .marigold_depth_pipeline import (
+ File "/home/ywan0794/EvalMDE/Marigold/marigold/marigold_depth_pipeline.py", line 35, in <module>
+ from diffusers import (
+ File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/diffusers/utils/import_utils.py", line 1006, in __getattr__
+ value = getattr(module, name)
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/diffusers/utils/import_utils.py", line 1005, in __getattr__
+ module = self._get_module(self._class_to_module[name])
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/diffusers/utils/import_utils.py", line 1017, in _get_module
+ raise RuntimeError(
+ RuntimeError: Failed to import diffusers.models.autoencoders.autoencoder_kl because of the following error (look up to see its traceback):
+ infer_schema(func): Parameter q has unsupported type torch.Tensor. The valid types are: dict_keys([<class 'torch.Tensor'>, typing.Optional[torch.Tensor], typing.Sequence[torch.Tensor], typing.List[torch.Tensor], typing.Sequence[typing.Optional[torch.Tensor]], typing.List[typing.Optional[torch.Tensor]], <class 'int'>, typing.Optional[int], typing.Sequence[int], typing.List[int], typing.Optional[typing.Sequence[int]], typing.Optional[typing.List[int]], <class 'float'>, typing.Optional[float], typing.Sequence[float], typing.List[float], typing.Optional[typing.Sequence[float]], typing.Optional[typing.List[float]], <class 'bool'>, typing.Optional[bool], typing.Sequence[bool], typing.List[bool], typing.Optional[typing.Sequence[bool]], typing.Optional[typing.List[bool]], <class 'str'>, typing.Optional[str], typing.Union[int, float, bool], typing.Union[int, float, bool, NoneType], typing.Sequence[typing.Union[int, float, bool]], typing.List[typing.Union[int, float, bool]], <class 'torch.dtype'>, typing.Optional[torch.dtype], <class 'torch.device'>, typing.Optional[torch.device]]). Got func with signature (q: 'torch.Tensor', k: 'torch.Tensor', v: 'torch.Tensor', softmax_scale: 'float | None' = None, causal: 'bool' = False, qv: 'torch.Tensor | None' = None, q_descale: 'torch.Tensor | None' = None, k_descale: 'torch.Tensor | None' = None, v_descale: 'torch.Tensor | None' = None, attention_chunk: 'int' = 0, softcap: 'float' = 0.0, num_splits: 'int' = 1, pack_gqa: 'bool | None' = None, deterministic: 'bool' = False, sm_margin: 'int' = 0) -> 'tuple[torch.Tensor, torch.Tensor]')
+ [FAIL rc=1] marigold
+
+ ============================================
+ [lotus] starting at Wed May 13 02:32:24 AM AEST 2026 (conda env: lotus)
+ ============================================
+ Active env: lotus
+ CUDA: True NVIDIA H100 NVL
+ Traceback (most recent call last):
+ File "/home/ywan0794/MoGe/moge/scripts/eval_baseline.py", line 165, in <module>
+ main()
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/click/core.py", line 1514, in __call__
+ return self.main(*args, **kwargs)
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/click/core.py", line 1435, in main
+ rv = self.invoke(ctx)
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/click/core.py", line 1298, in invoke
+ return ctx.invoke(self.callback, **ctx.params)
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/click/core.py", line 853, in invoke
+ return callback(*args, **kwargs)
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/click/decorators.py", line 34, in new_func
+ return f(get_current_context(), *args, **kwargs)
+ File "/home/ywan0794/MoGe/moge/scripts/eval_baseline.py", line 42, in main
+ baseline : MGEBaselineInterface = baseline_cls.load.main(ctx.args, standalone_mode=False)
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/click/core.py", line 1435, in main
+ rv = self.invoke(ctx)
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/click/core.py", line 1298, in invoke
+ return ctx.invoke(self.callback, **ctx.params)
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/click/core.py", line 853, in invoke
+ return callback(*args, **kwargs)
+ File "/home/ywan0794/MoGe/baselines/lotus.py", line 90, in load
+ return Baseline(repo_path, pretrained, mode, task_name, disparity, timestep,
+ File "/home/ywan0794/MoGe/baselines/lotus.py", line 48, in __init__
+ from pipeline import LotusGPipeline, LotusDPipeline
+ ImportError: cannot import name 'LotusGPipeline' from 'pipeline' (/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/pipeline/__init__.py)
+ [FAIL rc=1] lotus
+
+ ============================================
+ [depthmaster] starting at Wed May 13 02:32:41 AM AEST 2026 (conda env: depthmaster)
+ ============================================
+ Active env: depthmaster
+ CUDA: True NVIDIA H100 NVL
+ /home/ywan0794/miniconda3/envs/depthmaster/lib/python3.10/site-packages/diffusers/models/transformers/transformer_2d.py:34: FutureWarning: `Transformer2DModelOutput` is deprecated and will be removed in version 1.0.0. Importing `Transformer2DModelOutput` from `diffusers.models.transformer_2d` is deprecated and this will be removed in a future version. Please use `from diffusers.models.modeling_outputs import Transformer2DModelOutput`, instead.
+ deprecate("Transformer2DModelOutput", "1.0.0", deprecation_message)
+ Traceback (most recent call last):
+ File "/home/ywan0794/miniconda3/envs/depthmaster/lib/python3.10/site-packages/diffusers/utils/import_utils.py", line 808, in _get_module
+ return importlib.import_module("." + module_name, self.__name__)
+ File "/home/ywan0794/miniconda3/envs/depthmaster/lib/python3.10/importlib/__init__.py", line 126, in import_module
+ return _bootstrap._gcd_import(name[level:], package, level)
+ File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
+ File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
+ File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
+ File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
+ File "<frozen importlib._bootstrap_external>", line 883, in exec_module
+ File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
+ File "/home/ywan0794/miniconda3/envs/depthmaster/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 69, in <module>
+ from .pipeline_loading_utils import (
+ File "/home/ywan0794/miniconda3/envs/depthmaster/lib/python3.10/site-packages/diffusers/pipelines/pipeline_loading_utils.py", line 48, in <module>
+ from transformers.utils import FLAX_WEIGHTS_NAME as TRANSFORMERS_FLAX_WEIGHTS_NAME
+ ImportError: cannot import name 'FLAX_WEIGHTS_NAME' from 'transformers.utils' (/home/ywan0794/miniconda3/envs/depthmaster/lib/python3.10/site-packages/transformers/utils/__init__.py)
+
+ The above exception was the direct cause of the following exception:
+
+ Traceback (most recent call last):
+ File "/home/ywan0794/MoGe/moge/scripts/eval_baseline.py", line 165, in <module>
+ main()
+ File "/home/ywan0794/miniconda3/envs/depthmaster/lib/python3.10/site-packages/click/core.py", line 1514, in __call__
+ return self.main(*args, **kwargs)
+ File "/home/ywan0794/miniconda3/envs/depthmaster/lib/python3.10/site-packages/click/core.py", line 1435, in main
+ rv = self.invoke(ctx)
+ File "/home/ywan0794/miniconda3/envs/depthmaster/lib/python3.10/site-packages/click/core.py", line 1298, in invoke
+ return ctx.invoke(self.callback, **ctx.params)
+ File "/home/ywan0794/miniconda3/envs/depthmaster/lib/python3.10/site-packages/click/core.py", line 853, in invoke
+ return callback(*args, **kwargs)
+ File "/home/ywan0794/miniconda3/envs/depthmaster/lib/python3.10/site-packages/click/decorators.py", line 34, in new_func
+ return f(get_current_context(), *args, **kwargs)
+ File "/home/ywan0794/MoGe/moge/scripts/eval_baseline.py", line 42, in main
+ baseline : MGEBaselineInterface = baseline_cls.load.main(ctx.args, standalone_mode=False)
+ File "/home/ywan0794/miniconda3/envs/depthmaster/lib/python3.10/site-packages/click/core.py", line 1435, in main
+ rv = self.invoke(ctx)
+ File "/home/ywan0794/miniconda3/envs/depthmaster/lib/python3.10/site-packages/click/core.py", line 1298, in invoke
+ return ctx.invoke(self.callback, **ctx.params)
+ File "/home/ywan0794/miniconda3/envs/depthmaster/lib/python3.10/site-packages/click/core.py", line 853, in invoke
+ return callback(*args, **kwargs)
+ File "/home/ywan0794/MoGe/baselines/depthmaster.py", line 71, in load
+ return Baseline(repo_path, checkpoint, processing_res, half_precision, device)
+ File "/home/ywan0794/MoGe/baselines/depthmaster.py", line 38, in __init__
+ from depthmaster import DepthMasterPipeline
+ File "/home/ywan0794/EvalMDE/DepthMaster/depthmaster/__init__.py", line 26, in <module>
+ from .depthmaster_pipeline import DepthMasterPipeline, DepthMasterDepthOutput # noqa: F401
+ File "/home/ywan0794/EvalMDE/DepthMaster/depthmaster/depthmaster_pipeline.py", line 31, in <module>
+ from diffusers import (
+ File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
+ File "/home/ywan0794/miniconda3/envs/depthmaster/lib/python3.10/site-packages/diffusers/utils/import_utils.py", line 799, in __getattr__
+ value = getattr(module, name)
+ File "/home/ywan0794/miniconda3/envs/depthmaster/lib/python3.10/site-packages/diffusers/utils/import_utils.py", line 798, in __getattr__
+ module = self._get_module(self._class_to_module[name])
+ File "/home/ywan0794/miniconda3/envs/depthmaster/lib/python3.10/site-packages/diffusers/utils/import_utils.py", line 810, in _get_module
+ raise RuntimeError(
+ RuntimeError: Failed to import diffusers.pipelines.pipeline_utils because of the following error (look up to see its traceback):
+ cannot import name 'FLAX_WEIGHTS_NAME' from 'transformers.utils' (/home/ywan0794/miniconda3/envs/depthmaster/lib/python3.10/site-packages/transformers/utils/__init__.py)
+ [FAIL rc=1] depthmaster
+
+ ============================================
+ [ppd] starting at Wed May 13 02:33:15 AM AEST 2026 (conda env: ppd)
+ ============================================
+ Active env: ppd
+ CUDA: True NVIDIA H100 NVL
+ Traceback (most recent call last):
+ File "/home/ywan0794/MoGe/moge/scripts/eval_baseline.py", line 165, in <module>
+ main()
+ File "/home/ywan0794/miniconda3/envs/ppd/lib/python3.10/site-packages/click/core.py", line 1514, in __call__
+ return self.main(*args, **kwargs)
+ File "/home/ywan0794/miniconda3/envs/ppd/lib/python3.10/site-packages/click/core.py", line 1435, in main
+ rv = self.invoke(ctx)
+ File "/home/ywan0794/miniconda3/envs/ppd/lib/python3.10/site-packages/click/core.py", line 1298, in invoke
+ return ctx.invoke(self.callback, **ctx.params)
+ File "/home/ywan0794/miniconda3/envs/ppd/lib/python3.10/site-packages/click/core.py", line 853, in invoke
+ return callback(*args, **kwargs)
+ File "/home/ywan0794/miniconda3/envs/ppd/lib/python3.10/site-packages/click/decorators.py", line 34, in new_func
+ return f(get_current_context(), *args, **kwargs)
+ File "/home/ywan0794/MoGe/moge/scripts/eval_baseline.py", line 42, in main
+ baseline : MGEBaselineInterface = baseline_cls.load.main(ctx.args, standalone_mode=False)
+ File "/home/ywan0794/miniconda3/envs/ppd/lib/python3.10/site-packages/click/core.py", line 1435, in main
+ rv = self.invoke(ctx)
+ File "/home/ywan0794/miniconda3/envs/ppd/lib/python3.10/site-packages/click/core.py", line 1298, in invoke
+ return ctx.invoke(self.callback, **ctx.params)
+ File "/home/ywan0794/miniconda3/envs/ppd/lib/python3.10/site-packages/click/core.py", line 853, in invoke
+ return callback(*args, **kwargs)
+ File "/home/ywan0794/MoGe/baselines/ppd.py", line 79, in load
+ return Baseline(repo_path, semantics_model, semantics_pth, model_pth, sampling_steps, device)
+ File "/home/ywan0794/MoGe/baselines/ppd.py", line 37, in __init__
+ from ppd.models.ppd import PixelPerfectDepth
+ File "/home/ywan0794/EvalMDE/Pixel-Perfect-Depth/ppd/models/ppd.py", line 9, in <module>
+ from omegaconf import DictConfig
+ ModuleNotFoundError: No module named 'omegaconf'
+ [FAIL rc=1] ppd
+
+ ============================================
+ [da3_mono] starting at Wed May 13 02:33:45 AM AEST 2026 (conda env: da3)
+ ============================================
+ Active env: da3
+ CUDA: True NVIDIA H100 NVL
+ [WARN ] Dependency `gsplat` is required for rendering 3DGS. Install via: pip install git+https://github.com/nerfstudio-project/gsplat.git@0b4dddf04cb687367602c01196913cde6a743d70
+ [INFO ] using MLP layer as FFN
+ Traceback (most recent call last):
+ File "/home/ywan0794/MoGe/moge/scripts/eval_baseline.py", line 165, in <module>
+ main()
+ File "/home/ywan0794/miniconda3/envs/da3/lib/python3.10/site-packages/click/core.py", line 1485, in __call__
+ return self.main(*args, **kwargs)
+ File "/home/ywan0794/miniconda3/envs/da3/lib/python3.10/site-packages/click/core.py", line 1406, in main
+ rv = self.invoke(ctx)
+ File "/home/ywan0794/miniconda3/envs/da3/lib/python3.10/site-packages/click/core.py", line 1269, in invoke
+ return ctx.invoke(self.callback, **ctx.params)
+ File "/home/ywan0794/miniconda3/envs/da3/lib/python3.10/site-packages/click/core.py", line 824, in invoke
+ return callback(*args, **kwargs)
+ File "/home/ywan0794/miniconda3/envs/da3/lib/python3.10/site-packages/click/decorators.py", line 34, in new_func
+ return f(get_current_context(), *args, **kwargs)
+ File "/home/ywan0794/MoGe/moge/scripts/eval_baseline.py", line 70, in main
+ pred = baseline.infer_for_evaluation(image)
+ File "/home/ywan0794/MoGe/moge/test/baseline.py", line 43, in infer_for_evaluation
+ return self.infer(image, intrinsics)
+ File "/home/ywan0794/miniconda3/envs/da3/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
+ return func(*args, **kwargs)
+ File "/home/ywan0794/MoGe/baselines/da3_mono.py", line 91, in infer
+ output = self.model(x)
+ File "/home/ywan0794/miniconda3/envs/da3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
+ return self._call_impl(*args, **kwargs)
+ File "/home/ywan0794/miniconda3/envs/da3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
+ return forward_call(*args, **kwargs)
+ File "/home/ywan0794/miniconda3/envs/da3/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
+ return func(*args, **kwargs)
+ File "/home/ywan0794/EvalMDE/Depth-Anything-3/src/depth_anything_3/api.py", line 129, in forward
+ return self.model(
+ File "/home/ywan0794/miniconda3/envs/da3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
+ return self._call_impl(*args, **kwargs)
+ File "/home/ywan0794/miniconda3/envs/da3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
+ return forward_call(*args, **kwargs)
+ File "/home/ywan0794/EvalMDE/Depth-Anything-3/src/depth_anything_3/model/da3.py", line 132, in forward
+ feats, aux_feats = self.backbone(
+ File "/home/ywan0794/miniconda3/envs/da3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
+ return self._call_impl(*args, **kwargs)
+ File "/home/ywan0794/miniconda3/envs/da3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
+ return forward_call(*args, **kwargs)
+ File "/home/ywan0794/EvalMDE/Depth-Anything-3/src/depth_anything_3/model/dinov2/dinov2.py", line 60, in forward
+ return self.pretrained.get_intermediate_layers(
302
+ File "/home/ywan0794/EvalMDE/Depth-Anything-3/src/depth_anything_3/model/dinov2/vision_transformer.py", line 379, in get_intermediate_layers
303
+ outputs, aux_outputs = self._get_intermediate_layers_not_chunked(
304
+ File "/home/ywan0794/EvalMDE/Depth-Anything-3/src/depth_anything_3/model/dinov2/vision_transformer.py", line 347, in _get_intermediate_layers_not_chunked
305
+ if i in export_feat_layers:
306
+ TypeError: argument of type 'NoneType' is not iterable
307
+ [FAIL rc=1] da3_mono
308
+
309
+ ============================================
310
+ [fe2e] starting at Wed May 13 02:34:42 AM AEST 2026 (conda env: fe2e)
311
+ ============================================
312
+ Active env: fe2e
313
+ CUDA: True NVIDIA H100 NVL
314
+ Traceback (most recent call last):
315
+ File "/home/ywan0794/MoGe/moge/scripts/eval_baseline.py", line 11, in <module>
316
+ import click
317
+ ModuleNotFoundError: No module named 'click'
318
+ [FAIL rc=1] fe2e
319
+
320
+ ============================================
321
+ sanity-all finished at Wed May 13 02:34:58 AM AEST 2026
322
+ ============================================
323
+ === Summary ===
324
+ [FAIL rc=1] marigold
325
+ [FAIL rc=1] lotus
326
+ [FAIL rc=1] depthmaster
327
+ [FAIL rc=1] ppd
328
+ [FAIL rc=1] da3_mono
329
+ [FAIL rc=1] fe2e
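
Editor's note: the da3_mono run above dies with `TypeError: argument of type 'NoneType' is not iterable` at `if i in export_feat_layers:`, i.e. the membership test runs even when no export layers were requested. A minimal sketch of the guard is below; `collect_layers` is a hypothetical stand-in for `_get_intermediate_layers_not_chunked`, and only the `or ()` default is the point.

```python
def collect_layers(num_layers, export_feat_layers=None):
    # Treat export_feat_layers=None as "export no auxiliary layers"
    # instead of letting `i in None` raise TypeError.
    wanted = export_feat_layers or ()
    exported = []
    for i in range(num_layers):
        if i in wanted:  # safe: wanted is always a container
            exported.append(i)
    return exported
```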
sanity_all_12095.log ADDED
@@ -0,0 +1,259 @@
+ ============================================
+ sanity-all started at Wed May 13 02:45:10 AM AEST 2026
+ Config: /home/ywan0794/MoGe/configs/eval/sanity_benchmarks.json
+ TIMESTAMP: 20260513_024510
+ Summary file: sanity_output/_sanity_all_20260513_024510.summary.txt
+ ============================================
+ Wed May 13 02:45:10 2026
+ +-----------------------------------------------------------------------------------------+
+ | NVIDIA-SMI 550.163.01 Driver Version: 550.163.01 CUDA Version: 12.4 |
+ |-----------------------------------------+------------------------+----------------------+
+ | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
+ | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
+ | | | MIG M. |
+ |=========================================+========================+======================|
+ | 0 NVIDIA H100 NVL Off | 00000000:E1:00.0 Off | 0 |
+ | N/A 35C P0 60W / 400W | 14MiB / 95830MiB | 0% Default |
+ | | | Disabled |
+ +-----------------------------------------+------------------------+----------------------+
+
+ +-----------------------------------------------------------------------------------------+
+ | Processes: |
+ | GPU GI CI PID Type Process name GPU Memory |
+ | ID ID Usage |
+ |=========================================================================================|
+ | 0 N/A N/A 4274 G /usr/lib/xorg/Xorg 4MiB |
+ +-----------------------------------------------------------------------------------------+
+
+ ============================================
+ [marigold] starting at Wed May 13 02:45:10 AM AEST 2026 (conda env: marigold)
+ ============================================
+ Active env: marigold
+ CUDA: True NVIDIA H100 NVL
+ Traceback (most recent call last):
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/diffusers/utils/import_utils.py", line 920, in _get_module
+ return importlib.import_module("." + module_name, self.__name__)
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/importlib/__init__.py", line 126, in import_module
+ return _bootstrap._gcd_import(name[level:], package, level)
+ File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
+ File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
+ File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
+ File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
+ File "<frozen importlib._bootstrap_external>", line 883, in exec_module
+ File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/diffusers/loaders/peft.py", line 40, in <module>
+ from .lora_base import _fetch_state_dict
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/diffusers/loaders/lora_base.py", line 44, in <module>
+ from transformers import PreTrainedModel
+ File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 2226, in __getattr__
+ module = self._get_module(self._class_to_module[name])
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 2460, in _get_module
+ raise e
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 2458, in _get_module
+ return importlib.import_module("." + module_name, self.__name__)
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/importlib/__init__.py", line 126, in import_module
+ return _bootstrap._gcd_import(name[level:], package, level)
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/transformers/modeling_utils.py", line 69, in <module>
+ from .integrations.finegrained_fp8 import ALL_FP8_EXPERTS_FUNCTIONS
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/transformers/integrations/finegrained_fp8.py", line 30, in <module>
+ from .moe import ExpertsInterface, use_experts_implementation
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/transformers/integrations/moe.py", line 250, in <module>
+ torch.library.custom_op("transformers::grouped_mm_fallback", _grouped_mm_fallback, mutates_args=())
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/torch/_library/custom_ops.py", line 142, in custom_op
+ return inner(fn)
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/torch/_library/custom_ops.py", line 119, in inner
+ schema_str = torch._custom_op.impl.infer_schema(fn, mutates_args)
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/torch/_library/infer_schema.py", line 42, in infer_schema
+ error_fn(
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/torch/_library/infer_schema.py", line 21, in error_fn
+ raise ValueError(
+ ValueError: infer_schema(func): Parameter input has unsupported type torch.Tensor. The valid types are: dict_keys([<class 'torch.Tensor'>, typing.Optional[torch.Tensor], typing.Sequence[torch.Tensor], typing.List[torch.Tensor], typing.Sequence[typing.Optional[torch.Tensor]], typing.List[typing.Optional[torch.Tensor]], <class 'int'>, typing.Optional[int], typing.Sequence[int], typing.List[int], typing.Optional[typing.Sequence[int]], typing.Optional[typing.List[int]], <class 'float'>, typing.Optional[float], typing.Sequence[float], typing.List[float], typing.Optional[typing.Sequence[float]], typing.Optional[typing.List[float]], <class 'bool'>, typing.Optional[bool], typing.Sequence[bool], typing.List[bool], typing.Optional[typing.Sequence[bool]], typing.Optional[typing.List[bool]], <class 'str'>, typing.Optional[str], typing.Union[int, float, bool], typing.Union[int, float, bool, NoneType], typing.Sequence[typing.Union[int, float, bool]], typing.List[typing.Union[int, float, bool]], <class 'torch.dtype'>, typing.Optional[torch.dtype], <class 'torch.device'>, typing.Optional[torch.device]]). Got func with signature (input: 'torch.Tensor', weight: 'torch.Tensor', offs: 'torch.Tensor') -> 'torch.Tensor')
+
+ The above exception was the direct cause of the following exception:
+
+ Traceback (most recent call last):
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/diffusers/utils/import_utils.py", line 920, in _get_module
+ return importlib.import_module("." + module_name, self.__name__)
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/importlib/__init__.py", line 126, in import_module
+ return _bootstrap._gcd_import(name[level:], package, level)
+ File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
+ File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
+ File "<frozen importlib._bootstrap>", line 992, in _find_and_load_unlocked
+ File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
+ File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
+ File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
+ File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
+ File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
+ File "<frozen importlib._bootstrap_external>", line 883, in exec_module
+ File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/diffusers/models/autoencoders/__init__.py", line 1, in <module>
+ from .autoencoder_asym_kl import AsymmetricAutoencoderKL
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/diffusers/models/autoencoders/autoencoder_asym_kl.py", line 23, in <module>
+ from .vae import DecoderOutput, DiagonalGaussianDistribution, Encoder, MaskConditionDecoder
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/diffusers/models/autoencoders/vae.py", line 25, in <module>
+ from ..unets.unet_2d_blocks import (
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/diffusers/models/unets/__init__.py", line 6, in <module>
+ from .unet_2d import UNet2DModel
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/diffusers/models/unets/unet_2d.py", line 24, in <module>
+ from .unet_2d_blocks import UNetMidBlock2D, get_down_block, get_up_block
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/diffusers/models/unets/unet_2d_blocks.py", line 36, in <module>
+ from ..transformers.dual_transformer_2d import DualTransformer2DModel
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/diffusers/models/transformers/__init__.py", line 6, in <module>
+ from .cogvideox_transformer_3d import CogVideoXTransformer3DModel
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/diffusers/models/transformers/cogvideox_transformer_3d.py", line 22, in <module>
+ from ...loaders import PeftAdapterMixin
+ File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/diffusers/utils/import_utils.py", line 910, in __getattr__
+ module = self._get_module(self._class_to_module[name])
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/diffusers/utils/import_utils.py", line 922, in _get_module
+ raise RuntimeError(
+ RuntimeError: Failed to import diffusers.loaders.peft because of the following error (look up to see its traceback):
+ infer_schema(func): Parameter input has unsupported type torch.Tensor. The valid types are: dict_keys([<class 'torch.Tensor'>, typing.Optional[torch.Tensor], typing.Sequence[torch.Tensor], typing.List[torch.Tensor], typing.Sequence[typing.Optional[torch.Tensor]], typing.List[typing.Optional[torch.Tensor]], <class 'int'>, typing.Optional[int], typing.Sequence[int], typing.List[int], typing.Optional[typing.Sequence[int]], typing.Optional[typing.List[int]], <class 'float'>, typing.Optional[float], typing.Sequence[float], typing.List[float], typing.Optional[typing.Sequence[float]], typing.Optional[typing.List[float]], <class 'bool'>, typing.Optional[bool], typing.Sequence[bool], typing.List[bool], typing.Optional[typing.Sequence[bool]], typing.Optional[typing.List[bool]], <class 'str'>, typing.Optional[str], typing.Union[int, float, bool], typing.Union[int, float, bool, NoneType], typing.Sequence[typing.Union[int, float, bool]], typing.List[typing.Union[int, float, bool]], <class 'torch.dtype'>, typing.Optional[torch.dtype], <class 'torch.device'>, typing.Optional[torch.device]]). Got func with signature (input: 'torch.Tensor', weight: 'torch.Tensor', offs: 'torch.Tensor') -> 'torch.Tensor')
+
+ The above exception was the direct cause of the following exception:
+
+ Traceback (most recent call last):
+ File "/home/ywan0794/MoGe/moge/scripts/eval_baseline.py", line 165, in <module>
+ main()
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/click/core.py", line 1514, in __call__
+ return self.main(*args, **kwargs)
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/click/core.py", line 1435, in main
+ rv = self.invoke(ctx)
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/click/core.py", line 1298, in invoke
+ return ctx.invoke(self.callback, **ctx.params)
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/click/core.py", line 853, in invoke
+ return callback(*args, **kwargs)
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/click/decorators.py", line 34, in new_func
+ return f(get_current_context(), *args, **kwargs)
+ File "/home/ywan0794/MoGe/moge/scripts/eval_baseline.py", line 42, in main
+ baseline : MGEBaselineInterface = baseline_cls.load.main(ctx.args, standalone_mode=False)
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/click/core.py", line 1435, in main
+ rv = self.invoke(ctx)
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/click/core.py", line 1298, in invoke
+ return ctx.invoke(self.callback, **ctx.params)
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/click/core.py", line 853, in invoke
+ return callback(*args, **kwargs)
+ File "/home/ywan0794/MoGe/baselines/marigold.py", line 75, in load
+ return Baseline(repo_path, checkpoint, denoise_steps, ensemble_size,
+ File "/home/ywan0794/MoGe/baselines/marigold.py", line 38, in __init__
+ from marigold import MarigoldDepthPipeline
+ File "/home/ywan0794/EvalMDE/Marigold/marigold/__init__.py", line 31, in <module>
+ from .marigold_depth_pipeline import (
+ File "/home/ywan0794/EvalMDE/Marigold/marigold/marigold_depth_pipeline.py", line 35, in <module>
+ from diffusers import (
+ File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/diffusers/utils/import_utils.py", line 911, in __getattr__
+ value = getattr(module, name)
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/diffusers/utils/import_utils.py", line 910, in __getattr__
+ module = self._get_module(self._class_to_module[name])
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/diffusers/utils/import_utils.py", line 922, in _get_module
+ raise RuntimeError(
+ RuntimeError: Failed to import diffusers.models.autoencoders.autoencoder_kl because of the following error (look up to see its traceback):
+ Failed to import diffusers.loaders.peft because of the following error (look up to see its traceback):
+ infer_schema(func): Parameter input has unsupported type torch.Tensor. The valid types are: dict_keys([<class 'torch.Tensor'>, typing.Optional[torch.Tensor], typing.Sequence[torch.Tensor], typing.List[torch.Tensor], typing.Sequence[typing.Optional[torch.Tensor]], typing.List[typing.Optional[torch.Tensor]], <class 'int'>, typing.Optional[int], typing.Sequence[int], typing.List[int], typing.Optional[typing.Sequence[int]], typing.Optional[typing.List[int]], <class 'float'>, typing.Optional[float], typing.Sequence[float], typing.List[float], typing.Optional[typing.Sequence[float]], typing.Optional[typing.List[float]], <class 'bool'>, typing.Optional[bool], typing.Sequence[bool], typing.List[bool], typing.Optional[typing.Sequence[bool]], typing.Optional[typing.List[bool]], <class 'str'>, typing.Optional[str], typing.Union[int, float, bool], typing.Union[int, float, bool, NoneType], typing.Sequence[typing.Union[int, float, bool]], typing.List[typing.Union[int, float, bool]], <class 'torch.dtype'>, typing.Optional[torch.dtype], <class 'torch.device'>, typing.Optional[torch.device]]). Got func with signature (input: 'torch.Tensor', weight: 'torch.Tensor', offs: 'torch.Tensor') -> 'torch.Tensor')
+ [FAIL rc=1] marigold
+
+ ============================================
+ [lotus] starting at Wed May 13 02:45:25 AM AEST 2026 (conda env: lotus)
+ ============================================
+ Active env: lotus
+ CUDA: True NVIDIA H100 NVL
+
+
+
+ Traceback (most recent call last):
+ Thread-12 (loop):
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
+ Traceback (most recent call last):
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
+ Exception in thread Thread-14 (loop):
+ Traceback (most recent call last):
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
+ self.run()
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/threading.py", line 953, in run
+ Exception in thread Thread-16 (loop):
+ Traceback (most recent call last):
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
+ Exception in thread Thread-15 (loop) self.run()
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/threading.py", line 953, in run
+ self.run()
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/threading.py", line 953, in run
+ :
+ self.run()
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/threading.py", line 953, in run
+ self._target(*self._args, **self._kwargs) self._target(*self._args, **self._kwargs)
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/pipeline/components.py", line 218, in loop
+ self._target(*self._args, **self._kwargs)
+ Traceback (most recent call last):
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
+
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/pipeline/components.py", line 218, in loop
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/pipeline/components.py", line 218, in loop
+ self._target(*self._args, **self._kwargs)
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/pipeline/components.py", line 218, in loop
+ result = self.work(item)
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/pipeline/components.py", line 208, in work
+ result = self.work(item)
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/pipeline/components.py", line 208, in work
+ return self.work_fn(*args, **kwargs)
+ File "/home/ywan0794/MoGe/moge/test/dataloader.py", line 120, in _process_instance
+ return self.work_fn(*args, **kwargs)
+ File "/home/ywan0794/MoGe/moge/test/dataloader.py", line 120, in _process_instance
+ result = self.work(item)
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/pipeline/components.py", line 208, in work
+ return self.work_fn(*args, **kwargs)
+ File "/home/ywan0794/MoGe/moge/test/dataloader.py", line 120, in _process_instance
+ self.run()
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/threading.py", line 953, in run
+ self._target(*self._args, **self._kwargs)
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/pipeline/components.py", line 218, in loop
+ result = self.work(item)
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/pipeline/components.py", line 208, in work
+ direction = utils3d.np.unproject_cv(np.array([[cu, cv]], dtype=np.float32), np.array([1.0], dtype=np.float32), intrinsics=intrinsics)[0]
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/utils3d/helpers.py", line 16, in wrapper
+ return self.work_fn(*args, **kwargs)
+ File "/home/ywan0794/MoGe/moge/test/dataloader.py", line 120, in _process_instance
+ direction = utils3d.np.unproject_cv(np.array([[cu, cv]], dtype=np.float32), np.array([1.0], dtype=np.float32), intrinsics=intrinsics)[0]
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/utils3d/helpers.py", line 16, in wrapper
+ direction = utils3d.np.unproject_cv(np.array([[cu, cv]], dtype=np.float32), np.array([1.0], dtype=np.float32), intrinsics=intrinsics)[0]
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/utils3d/helpers.py", line 16, in wrapper
+ result = self.work(item)
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/pipeline/components.py", line 208, in work
+ return self.work_fn(*args, **kwargs)
+ File "/home/ywan0794/MoGe/moge/test/dataloader.py", line 120, in _process_instance
+ return fn(*args, **kwargs)
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/utils3d/numpy/helpers.py", line 90, in wrapper
+ direction = utils3d.np.unproject_cv(np.array([[cu, cv]], dtype=np.float32), np.array([1.0], dtype=np.float32), intrinsics=intrinsics)[0]
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/utils3d/helpers.py", line 16, in wrapper
+ return fn(*args, **kwargs)
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/utils3d/numpy/helpers.py", line 90, in wrapper
+ direction = utils3d.np.unproject_cv(np.array([[cu, cv]], dtype=np.float32), np.array([1.0], dtype=np.float32), intrinsics=intrinsics)[0]
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/utils3d/helpers.py", line 16, in wrapper
+ result = func(*args, **kwargs)
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/utils3d/numpy/transforms.py", line 737, in unproject_cv
+ return fn(*args, **kwargs)
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/utils3d/numpy/helpers.py", line 90, in wrapper
+ return fn(*args, **kwargs)
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/utils3d/numpy/helpers.py", line 90, in wrapper
+ return fn(*args, **kwargs)
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/utils3d/numpy/helpers.py", line 90, in wrapper
+ result = func(*args, **kwargs)
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/utils3d/numpy/transforms.py", line 737, in unproject_cv
+ result = func(*args, **kwargs)
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/utils3d/numpy/transforms.py", line 737, in unproject_cv
+ result = func(*args, **kwargs)
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/utils3d/numpy/transforms.py", line 737, in unproject_cv
+ points = points @ np.linalg.inv(transform).mT
+ AttributeError: 'numpy.ndarray' object has no attribute 'mT'. Did you mean: 'T'?
+ points = points @ np.linalg.inv(transform).mT
+ AttributeError: 'numpy.ndarray' object has no attribute 'mT'. Did you mean: 'T'?
+ points = points @ np.linalg.inv(transform).mT
+ AttributeError: 'numpy.ndarray' object has no attribute 'mT'. Did you mean: 'T'?
+ result = func(*args, **kwargs)
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/utils3d/numpy/transforms.py", line 737, in unproject_cv
+ points = points @ np.linalg.inv(transform).mT
+ AttributeError: 'numpy.ndarray' object has no attribute 'mT'. Did you mean: 'T'?
+ points = points @ np.linalg.inv(transform).mT
+ AttributeError: 'numpy.ndarray' object has no attribute 'mT'. Did you mean: 'T'?
+ slurmstepd-erinyes: error: *** JOB 12095 ON erinyes CANCELLED AT 2026-05-13T02:49:20 ***
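
Editor's note: the lotus traceback in this log dies on `np.linalg.inv(transform).mT` because `ndarray.mT` only exists in NumPy >= 2.0, and this env apparently has an older NumPy. A minimal version-portable sketch, assuming `transform` is a matrix or stack of matrices as in `utils3d.numpy.transforms.unproject_cv`:

```python
import numpy as np

def mat_transpose(a):
    # Transpose the last two axes. Equivalent to `a.mT` on NumPy >= 2.0,
    # but also works on NumPy 1.x, where ndarray has no `mT` attribute.
    return np.swapaxes(a, -1, -2)

# The failing line could then read:
# points = points @ mat_transpose(np.linalg.inv(transform))
```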
sanity_all_12096.log ADDED
@@ -0,0 +1,332 @@
+ ============================================
+ sanity-all started at Wed May 13 02:59:45 AM AEST 2026
+ Config: /home/ywan0794/MoGe/configs/eval/sanity_benchmarks.json
+ TIMESTAMP: 20260513_025945
+ Summary file: sanity_output/_sanity_all_20260513_025945.summary.txt
+ ============================================
+ Wed May 13 02:59:45 2026
+ +-----------------------------------------------------------------------------------------+
+ | NVIDIA-SMI 550.163.01 Driver Version: 550.163.01 CUDA Version: 12.4 |
+ |-----------------------------------------+------------------------+----------------------+
+ | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
+ | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
+ | | | MIG M. |
+ |=========================================+========================+======================|
+ | 0 NVIDIA H100 NVL Off | 00000000:E1:00.0 Off | 0 |
+ | N/A 35C P0 60W / 400W | 14MiB / 95830MiB | 0% Default |
+ | | | Disabled |
+ +-----------------------------------------+------------------------+----------------------+
+
+ +-----------------------------------------------------------------------------------------+
+ | Processes: |
+ | GPU GI CI PID Type Process name GPU Memory |
+ | ID ID Usage |
+ |=========================================================================================|
+ | 0 N/A N/A 4274 G /usr/lib/xorg/Xorg 4MiB |
+ +-----------------------------------------------------------------------------------------+
+
+ ============================================
+ [marigold] starting at Wed May 13 02:59:45 AM AEST 2026 (conda env: marigold)
+ ============================================
+ Active env: marigold
+ CUDA: True NVIDIA H100 NVL
+ The config attributes {'prediction_type': 'depth'} were passed to MarigoldDepthPipeline, but are not expected and will be ignored. Please verify your model_index.json configuration file.
+ Keyword arguments {'prediction_type': 'depth'} are not expected by MarigoldDepthPipeline and will be ignored.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
 
+ [OK] marigold -> sanity_output/sanity_marigold_20260513_025945.json
+
+ ============================================
+ [lotus] starting at Wed May 13 03:01:07 AM AEST 2026 (conda env: lotus)
+ ============================================
+ Active env: lotus
+ CUDA: True NVIDIA H100 NVL
+
+ A module that was compiled using NumPy 1.x cannot be run in
+ NumPy 2.2.6 as it may crash. To support both 1.x and 2.x
+ versions of NumPy, modules must be compiled with NumPy 2.0.
+ Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.
+
+ If you are a user of the module, the easiest solution will be to
+ downgrade to 'numpy<2' or try to upgrade the affected module.
+ We expect that some modules will need time to support NumPy 2.
+
+ Traceback (most recent call last): File "/home/ywan0794/MoGe/moge/scripts/eval_baseline.py", line 165, in <module>
+ main()
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/click/core.py", line 1514, in __call__
+ return self.main(*args, **kwargs)
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/click/core.py", line 1435, in main
+ rv = self.invoke(ctx)
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/click/core.py", line 1298, in invoke
+ return ctx.invoke(self.callback, **ctx.params)
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/click/core.py", line 853, in invoke
+ return callback(*args, **kwargs)
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/click/decorators.py", line 34, in new_func
+ return f(get_current_context(), *args, **kwargs)
+ File "/home/ywan0794/MoGe/moge/scripts/eval_baseline.py", line 25, in main
+ import cv2
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/cv2/__init__.py", line 181, in <module>
+ bootstrap()
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/cv2/__init__.py", line 153, in bootstrap
+ native_module = importlib.import_module("cv2")
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/importlib/__init__.py", line 126, in import_module
+ return _bootstrap._gcd_import(name[level:], package, level)
+ AttributeError: _ARRAY_API not found
+ Traceback (most recent call last):
+ File "/home/ywan0794/MoGe/moge/scripts/eval_baseline.py", line 165, in <module>
+ main()
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/click/core.py", line 1514, in __call__
+ return self.main(*args, **kwargs)
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/click/core.py", line 1435, in main
+ rv = self.invoke(ctx)
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/click/core.py", line 1298, in invoke
+ return ctx.invoke(self.callback, **ctx.params)
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/click/core.py", line 853, in invoke
+ return callback(*args, **kwargs)
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/click/decorators.py", line 34, in new_func
+ return f(get_current_context(), *args, **kwargs)
+ File "/home/ywan0794/MoGe/moge/scripts/eval_baseline.py", line 25, in main
+ import cv2
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/cv2/__init__.py", line 181, in <module>
+ bootstrap()
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/site-packages/cv2/__init__.py", line 153, in bootstrap
+ native_module = importlib.import_module("cv2")
+ File "/home/ywan0794/miniconda3/envs/lotus/lib/python3.10/importlib/__init__.py", line 126, in import_module
+ return _bootstrap._gcd_import(name[level:], package, level)
+ ImportError: numpy.core.multiarray failed to import
+ [FAIL rc=1] lotus
118
+
============================================
[depthmaster] starting at Wed May 13 03:01:12 AM AEST 2026 (conda env: depthmaster)
============================================
Active env: depthmaster
CUDA: True NVIDIA H100 NVL
/home/ywan0794/miniconda3/envs/depthmaster/lib/python3.10/site-packages/diffusers/models/transformers/transformer_2d.py:34: FutureWarning: `Transformer2DModelOutput` is deprecated and will be removed in version 1.0.0. Importing `Transformer2DModelOutput` from `diffusers.models.transformer_2d` is deprecated and this will be removed in a future version. Please use `from diffusers.models.modeling_outputs import Transformer2DModelOutput`, instead.
  deprecate("Transformer2DModelOutput", "1.0.0", deprecation_message)
The config attributes {'default_denoising_steps': 10, 'scheduler': ['diffusers', 'DDIMScheduler']} were passed to DepthMasterPipeline, but are not expected and will be ignored. Please verify your model_index.json configuration file.
Keyword arguments {'default_denoising_steps': 10, 'scheduler': ['diffusers', 'DDIMScheduler']} are not expected by DepthMasterPipeline and will be ignored.

Defaulting to unsafe serialization. Pass `allow_pickle=False` to raise an error instead.
Some weights of the model checkpoint were not used when initializing UNet2DConditionModel:
['fftblock.norm.weight, fftblock.norm.bias, fftblock.conv_f1.weight, fftblock.conv_f1.bias, fftblock.conv_f2.weight, fftblock.conv_f2.bias, fftblock.conv_f4.weight, fftblock.conv_f4.bias, fftblock.conv_f3.weight, fftblock.conv_f3.bias, fftblock.conv_s1.weight, fftblock.conv_s1.bias, fftblock.conv_s2.weight, fftblock.conv_s2.bias, fftblock.fuse.weight, fftblock.fuse.bias']

Traceback (most recent call last):
  File "/home/ywan0794/MoGe/moge/scripts/eval_baseline.py", line 165, in <module>
    main()
  File "/home/ywan0794/miniconda3/envs/depthmaster/lib/python3.10/site-packages/click/core.py", line 1514, in __call__
    return self.main(*args, **kwargs)
  File "/home/ywan0794/miniconda3/envs/depthmaster/lib/python3.10/site-packages/click/core.py", line 1435, in main
    rv = self.invoke(ctx)
  File "/home/ywan0794/miniconda3/envs/depthmaster/lib/python3.10/site-packages/click/core.py", line 1298, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/ywan0794/miniconda3/envs/depthmaster/lib/python3.10/site-packages/click/core.py", line 853, in invoke
    return callback(*args, **kwargs)
  File "/home/ywan0794/miniconda3/envs/depthmaster/lib/python3.10/site-packages/click/decorators.py", line 34, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/home/ywan0794/MoGe/moge/scripts/eval_baseline.py", line 42, in main
    baseline : MGEBaselineInterface = baseline_cls.load.main(ctx.args, standalone_mode=False)
  File "/home/ywan0794/miniconda3/envs/depthmaster/lib/python3.10/site-packages/click/core.py", line 1435, in main
    rv = self.invoke(ctx)
  File "/home/ywan0794/miniconda3/envs/depthmaster/lib/python3.10/site-packages/click/core.py", line 1298, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/ywan0794/miniconda3/envs/depthmaster/lib/python3.10/site-packages/click/core.py", line 853, in invoke
    return callback(*args, **kwargs)
  File "/home/ywan0794/MoGe/baselines/depthmaster.py", line 71, in load
    return Baseline(repo_path, checkpoint, processing_res, half_precision, device)
  File "/home/ywan0794/MoGe/baselines/depthmaster.py", line 45, in __init__
    pipe = DepthMasterPipeline.from_pretrained(checkpoint, variant=variant, torch_dtype=dtype)
  File "/home/ywan0794/miniconda3/envs/depthmaster/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/home/ywan0794/miniconda3/envs/depthmaster/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 972, in from_pretrained
    model = pipeline_class(**init_kwargs)
  File "/home/ywan0794/EvalMDE/DepthMaster/depthmaster/depthmaster_pipeline.py", line 125, in __init__
    self.register_modules(
  File "/home/ywan0794/miniconda3/envs/depthmaster/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 159, in register_modules
    library, class_name = _fetch_class_library_tuple(module)
  File "/home/ywan0794/miniconda3/envs/depthmaster/lib/python3.10/site-packages/diffusers/pipelines/pipeline_loading_utils.py", line 733, in _fetch_class_library_tuple
    not_compiled_module = _unwrap_model(module)
  File "/home/ywan0794/miniconda3/envs/depthmaster/lib/python3.10/site-packages/diffusers/pipelines/pipeline_loading_utils.py", line 236, in _unwrap_model
    from peft import PeftModel
  File "/home/ywan0794/miniconda3/envs/depthmaster/lib/python3.10/site-packages/peft/__init__.py", line 17, in <module>
    from .auto import (
  File "/home/ywan0794/miniconda3/envs/depthmaster/lib/python3.10/site-packages/peft/auto.py", line 32, in <module>
    from .peft_model import (
  File "/home/ywan0794/miniconda3/envs/depthmaster/lib/python3.10/site-packages/peft/peft_model.py", line 38, in <module>
    from transformers import Cache, DynamicCache, EncoderDecoderCache, PreTrainedModel
ImportError: cannot import name 'EncoderDecoderCache' from 'transformers' (/home/ywan0794/miniconda3/envs/depthmaster/lib/python3.10/site-packages/transformers/__init__.py)
[FAIL rc=1] depthmaster

============================================
[ppd] starting at Wed May 13 03:02:57 AM AEST 2026 (conda env: ppd)
============================================
Active env: ppd
CUDA: True NVIDIA H100 NVL
xFormers not available
xFormers not available

[OK] ppd -> sanity_output/sanity_ppd_20260513_025945.json

============================================
[da3_mono] starting at Wed May 13 03:04:28 AM AEST 2026 (conda env: da3)
============================================
Active env: da3
CUDA: True NVIDIA H100 NVL
[WARN ] Dependency `gsplat` is required for rendering 3DGS. Install via: pip install git+https://github.com/nerfstudio-project/gsplat.git@0b4dddf04cb687367602c01196913cde6a743d70
[INFO ] using MLP layer as FFN

[INFO ] Model Forward Pass Done. Time: 1.5514147281646729 seconds
[INFO ] Conversion to Prediction Done. Time: 0.003040313720703125 seconds

[INFO ] Model Forward Pass Done. Time: 0.019557952880859375 seconds
[INFO ] Conversion to Prediction Done. Time: 0.0003116130828857422 seconds
[INFO ] Processed Images Done taking 0.010450124740600586 seconds. Shape: torch.Size([1, 3, 378, 504])
[INFO ] Model Forward Pass Done. Time: 0.019212961196899414 seconds
[INFO ] Conversion to Prediction Done. Time: 0.00028777122497558594 seconds

[INFO ] Model Forward Pass Done. Time: 0.38853001594543457 seconds
[INFO ] Conversion to Prediction Done. Time: 0.002028226852416992 seconds
[INFO ] Processed Images Done taking 0.0074176788330078125 seconds. Shape: torch.Size([1, 3, 378, 504])
[INFO ] Model Forward Pass Done. Time: 0.019327163696289062 seconds
[INFO ] Conversion to Prediction Done. Time: 0.0002503395080566406 seconds

[OK] da3_mono -> sanity_output/sanity_da3_mono_20260513_025945.json

============================================
[fe2e] starting at Wed May 13 03:04:51 AM AEST 2026 (conda env: fe2e)
============================================
Active env: fe2e
CUDA: True NVIDIA H100 NVL
[INFO] prompt_type=empty, skipping Qwen model loading
create LoRA network from weights
train all blocks only
create LoRA for DIT all blocks: 304 modules.
enable LoRA for U-Net
weights are merged

Traceback (most recent call last):
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
Thread-13 (loop):
Traceback (most recent call last):
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/threading.py", line 953, in run
Exception in thread Thread-15 (loop):
Traceback (most recent call last):
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self._target(*self._args, **self._kwargs)
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/site-packages/pipeline/components.py", line 218, in loop
Exception in thread Thread-14 (loop):
Traceback (most recent call last):
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/threading.py", line 953, in run
    self.run()
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/site-packages/pipeline/components.py", line 218, in loop
    self.run()
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/site-packages/pipeline/components.py", line 218, in loop
    self._target(*self._args, **self._kwargs)
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/site-packages/pipeline/components.py", line 218, in loop
    result = self.work(item)
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/site-packages/pipeline/components.py", line 208, in work
    result = self.work(item)
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/site-packages/pipeline/components.py", line 208, in work
    result = self.work(item)
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/site-packages/pipeline/components.py", line 208, in work
    return self.work_fn(*args, **kwargs)
  File "/home/ywan0794/MoGe/moge/test/dataloader.py", line 120, in _process_instance
    return self.work_fn(*args, **kwargs)
  File "/home/ywan0794/MoGe/moge/test/dataloader.py", line 120, in _process_instance
    direction = utils3d.np.unproject_cv(np.array([[cu, cv]], dtype=np.float32), np.array([1.0], dtype=np.float32), intrinsics=intrinsics)[0]
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/site-packages/utils3d/helpers.py", line 16, in wrapper
    result = self.work(item)
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/site-packages/pipeline/components.py", line 208, in work
    return self.work_fn(*args, **kwargs)
  File "/home/ywan0794/MoGe/moge/test/dataloader.py", line 120, in _process_instance
    direction = utils3d.np.unproject_cv(np.array([[cu, cv]], dtype=np.float32), np.array([1.0], dtype=np.float32), intrinsics=intrinsics)[0]
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/site-packages/utils3d/helpers.py", line 16, in wrapper
    return self.work_fn(*args, **kwargs)
  File "/home/ywan0794/MoGe/moge/test/dataloader.py", line 120, in _process_instance
    direction = utils3d.np.unproject_cv(np.array([[cu, cv]], dtype=np.float32), np.array([1.0], dtype=np.float32), intrinsics=intrinsics)[0]
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/site-packages/utils3d/helpers.py", line 16, in wrapper
    return fn(*args, **kwargs)
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/site-packages/utils3d/numpy/helpers.py", line 90, in wrapper
    result = func(*args, **kwargs)
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/site-packages/utils3d/numpy/transforms.py", line 737, in unproject_cv
    return fn(*args, **kwargs)
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/site-packages/utils3d/numpy/helpers.py", line 90, in wrapper
    return fn(*args, **kwargs)
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/site-packages/utils3d/numpy/helpers.py", line 90, in wrapper
    direction = utils3d.np.unproject_cv(np.array([[cu, cv]], dtype=np.float32), np.array([1.0], dtype=np.float32), intrinsics=intrinsics)[0]
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/site-packages/utils3d/helpers.py", line 16, in wrapper
    return fn(*args, **kwargs)
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/site-packages/utils3d/numpy/helpers.py", line 90, in wrapper
    result = func(*args, **kwargs)
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/site-packages/utils3d/numpy/transforms.py", line 737, in unproject_cv
    result = func(*args, **kwargs)
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/site-packages/utils3d/numpy/transforms.py", line 737, in unproject_cv
    points = points @ np.linalg.inv(transform).mT
AttributeError: 'numpy.ndarray' object has no attribute 'mT'. Did you mean: 'T'?
    result = func(*args, **kwargs)
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/site-packages/utils3d/numpy/transforms.py", line 737, in unproject_cv
    points = points @ np.linalg.inv(transform).mT
AttributeError: 'numpy.ndarray' object has no attribute 'mT'. Did you mean: 'T'?
    points = points @ np.linalg.inv(transform).mT
AttributeError: 'numpy.ndarray' object has no attribute 'mT'. Did you mean: 'T'?
    points = points @ np.linalg.inv(transform).mT
AttributeError: 'numpy.ndarray' object has no attribute 'mT'. Did you mean: 'T'?
Exception in thread Thread-16 (loop):
Traceback (most recent call last):
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/site-packages/pipeline/components.py", line 218, in loop
    result = self.work(item)
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/site-packages/pipeline/components.py", line 208, in work
    return self.work_fn(*args, **kwargs)
  File "/home/ywan0794/MoGe/moge/test/dataloader.py", line 120, in _process_instance
    direction = utils3d.np.unproject_cv(np.array([[cu, cv]], dtype=np.float32), np.array([1.0], dtype=np.float32), intrinsics=intrinsics)[0]
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/site-packages/utils3d/helpers.py", line 16, in wrapper
    return fn(*args, **kwargs)
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/site-packages/utils3d/numpy/helpers.py", line 90, in wrapper
    result = func(*args, **kwargs)
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/site-packages/utils3d/numpy/transforms.py", line 737, in unproject_cv
    points = points @ np.linalg.inv(transform).mT
AttributeError: 'numpy.ndarray' object has no attribute 'mT'. Did you mean: 'T'?
slurmstepd-erinyes: error: *** JOB 12096 ON erinyes CANCELLED AT 2026-05-13T03:20:24 ***
sanity_all_12097.log ADDED
@@ -0,0 +1,209 @@
============================================
sanity-all started at Wed May 13 03:20:26 AM AEST 2026
Config: /home/ywan0794/MoGe/configs/eval/sanity_benchmarks.json
TIMESTAMP: 20260513_032026
Summary file: sanity_output/_sanity_all_20260513_032026.summary.txt
============================================
Wed May 13 03:20:26 2026
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.163.01 Driver Version: 550.163.01 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA H100 NVL Off | 00000000:E1:00.0 Off | 0 |
| N/A 38C P0 88W / 400W | 14MiB / 95830MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 4274 G /usr/lib/xorg/Xorg 4MiB |
+-----------------------------------------------------------------------------------------+

============================================
[marigold] starting at Wed May 13 03:20:27 AM AEST 2026 (conda env: marigold)
============================================
Active env: marigold
CUDA: True NVIDIA H100 NVL
The config attributes {'prediction_type': 'depth'} were passed to MarigoldDepthPipeline, but are not expected and will be ignored. Please verify your model_index.json configuration file.
Keyword arguments {'prediction_type': 'depth'} are not expected by MarigoldDepthPipeline and will be ignored.

[OK] marigold -> sanity_output/sanity_marigold_20260513_032026.json

============================================
[lotus] starting at Wed May 13 03:20:47 AM AEST 2026 (conda env: lotus)
============================================
Active env: lotus
CUDA: True NVIDIA H100 NVL

[OK] lotus -> sanity_output/sanity_lotus_20260513_032026.json

============================================
[depthmaster] starting at Wed May 13 03:22:05 AM AEST 2026 (conda env: depthmaster)
============================================
Active env: depthmaster
CUDA: True NVIDIA H100 NVL
The config attributes {'default_denoising_steps': 10, 'scheduler': ['diffusers', 'DDIMScheduler']} were passed to DepthMasterPipeline, but are not expected and will be ignored. Please verify your model_index.json configuration file.
Keyword arguments {'default_denoising_steps': 10, 'scheduler': ['diffusers', 'DDIMScheduler']} are not expected by DepthMasterPipeline and will be ignored.

Defaulting to unsafe serialization. Pass `allow_pickle=False` to raise an error instead.
Some weights of the model checkpoint at /home/ywan0794/EvalMDE/DepthMaster/ckpt/eval/unet were not used when initializing UNet2DConditionModel:
['fftblock.conv_f1.weight, fftblock.conv_s2.weight, fftblock.conv_f2.bias, fftblock.conv_f3.bias, fftblock.conv_f4.bias, fftblock.conv_s2.bias, fftblock.fuse.bias, fftblock.conv_f3.weight, fftblock.conv_f2.weight, fftblock.norm.weight, fftblock.conv_s1.bias, fftblock.fuse.weight, fftblock.conv_s1.weight, fftblock.conv_f4.weight, fftblock.conv_f1.bias, fftblock.norm.bias']

Expected types for unet: (<class 'depthmaster.modules.unet_2d_condition_s2.UNet2DConditionModel'>,), got <class 'diffusers.models.unets.unet_2d_condition.UNet2DConditionModel'>.
An error occurred while trying to fetch /home/ywan0794/EvalMDE/DepthMaster/ckpt/eval/unet: Error no file named diffusion_pytorch_model.safetensors found in directory /home/ywan0794/EvalMDE/DepthMaster/ckpt/eval/unet.
Defaulting to unsafe serialization. Pass `allow_pickle=False` to raise an error instead.

[OK] depthmaster -> sanity_output/sanity_depthmaster_20260513_032026.json

============================================
[ppd] starting at Wed May 13 03:23:33 AM AEST 2026 (conda env: ppd)
============================================
Active env: ppd
CUDA: True NVIDIA H100 NVL
xFormers not available
xFormers not available

[OK] ppd -> sanity_output/sanity_ppd_20260513_032026.json

============================================
[da3_mono] starting at Wed May 13 03:24:12 AM AEST 2026 (conda env: da3)
============================================
Active env: da3
CUDA: True NVIDIA H100 NVL
[WARN ] Dependency `gsplat` is required for rendering 3DGS. Install via: pip install git+https://github.com/nerfstudio-project/gsplat.git@0b4dddf04cb687367602c01196913cde6a743d70
[INFO ] using MLP layer as FFN

[INFO ] Model Forward Pass Done. Time: 1.4963836669921875 seconds
[INFO ] Conversion to Prediction Done. Time: 0.0010690689086914062 seconds

[INFO ] Model Forward Pass Done. Time: 0.019660472869873047 seconds
[INFO ] Conversion to Prediction Done. Time: 0.00032258033752441406 seconds
[INFO ] Processed Images Done taking 0.01800370216369629 seconds. Shape: torch.Size([1, 3, 378, 504])
[INFO ] Model Forward Pass Done. Time: 0.01959395408630371 seconds
[INFO ] Conversion to Prediction Done. Time: 0.0003299713134765625 seconds

[INFO ] Model Forward Pass Done. Time: 0.019454002380371094 seconds
[INFO ] Conversion to Prediction Done. Time: 0.0003523826599121094 seconds
[INFO ] Processed Images Done taking 0.012474536895751953 seconds. Shape: torch.Size([1, 3, 378, 504])
[INFO ] Model Forward Pass Done. Time: 0.019382238388061523 seconds
[INFO ] Conversion to Prediction Done. Time: 0.0003466606140136719 seconds

[OK] da3_mono -> sanity_output/sanity_da3_mono_20260513_032026.json

============================================
[fe2e] starting at Wed May 13 03:25:00 AM AEST 2026 (conda env: fe2e)
============================================
Active env: fe2e
CUDA: True NVIDIA H100 NVL
[INFO] prompt_type=empty, skipping Qwen model loading
create LoRA network from weights
train all blocks only
create LoRA for DIT all blocks: 304 modules.
enable LoRA for U-Net
weights are merged

Traceback (most recent call last):
  File "/home/ywan0794/MoGe/moge/scripts/eval_baseline.py", line 165, in <module>
    main()
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/site-packages/click/core.py", line 1514, in __call__
    return self.main(*args, **kwargs)
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/site-packages/click/core.py", line 1435, in main
    rv = self.invoke(ctx)
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/site-packages/click/core.py", line 1298, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/site-packages/click/core.py", line 853, in invoke
    return callback(*args, **kwargs)
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/site-packages/click/decorators.py", line 34, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/home/ywan0794/MoGe/moge/scripts/eval_baseline.py", line 70, in main
    pred = baseline.infer_for_evaluation(image)
  File "/home/ywan0794/MoGe/moge/test/baseline.py", line 43, in infer_for_evaluation
    return self.infer(image, intrinsics)
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/ywan0794/MoGe/baselines/fe2e.py", line 163, in infer
    images_list, Lpred, Rpred = self.image_gen.generate_image(
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/ywan0794/EvalMDE/FE2E/infer/inference.py", line 475, in generate_image
    Lpred,Rpred = self.denoise(**inputs,cfg_guidance=cfg_guidance,timesteps=timesteps,show_progress=show_progress,timesteps_truncate=1.0,)  # images include the ref image
  File "/home/ywan0794/EvalMDE/FE2E/infer/inference.py", line 270, in denoise
    pred, feat = self.dit(
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ywan0794/EvalMDE/FE2E/modules/model_edit.py", line 197, in forward
    img, txt = block(img=img, txt=txt, vec=vec, pe=pe)
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/ywan0794/miniconda3/envs/fe2e/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ywan0794/EvalMDE/FE2E/modules/layers.py", line 639, in forward
    return self._forward(img, txt, vec, pe)
  File "/home/ywan0794/EvalMDE/FE2E/modules/layers.py", line 600, in _forward
    attn = attention_after_rope(q, k, v, pe=pe)
  File "/home/ywan0794/EvalMDE/FE2E/modules/layers.py", line 403, in attention_after_rope
    x = attention(q, k, v, mode="flash")
  File "/home/ywan0794/EvalMDE/FE2E/modules/attention.py", line 82, in attention
    assert flash_attn_func is not None, "flash_attn_func is not defined"
AssertionError: flash_attn_func is not defined
[FAIL rc=1] fe2e

============================================
sanity-all finished at Wed May 13 03:25:36 AM AEST 2026
============================================
=== Summary ===
[OK] marigold -> sanity_output/sanity_marigold_20260513_032026.json
[OK] lotus -> sanity_output/sanity_lotus_20260513_032026.json
[OK] depthmaster -> sanity_output/sanity_depthmaster_20260513_032026.json
[OK] ppd -> sanity_output/sanity_ppd_20260513_032026.json
[OK] da3_mono -> sanity_output/sanity_da3_mono_20260513_032026.json
[FAIL rc=1] fe2e
sanity_all_12098.log ADDED
@@ -0,0 +1,151 @@
============================================
sanity-all started at Wed May 13 03:56:23 AM AEST 2026
Config: /home/ywan0794/MoGe/configs/eval/sanity_benchmarks.json
TIMESTAMP: 20260513_035623
Summary file: sanity_output/_sanity_all_20260513_035623.summary.txt
============================================
Wed May 13 03:56:24 2026
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.163.01 Driver Version: 550.163.01 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA H100 NVL Off | 00000000:E1:00.0 Off | 0 |
| N/A 35C P0 60W / 400W | 14MiB / 95830MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 4274 G /usr/lib/xorg/Xorg 4MiB |
+-----------------------------------------------------------------------------------------+

============================================
[marigold] starting at Wed May 13 03:56:24 AM AEST 2026 (conda env: marigold)
============================================
Active env: marigold
CUDA: True NVIDIA H100 NVL
The config attributes {'prediction_type': 'depth'} were passed to MarigoldDepthPipeline, but are not expected and will be ignored. Please verify your model_index.json configuration file.
Keyword arguments {'prediction_type': 'depth'} are not expected by MarigoldDepthPipeline and will be ignored.

[OK] marigold -> sanity_output/sanity_marigold_20260513_035623.json

============================================
[lotus] starting at Wed May 13 03:56:58 AM AEST 2026 (conda env: lotus)
============================================
Active env: lotus
CUDA: True NVIDIA H100 NVL

[OK] lotus -> sanity_output/sanity_lotus_20260513_035623.json

============================================
[depthmaster] starting at Wed May 13 03:57:21 AM AEST 2026 (conda env: depthmaster)
============================================
Active env: depthmaster
CUDA: True NVIDIA H100 NVL
The config attributes {'default_denoising_steps': 10, 'scheduler': ['diffusers', 'DDIMScheduler']} were passed to DepthMasterPipeline, but are not expected and will be ignored. Please verify your model_index.json configuration file.
Keyword arguments {'default_denoising_steps': 10, 'scheduler': ['diffusers', 'DDIMScheduler']} are not expected by DepthMasterPipeline and will be ignored.

Defaulting to unsafe serialization. Pass `allow_pickle=False` to raise an error instead.
Some weights of the model checkpoint at /home/ywan0794/EvalMDE/DepthMaster/ckpt/eval/unet were not used when initializing UNet2DConditionModel:
['fftblock.norm.weight, fftblock.conv_f2.bias, fftblock.conv_f4.weight, fftblock.conv_s2.bias, fftblock.conv_f1.bias, fftblock.conv_f2.weight, fftblock.conv_f3.bias, fftblock.fuse.bias, fftblock.conv_f4.bias, fftblock.norm.bias, fftblock.conv_f3.weight, fftblock.fuse.weight, fftblock.conv_s1.weight, fftblock.conv_f1.weight, fftblock.conv_s1.bias, fftblock.conv_s2.weight']

Expected types for unet: (<class 'depthmaster.modules.unet_2d_condition_s2.UNet2DConditionModel'>,), got <class 'diffusers.models.unets.unet_2d_condition.UNet2DConditionModel'>.
An error occurred while trying to fetch /home/ywan0794/EvalMDE/DepthMaster/ckpt/eval/unet: Error no file named diffusion_pytorch_model.safetensors found in directory /home/ywan0794/EvalMDE/DepthMaster/ckpt/eval/unet.
Defaulting to unsafe serialization. Pass `allow_pickle=False` to raise an error instead.

[OK] depthmaster -> sanity_output/sanity_depthmaster_20260513_035623.json
93
+
94
+ ============================================
95
+ [ppd] starting at Wed May 13 03:58:14 AM AEST 2026 (conda env: ppd)
96
+ ============================================
97
+ Active env: ppd
98
+ CUDA: True NVIDIA H100 NVL
99
+ xFormers not available
100
+ xFormers not available
101
+
102
+
103
+
104
+
105
+
106
+
107
+
108
+
109
  
110
+ [OK] ppd -> sanity_output/sanity_ppd_20260513_035623.json
111
+
112
+ ============================================
113
+ [da3_mono] starting at Wed May 13 03:59:10 AM AEST 2026 (conda env: da3)
114
+ ============================================
115
+ Active env: da3
116
+ CUDA: True NVIDIA H100 NVL
117
+ [WARN ] Dependency `gsplat` is required for rendering 3DGS. Install via: pip install git+https://github.com/nerfstudio-project/gsplat.git@0b4dddf04cb687367602c01196913cde6a743d70
118
+
119
+
120
+
121
+
122
+
123
  
124
+ [OK] da3_mono -> sanity_output/sanity_da3_mono_20260513_035623.json
125
+
126
+ ============================================
127
+ [fe2e] starting at Wed May 13 03:59:26 AM AEST 2026 (conda env: fe2e)
128
+ ============================================
129
+ Active env: fe2e
130
+ CUDA: True NVIDIA H100 NVL
131
+ [INFO] prompt_type=empty, 跳过Qwen模型加载
132
+ create LoRA network from weights
133
+ train all blocks only
134
+ create LoRA for DIT all blocks: 304 modules.
135
+ enable LoRA for U-Net
136
+ weights are merged
137
+
138
+
139
+
140
+
141
+
142
+
143
+
144
+
145
  
146
+ [OK] fe2e -> sanity_output/sanity_fe2e_20260513_035623.json
147
+
148
+ ============================================
149
+ sanity-all finished at Wed May 13 04:00:03 AM AEST 2026
150
+ ============================================
151
+ === Summary ===
152
+ [OK] marigold -> sanity_output/sanity_marigold_20260513_035623.json
153
+ [OK] lotus -> sanity_output/sanity_lotus_20260513_035623.json
154
+ [OK] depthmaster -> sanity_output/sanity_depthmaster_20260513_035623.json
155
+ [OK] ppd -> sanity_output/sanity_ppd_20260513_035623.json
156
+ [OK] da3_mono -> sanity_output/sanity_da3_mono_20260513_035623.json
157
+ [OK] fe2e -> sanity_output/sanity_fe2e_20260513_035623.json
sanity_all_12104.log ADDED
@@ -0,0 +1,177 @@
+ ============================================
+ sanity-all started at Thu May 14 12:15:41 AM AEST 2026
+ Config: /home/ywan0794/MoGe/configs/eval/sanity_benchmarks.json
+ TIMESTAMP: 20260514_001541
+ Summary file: sanity_output/_sanity_all_20260514_001541.summary.txt
+ ============================================
+ Thu May 14 00:15:41 2026
+ +-----------------------------------------------------------------------------------------+
+ | NVIDIA-SMI 550.163.01 Driver Version: 550.163.01 CUDA Version: 12.4 |
+ |-----------------------------------------+------------------------+----------------------+
+ | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
+ | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
+ | | | MIG M. |
+ |=========================================+========================+======================|
+ | 0 NVIDIA H100 NVL Off | 00000000:E1:00.0 Off | 0 |
+ | N/A 39C P0 93W / 400W | 14MiB / 95830MiB | 0% Default |
+ | | | Disabled |
+ +-----------------------------------------+------------------------+----------------------+
+
+ +-----------------------------------------------------------------------------------------+
+ | Processes: |
+ | GPU GI CI PID Type Process name GPU Memory |
+ | ID ID Usage |
+ |=========================================================================================|
+ | 0 N/A N/A 4274 G /usr/lib/xorg/Xorg 4MiB |
+ +-----------------------------------------------------------------------------------------+
+
+ ============================================
+ [marigold] starting at Thu May 14 12:15:41 AM AEST 2026 (conda env: marigold)
+ ============================================
+ Active env: marigold
+ CUDA: True NVIDIA H100 NVL
+
+ The config attributes {'prediction_type': 'depth'} were passed to MarigoldDepthPipeline, but are not expected and will be ignored. Please verify your model_index.json configuration file.
+ Keyword arguments {'prediction_type': 'depth'} are not expected by MarigoldDepthPipeline and will be ignored.
+
+ Traceback (most recent call last):
+ File "/home/ywan0794/MoGe/moge/scripts/eval_baseline.py", line 165, in <module>
+ main()
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/click/core.py", line 1514, in __call__
+ return self.main(*args, **kwargs)
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/click/core.py", line 1435, in main
+ rv = self.invoke(ctx)
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/click/core.py", line 1298, in invoke
+ return ctx.invoke(self.callback, **ctx.params)
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/click/core.py", line 853, in invoke
+ return callback(*args, **kwargs)
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/click/decorators.py", line 34, in new_func
+ return f(get_current_context(), *args, **kwargs)
+ File "/home/ywan0794/MoGe/moge/scripts/eval_baseline.py", line 70, in main
+ pred = baseline.infer_for_evaluation(image)
+ File "/home/ywan0794/MoGe/moge/test/baseline.py", line 43, in infer_for_evaluation
+ return self.infer(image, intrinsics)
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
+ return func(*args, **kwargs)
+ File "/home/ywan0794/MoGe/baselines/marigold.py", line 103, in infer
+ out = self.pipe(pil, **kwargs)
+ File "/home/ywan0794/miniconda3/envs/marigold/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
+ return func(*args, **kwargs)
+ TypeError: MarigoldDepthPipeline.__call__() got an unexpected keyword argument 'denoise_steps'
+ [FAIL rc=1] marigold
+
+ ============================================
+ [lotus] starting at Thu May 14 12:20:01 AM AEST 2026 (conda env: lotus)
+ ============================================
+ Active env: lotus
+ CUDA: True NVIDIA H100 NVL
+
+ [OK] lotus -> sanity_output/sanity_lotus_20260514_001541.json
+
+ ============================================
+ [depthmaster] starting at Thu May 14 12:21:26 AM AEST 2026 (conda env: depthmaster)
+ ============================================
+ Active env: depthmaster
+ CUDA: True NVIDIA H100 NVL
+ The config attributes {'default_denoising_steps': 10, 'scheduler': ['diffusers', 'DDIMScheduler']} were passed to DepthMasterPipeline, but are not expected and will be ignored. Please verify your model_index.json configuration file.
+ Keyword arguments {'default_denoising_steps': 10, 'scheduler': ['diffusers', 'DDIMScheduler']} are not expected by DepthMasterPipeline and will be ignored.
+
+ Defaulting to unsafe serialization. Pass `allow_pickle=False` to raise an error instead.
+ Some weights of the model checkpoint at /home/ywan0794/EvalMDE/DepthMaster/ckpt/eval/unet were not used when initializing UNet2DConditionModel:
+ ['fftblock.conv_f3.bias, fftblock.conv_f1.weight, fftblock.fuse.weight, fftblock.conv_s1.bias, fftblock.conv_f4.weight, fftblock.norm.weight, fftblock.conv_s1.weight, fftblock.conv_f4.bias, fftblock.conv_f3.weight, fftblock.conv_f2.weight, fftblock.norm.bias, fftblock.conv_f1.bias, fftblock.fuse.bias, fftblock.conv_s2.weight, fftblock.conv_f2.bias, fftblock.conv_s2.bias']
+
+ Expected types for unet: (<class 'depthmaster.modules.unet_2d_condition_s2.UNet2DConditionModel'>,), got <class 'diffusers.models.unets.unet_2d_condition.UNet2DConditionModel'>.
+ An error occurred while trying to fetch /home/ywan0794/EvalMDE/DepthMaster/ckpt/eval/unet: Error no file named diffusion_pytorch_model.safetensors found in directory /home/ywan0794/EvalMDE/DepthMaster/ckpt/eval/unet.
+ Defaulting to unsafe serialization. Pass `allow_pickle=False` to raise an error instead.
+
+ [OK] depthmaster -> sanity_output/sanity_depthmaster_20260514_001541.json
+
+ ============================================
+ [ppd] starting at Thu May 14 12:23:03 AM AEST 2026 (conda env: ppd)
+ ============================================
+ Active env: ppd
+ CUDA: True NVIDIA H100 NVL
+ xFormers not available
+ xFormers not available
+
+ [OK] ppd -> sanity_output/sanity_ppd_20260514_001541.json
+
+ ============================================
+ [da3_mono] starting at Thu May 14 12:24:16 AM AEST 2026 (conda env: da3)
+ ============================================
+ Active env: da3
+ CUDA: True NVIDIA H100 NVL
+ [WARN ] Dependency `gsplat` is required for rendering 3DGS. Install via: pip install git+https://github.com/nerfstudio-project/gsplat.git@0b4dddf04cb687367602c01196913cde6a743d70
+
+ [OK] da3_mono -> sanity_output/sanity_da3_mono_20260514_001541.json
+
+ ============================================
+ [fe2e] starting at Thu May 14 12:25:14 AM AEST 2026 (conda env: fe2e)
+ ============================================
+ Active env: fe2e
+ CUDA: True NVIDIA H100 NVL
+ [INFO] prompt_type=empty, skipping Qwen model load
+ create LoRA network from weights
+ train all blocks only
+ create LoRA for DIT all blocks: 304 modules.
+ enable LoRA for U-Net
+ weights are merged
+
+ [OK] fe2e -> sanity_output/sanity_fe2e_20260514_001541.json
+
+ ============================================
+ sanity-all finished at Thu May 14 12:29:27 AM AEST 2026
+ ============================================
+ === Summary ===
+ [FAIL rc=1] marigold
+ [OK] lotus -> sanity_output/sanity_lotus_20260514_001541.json
+ [OK] depthmaster -> sanity_output/sanity_depthmaster_20260514_001541.json
+ [OK] ppd -> sanity_output/sanity_ppd_20260514_001541.json
+ [OK] da3_mono -> sanity_output/sanity_da3_mono_20260514_001541.json
+ [OK] fe2e -> sanity_output/sanity_fe2e_20260514_001541.json
sanity_all_12107.log ADDED
@@ -0,0 +1,185 @@
+ ============================================
+ sanity-all started at Thu May 14 12:34:57 AM AEST 2026
+ Config: /home/ywan0794/MoGe/configs/eval/sanity_benchmarks.json
+ TIMESTAMP: 20260514_003457
+ Summary file: sanity_output/_sanity_all_20260514_003457.summary.txt
+ ============================================
+ Thu May 14 00:34:57 2026
+ +-----------------------------------------------------------------------------------------+
+ | NVIDIA-SMI 550.163.01 Driver Version: 550.163.01 CUDA Version: 12.4 |
+ |-----------------------------------------+------------------------+----------------------+
+ | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
+ | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
+ | | | MIG M. |
+ |=========================================+========================+======================|
+ | 0 NVIDIA H100 NVL Off | 00000000:E1:00.0 Off | 0 |
+ | N/A 36C P0 60W / 400W | 14MiB / 95830MiB | 0% Default |
+ | | | Disabled |
+ +-----------------------------------------+------------------------+----------------------+
+
+ +-----------------------------------------------------------------------------------------+
+ | Processes: |
+ | GPU GI CI PID Type Process name GPU Memory |
+ | ID ID Usage |
+ |=========================================================================================|
+ | 0 N/A N/A 4274 G /usr/lib/xorg/Xorg 4MiB |
+ +-----------------------------------------------------------------------------------------+
+
+ ============================================
+ [depth_pro] starting at Thu May 14 12:34:57 AM AEST 2026 (conda env: depth-pro)
+ ============================================
+ Active env: depth-pro
+ CUDA: True NVIDIA H100 NVL
+
+ [OK] depth_pro -> sanity_output/sanity_depth_pro_20260514_003457.json
+
+ ============================================
+ [marigold] starting at Thu May 14 12:36:47 AM AEST 2026 (conda env: marigold)
+ ============================================
+ Active env: marigold
+ CUDA: True NVIDIA H100 NVL
+ The config attributes {'prediction_type': 'depth'} were passed to MarigoldDepthPipeline, but are not expected and will be ignored. Please verify your model_index.json configuration file.
+ Keyword arguments {'prediction_type': 'depth'} are not expected by MarigoldDepthPipeline and will be ignored.
+
+ WARNING:root:The loaded `DDIMScheduler` is configured with `rescale_betas_zero_snr=False`; the recommended setting is True. Consider using `prs-eth/marigold-depth-v1-1` for the best experience.
+ WARNING:root:The loaded `DDIMScheduler` is configured with `rescale_betas_zero_snr=False`; the recommended setting is True. Consider using `prs-eth/marigold-depth-v1-1` for the best experience.
+ WARNING:root:The loaded `DDIMScheduler` is configured with `rescale_betas_zero_snr=False`; the recommended setting is True. Consider using `prs-eth/marigold-depth-v1-1` for the best experience.
+ WARNING:root:The loaded `DDIMScheduler` is configured with `rescale_betas_zero_snr=False`; the recommended setting is True. Consider using `prs-eth/marigold-depth-v1-1` for the best experience.
+ WARNING:root:The loaded `DDIMScheduler` is configured with `rescale_betas_zero_snr=False`; the recommended setting is True. Consider using `prs-eth/marigold-depth-v1-1` for the best experience.
+
+ [OK] marigold -> sanity_output/sanity_marigold_20260514_003457.json
+
+ ============================================
+ [lotus] starting at Thu May 14 12:39:11 AM AEST 2026 (conda env: lotus)
+ ============================================
+ Active env: lotus
+ CUDA: True NVIDIA H100 NVL
+
+ [OK] lotus -> sanity_output/sanity_lotus_20260514_003457.json
+
+ ============================================
+ [depthmaster] starting at Thu May 14 12:39:30 AM AEST 2026 (conda env: depthmaster)
+ ============================================
+ Active env: depthmaster
+ CUDA: True NVIDIA H100 NVL
+ The config attributes {'default_denoising_steps': 10, 'scheduler': ['diffusers', 'DDIMScheduler']} were passed to DepthMasterPipeline, but are not expected and will be ignored. Please verify your model_index.json configuration file.
+ Keyword arguments {'default_denoising_steps': 10, 'scheduler': ['diffusers', 'DDIMScheduler']} are not expected by DepthMasterPipeline and will be ignored.
+
+ Defaulting to unsafe serialization. Pass `allow_pickle=False` to raise an error instead.
+ Some weights of the model checkpoint at /home/ywan0794/EvalMDE/DepthMaster/ckpt/eval/unet were not used when initializing UNet2DConditionModel:
+ ['fftblock.conv_f1.bias, fftblock.conv_s1.bias, fftblock.conv_f3.weight, fftblock.conv_f1.weight, fftblock.conv_f4.weight, fftblock.norm.bias, fftblock.conv_f3.bias, fftblock.fuse.weight, fftblock.conv_s2.bias, fftblock.conv_f2.weight, fftblock.conv_f4.bias, fftblock.conv_f2.bias, fftblock.norm.weight, fftblock.fuse.bias, fftblock.conv_s1.weight, fftblock.conv_s2.weight']
+
+ Expected types for unet: (<class 'depthmaster.modules.unet_2d_condition_s2.UNet2DConditionModel'>,), got <class 'diffusers.models.unets.unet_2d_condition.UNet2DConditionModel'>.
+ An error occurred while trying to fetch /home/ywan0794/EvalMDE/DepthMaster/ckpt/eval/unet: Error no file named diffusion_pytorch_model.safetensors found in directory /home/ywan0794/EvalMDE/DepthMaster/ckpt/eval/unet.
+ Defaulting to unsafe serialization. Pass `allow_pickle=False` to raise an error instead.
+
+ [OK] depthmaster -> sanity_output/sanity_depthmaster_20260514_003457.json
+
+ ============================================
+ [ppd] starting at Thu May 14 12:41:08 AM AEST 2026 (conda env: ppd)
+ ============================================
+ Active env: ppd
+ CUDA: True NVIDIA H100 NVL
+ xFormers not available
+ xFormers not available
+
+ [OK] ppd -> sanity_output/sanity_ppd_20260514_003457.json
+
+ ============================================
+ [da3_mono] starting at Thu May 14 12:42:38 AM AEST 2026 (conda env: da3)
+ ============================================
+ Active env: da3
+ CUDA: True NVIDIA H100 NVL
+ [WARN ] Dependency `gsplat` is required for rendering 3DGS. Install via: pip install git+https://github.com/nerfstudio-project/gsplat.git@0b4dddf04cb687367602c01196913cde6a743d70
+
+ [OK] da3_mono -> sanity_output/sanity_da3_mono_20260514_003457.json
+
+ ============================================
+ [fe2e] starting at Thu May 14 12:43:07 AM AEST 2026 (conda env: fe2e)
+ ============================================
+ Active env: fe2e
+ CUDA: True NVIDIA H100 NVL
+ [INFO] prompt_type=empty, skipping Qwen model load
+ create LoRA network from weights
+ train all blocks only
+ create LoRA for DIT all blocks: 304 modules.
+ enable LoRA for U-Net
+ weights are merged
+
+ [OK] fe2e -> sanity_output/sanity_fe2e_20260514_003457.json
+
+ ============================================
+ sanity-all finished at Thu May 14 12:45:52 AM AEST 2026
+ ============================================
+ === Summary ===
+ [OK] depth_pro -> sanity_output/sanity_depth_pro_20260514_003457.json
+ [OK] marigold -> sanity_output/sanity_marigold_20260514_003457.json
+ [OK] lotus -> sanity_output/sanity_lotus_20260514_003457.json
+ [OK] depthmaster -> sanity_output/sanity_depthmaster_20260514_003457.json
+ [OK] ppd -> sanity_output/sanity_ppd_20260514_003457.json
+ [OK] da3_mono -> sanity_output/sanity_da3_mono_20260514_003457.json
+ [OK] fe2e -> sanity_output/sanity_fe2e_20260514_003457.json
sanity_all_12109.log ADDED
@@ -0,0 +1,186 @@
+ ============================================
+ sanity-all started at Thu May 14 04:40:31 AM AEST 2026
+ Config: /home/ywan0794/MoGe/configs/eval/sanity_benchmarks.json
+ TIMESTAMP: 20260514_044031
+ Summary file: sanity_output/_sanity_all_20260514_044031.summary.txt
+ ============================================
+ Thu May 14 04:40:31 2026
+ +-----------------------------------------------------------------------------------------+
+ | NVIDIA-SMI 550.163.01 Driver Version: 550.163.01 CUDA Version: 12.4 |
+ |-----------------------------------------+------------------------+----------------------+
+ | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
+ | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
+ | | | MIG M. |
+ |=========================================+========================+======================|
+ | 0 NVIDIA H100 NVL Off | 00000000:E1:00.0 Off | 0 |
+ | N/A 36C P0 60W / 400W | 14MiB / 95830MiB | 0% Default |
+ | | | Disabled |
+ +-----------------------------------------+------------------------+----------------------+
+
+ +-----------------------------------------------------------------------------------------+
+ | Processes: |
+ | GPU GI CI PID Type Process name GPU Memory |
+ | ID ID Usage |
+ |=========================================================================================|
+ | 0 N/A N/A 4274 G /usr/lib/xorg/Xorg 4MiB |
+ +-----------------------------------------------------------------------------------------+
+
+ ============================================
+ [depth_pro] starting at Thu May 14 04:40:31 AM AEST 2026 (conda env: depth-pro)
+ ============================================
+ Active env: depth-pro
+ CUDA: True NVIDIA H100 NVL
+
+ [OK] depth_pro -> sanity_output/sanity_depth_pro_20260514_044031.json
+
+ ============================================
+ [marigold] starting at Thu May 14 04:40:58 AM AEST 2026 (conda env: marigold)
+ ============================================
+ Active env: marigold
+ CUDA: True NVIDIA H100 NVL
+ The config attributes {'prediction_type': 'depth'} were passed to MarigoldDepthPipeline, but are not expected and will be ignored. Please verify your model_index.json configuration file.
+ Keyword arguments {'prediction_type': 'depth'} are not expected by MarigoldDepthPipeline and will be ignored.
+
+ [OK] marigold -> sanity_output/sanity_marigold_20260514_044031.json
+
+ ============================================
+ [lotus] starting at Thu May 14 04:42:38 AM AEST 2026 (conda env: lotus)
+ ============================================
+ Active env: lotus
+ CUDA: True NVIDIA H100 NVL
+
+ [OK] lotus -> sanity_output/sanity_lotus_20260514_044031.json
+
+ ============================================
+ [depthmaster] starting at Thu May 14 04:44:16 AM AEST 2026 (conda env: depthmaster)
+ ============================================
+ Active env: depthmaster
+ CUDA: True NVIDIA H100 NVL
+ The config attributes {'default_denoising_steps': 10, 'scheduler': ['diffusers', 'DDIMScheduler']} were passed to DepthMasterPipeline, but are not expected and will be ignored. Please verify your model_index.json configuration file.
+ Keyword arguments {'default_denoising_steps': 10, 'scheduler': ['diffusers', 'DDIMScheduler']} are not expected by DepthMasterPipeline and will be ignored.
+
+ Defaulting to unsafe serialization. Pass `allow_pickle=False` to raise an error instead.
+ Some weights of the model checkpoint at /home/ywan0794/EvalMDE/DepthMaster/ckpt/eval/unet were not used when initializing UNet2DConditionModel:
+ ['fftblock.conv_f4.weight, fftblock.norm.weight, fftblock.norm.bias, fftblock.conv_s2.weight, fftblock.conv_f4.bias, fftblock.conv_s1.weight, fftblock.conv_f1.bias, fftblock.conv_s2.bias, fftblock.fuse.weight, fftblock.conv_f2.bias, fftblock.conv_f1.weight, fftblock.conv_f3.bias, fftblock.fuse.bias, fftblock.conv_f3.weight, fftblock.conv_f2.weight, fftblock.conv_s1.bias']
+
+ Expected types for unet: (<class 'depthmaster.modules.unet_2d_condition_s2.UNet2DConditionModel'>,), got <class 'diffusers.models.unets.unet_2d_condition.UNet2DConditionModel'>.
+ An error occurred while trying to fetch /home/ywan0794/EvalMDE/DepthMaster/ckpt/eval/unet: Error no file named diffusion_pytorch_model.safetensors found in directory /home/ywan0794/EvalMDE/DepthMaster/ckpt/eval/unet.
+ Defaulting to unsafe serialization. Pass `allow_pickle=False` to raise an error instead.
+
+ [OK] depthmaster -> sanity_output/sanity_depthmaster_20260514_044031.json
+
+ ============================================
+ [ppd] starting at Thu May 14 04:45:58 AM AEST 2026 (conda env: ppd)
+ ============================================
+ Active env: ppd
+ CUDA: True NVIDIA H100 NVL
+ xFormers not available
+ xFormers not available
+
+ [OK] ppd -> sanity_output/sanity_ppd_20260514_044031.json
+
+ ============================================
+ [da3_mono] starting at Thu May 14 04:47:15 AM AEST 2026 (conda env: da3)
+ ============================================
+ Active env: da3
+ CUDA: True NVIDIA H100 NVL
+ [WARN ] Dependency `gsplat` is required for rendering 3DGS. Install via: pip install git+https://github.com/nerfstudio-project/gsplat.git@0b4dddf04cb687367602c01196913cde6a743d70
+
+ [OK] da3_mono -> sanity_output/sanity_da3_mono_20260514_044031.json
+
+ ============================================
+ [fe2e] starting at Thu May 14 04:47:49 AM AEST 2026 (conda env: fe2e)
+ ============================================
+ Active env: fe2e
+ CUDA: True NVIDIA H100 NVL
+ [INFO] prompt_type=empty, skipping Qwen model load
+ create LoRA network from weights
+ train all blocks only
+ create LoRA for DIT all blocks: 304 modules.
+ enable LoRA for U-Net
+ weights are merged
+
+ [OK] fe2e -> sanity_output/sanity_fe2e_20260514_044031.json
+
+ ============================================
+ sanity-all finished at Thu May 14 04:49:52 AM AEST 2026
+ ============================================
+ === Summary ===
+ [OK] depth_pro -> sanity_output/sanity_depth_pro_20260514_044031.json
+ [OK] marigold -> sanity_output/sanity_marigold_20260514_044031.json
+ [OK] lotus -> sanity_output/sanity_lotus_20260514_044031.json
+ [OK] depthmaster -> sanity_output/sanity_depthmaster_20260514_044031.json
+ [OK] ppd -> sanity_output/sanity_ppd_20260514_044031.json
+ [OK] da3_mono -> sanity_output/sanity_da3_mono_20260514_044031.json
+ [OK] fe2e -> sanity_output/sanity_fe2e_20260514_044031.json
sanity_depth_pro_12089.log ADDED
@@ -0,0 +1,51 @@
+ ============================================
+ Activated conda environment: depth-pro
+ CUDA_HOME: /home/ywan0794/miniconda3/envs/depth-pro
+ ============================================
+ === GPU Info ===
+ Wed May 13 01:59:42 2026
+ +-----------------------------------------------------------------------------------------+
+ | NVIDIA-SMI 550.163.01 Driver Version: 550.163.01 CUDA Version: 12.4 |
+ |-----------------------------------------+------------------------+----------------------+
+ | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
+ | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
+ | | | MIG M. |
+ |=========================================+========================+======================|
+ | 0 NVIDIA H100 NVL Off | 00000000:E1:00.0 Off | 0 |
+ | N/A 35C P0 60W / 400W | 14MiB / 95830MiB | 0% Default |
+ | | | Disabled |
+ +-----------------------------------------+------------------------+----------------------+
+
+ +-----------------------------------------------------------------------------------------+
+ | Processes: |
+ | GPU GI CI PID Type Process name GPU Memory |
+ | ID ID Usage |
+ |=========================================================================================|
+ | 0 N/A N/A 4274 G /usr/lib/xorg/Xorg 4MiB |
+ +-----------------------------------------------------------------------------------------+
+ CUDA: True NVIDIA H100 NVL
+ ============================================
+ Starting MoGe Eval for Depth Pro at Wed May 13 02:00:06 AM AEST 2026
+ Repo: /home/ywan0794/EvalMDE/ml-depth-pro
+ Checkpoint: /home/ywan0794/EvalMDE/ml-depth-pro/checkpoints/depth_pro.pt
+ Config: /home/ywan0794/MoGe/configs/eval/sanity_benchmarks.json
+ ============================================
+ Traceback (most recent call last):
+ File "/home/ywan0794/MoGe/moge/scripts/eval_baseline.py", line 165, in <module>
+ main()
+ File "/home/ywan0794/miniconda3/envs/depth-pro/lib/python3.9/site-packages/click/core.py", line 1161, in __call__
+ return self.main(*args, **kwargs)
+ File "/home/ywan0794/miniconda3/envs/depth-pro/lib/python3.9/site-packages/click/core.py", line 1082, in main
+ rv = self.invoke(ctx)
+ File "/home/ywan0794/miniconda3/envs/depth-pro/lib/python3.9/site-packages/click/core.py", line 1443, in invoke
+ return ctx.invoke(self.callback, **ctx.params)
+ File "/home/ywan0794/miniconda3/envs/depth-pro/lib/python3.9/site-packages/click/core.py", line 788, in invoke
+ return __callback(*args, **kwargs)
+ File "/home/ywan0794/miniconda3/envs/depth-pro/lib/python3.9/site-packages/click/decorators.py", line 33, in new_func
+ return f(get_current_context(), *args, **kwargs)
+ File "/home/ywan0794/MoGe/moge/scripts/eval_baseline.py", line 25, in main
+ import cv2
+ ModuleNotFoundError: No module named 'cv2'
+ ============================================
+ Evaluation completed at Wed May 13 02:00:07 AM AEST 2026
+ ============================================
sanity_depth_pro_12090.log ADDED
@@ -0,0 +1,57 @@
+ ============================================
+ Activated conda environment: depth-pro
+ CUDA_HOME: /home/ywan0794/miniconda3/envs/depth-pro
+ ============================================
+ === GPU Info ===
+ Wed May 13 02:05:28 2026
+ +-----------------------------------------------------------------------------------------+
+ | NVIDIA-SMI 550.163.01 Driver Version: 550.163.01 CUDA Version: 12.4 |
+ |-----------------------------------------+------------------------+----------------------+
+ | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
+ | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
+ | | | MIG M. |
+ |=========================================+========================+======================|
+ | 0 NVIDIA H100 NVL Off | 00000000:E1:00.0 Off | 0 |
+ | N/A 35C P0 60W / 400W | 14MiB / 95830MiB | 0% Default |
+ | | | Disabled |
+ +-----------------------------------------+------------------------+----------------------+
+
+ +-----------------------------------------------------------------------------------------+
+ | Processes: |
+ | GPU GI CI PID Type Process name GPU Memory |
+ | ID ID Usage |
+ |=========================================================================================|
+ | 0 N/A N/A 4274 G /usr/lib/xorg/Xorg 4MiB |
+ +-----------------------------------------------------------------------------------------+
+ CUDA: True NVIDIA H100 NVL
+ ============================================
+ Starting MoGe Eval for Depth Pro at Wed May 13 02:05:30 AM AEST 2026
+ Repo: /home/ywan0794/EvalMDE/ml-depth-pro
+ Checkpoint: /home/ywan0794/EvalMDE/ml-depth-pro/checkpoints/depth_pro.pt
+ Config: /home/ywan0794/MoGe/configs/eval/sanity_benchmarks.json
+ ============================================
+ Traceback (most recent call last):
+   File "/home/ywan0794/MoGe/moge/scripts/eval_baseline.py", line 165, in <module>
+     main()
+   File "/home/ywan0794/miniconda3/envs/depth-pro/lib/python3.9/site-packages/click/core.py", line 1161, in __call__
+     return self.main(*args, **kwargs)
+   File "/home/ywan0794/miniconda3/envs/depth-pro/lib/python3.9/site-packages/click/core.py", line 1082, in main
+     rv = self.invoke(ctx)
+   File "/home/ywan0794/miniconda3/envs/depth-pro/lib/python3.9/site-packages/click/core.py", line 1443, in invoke
+     return ctx.invoke(self.callback, **ctx.params)
+   File "/home/ywan0794/miniconda3/envs/depth-pro/lib/python3.9/site-packages/click/core.py", line 788, in invoke
+     return __callback(*args, **kwargs)
+   File "/home/ywan0794/miniconda3/envs/depth-pro/lib/python3.9/site-packages/click/decorators.py", line 33, in new_func
+     return f(get_current_context(), *args, **kwargs)
+   File "/home/ywan0794/MoGe/moge/scripts/eval_baseline.py", line 32, in main
+     from moge.test.baseline import MGEBaselineInterface
+   File "/home/ywan0794/MoGe/moge/test/baseline.py", line 7, in <module>
+     class MGEBaselineInterface:
+   File "/home/ywan0794/MoGe/moge/test/baseline.py", line 15, in MGEBaselineInterface
+     def load(*args, **kwargs) -> "MGEBaselineInterface":
+   File "/home/ywan0794/miniconda3/envs/depth-pro/lib/python3.9/site-packages/click/decorators.py", line 235, in decorator
+     name=name or f.__name__.lower().replace("_", "-"),
+ AttributeError: 'staticmethod' object has no attribute '__name__'
+ ============================================
+ Evaluation completed at Wed May 13 02:05:32 AM AEST 2026
+ ============================================
sanity_depth_pro_12091.log ADDED
@@ -0,0 +1,80 @@
+ ============================================
+ Activated conda environment: depth-pro
+ CUDA_HOME: /home/ywan0794/miniconda3/envs/depth-pro
+ ============================================
+ === GPU Info ===
+ Wed May 13 02:11:06 2026
+ +-----------------------------------------------------------------------------------------+
+ | NVIDIA-SMI 550.163.01 Driver Version: 550.163.01 CUDA Version: 12.4 |
+ |-----------------------------------------+------------------------+----------------------+
+ | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
+ | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
+ | | | MIG M. |
+ |=========================================+========================+======================|
+ | 0 NVIDIA H100 NVL Off | 00000000:E1:00.0 Off | 0 |
+ | N/A 35C P0 60W / 400W | 14MiB / 95830MiB | 0% Default |
+ | | | Disabled |
+ +-----------------------------------------+------------------------+----------------------+
+
+ +-----------------------------------------------------------------------------------------+
+ | Processes: |
+ | GPU GI CI PID Type Process name GPU Memory |
+ | ID ID Usage |
+ |=========================================================================================|
+ | 0 N/A N/A 4274 G /usr/lib/xorg/Xorg 4MiB |
+ +-----------------------------------------------------------------------------------------+
+ /home/ywan0794/miniconda3/envs/depth-pro/lib/python3.10/site-packages/torch/cuda/__init__.py:180: UserWarning: CUDA initialization: The NVIDIA driver on your system is too old (found version 12040). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver. (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:119.)
+   return torch._C._cuda_getDeviceCount() > 0
+ CUDA: False
+ ============================================
+ Starting MoGe Eval for Depth Pro at Wed May 13 02:11:09 AM AEST 2026
+ Repo: /home/ywan0794/EvalMDE/ml-depth-pro
+ Checkpoint: /home/ywan0794/EvalMDE/ml-depth-pro/checkpoints/depth_pro.pt
+ Config: /home/ywan0794/MoGe/configs/eval/sanity_benchmarks.json
+ ============================================
+ Traceback (most recent call last):
+   File "/home/ywan0794/MoGe/moge/scripts/eval_baseline.py", line 165, in <module>
+     main()
+   File "/home/ywan0794/miniconda3/envs/depth-pro/lib/python3.10/site-packages/click/core.py", line 1514, in __call__
+     return self.main(*args, **kwargs)
+   File "/home/ywan0794/miniconda3/envs/depth-pro/lib/python3.10/site-packages/click/core.py", line 1435, in main
+     rv = self.invoke(ctx)
+   File "/home/ywan0794/miniconda3/envs/depth-pro/lib/python3.10/site-packages/click/core.py", line 1298, in invoke
+     return ctx.invoke(self.callback, **ctx.params)
+   File "/home/ywan0794/miniconda3/envs/depth-pro/lib/python3.10/site-packages/click/core.py", line 853, in invoke
+     return callback(*args, **kwargs)
+   File "/home/ywan0794/miniconda3/envs/depth-pro/lib/python3.10/site-packages/click/decorators.py", line 34, in new_func
+     return f(get_current_context(), *args, **kwargs)
+   File "/home/ywan0794/MoGe/moge/scripts/eval_baseline.py", line 42, in main
+     baseline : MGEBaselineInterface = baseline_cls.load.main(ctx.args, standalone_mode=False)
+   File "/home/ywan0794/miniconda3/envs/depth-pro/lib/python3.10/site-packages/click/core.py", line 1435, in main
+     rv = self.invoke(ctx)
+   File "/home/ywan0794/miniconda3/envs/depth-pro/lib/python3.10/site-packages/click/core.py", line 1298, in invoke
+     return ctx.invoke(self.callback, **ctx.params)
+   File "/home/ywan0794/miniconda3/envs/depth-pro/lib/python3.10/site-packages/click/core.py", line 853, in invoke
+     return callback(*args, **kwargs)
+   File "/home/ywan0794/MoGe/baselines/depth_pro.py", line 74, in load
+     return Baseline(repo_path, checkpoint_path, precision, device)
+   File "/home/ywan0794/MoGe/baselines/depth_pro.py", line 57, in __init__
+     model, _ = depth_pro.create_model_and_transforms(config=config, device=device, precision=precision_dtype)
+   File "/home/ywan0794/EvalMDE/ml-depth-pro/src/depth_pro/depth_pro.py", line 120, in create_model_and_transforms
+     ).to(device)
+   File "/home/ywan0794/miniconda3/envs/depth-pro/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1384, in to
+     return self._apply(convert)
+   File "/home/ywan0794/miniconda3/envs/depth-pro/lib/python3.10/site-packages/torch/nn/modules/module.py", line 934, in _apply
+     module._apply(fn)
+   File "/home/ywan0794/miniconda3/envs/depth-pro/lib/python3.10/site-packages/torch/nn/modules/module.py", line 934, in _apply
+     module._apply(fn)
+   File "/home/ywan0794/miniconda3/envs/depth-pro/lib/python3.10/site-packages/torch/nn/modules/module.py", line 934, in _apply
+     module._apply(fn)
+   [Previous line repeated 1 more time]
+   File "/home/ywan0794/miniconda3/envs/depth-pro/lib/python3.10/site-packages/torch/nn/modules/module.py", line 965, in _apply
+     param_applied = fn(param)
+   File "/home/ywan0794/miniconda3/envs/depth-pro/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1370, in convert
+     return t.to(
+   File "/home/ywan0794/miniconda3/envs/depth-pro/lib/python3.10/site-packages/torch/cuda/__init__.py", line 478, in _lazy_init
+     torch._C._cuda_init()
+ RuntimeError: The NVIDIA driver on your system is too old (found version 12040). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver.
+ ============================================
+ Evaluation completed at Wed May 13 02:11:24 AM AEST 2026
+ ============================================
sanity_depth_pro_12092.log ADDED
@@ -0,0 +1,43 @@
+ ============================================
+ Activated conda environment: depth-pro
+ CUDA_HOME: /home/ywan0794/miniconda3/envs/depth-pro
+ ============================================
+ === GPU Info ===
+ Wed May 13 02:18:58 2026
+ +-----------------------------------------------------------------------------------------+
+ | NVIDIA-SMI 550.163.01 Driver Version: 550.163.01 CUDA Version: 12.4 |
+ |-----------------------------------------+------------------------+----------------------+
+ | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
+ | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
+ | | | MIG M. |
+ |=========================================+========================+======================|
+ | 0 NVIDIA H100 NVL Off | 00000000:E1:00.0 Off | 0 |
+ | N/A 35C P0 60W / 400W | 14MiB / 95830MiB | 0% Default |
+ | | | Disabled |
+ +-----------------------------------------+------------------------+----------------------+
+
+ +-----------------------------------------------------------------------------------------+
+ | Processes: |
+ | GPU GI CI PID Type Process name GPU Memory |
+ | ID ID Usage |
+ |=========================================================================================|
+ | 0 N/A N/A 4274 G /usr/lib/xorg/Xorg 4MiB |
+ +-----------------------------------------------------------------------------------------+
+ CUDA: True NVIDIA H100 NVL
+ ============================================
+ Starting MoGe Eval for Depth Pro at Wed May 13 02:19:02 AM AEST 2026
+ Repo: /home/ywan0794/EvalMDE/ml-depth-pro
+ Checkpoint: /home/ywan0794/EvalMDE/ml-depth-pro/checkpoints/depth_pro.pt
+ Config: /home/ywan0794/MoGe/configs/eval/sanity_benchmarks.json
+ ============================================
+
+
+
+
+
+
+
+
 
+ ============================================
+ Evaluation completed at Wed May 13 02:20:04 AM AEST 2026
+ ============================================
vis_depth_8709.log ADDED
@@ -0,0 +1,11 @@
+ Loading models...
+ Loading DA2-DPT...
+ Traceback (most recent call last):
+   File "/home/ywan0794/MoGe/visualize_depth.py", line 328, in <module>
+     main()
+   File "/home/ywan0794/MoGe/visualize_depth.py", line 209, in main
+     da2_dpt = load_da2_model(CHECKPOINTS['da2_dpt'], 'dpt')
+   File "/home/ywan0794/MoGe/visualize_depth.py", line 46, in load_da2_model
+     model = DepthAnythingV2(**model_configs, decoder=decoder_type)
+ TypeError: DepthAnythingV2.__init__() got an unexpected keyword argument 'decoder'
+ Visualization completed!
vis_depth_8711.log ADDED
@@ -0,0 +1,54 @@
+ /home/ywan0794/MoGe/visualize_depth.py:73: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
+   ckpt = torch.load(checkpoint_path, map_location='cpu')
+ /home/ywan0794/MoGe/visualize_depth.py:135: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
+   ckpt = torch.load(checkpoint_path, map_location='cpu')
+ /home/ywan0794/MoGe/visualize_depth.py:178: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
+   with torch.cuda.amp.autocast(dtype=torch.bfloat16):
+ Loading models...
+ Loading DA2-DPT...
+ Loaded DA2 dpt from /home/ywan0794/Depth-Anything-V2/training/exp/dpt_vitb_both/epoch_007.pth
+ Loading DA2-SDT...
+ Loaded DA2 sdt from /home/ywan0794/Depth-Anything-V2/training/exp/sdt_vitb_both/epoch_008.pth
+ Loading DA3-DPT...
+ [INFO ] using MLP layer as FFN
+ Loaded DA3 dpt from /home/ywan0794/Depth-Anything-3/training/exp/da3_dpt_vitl_both/epoch_010.pth
+ Loading DA3-SDT...
+ [INFO ] using MLP layer as FFN
+ Loaded DA3 sdt from /home/ywan0794/Depth-Anything-3/training/exp/da3_sdt_vitl_both/epoch_010.pth
+ Loading DA3-DualDPT...
+ [INFO ] using MLP layer as FFN
+ Loaded DA3 dualdpt from /home/ywan0794/Depth-Anything-3/training/exp/da3_dualdpt_vitl_both/epoch_010.pth
+ All models loaded!
+
+ Processing 10 KITTI samples...
+ [1/10] 2011_09_26_drive_0059_0000000154
+ [2/10] 2011_09_26_drive_0029_0000000296
+ [3/10] 2011_09_26_drive_0029_0000000154
+ [4/10] 2011_09_26_drive_0096_0000000171
+ [5/10] 2011_10_03_drive_0027_0000000362
+ [6/10] 2011_09_26_drive_0064_0000000462
+ [7/10] 2011_09_26_drive_0002_0000000051
+ [8/10] 2011_09_26_drive_0048_0000000016
+ [9/10] 2011_09_30_drive_0016_0000000110
+ [10/10] 2011_09_26_drive_0059_0000000098
+
+ Processing 10 DDAD samples...
+ [1/10] 000508_CAMERA_05
+ [2/10] 001971_CAMERA_09
+ [3/10] 003267_CAMERA_06
+ [4/10] 001726_CAMERA_09
+ [5/10] 002738_CAMERA_05
+ [6/10] 000339_CAMERA_01
+ [7/10] 000104_CAMERA_05
+ [8/10] 001069_CAMERA_06
+ [9/10] 003710_CAMERA_06
+ [10/10] 003376_CAMERA_05
+
+ Done! Results saved to /home/ywan0794/MoGe/vis_output
+ Structure:
+ /home/ywan0794/MoGe/vis_output/
+   KITTI/
+     rgb/, gt/, da2_dpt/, da2_sdt/, da3_dpt/, da3_sdt/, da3_dualdpt/
+   DDAD/
+     rgb/, gt/, da2_dpt/, da2_sdt/, da3_dpt/, da3_sdt/, da3_dualdpt/
+ Visualization completed!
vis_depth_8712.log ADDED
@@ -0,0 +1,7 @@
+ /home/ywan0794/MoGe/visualize_depth.py:73: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
+   ckpt = torch.load(checkpoint_path, map_location='cpu')
+ /home/ywan0794/MoGe/visualize_depth.py:135: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
+   ckpt = torch.load(checkpoint_path, map_location='cpu')
+ /home/ywan0794/MoGe/visualize_depth.py:178: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
+   with torch.cuda.amp.autocast(dtype=torch.bfloat16):
+ slurmstepd-hades: error: *** JOB 8712 ON hades CANCELLED AT 2026-01-14T23:06:30 ***
vis_depth_8714.log ADDED
@@ -0,0 +1,434 @@
+ /home/ywan0794/MoGe/visualize_depth.py:73: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
+   ckpt = torch.load(checkpoint_path, map_location='cpu')
+ /home/ywan0794/MoGe/visualize_depth.py:135: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
+   ckpt = torch.load(checkpoint_path, map_location='cpu')
+ /home/ywan0794/MoGe/visualize_depth.py:178: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
+   with torch.cuda.amp.autocast(dtype=torch.bfloat16):
+ Loading models...
+ Loading DA2-DPT...
+ Loaded DA2 dpt from /home/ywan0794/Depth-Anything-V2/training/exp/dpt_vitb_both/epoch_007.pth
+ Loading DA2-SDT...
+ Loaded DA2 sdt from /home/ywan0794/Depth-Anything-V2/training/exp/sdt_vitb_both/epoch_008.pth
+ Loading DA3-DPT...
+ [INFO ] using MLP layer as FFN
+ Loaded DA3 dpt from /home/ywan0794/Depth-Anything-3/training/exp/da3_dpt_vitl_both/epoch_010.pth
+ Loading DA3-SDT...
+ [INFO ] using MLP layer as FFN
+ Loaded DA3 sdt from /home/ywan0794/Depth-Anything-3/training/exp/da3_sdt_vitl_both/epoch_010.pth
+ Loading DA3-DualDPT...
+ [INFO ] using MLP layer as FFN
+ Loaded DA3 dualdpt from /home/ywan0794/Depth-Anything-3/training/exp/da3_dualdpt_vitl_both/epoch_010.pth
+ All models loaded!
+
+ Processing 200 KITTI samples...
+ [1/200] 2011_09_26_drive_0059_0000000154
+ [2/200] 2011_09_26_drive_0029_0000000296
+ [3/200] 2011_09_26_drive_0029_0000000154
+ [4/200] 2011_09_26_drive_0096_0000000171
+ [5/200] 2011_10_03_drive_0027_0000000362
+ [6/200] 2011_09_26_drive_0064_0000000462
+ [7/200] 2011_09_26_drive_0002_0000000051
+ [8/200] 2011_09_26_drive_0048_0000000016
+ [9/200] 2011_09_30_drive_0016_0000000110
+ [10/200] 2011_09_26_drive_0059_0000000098
+ [11/200] 2011_09_26_drive_0009_0000000032
+ [12/200] 2011_09_26_drive_0027_0000000147
+ [13/200] 2011_09_26_drive_0086_0000000277
+ [14/200] 2011_10_03_drive_0027_0000002001
+ [15/200] 2011_09_30_drive_0016_0000000121
+ [16/200] 2011_09_29_drive_0071_0000000252
+ [17/200] 2011_09_26_drive_0059_0000000070
+ [18/200] 2011_09_26_drive_0023_0000000198
+ [19/200] 2011_09_26_drive_0046_0000000110
+ [20/200] 2011_09_26_drive_0093_0000000176
+ [21/200] 2011_09_26_drive_0027_0000000014
+ [22/200] 2011_09_26_drive_0046_0000000080
+ [23/200] 2011_09_26_drive_0056_0000000275
+ [24/200] 2011_09_26_drive_0046_0000000035
+ [25/200] 2011_09_30_drive_0027_0000000123
+ [26/200] 2011_09_26_drive_0009_0000000176
+ [27/200] 2011_09_26_drive_0096_0000000437
+ [28/200] 2011_09_26_drive_0084_0000000296
+ [29/200] 2011_09_26_drive_0020_0000000054
+ [30/200] 2011_09_26_drive_0117_0000000208
+ [31/200] 2011_09_26_drive_0029_0000000112
+ [32/200] 2011_09_26_drive_0046_0000000040
+ [33/200] 2011_09_30_drive_0018_0000002033
+ [34/200] 2011_09_26_drive_0023_0000000450
+ [35/200] 2011_09_30_drive_0027_0000000835
+ [36/200] 2011_09_26_drive_0013_0000000050
+ [37/200] 2011_09_26_drive_0106_0000000147
+ [38/200] 2011_09_26_drive_0013_0000000045
+ [39/200] 2011_09_26_drive_0013_0000000060
+ [40/200] 2011_09_30_drive_0018_0000000214
+ [41/200] 2011_09_30_drive_0018_0000001070
+ [42/200] 2011_09_26_drive_0009_0000000276
+ [43/200] 2011_09_26_drive_0096_0000000361
+ [44/200] 2011_10_03_drive_0027_0000001096
+ [45/200] 2011_09_26_drive_0086_0000000250
+ [46/200] 2011_09_26_drive_0093_0000000048
+ [47/200] 2011_09_26_drive_0059_0000000224
+ [48/200] 2011_09_26_drive_0020_0000000012
+ [49/200] 2011_09_26_drive_0064_0000000396
+ [50/200] 2011_09_26_drive_0084_0000000140
+ [51/200] 2011_09_26_drive_0059_0000000302
+ [52/200] 2011_10_03_drive_0027_0000003811
+ [53/200] 2011_09_30_drive_0016_0000000143
+ [54/200] 2011_09_26_drive_0036_0000000768
+ [55/200] 2011_09_26_drive_0117_0000000182
+ [56/200] 2011_09_28_drive_0002_0000000045
+ [57/200] 2011_09_30_drive_0018_0000002247
+ [58/200] 2011_09_26_drive_0056_0000000011
+ [59/200] 2011_09_26_drive_0117_0000000572
+ [60/200] 2011_09_26_drive_0059_0000000056
+ [61/200] 2011_10_03_drive_0027_0000001458
+ [62/200] 2011_09_26_drive_0013_0000000085
+ [63/200] 2011_09_26_drive_0106_0000000075
+ [64/200] 2011_09_26_drive_0064_0000000044
+ [65/200] 2011_09_29_drive_0071_0000000915
+ [66/200] 2011_09_26_drive_0056_0000000242
+ [67/200] 2011_09_29_drive_0071_0000000288
+ [68/200] 2011_09_26_drive_0020_0000000063
+ [69/200] 2011_09_30_drive_0018_0000000856
+ [70/200] 2011_09_26_drive_0096_0000000190
+ [71/200] 2011_09_26_drive_0046_0000000090
+ [72/200] 2011_10_03_drive_0047_0000000192
+ [73/200] 2011_09_26_drive_0046_0000000010
+ [74/200] 2011_09_26_drive_0029_0000000338
+ [75/200] 2011_09_26_drive_0056_0000000154
+ [76/200] 2011_09_26_drive_0117_0000000416
+ [77/200] 2011_09_26_drive_0013_0000000070
+ [78/200] 2011_09_26_drive_0052_0000000006
+ [79/200] 2011_09_26_drive_0093_0000000112
+ [80/200] 2011_09_26_drive_0027_0000000049
+ [81/200] 2011_09_26_drive_0023_0000000270
+ [82/200] 2011_09_26_drive_0020_0000000015
+ [83/200] 2011_09_26_drive_0084_0000000179
+ [84/200] 2011_09_26_drive_0013_0000000115
+ [85/200] 2011_09_26_drive_0023_0000000252
+ [86/200] 2011_09_26_drive_0052_0000000030
+ [87/200] 2011_10_03_drive_0027_0000003087
+ [88/200] 2011_09_26_drive_0029_0000000140
+ [89/200] 2011_09_26_drive_0064_0000000418
+ [90/200] 2011_09_26_drive_0027_0000000175
+ [91/200] 2011_09_26_drive_0106_0000000139
+ [92/200] 2011_09_26_drive_0101_0000000658
+ [93/200] 2011_09_26_drive_0117_0000000468
+ [94/200] 2011_09_28_drive_0002_0000000030
+ [95/200] 2011_09_26_drive_0002_0000000036
+ [96/200] 2011_09_26_drive_0046_0000000045
+ [97/200] 2011_09_26_drive_0059_0000000316
+ [98/200] 2011_09_26_drive_0009_0000000080
+ [99/200] 2011_09_26_drive_0009_0000000016
+ [100/200] 2011_09_26_drive_0101_0000000556
+ [101/200] 2011_09_26_drive_0013_0000000010
+ [102/200] 2011_09_26_drive_0106_0000000131
+ [103/200] 2011_09_26_drive_0117_0000000026
+ [104/200] 2011_09_26_drive_0106_0000000211
+ [105/200] 2011_09_26_drive_0101_0000000114
+ [106/200] 2011_09_26_drive_0096_0000000038
+ [107/200] 2011_09_26_drive_0084_0000000309
+ [108/200] 2011_10_03_drive_0027_0000002363
+ [109/200] 2011_09_26_drive_0064_0000000440
+ [110/200] 2011_09_30_drive_0018_0000001284
+ [111/200] 2011_10_03_drive_0027_0000001277
+ [112/200] 2011_09_28_drive_0002_0000000051
+ [113/200] 2011_09_26_drive_0046_0000000115
+ [114/200] 2011_09_26_drive_0002_0000000012
+ [115/200] 2011_09_26_drive_0013_0000000110
+ [116/200] 2011_09_26_drive_0086_0000000331
+ [117/200] 2011_09_26_drive_0101_0000000080
+ [118/200] 2011_09_26_drive_0046_0000000065
+ [119/200] 2011_09_30_drive_0027_0000000164
+ [120/200] 2011_09_26_drive_0013_0000000040
+ [121/200] 2011_09_26_drive_0056_0000000187
+ [122/200] 2011_09_26_drive_0086_0000000196
+ [123/200] 2011_09_26_drive_0020_0000000057
+ [124/200] 2011_09_26_drive_0101_0000000182
+ [125/200] 2011_09_26_drive_0023_0000000306
+ [126/200] 2011_09_26_drive_0056_0000000088
+ [127/200] 2011_10_03_drive_0047_0000000320
+ [128/200] 2011_09_26_drive_0086_0000000115
+ [129/200] 2011_09_26_drive_0023_0000000360
+ [130/200] 2011_09_26_drive_0020_0000000036
+ [131/200] 2011_09_26_drive_0056_0000000176
+ [132/200] 2011_09_26_drive_0117_0000000156
+ [133/200] 2011_09_26_drive_0036_0000000256
+ [134/200] 2011_09_29_drive_0071_0000000771
+ [135/200] 2011_09_26_drive_0052_0000000046
+ [136/200] 2011_09_28_drive_0002_0000000006
+ [137/200] 2011_10_03_drive_0047_0000000480
+ [138/200] 2011_09_26_drive_0027_0000000007
+ [139/200] 2011_09_26_drive_0056_0000000077
+ [140/200] 2011_09_28_drive_0002_0000000036
+ [141/200] 2011_09_26_drive_0009_0000000308
+ [142/200] 2011_09_26_drive_0056_0000000022
+ [143/200] 2011_09_26_drive_0056_0000000165
+ [144/200] 2011_09_26_drive_0086_0000000088
+ [145/200] 2011_09_26_drive_0020_0000000018
+ [146/200] 2011_09_26_drive_0029_0000000098
+ [147/200] 2011_10_03_drive_0047_0000000512
+ [148/200] 2011_09_26_drive_0084_0000000127
+ [149/200] 2011_09_30_drive_0027_0000000041
+ [150/200] 2011_09_29_drive_0071_0000000576
+ [151/200] 2011_09_26_drive_0106_0000000099
+ [152/200] 2011_09_26_drive_0106_0000000179
+ [153/200] 2011_09_26_drive_0101_0000000896
+ [154/200] 2011_09_26_drive_0036_0000000480
+ [155/200] 2011_09_26_drive_0093_0000000128
+ [156/200] 2011_09_26_drive_0029_0000000014
+ [157/200] 2011_09_26_drive_0064_0000000242
+ [158/200] 2011_09_26_drive_0056_0000000209
+ [159/200] 2011_09_26_drive_0027_0000000098
+ [160/200] 2011_09_26_drive_0056_0000000121
+ [161/200] 2011_09_26_drive_0086_0000000358
+ [162/200] 2011_09_26_drive_0009_0000000292
+ [163/200] 2011_09_26_drive_0101_0000000386
+ [164/200] 2011_09_28_drive_0002_0000000084
+ [165/200] 2011_09_26_drive_0117_0000000546
+ [166/200] 2011_09_26_drive_0117_0000000494
+ [167/200] 2011_10_03_drive_0027_0000000543
+ [168/200] 2011_10_03_drive_0047_0000000064
+ [169/200] 2011_09_26_drive_0020_0000000042
+ [170/200] 2011_09_26_drive_0046_0000000095
+ [171/200] 2011_09_26_drive_0093_0000000192
+ [172/200] 2011_09_26_drive_0059_0000000344
+ [173/200] 2011_09_28_drive_0002_0000000078
+ [174/200] 2011_09_28_drive_0002_0000000087
+ [175/200] 2011_09_26_drive_0023_0000000468
+ [176/200] 2011_09_26_drive_0029_0000000268
+ [177/200] 2011_10_03_drive_0047_0000000032
+ [178/200] 2011_09_30_drive_0018_0000002419
+ [179/200] 2011_09_28_drive_0002_0000000057
+ [180/200] 2011_10_03_drive_0047_0000000672
+ [181/200] 2011_10_03_drive_0027_0000002544
+ [182/200] 2011_09_26_drive_0002_0000000015
+ [183/200] 2011_09_26_drive_0027_0000000182
+ [184/200] 2011_09_26_drive_0084_0000000218
+ [185/200] 2011_10_03_drive_0027_0000001639
+ [186/200] 2011_09_26_drive_0093_0000000417
+ [187/200] 2011_09_26_drive_0096_0000000456
+ [188/200] 2011_10_03_drive_0047_0000000416
+ [189/200] 2011_09_26_drive_0086_0000000034
+ [190/200] 2011_09_26_drive_0096_0000000247
+ [191/200] 2011_09_26_drive_0096_0000000209
+ [192/200] 2011_09_29_drive_0071_0000000144
+ [193/200] 2011_09_26_drive_0084_0000000270
+ [194/200] 2011_09_26_drive_0101_0000000284
+ [195/200] 2011_09_29_drive_0071_0000000036
+ [196/200] 2011_09_29_drive_0071_0000000360
+ [197/200] 2011_09_26_drive_0086_0000000304
+ [198/200] 2011_09_26_drive_0013_0000000065
+ [199/200] 2011_09_26_drive_0093_0000000160
+ [200/200] 2011_09_26_drive_0036_0000000064
+
+ Processing 200 DDAD samples...
+ [1/200] 000508_CAMERA_05
+ [2/200] 001971_CAMERA_09
+ [3/200] 003267_CAMERA_06
+ [4/200] 001726_CAMERA_09
+ [5/200] 002738_CAMERA_05
+ [6/200] 000339_CAMERA_01
+ [7/200] 000104_CAMERA_05
+ [8/200] 001069_CAMERA_06
+ [9/200] 003710_CAMERA_06
+ [10/200] 003376_CAMERA_05
+ [11/200] 000864_CAMERA_09
+ [12/200] 003894_CAMERA_06
+ [13/200] 002730_CAMERA_01
239
+ [14/200] 000125_CAMERA_05
240
+ [15/200] 002151_CAMERA_05
241
+ [16/200] 002147_CAMERA_09
242
+ [17/200] 003924_CAMERA_09
243
+ [18/200] 002818_CAMERA_01
244
+ [19/200] 003451_CAMERA_09
245
+ [20/200] 001686_CAMERA_05
246
+ [21/200] 002310_CAMERA_01
247
+ [22/200] 003416_CAMERA_05
248
+ [23/200] 003797_CAMERA_06
249
+ [24/200] 001782_CAMERA_05
250
+ [25/200] 002078_CAMERA_09
251
+ [26/200] 001568_CAMERA_05
252
+ [27/200] 002371_CAMERA_06
253
+ [28/200] 001397_CAMERA_06
254
+ [29/200] 000278_CAMERA_05
255
+ [30/200] 000101_CAMERA_09
256
+ [31/200] 001674_CAMERA_09
257
+ [32/200] 001627_CAMERA_01
258
+ [33/200] 002721_CAMERA_05
259
+ [34/200] 002251_CAMERA_01
260
+ [35/200] 000127_CAMERA_06
261
+ [36/200] 000470_CAMERA_05
262
+ [37/200] 000865_CAMERA_05
263
+ [38/200] 002088_CAMERA_01
264
+ [39/200] 002350_CAMERA_09
265
+ [40/200] 002461_CAMERA_01
266
+ [41/200] 001049_CAMERA_01
267
+ [42/200] 001989_CAMERA_01
268
+ [43/200] 002291_CAMERA_05
269
+ [44/200] 003633_CAMERA_06
270
+ [45/200] 003613_CAMERA_06
271
+ [46/200] 002393_CAMERA_05
272
+ [47/200] 001589_CAMERA_05
273
+ [48/200] 001893_CAMERA_09
274
+ [49/200] 000106_CAMERA_06
275
+ [50/200] 001136_CAMERA_01
276
+ [51/200] 000131_CAMERA_09
277
+ [52/200] 001886_CAMERA_01
278
+ [53/200] 001700_CAMERA_05
279
+ [54/200] 001341_CAMERA_06
280
+ [55/200] 003728_CAMERA_09
281
+ [56/200] 002019_CAMERA_01
282
+ [57/200] 000274_CAMERA_06
283
+ [58/200] 000332_CAMERA_06
284
+ [59/200] 002214_CAMERA_01
285
+ [60/200] 000256_CAMERA_06
286
+ [61/200] 001944_CAMERA_06
287
+ [62/200] 000654_CAMERA_01
288
+ [63/200] 001085_CAMERA_06
289
+ [64/200] 002741_CAMERA_01
290
+ [65/200] 001520_CAMERA_06
291
+ [66/200] 001033_CAMERA_05
292
+ [67/200] 002843_CAMERA_05
293
+ [68/200] 002282_CAMERA_01
294
+ [69/200] 000258_CAMERA_05
295
+ [70/200] 000580_CAMERA_01
296
+ [71/200] 000277_CAMERA_05
297
+ [72/200] 002670_CAMERA_06
298
+ [73/200] 003761_CAMERA_05
299
+ [74/200] 000605_CAMERA_06
300
+ [75/200] 003725_CAMERA_06
301
+ [76/200] 000154_CAMERA_01
302
+ [77/200] 002659_CAMERA_06
303
+ [78/200] 002283_CAMERA_05
304
+ [79/200] 003312_CAMERA_06
305
+ [80/200] 001888_CAMERA_05
306
+ [81/200] 001473_CAMERA_06
307
+ [82/200] 002265_CAMERA_01
308
+ [83/200] 000389_CAMERA_09
309
+ [84/200] 001111_CAMERA_09
310
+ [85/200] 002484_CAMERA_09
311
+ [86/200] 000998_CAMERA_01
312
+ [87/200] 003584_CAMERA_01
313
+ [88/200] 002328_CAMERA_01
314
+ [89/200] 003337_CAMERA_05
315
+ [90/200] 001702_CAMERA_09
316
+ [91/200] 003439_CAMERA_06
317
+ [92/200] 002552_CAMERA_05
318
+ [93/200] 003668_CAMERA_09
319
+ [94/200] 001998_CAMERA_05
320
+ [95/200] 003236_CAMERA_06
321
+ [96/200] 002696_CAMERA_05
322
+ [97/200] 001755_CAMERA_06
323
+ [98/200] 003544_CAMERA_01
324
+ [99/200] 001705_CAMERA_05
325
+ [100/200] 003830_CAMERA_01
326
+ [101/200] 001003_CAMERA_09
327
+ [102/200] 003294_CAMERA_06
328
+ [103/200] 003946_CAMERA_01
329
+ [104/200] 000216_CAMERA_05
330
+ [105/200] 000145_CAMERA_06
331
+ [106/200] 003890_CAMERA_05
332
+ [107/200] 000899_CAMERA_06
333
+ [108/200] 002849_CAMERA_01
334
+ [109/200] 003710_CAMERA_01
335
+ [110/200] 001474_CAMERA_09
336
+ [111/200] 001996_CAMERA_06
337
+ [112/200] 002833_CAMERA_09
338
+ [113/200] 002167_CAMERA_06
339
+ [114/200] 001274_CAMERA_05
340
+ [115/200] 002568_CAMERA_06
341
+ [116/200] 002417_CAMERA_06
342
+ [117/200] 002666_CAMERA_05
343
+ [118/200] 000809_CAMERA_06
344
+ [119/200] 001222_CAMERA_05
345
+ [120/200] 001379_CAMERA_01
346
+ [121/200] 002561_CAMERA_09
347
+ [122/200] 001055_CAMERA_09
348
+ [123/200] 002447_CAMERA_05
349
+ [124/200] 003042_CAMERA_09
350
+ [125/200] 000287_CAMERA_09
351
+ [126/200] 000422_CAMERA_09
352
+ [127/200] 001298_CAMERA_09
353
+ [128/200] 003617_CAMERA_09
354
+ [129/200] 001542_CAMERA_06
355
+ [130/200] 002100_CAMERA_06
356
+ [131/200] 001623_CAMERA_05
357
+ [132/200] 001289_CAMERA_09
358
+ [133/200] 001130_CAMERA_06
359
+ [134/200] 001892_CAMERA_06
360
+ [135/200] 000720_CAMERA_06
361
+ [136/200] 000222_CAMERA_09
362
+ [137/200] 000294_CAMERA_09
363
+ [138/200] 000625_CAMERA_05
364
+ [139/200] 003935_CAMERA_06
365
+ [140/200] 001163_CAMERA_01
366
+ [141/200] 003784_CAMERA_06
367
+ [142/200] 002344_CAMERA_01
368
+ [143/200] 001853_CAMERA_05
369
+ [144/200] 000468_CAMERA_06
370
+ [145/200] 002891_CAMERA_05
371
+ [146/200] 002498_CAMERA_06
372
+ [147/200] 002572_CAMERA_06
373
+ [148/200] 002170_CAMERA_09
374
+ [149/200] 003146_CAMERA_09
375
+ [150/200] 002108_CAMERA_06
376
+ [151/200] 000959_CAMERA_05
377
+ [152/200] 001146_CAMERA_06
378
+ [153/200] 001222_CAMERA_09
379
+ [154/200] 002341_CAMERA_06
380
+ [155/200] 003135_CAMERA_05
381
+ [156/200] 000276_CAMERA_01
382
+ [157/200] 002875_CAMERA_05
383
+ [158/200] 000531_CAMERA_09
384
+ [159/200] 002916_CAMERA_01
385
+ [160/200] 003781_CAMERA_09
386
+ [161/200] 003309_CAMERA_01
387
+ [162/200] 002844_CAMERA_06
388
+ [163/200] 002778_CAMERA_06
389
+ [164/200] 001958_CAMERA_06
390
+ [165/200] 003231_CAMERA_06
391
+ [166/200] 000950_CAMERA_06
392
+ [167/200] 003253_CAMERA_09
393
+ [168/200] 000705_CAMERA_09
394
+ [169/200] 000260_CAMERA_05
395
+ [170/200] 001244_CAMERA_05
396
+ [171/200] 002928_CAMERA_06
397
+ [172/200] 003237_CAMERA_05
398
+ [173/200] 000464_CAMERA_05
399
+ [174/200] 003936_CAMERA_06
400
+ [175/200] 000598_CAMERA_01
401
+ [176/200] 001979_CAMERA_06
402
+ [177/200] 000791_CAMERA_05
403
+ [178/200] 002518_CAMERA_05
404
+ [179/200] 002263_CAMERA_01
405
+ [180/200] 001374_CAMERA_05
406
+ [181/200] 000704_CAMERA_06
407
+ [182/200] 003369_CAMERA_01
408
+ [183/200] 003794_CAMERA_05
409
+ [184/200] 002199_CAMERA_06
410
+ [185/200] 000629_CAMERA_09
411
+ [186/200] 001231_CAMERA_05
412
+ [187/200] 001614_CAMERA_05
413
+ [188/200] 001952_CAMERA_01
414
+ [189/200] 002494_CAMERA_01
415
+ [190/200] 003162_CAMERA_06
416
+ [191/200] 001435_CAMERA_05
417
+ [192/200] 001509_CAMERA_06
418
+ [193/200] 002298_CAMERA_09
419
+ [194/200] 002435_CAMERA_01
420
+ [195/200] 000805_CAMERA_05
421
+ [196/200] 003196_CAMERA_09
422
+ [197/200] 003894_CAMERA_09
423
+ [198/200] 000639_CAMERA_06
424
+ [199/200] 000152_CAMERA_09
425
+ [200/200] 001108_CAMERA_06
426
+
427
+ Done! Results saved to /home/ywan0794/MoGe/vis_output
428
+ Structure:
429
+ /home/ywan0794/MoGe/vis_output/
430
+ KITTI/
431
+ rgb/, gt/, da2_dpt/, da2_sdt/, da3_dpt/, da3_sdt/, da3_dualdpt/
432
+ DDAD/
433
+ rgb/, gt/, da2_dpt/, da2_sdt/, da3_dpt/, da3_sdt/, da3_dualdpt/
434
+ Visualization completed!
vis_depth_8787.log ADDED
@@ -0,0 +1,1034 @@
+ /home/ywan0794/MoGe/visualize_depth.py:73: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
+ ckpt = torch.load(checkpoint_path, map_location='cpu')
+ /home/ywan0794/MoGe/visualize_depth.py:135: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
+ ckpt = torch.load(checkpoint_path, map_location='cpu')
+ /home/ywan0794/MoGe/visualize_depth.py:178: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
+ with torch.cuda.amp.autocast(dtype=torch.bfloat16):
+ Loading models...
+ Loading DA2-DPT...
+ Loaded DA2 dpt from /home/ywan0794/Depth-Anything-V2/training/exp/dpt_vitb_both/epoch_007.pth
+ Loading DA2-SDT...
+ Loaded DA2 sdt from /home/ywan0794/Depth-Anything-V2/training/exp/sdt_vitb_both/epoch_008.pth
+ Loading DA3-DPT...
+ [INFO ] using MLP layer as FFN
+ Loaded DA3 dpt from /home/ywan0794/Depth-Anything-3/training/exp/da3_dpt_vitl_both/epoch_010.pth
+ Loading DA3-SDT...
+ [INFO ] using MLP layer as FFN
+ Loaded DA3 sdt from /home/ywan0794/Depth-Anything-3/training/exp/da3_sdt_vitl_both/epoch_010.pth
+ Loading DA3-DualDPT...
+ [INFO ] using MLP layer as FFN
+ Loaded DA3 dualdpt from /home/ywan0794/Depth-Anything-3/training/exp/da3_dualdpt_vitl_both/epoch_010.pth
+ All models loaded!
+
+ Processing 500 KITTI samples...
+ [1/500] 2011_09_26_drive_0059_0000000154
+ [2/500] 2011_09_26_drive_0029_0000000296
+ [3/500] 2011_09_26_drive_0029_0000000154
+ [4/500] 2011_09_26_drive_0096_0000000171
+ [5/500] 2011_10_03_drive_0027_0000000362
+ [6/500] 2011_09_26_drive_0064_0000000462
+ [7/500] 2011_09_26_drive_0002_0000000051
+ [8/500] 2011_09_26_drive_0048_0000000016
+ [9/500] 2011_09_30_drive_0016_0000000110
+ [10/500] 2011_09_26_drive_0059_0000000098
+ [11/500] 2011_09_26_drive_0009_0000000032
+ [12/500] 2011_09_26_drive_0027_0000000147
+ [13/500] 2011_09_26_drive_0086_0000000277
+ [14/500] 2011_10_03_drive_0027_0000002001
+ [15/500] 2011_09_30_drive_0016_0000000121
+ [16/500] 2011_09_29_drive_0071_0000000252
+ [17/500] 2011_09_26_drive_0059_0000000070
+ [18/500] 2011_09_26_drive_0023_0000000198
+ [19/500] 2011_09_26_drive_0046_0000000110
+ [20/500] 2011_09_26_drive_0093_0000000176
+ [21/500] 2011_09_26_drive_0027_0000000014
+ [22/500] 2011_09_26_drive_0046_0000000080
+ [23/500] 2011_09_26_drive_0056_0000000275
+ [24/500] 2011_09_26_drive_0046_0000000035
+ [25/500] 2011_09_30_drive_0027_0000000123
+ [26/500] 2011_09_26_drive_0009_0000000176
+ [27/500] 2011_09_26_drive_0096_0000000437
+ [28/500] 2011_09_26_drive_0084_0000000296
+ [29/500] 2011_09_26_drive_0020_0000000054
+ [30/500] 2011_09_26_drive_0117_0000000208
+ [31/500] 2011_09_26_drive_0029_0000000112
+ [32/500] 2011_09_26_drive_0046_0000000040
+ [33/500] 2011_09_30_drive_0018_0000002033
+ [34/500] 2011_09_26_drive_0023_0000000450
+ [35/500] 2011_09_30_drive_0027_0000000835
+ [36/500] 2011_09_26_drive_0013_0000000050
+ [37/500] 2011_09_26_drive_0106_0000000147
+ [38/500] 2011_09_26_drive_0013_0000000045
+ [39/500] 2011_09_26_drive_0013_0000000060
+ [40/500] 2011_09_30_drive_0018_0000000214
+ [41/500] 2011_09_30_drive_0018_0000001070
+ [42/500] 2011_09_26_drive_0009_0000000276
+ [43/500] 2011_09_26_drive_0096_0000000361
+ [44/500] 2011_10_03_drive_0027_0000001096
+ [45/500] 2011_09_26_drive_0086_0000000250
+ [46/500] 2011_09_26_drive_0093_0000000048
+ [47/500] 2011_09_26_drive_0059_0000000224
+ [48/500] 2011_09_26_drive_0020_0000000012
+ [49/500] 2011_09_26_drive_0064_0000000396
+ [50/500] 2011_09_26_drive_0084_0000000140
+ [51/500] 2011_09_26_drive_0059_0000000302
+ [52/500] 2011_10_03_drive_0027_0000003811
+ [53/500] 2011_09_30_drive_0016_0000000143
+ [54/500] 2011_09_26_drive_0036_0000000768
+ [55/500] 2011_09_26_drive_0117_0000000182
+ [56/500] 2011_09_28_drive_0002_0000000045
+ [57/500] 2011_09_30_drive_0018_0000002247
+ [58/500] 2011_09_26_drive_0056_0000000011
+ [59/500] 2011_09_26_drive_0117_0000000572
+ [60/500] 2011_09_26_drive_0059_0000000056
+ [61/500] 2011_10_03_drive_0027_0000001458
+ [62/500] 2011_09_26_drive_0013_0000000085
+ [63/500] 2011_09_26_drive_0106_0000000075
+ [64/500] 2011_09_26_drive_0064_0000000044
+ [65/500] 2011_09_29_drive_0071_0000000915
+ [66/500] 2011_09_26_drive_0056_0000000242
+ [67/500] 2011_09_29_drive_0071_0000000288
+ [68/500] 2011_09_26_drive_0020_0000000063
+ [69/500] 2011_09_30_drive_0018_0000000856
+ [70/500] 2011_09_26_drive_0096_0000000190
+ [71/500] 2011_09_26_drive_0046_0000000090
+ [72/500] 2011_10_03_drive_0047_0000000192
+ [73/500] 2011_09_26_drive_0046_0000000010
+ [74/500] 2011_09_26_drive_0029_0000000338
+ [75/500] 2011_09_26_drive_0056_0000000154
+ [76/500] 2011_09_26_drive_0117_0000000416
+ [77/500] 2011_09_26_drive_0013_0000000070
+ [78/500] 2011_09_26_drive_0052_0000000006
+ [79/500] 2011_09_26_drive_0093_0000000112
+ [80/500] 2011_09_26_drive_0027_0000000049
+ [81/500] 2011_09_26_drive_0023_0000000270
+ [82/500] 2011_09_26_drive_0020_0000000015
+ [83/500] 2011_09_26_drive_0084_0000000179
+ [84/500] 2011_09_26_drive_0013_0000000115
+ [85/500] 2011_09_26_drive_0023_0000000252
+ [86/500] 2011_09_26_drive_0052_0000000030
+ [87/500] 2011_10_03_drive_0027_0000003087
+ [88/500] 2011_09_26_drive_0029_0000000140
+ [89/500] 2011_09_26_drive_0064_0000000418
+ [90/500] 2011_09_26_drive_0027_0000000175
+ [91/500] 2011_09_26_drive_0106_0000000139
+ [92/500] 2011_09_26_drive_0101_0000000658
+ [93/500] 2011_09_26_drive_0117_0000000468
+ [94/500] 2011_09_28_drive_0002_0000000030
+ [95/500] 2011_09_26_drive_0002_0000000036
+ [96/500] 2011_09_26_drive_0046_0000000045
+ [97/500] 2011_09_26_drive_0059_0000000316
+ [98/500] 2011_09_26_drive_0009_0000000080
+ [99/500] 2011_09_26_drive_0009_0000000016
+ [100/500] 2011_09_26_drive_0101_0000000556
+ [101/500] 2011_09_26_drive_0013_0000000010
+ [102/500] 2011_09_26_drive_0106_0000000131
+ [103/500] 2011_09_26_drive_0117_0000000026
+ [104/500] 2011_09_26_drive_0106_0000000211
+ [105/500] 2011_09_26_drive_0101_0000000114
+ [106/500] 2011_09_26_drive_0096_0000000038
+ [107/500] 2011_09_26_drive_0084_0000000309
+ [108/500] 2011_10_03_drive_0027_0000002363
+ [109/500] 2011_09_26_drive_0064_0000000440
+ [110/500] 2011_09_30_drive_0018_0000001284
+ [111/500] 2011_10_03_drive_0027_0000001277
+ [112/500] 2011_09_28_drive_0002_0000000051
+ [113/500] 2011_09_26_drive_0046_0000000115
+ [114/500] 2011_09_26_drive_0002_0000000012
+ [115/500] 2011_09_26_drive_0013_0000000110
+ [116/500] 2011_09_26_drive_0086_0000000331
+ [117/500] 2011_09_26_drive_0101_0000000080
+ [118/500] 2011_09_26_drive_0046_0000000065
+ [119/500] 2011_09_30_drive_0027_0000000164
+ [120/500] 2011_09_26_drive_0013_0000000040
+ [121/500] 2011_09_26_drive_0056_0000000187
+ [122/500] 2011_09_26_drive_0086_0000000196
+ [123/500] 2011_09_26_drive_0020_0000000057
+ [124/500] 2011_09_26_drive_0101_0000000182
+ [125/500] 2011_09_26_drive_0023_0000000306
+ [126/500] 2011_09_26_drive_0056_0000000088
+ [127/500] 2011_10_03_drive_0047_0000000320
+ [128/500] 2011_09_26_drive_0086_0000000115
+ [129/500] 2011_09_26_drive_0023_0000000360
+ [130/500] 2011_09_26_drive_0020_0000000036
+ [131/500] 2011_09_26_drive_0056_0000000176
+ [132/500] 2011_09_26_drive_0117_0000000156
+ [133/500] 2011_09_26_drive_0036_0000000256
+ [134/500] 2011_09_29_drive_0071_0000000771
+ [135/500] 2011_09_26_drive_0052_0000000046
+ [136/500] 2011_09_28_drive_0002_0000000006
+ [137/500] 2011_10_03_drive_0047_0000000480
+ [138/500] 2011_09_26_drive_0027_0000000007
+ [139/500] 2011_09_26_drive_0056_0000000077
+ [140/500] 2011_09_28_drive_0002_0000000036
+ [141/500] 2011_09_26_drive_0009_0000000308
+ [142/500] 2011_09_26_drive_0056_0000000022
+ [143/500] 2011_09_26_drive_0056_0000000165
+ [144/500] 2011_09_26_drive_0086_0000000088
+ [145/500] 2011_09_26_drive_0020_0000000018
+ [146/500] 2011_09_26_drive_0029_0000000098
+ [147/500] 2011_10_03_drive_0047_0000000512
+ [148/500] 2011_09_26_drive_0084_0000000127
+ [149/500] 2011_09_30_drive_0027_0000000041
+ [150/500] 2011_09_29_drive_0071_0000000576
+ [151/500] 2011_09_26_drive_0106_0000000099
+ [152/500] 2011_09_26_drive_0106_0000000179
+ [153/500] 2011_09_26_drive_0101_0000000896
+ [154/500] 2011_09_26_drive_0036_0000000480
+ [155/500] 2011_09_26_drive_0093_0000000128
+ [156/500] 2011_09_26_drive_0029_0000000014
+ [157/500] 2011_09_26_drive_0064_0000000242
+ [158/500] 2011_09_26_drive_0056_0000000209
+ [159/500] 2011_09_26_drive_0027_0000000098
+ [160/500] 2011_09_26_drive_0056_0000000121
+ [161/500] 2011_09_26_drive_0086_0000000358
+ [162/500] 2011_09_26_drive_0009_0000000292
+ [163/500] 2011_09_26_drive_0101_0000000386
+ [164/500] 2011_09_28_drive_0002_0000000084
+ [165/500] 2011_09_26_drive_0117_0000000546
+ [166/500] 2011_09_26_drive_0117_0000000494
+ [167/500] 2011_10_03_drive_0027_0000000543
+ [168/500] 2011_10_03_drive_0047_0000000064
+ [169/500] 2011_09_26_drive_0020_0000000042
+ [170/500] 2011_09_26_drive_0046_0000000095
+ [171/500] 2011_09_26_drive_0093_0000000192
+ [172/500] 2011_09_26_drive_0059_0000000344
+ [173/500] 2011_09_28_drive_0002_0000000078
+ [174/500] 2011_09_28_drive_0002_0000000087
+ [175/500] 2011_09_26_drive_0023_0000000468
+ [176/500] 2011_09_26_drive_0029_0000000268
+ [177/500] 2011_10_03_drive_0047_0000000032
+ [178/500] 2011_09_30_drive_0018_0000002419
+ [179/500] 2011_09_28_drive_0002_0000000057
+ [180/500] 2011_10_03_drive_0047_0000000672
+ [181/500] 2011_10_03_drive_0027_0000002544
+ [182/500] 2011_09_26_drive_0002_0000000015
+ [183/500] 2011_09_26_drive_0027_0000000182
+ [184/500] 2011_09_26_drive_0084_0000000218
+ [185/500] 2011_10_03_drive_0027_0000001639
+ [186/500] 2011_09_26_drive_0093_0000000417
+ [187/500] 2011_09_26_drive_0096_0000000456
+ [188/500] 2011_10_03_drive_0047_0000000416
+ [189/500] 2011_09_26_drive_0086_0000000034
+ [190/500] 2011_09_26_drive_0096_0000000247
+ [191/500] 2011_09_26_drive_0096_0000000209
+ [192/500] 2011_09_29_drive_0071_0000000144
+ [193/500] 2011_09_26_drive_0084_0000000270
+ [194/500] 2011_09_26_drive_0101_0000000284
+ [195/500] 2011_09_29_drive_0071_0000000036
+ [196/500] 2011_09_29_drive_0071_0000000360
+ [197/500] 2011_09_26_drive_0086_0000000304
+ [198/500] 2011_09_26_drive_0013_0000000065
+ [199/500] 2011_09_26_drive_0093_0000000160
+ [200/500] 2011_09_26_drive_0036_0000000064
+ [201/500] 2011_09_26_drive_0036_0000000160
+ [202/500] 2011_09_26_drive_0027_0000000042
+ [203/500] 2011_09_26_drive_0059_0000000126
+ [204/500] 2011_09_26_drive_0002_0000000060
+ [205/500] 2011_10_03_drive_0027_0000002725
+ [206/500] 2011_09_26_drive_0036_0000000096
+ [207/500] 2011_09_26_drive_0013_0000000100
+ [208/500] 2011_09_26_drive_0013_0000000005
+ [209/500] 2011_09_26_drive_0052_0000000040
+ [210/500] 2011_09_26_drive_0020_0000000072
+ [211/500] 2011_10_03_drive_0027_0000004354
+ [212/500] 2011_09_26_drive_0029_0000000380
+ [213/500] 2011_09_26_drive_0064_0000000022
+ [214/500] 2011_09_26_drive_0027_0000000084
+ [215/500] 2011_09_26_drive_0117_0000000130
+ [216/500] 2011_09_26_drive_0052_0000000012
+ [217/500] 2011_09_28_drive_0002_0000000063
+ [218/500] 2011_09_30_drive_0018_0000002526
+ [219/500] 2011_09_26_drive_0002_0000000054
+ [220/500] 2011_09_26_drive_0101_0000000828
+ [221/500] 2011_10_03_drive_0027_0000000915
+ [222/500] 2011_09_29_drive_0071_0000000540
+ [223/500] 2011_10_03_drive_0027_0000004173
+ [224/500] 2011_09_29_drive_0071_0000000396
+ [225/500] 2011_09_26_drive_0046_0000000105
+ [226/500] 2011_09_26_drive_0036_0000000704
+ [227/500] 2011_09_26_drive_0059_0000000140
+ [228/500] 2011_09_26_drive_0052_0000000020
+ [229/500] 2011_09_26_drive_0093_0000000256
+ [230/500] 2011_09_26_drive_0027_0000000168
+ [231/500] 2011_09_26_drive_0096_0000000418
+ [232/500] 2011_09_26_drive_0096_0000000114
+ [233/500] 2011_10_03_drive_0027_0000003449
+ [234/500] 2011_09_30_drive_0027_0000000574
+ [235/500] 2011_09_26_drive_0106_0000000203
+ [236/500] 2011_09_30_drive_0027_0000000410
+ [237/500] 2011_09_30_drive_0027_0000000917
+ [238/500] 2011_09_26_drive_0117_0000000260
+ [239/500] 2011_09_26_drive_0093_0000000016
+ [240/500] 2011_09_26_drive_0059_0000000238
+ [241/500] 2011_09_26_drive_0036_0000000672
+ [242/500] 2011_09_26_drive_0084_0000000049
+ [243/500] 2011_09_26_drive_0002_0000000021
+ [244/500] 2011_09_30_drive_0016_0000000165
+ [245/500] 2011_09_26_drive_0036_0000000224
+ [246/500] 2011_09_26_drive_0093_0000000401
+ [247/500] 2011_09_26_drive_0046_0000000070
+ [248/500] 2011_09_26_drive_0106_0000000195
+ [249/500] 2011_09_26_drive_0086_0000000493
+ [250/500] 2011_09_26_drive_0096_0000000057
+ [251/500] 2011_10_03_drive_0027_0000003630
+ [252/500] 2011_09_26_drive_0052_0000000008
+ [253/500] 2011_09_26_drive_0009_0000000064
+ [254/500] 2011_09_26_drive_0009_0000000212
+ [255/500] 2011_09_26_drive_0093_0000000337
+ [256/500] 2011_09_26_drive_0009_0000000128
+ [257/500] 2011_09_26_drive_0064_0000000352
+ [258/500] 2011_09_26_drive_0101_0000000522
+ [259/500] 2011_09_26_drive_0056_0000000231
+ [260/500] 2011_09_26_drive_0056_0000000143
+ [261/500] 2011_09_26_drive_0027_0000000035
+ [262/500] 2011_09_26_drive_0084_0000000322
+ [263/500] 2011_09_26_drive_0002_0000000048
+ [264/500] 2011_09_26_drive_0117_0000000624
+ [265/500] 2011_09_26_drive_0029_0000000168
+ [266/500] 2011_09_26_drive_0052_0000000038
+ [267/500] 2011_09_26_drive_0059_0000000358
+ [268/500] 2011_09_30_drive_0027_0000000369
+ [269/500] 2011_09_26_drive_0106_0000000123
+ [270/500] 2011_09_26_drive_0002_0000000039
+ [271/500] 2011_09_26_drive_0020_0000000075
+ [272/500] 2011_09_26_drive_0009_0000000260
+ [273/500] 2011_09_26_drive_0027_0000000028
+ [274/500] 2011_09_26_drive_0036_0000000448
+ [275/500] 2011_09_26_drive_0023_0000000432
+ [276/500] 2011_09_26_drive_0009_0000000372
+ [277/500] 2011_09_26_drive_0064_0000000528
+ [278/500] 2011_09_26_drive_0036_0000000416
+ [279/500] 2011_09_26_drive_0101_0000000692
+ [280/500] 2011_09_26_drive_0048_0000000009
+ [281/500] 2011_09_28_drive_0002_0000000024
+ [282/500] 2011_09_26_drive_0027_0000000070
+ [283/500] 2011_09_26_drive_0052_0000000014
+ [284/500] 2011_10_03_drive_0047_0000000224
+ [285/500] 2011_09_26_drive_0084_0000000153
+ [286/500] 2011_09_26_drive_0059_0000000210
+ [287/500] 2011_09_26_drive_0020_0000000006
+ [288/500] 2011_09_26_drive_0101_0000000454
+ [289/500] 2011_09_26_drive_0101_0000000420
+ [290/500] 2011_09_29_drive_0071_0000000180
+ [291/500] 2011_09_26_drive_0093_0000000353
+ [292/500] 2011_09_30_drive_0018_0000001819
+ [293/500] 2011_09_28_drive_0002_0000000072
+ [294/500] 2011_09_26_drive_0093_0000000208
+ [295/500] 2011_09_26_drive_0117_0000000052
+ [296/500] 2011_09_26_drive_0086_0000000385
+ [297/500] 2011_09_30_drive_0018_0000001498
+ [298/500] 2011_09_26_drive_0084_0000000088
+ [299/500] 2011_09_29_drive_0071_0000000432
+ [300/500] 2011_09_26_drive_0096_0000000380
+ [301/500] 2011_09_26_drive_0036_0000000032
+ [302/500] 2011_10_03_drive_0047_0000000448
+ [303/500] 2011_09_26_drive_0029_0000000394
+ [304/500] 2011_09_26_drive_0101_0000000862
+ [305/500] 2011_09_26_drive_0048_0000000011
+ [306/500] 2011_09_26_drive_0002_0000000063
+ [307/500] 2011_09_26_drive_0009_0000000228
+ [308/500] 2011_09_26_drive_0106_0000000171
+ [309/500] 2011_09_26_drive_0056_0000000066
+ [310/500] 2011_09_30_drive_0018_0000001926
+ [311/500] 2011_09_26_drive_0046_0000000050
+ [312/500] 2011_09_26_drive_0027_0000000063
+ [313/500] 2011_09_26_drive_0013_0000000120
+ [314/500] 2011_09_26_drive_0009_0000000112
+ [315/500] 2011_09_26_drive_0093_0000000321
+ [316/500] 2011_09_26_drive_0027_0000000119
+ [317/500] 2011_09_26_drive_0029_0000000084
+ [318/500] 2011_09_26_drive_0027_0000000126
+ [319/500] 2011_09_26_drive_0020_0000000066
+ [320/500] 2011_09_26_drive_0052_0000000026
+ [321/500] 2011_09_26_drive_0027_0000000021
+ [322/500] 2011_09_26_drive_0023_0000000288
+ [323/500] 2011_09_26_drive_0056_0000000099
+ [324/500] 2011_10_03_drive_0027_0000004535
+ [325/500] 2011_09_30_drive_0018_0000002740
+ [326/500] 2011_09_26_drive_0036_0000000128
+ [327/500] 2011_09_26_drive_0086_0000000142
+ [328/500] 2011_09_30_drive_0027_0000000246
+ [329/500] 2011_09_26_drive_0020_0000000060
+ [330/500] 2011_09_28_drive_0002_0000000012
+ [331/500] 2011_09_26_drive_0093_0000000096
+ [332/500] 2011_09_26_drive_0117_0000000598
+ [333/500] 2011_09_29_drive_0071_0000000324
+ [334/500] 2011_09_26_drive_0064_0000000110
+ [335/500] 2011_09_26_drive_0059_0000000182
+ [336/500] 2011_09_26_drive_0093_0000000305
+ [337/500] 2011_09_26_drive_0046_0000000005
+ [338/500] 2011_09_26_drive_0059_0000000028
+ [339/500] 2011_09_26_drive_0027_0000000091
+ [340/500] 2011_09_26_drive_0093_0000000064
+ [341/500] 2011_09_30_drive_0027_0000000205
+ [342/500] 2011_09_29_drive_0071_0000000612
+ [343/500] 2011_09_26_drive_0036_0000000608
+ [344/500] 2011_09_26_drive_0009_0000000160
+ [345/500] 2011_09_26_drive_0084_0000000348
+ [346/500] 2011_09_26_drive_0009_0000000196
+ [347/500] 2011_09_29_drive_0071_0000000735
+ [348/500] 2011_09_26_drive_0013_0000000135
+ [349/500] 2011_09_26_drive_0117_0000000078
+ [350/500] 2011_09_28_drive_0002_0000000009
+ [351/500] 2011_09_26_drive_0029_0000000196
+ [352/500] 2011_09_26_drive_0046_0000000015
+ [353/500] 2011_09_26_drive_0096_0000000076
+ [354/500] 2011_09_26_drive_0117_0000000364
+ [355/500] 2011_09_30_drive_0018_0000000107
+ [356/500] 2011_09_26_drive_0096_0000000228
+ [357/500] 2011_09_26_drive_0020_0000000069
+ [358/500] 2011_09_30_drive_0018_0000002140
+ [359/500] 2011_09_30_drive_0027_0000000615
+ [360/500] 2011_09_30_drive_0027_0000001081
+ [361/500] 2011_09_26_drive_0052_0000000036
+ [362/500] 2011_09_30_drive_0027_0000000753
+ [363/500] 2011_09_26_drive_0023_0000000018
+ [364/500] 2011_09_26_drive_0059_0000000014
+ [365/500] 2011_09_30_drive_0027_0000001040
+ [366/500] 2011_09_26_drive_0046_0000000100
+ [367/500] 2011_09_26_drive_0064_0000000374
+ [368/500] 2011_09_30_drive_0018_0000001391
+ [369/500] 2011_10_03_drive_0047_0000000736
+ [370/500] 2011_09_30_drive_0018_0000000749
+ [371/500] 2011_09_26_drive_0036_0000000384
+ [372/500] 2011_09_26_drive_0052_0000000032
+ [373/500] 2011_09_26_drive_0027_0000000112
+ [374/500] 2011_09_29_drive_0071_0000000807
+ [375/500] 2011_09_26_drive_0084_0000000283
+ [376/500] 2011_09_26_drive_0101_0000000794
+ [377/500] 2011_09_26_drive_0002_0000000006
+ [378/500] 2011_09_26_drive_0002_0000000042
+ [379/500] 2011_09_26_drive_0096_0000000019
+ [380/500] 2011_09_26_drive_0002_0000000024
+ [381/500] 2011_09_28_drive_0002_0000000048
+ [382/500] 2011_09_26_drive_0106_0000000091
+ [383/500] 2011_09_30_drive_0016_0000000011
+ [384/500] 2011_10_03_drive_0027_0000000734
+ [385/500] 2011_09_30_drive_0016_0000000253
+ [386/500] 2011_09_26_drive_0020_0000000078
+ [387/500] 2011_09_26_drive_0106_0000000219
+ [388/500] 2011_09_26_drive_0064_0000000088
+ [389/500] 2011_09_30_drive_0016_0000000077
+ [390/500] 2011_09_26_drive_0020_0000000027
+ [391/500] 2011_09_26_drive_0013_0000000035
+ [392/500] 2011_09_26_drive_0086_0000000223
+ [393/500] 2011_09_26_drive_0084_0000000192
+ [394/500] 2011_09_30_drive_0027_0000000287
+ [395/500] 2011_09_26_drive_0064_0000000550
+ [396/500] 2011_09_26_drive_0093_0000000144
+ [397/500] 2011_09_26_drive_0086_0000000466
+ [398/500] 2011_09_26_drive_0117_0000000338
+ [399/500] 2011_09_26_drive_0101_0000000352
+ [400/500] 2011_09_26_drive_0029_0000000056
+ [401/500] 2011_09_26_drive_0036_0000000192
+ [402/500] 2011_09_26_drive_0086_0000000682
+ [403/500] 2011_09_26_drive_0064_0000000176
+ [404/500] 2011_09_29_drive_0071_0000000951
+ [405/500] 2011_09_26_drive_0046_0000000060
+ [406/500] 2011_09_26_drive_0106_0000000155
+ [407/500] 2011_09_26_drive_0084_0000000205
+ [408/500] 2011_09_26_drive_0084_0000000361
+ [409/500] 2011_09_26_drive_0084_0000000244
+ [410/500] 2011_09_26_drive_0029_0000000182
+ [411/500] 2011_09_30_drive_0027_0000000451
+ [412/500] 2011_09_26_drive_0009_0000000388
+ [413/500] 2011_09_26_drive_0101_0000000726
+ [414/500] 2011_09_26_drive_0029_0000000310
+ [415/500] 2011_09_26_drive_0023_0000000108
+ [416/500] 2011_09_29_drive_0071_0000000504
+ [417/500] 2011_09_26_drive_0023_0000000342
+ [418/500] 2011_09_26_drive_0117_0000000390
+ [419/500] 2011_09_26_drive_0048_0000000012
+ [420/500] 2011_09_26_drive_0084_0000000075
+ [421/500] 2011_09_26_drive_0036_0000000320
+ [422/500] 2011_09_26_drive_0052_0000000044
446
+ [423/500] 2011_09_29_drive_0071_0000000216
447
+ [424/500] 2011_09_26_drive_0084_0000000257
448
+ [425/500] 2011_09_26_drive_0101_0000000590
449
+ [426/500] 2011_09_26_drive_0027_0000000161
450
+ [427/500] 2011_09_30_drive_0016_0000000066
451
+ [428/500] 2011_09_26_drive_0084_0000000114
452
+ [429/500] 2011_09_26_drive_0023_0000000378
453
+ [430/500] 2011_09_26_drive_0101_0000000624
454
+ [431/500] 2011_09_30_drive_0016_0000000220
455
+ [432/500] 2011_09_29_drive_0071_0000000108
456
+ [433/500] 2011_09_26_drive_0056_0000000055
457
+ [434/500] 2011_09_28_drive_0002_0000000033
458
+ [435/500] 2011_09_26_drive_0048_0000000008
459
+ [436/500] 2011_09_30_drive_0027_0000000656
460
+ [437/500] 2011_09_26_drive_0106_0000000035
461
+ [438/500] 2011_09_26_drive_0101_0000000488
462
+ [439/500] 2011_09_26_drive_0096_0000000266
463
+ [440/500] 2011_09_26_drive_0002_0000000009
464
+ [441/500] 2011_09_30_drive_0016_0000000187
465
+ [442/500] 2011_09_26_drive_0106_0000000043
466
+ [443/500] 2011_10_03_drive_0047_0000000096
467
+ [444/500] 2011_09_26_drive_0056_0000000044
468
+ [445/500] 2011_09_26_drive_0009_0000000096
469
+ [446/500] 2011_09_26_drive_0009_0000000324
470
+ [447/500] 2011_09_26_drive_0029_0000000070
471
+ [448/500] 2011_09_26_drive_0002_0000000027
472
+ [449/500] 2011_09_26_drive_0048_0000000007
473
+ [450/500] 2011_09_26_drive_0052_0000000010
474
+ [451/500] 2011_09_26_drive_0052_0000000052
475
+ [452/500] 2011_09_26_drive_0084_0000000374
476
+ [453/500] 2011_09_26_drive_0056_0000000033
477
+ [454/500] 2011_09_26_drive_0096_0000000399
478
+ [455/500] 2011_09_26_drive_0084_0000000062
479
+ [456/500] 2011_09_26_drive_0059_0000000274
480
+ [457/500] 2011_09_30_drive_0016_0000000033
481
+ [458/500] 2011_09_30_drive_0027_0000000328
482
+ [459/500] 2011_09_30_drive_0016_0000000088
483
+ [460/500] 2011_09_26_drive_0023_0000000324
484
+ [461/500] 2011_09_26_drive_0064_0000000154
485
+ [462/500] 2011_09_26_drive_0048_0000000010
486
+ [463/500] 2011_09_26_drive_0002_0000000033
487
+ [464/500] 2011_09_28_drive_0002_0000000018
488
+ [465/500] 2011_09_26_drive_0023_0000000036
489
+ [466/500] 2011_09_26_drive_0013_0000000125
490
+ [467/500] 2011_09_26_drive_0056_0000000253
491
+ [468/500] 2011_09_26_drive_0046_0000000030
492
+ [469/500] 2011_09_26_drive_0013_0000000105
493
+ [470/500] 2011_09_26_drive_0048_0000000014
494
+ [471/500] 2011_09_26_drive_0027_0000000056
495
+ [472/500] 2011_09_26_drive_0020_0000000051
496
+ [473/500] 2011_09_26_drive_0052_0000000016
497
+ [474/500] 2011_09_26_drive_0027_0000000077
498
+ [475/500] 2011_09_26_drive_0086_0000000007
499
+ [476/500] 2011_09_26_drive_0064_0000000330
500
+ [477/500] 2011_09_26_drive_0106_0000000067
501
+ [478/500] 2011_09_26_drive_0064_0000000506
502
+ [479/500] 2011_10_03_drive_0047_0000000608
503
+ [480/500] 2011_09_26_drive_0059_0000000042
504
+ [481/500] 2011_09_26_drive_0009_0000000340
505
+ [482/500] 2011_09_26_drive_0029_0000000324
506
+ [483/500] 2011_09_26_drive_0046_0000000020
507
+ [484/500] 2011_09_26_drive_0086_0000000061
508
+ [485/500] 2011_09_30_drive_0016_0000000154
509
+ [486/500] 2011_09_26_drive_0106_0000000163
510
+ [487/500] 2011_09_30_drive_0018_0000002633
511
+ [488/500] 2011_09_30_drive_0016_0000000231
512
+ [489/500] 2011_09_26_drive_0020_0000000045
513
+ [490/500] 2011_09_28_drive_0002_0000000090
514
+ [491/500] 2011_09_28_drive_0002_0000000060
515
+ [492/500] 2011_10_03_drive_0047_0000000160
516
+ [493/500] 2011_09_26_drive_0036_0000000544
517
+ [494/500] 2011_09_26_drive_0086_0000000655
518
+ [495/500] 2011_09_26_drive_0101_0000000760
519
+ [496/500] 2011_09_30_drive_0018_0000001712
520
+ [497/500] 2011_09_26_drive_0096_0000000152
521
+ [498/500] 2011_09_26_drive_0036_0000000288
522
+ [499/500] 2011_09_28_drive_0002_0000000021
523
+ [500/500] 2011_09_30_drive_0018_0000000963
524
+
525
+ Processing 500 DDAD samples...
+ [1/500] 000508_CAMERA_05
+ [2/500] 001971_CAMERA_09
+ [3/500] 003267_CAMERA_06
+ [4/500] 001726_CAMERA_09
+ [5/500] 002738_CAMERA_05
+ [6/500] 000339_CAMERA_01
+ [7/500] 000104_CAMERA_05
+ [8/500] 001069_CAMERA_06
+ [9/500] 003710_CAMERA_06
+ [10/500] 003376_CAMERA_05
+ [11/500] 000864_CAMERA_09
+ [12/500] 003894_CAMERA_06
+ [13/500] 002730_CAMERA_01
+ [14/500] 000125_CAMERA_05
+ [15/500] 002151_CAMERA_05
+ [16/500] 002147_CAMERA_09
+ [17/500] 003924_CAMERA_09
+ [18/500] 002818_CAMERA_01
+ [19/500] 003451_CAMERA_09
+ [20/500] 001686_CAMERA_05
+ [21/500] 002310_CAMERA_01
+ [22/500] 003416_CAMERA_05
+ [23/500] 003797_CAMERA_06
+ [24/500] 001782_CAMERA_05
+ [25/500] 002078_CAMERA_09
+ [26/500] 001568_CAMERA_05
+ [27/500] 002371_CAMERA_06
+ [28/500] 001397_CAMERA_06
+ [29/500] 000278_CAMERA_05
+ [30/500] 000101_CAMERA_09
+ [31/500] 001674_CAMERA_09
+ [32/500] 001627_CAMERA_01
+ [33/500] 002721_CAMERA_05
+ [34/500] 002251_CAMERA_01
+ [35/500] 000127_CAMERA_06
+ [36/500] 000470_CAMERA_05
+ [37/500] 000865_CAMERA_05
+ [38/500] 002088_CAMERA_01
+ [39/500] 002350_CAMERA_09
+ [40/500] 002461_CAMERA_01
+ [41/500] 001049_CAMERA_01
+ [42/500] 001989_CAMERA_01
+ [43/500] 002291_CAMERA_05
+ [44/500] 003633_CAMERA_06
+ [45/500] 003613_CAMERA_06
+ [46/500] 002393_CAMERA_05
+ [47/500] 001589_CAMERA_05
+ [48/500] 001893_CAMERA_09
+ [49/500] 000106_CAMERA_06
+ [50/500] 001136_CAMERA_01
+ [51/500] 000131_CAMERA_09
+ [52/500] 001886_CAMERA_01
+ [53/500] 001700_CAMERA_05
+ [54/500] 001341_CAMERA_06
+ [55/500] 003728_CAMERA_09
+ [56/500] 002019_CAMERA_01
+ [57/500] 000274_CAMERA_06
+ [58/500] 000332_CAMERA_06
+ [59/500] 002214_CAMERA_01
+ [60/500] 000256_CAMERA_06
+ [61/500] 001944_CAMERA_06
+ [62/500] 000654_CAMERA_01
+ [63/500] 001085_CAMERA_06
+ [64/500] 002741_CAMERA_01
+ [65/500] 001520_CAMERA_06
+ [66/500] 001033_CAMERA_05
+ [67/500] 002843_CAMERA_05
+ [68/500] 002282_CAMERA_01
+ [69/500] 000258_CAMERA_05
+ [70/500] 000580_CAMERA_01
+ [71/500] 000277_CAMERA_05
+ [72/500] 002670_CAMERA_06
+ [73/500] 003761_CAMERA_05
+ [74/500] 000605_CAMERA_06
+ [75/500] 003725_CAMERA_06
+ [76/500] 000154_CAMERA_01
+ [77/500] 002659_CAMERA_06
+ [78/500] 002283_CAMERA_05
+ [79/500] 003312_CAMERA_06
+ [80/500] 001888_CAMERA_05
+ [81/500] 001473_CAMERA_06
+ [82/500] 002265_CAMERA_01
+ [83/500] 000389_CAMERA_09
+ [84/500] 001111_CAMERA_09
+ [85/500] 002484_CAMERA_09
+ [86/500] 000998_CAMERA_01
+ [87/500] 003584_CAMERA_01
+ [88/500] 002328_CAMERA_01
+ [89/500] 003337_CAMERA_05
+ [90/500] 001702_CAMERA_09
+ [91/500] 003439_CAMERA_06
+ [92/500] 002552_CAMERA_05
+ [93/500] 003668_CAMERA_09
+ [94/500] 001998_CAMERA_05
+ [95/500] 003236_CAMERA_06
+ [96/500] 002696_CAMERA_05
+ [97/500] 001755_CAMERA_06
+ [98/500] 003544_CAMERA_01
+ [99/500] 001705_CAMERA_05
+ [100/500] 003830_CAMERA_01
+ [101/500] 001003_CAMERA_09
+ [102/500] 003294_CAMERA_06
+ [103/500] 003946_CAMERA_01
+ [104/500] 000216_CAMERA_05
+ [105/500] 000145_CAMERA_06
+ [106/500] 003890_CAMERA_05
+ [107/500] 000899_CAMERA_06
+ [108/500] 002849_CAMERA_01
+ [109/500] 003710_CAMERA_01
+ [110/500] 001474_CAMERA_09
+ [111/500] 001996_CAMERA_06
+ [112/500] 002833_CAMERA_09
+ [113/500] 002167_CAMERA_06
+ [114/500] 001274_CAMERA_05
+ [115/500] 002568_CAMERA_06
+ [116/500] 002417_CAMERA_06
+ [117/500] 002666_CAMERA_05
+ [118/500] 000809_CAMERA_06
+ [119/500] 001222_CAMERA_05
+ [120/500] 001379_CAMERA_01
+ [121/500] 002561_CAMERA_09
+ [122/500] 001055_CAMERA_09
+ [123/500] 002447_CAMERA_05
+ [124/500] 003042_CAMERA_09
+ [125/500] 000287_CAMERA_09
+ [126/500] 000422_CAMERA_09
+ [127/500] 001298_CAMERA_09
+ [128/500] 003617_CAMERA_09
+ [129/500] 001542_CAMERA_06
+ [130/500] 002100_CAMERA_06
+ [131/500] 001623_CAMERA_05
+ [132/500] 001289_CAMERA_09
+ [133/500] 001130_CAMERA_06
+ [134/500] 001892_CAMERA_06
+ [135/500] 000720_CAMERA_06
+ [136/500] 000222_CAMERA_09
+ [137/500] 000294_CAMERA_09
+ [138/500] 000625_CAMERA_05
+ [139/500] 003935_CAMERA_06
+ [140/500] 001163_CAMERA_01
+ [141/500] 003784_CAMERA_06
+ [142/500] 002344_CAMERA_01
+ [143/500] 001853_CAMERA_05
+ [144/500] 000468_CAMERA_06
+ [145/500] 002891_CAMERA_05
+ [146/500] 002498_CAMERA_06
+ [147/500] 002572_CAMERA_06
+ [148/500] 002170_CAMERA_09
+ [149/500] 003146_CAMERA_09
+ [150/500] 002108_CAMERA_06
+ [151/500] 000959_CAMERA_05
+ [152/500] 001146_CAMERA_06
+ [153/500] 001222_CAMERA_09
+ [154/500] 002341_CAMERA_06
+ [155/500] 003135_CAMERA_05
+ [156/500] 000276_CAMERA_01
+ [157/500] 002875_CAMERA_05
+ [158/500] 000531_CAMERA_09
+ [159/500] 002916_CAMERA_01
+ [160/500] 003781_CAMERA_09
+ [161/500] 003309_CAMERA_01
+ [162/500] 002844_CAMERA_06
+ [163/500] 002778_CAMERA_06
+ [164/500] 001958_CAMERA_06
+ [165/500] 003231_CAMERA_06
+ [166/500] 000950_CAMERA_06
+ [167/500] 003253_CAMERA_09
+ [168/500] 000705_CAMERA_09
+ [169/500] 000260_CAMERA_05
+ [170/500] 001244_CAMERA_05
+ [171/500] 002928_CAMERA_06
+ [172/500] 003237_CAMERA_05
+ [173/500] 000464_CAMERA_05
+ [174/500] 003936_CAMERA_06
+ [175/500] 000598_CAMERA_01
+ [176/500] 001979_CAMERA_06
+ [177/500] 000791_CAMERA_05
+ [178/500] 002518_CAMERA_05
+ [179/500] 002263_CAMERA_01
+ [180/500] 001374_CAMERA_05
+ [181/500] 000704_CAMERA_06
+ [182/500] 003369_CAMERA_01
+ [183/500] 003794_CAMERA_05
+ [184/500] 002199_CAMERA_06
+ [185/500] 000629_CAMERA_09
+ [186/500] 001231_CAMERA_05
+ [187/500] 001614_CAMERA_05
+ [188/500] 001952_CAMERA_01
+ [189/500] 002494_CAMERA_01
+ [190/500] 003162_CAMERA_06
+ [191/500] 001435_CAMERA_05
+ [192/500] 001509_CAMERA_06
+ [193/500] 002298_CAMERA_09
+ [194/500] 002435_CAMERA_01
+ [195/500] 000805_CAMERA_05
+ [196/500] 003196_CAMERA_09
+ [197/500] 003894_CAMERA_09
+ [198/500] 000639_CAMERA_06
+ [199/500] 000152_CAMERA_09
+ [200/500] 001108_CAMERA_06
+ [201/500] 001399_CAMERA_01
+ [202/500] 000187_CAMERA_09
+ [203/500] 001839_CAMERA_06
+ [204/500] 003150_CAMERA_01
+ [205/500] 001194_CAMERA_05
+ [206/500] 003586_CAMERA_09
+ [207/500] 003940_CAMERA_06
+ [208/500] 001552_CAMERA_01
+ [209/500] 003391_CAMERA_06
+ [210/500] 003113_CAMERA_01
+ [211/500] 001392_CAMERA_06
+ [212/500] 000615_CAMERA_06
+ [213/500] 000442_CAMERA_05
+ [214/500] 001577_CAMERA_05
+ [215/500] 002074_CAMERA_05
+ [216/500] 000958_CAMERA_01
+ [217/500] 003523_CAMERA_01
+ [218/500] 000661_CAMERA_05
+ [219/500] 002221_CAMERA_09
+ [220/500] 003078_CAMERA_01
+ [221/500] 002235_CAMERA_05
+ [222/500] 001975_CAMERA_01
+ [223/500] 000301_CAMERA_05
+ [224/500] 001480_CAMERA_01
+ [225/500] 000505_CAMERA_05
+ [226/500] 002834_CAMERA_09
+ [227/500] 000839_CAMERA_06
+ [228/500] 002476_CAMERA_09
+ [229/500] 003159_CAMERA_09
+ [230/500] 002373_CAMERA_01
+ [231/500] 000763_CAMERA_09
+ [232/500] 000068_CAMERA_06
+ [233/500] 002538_CAMERA_09
+ [234/500] 001930_CAMERA_06
+ [235/500] 003650_CAMERA_01
+ [236/500] 002627_CAMERA_05
+ [237/500] 001525_CAMERA_01
+ [238/500] 002813_CAMERA_05
+ [239/500] 001784_CAMERA_05
+ [240/500] 001974_CAMERA_05
+ [241/500] 002403_CAMERA_09
+ [242/500] 002543_CAMERA_06
+ [243/500] 002394_CAMERA_05
+ [244/500] 002722_CAMERA_09
+ [245/500] 001110_CAMERA_01
+ [246/500] 000777_CAMERA_06
+ [247/500] 002345_CAMERA_01
+ [248/500] 000821_CAMERA_09
+ [249/500] 003320_CAMERA_01
+ [250/500] 001931_CAMERA_09
+ [251/500] 001400_CAMERA_06
+ [252/500] 003302_CAMERA_01
+ [253/500] 001568_CAMERA_01
+ [254/500] 001631_CAMERA_06
+ [255/500] 002773_CAMERA_06
+ [256/500] 003836_CAMERA_05
+ [257/500] 002305_CAMERA_06
+ [258/500] 000305_CAMERA_06
+ [259/500] 002410_CAMERA_06
+ [260/500] 000254_CAMERA_05
+ [261/500] 002729_CAMERA_09
+ [262/500] 000187_CAMERA_05
+ [263/500] 000393_CAMERA_05
+ [264/500] 000061_CAMERA_06
+ [265/500] 000397_CAMERA_09
+ [266/500] 003896_CAMERA_06
+ [267/500] 000417_CAMERA_06
+ [268/500] 001703_CAMERA_09
+ [269/500] 002732_CAMERA_09
+ [270/500] 002513_CAMERA_05
+ [271/500] 002370_CAMERA_06
+ [272/500] 002476_CAMERA_01
+ [273/500] 002368_CAMERA_05
+ [274/500] 003159_CAMERA_06
+ [275/500] 003757_CAMERA_09
+ [276/500] 000489_CAMERA_06
+ [277/500] 001373_CAMERA_09
+ [278/500] 002718_CAMERA_06
+ [279/500] 003056_CAMERA_05
+ [280/500] 002013_CAMERA_09
+ [281/500] 001741_CAMERA_01
+ [282/500] 002122_CAMERA_01
+ [283/500] 001205_CAMERA_05
+ [284/500] 001971_CAMERA_06
+ [285/500] 001704_CAMERA_01
+ [286/500] 001582_CAMERA_06
+ [287/500] 003500_CAMERA_05
+ [288/500] 003831_CAMERA_09
+ [289/500] 003536_CAMERA_01
+ [290/500] 003902_CAMERA_06
+ [291/500] 002796_CAMERA_01
+ [292/500] 001768_CAMERA_05
+ [293/500] 001849_CAMERA_09
+ [294/500] 000887_CAMERA_05
+ [295/500] 000906_CAMERA_05
+ [296/500] 002482_CAMERA_01
+ [297/500] 000713_CAMERA_06
+ [298/500] 003575_CAMERA_01
+ [299/500] 000610_CAMERA_09
+ [300/500] 001776_CAMERA_09
+ [301/500] 002187_CAMERA_05
+ [302/500] 001093_CAMERA_05
+ [303/500] 000553_CAMERA_01
+ [304/500] 001428_CAMERA_05
+ [305/500] 003679_CAMERA_09
+ [306/500] 002278_CAMERA_05
+ [307/500] 000678_CAMERA_09
+ [308/500] 001250_CAMERA_01
+ [309/500] 000732_CAMERA_06
+ [310/500] 003301_CAMERA_06
+ [311/500] 000265_CAMERA_09
+ [312/500] 003072_CAMERA_09
+ [313/500] 003913_CAMERA_06
+ [314/500] 000718_CAMERA_05
+ [315/500] 002625_CAMERA_01
+ [316/500] 002823_CAMERA_05
+ [317/500] 001471_CAMERA_09
+ [318/500] 003567_CAMERA_05
+ [319/500] 001407_CAMERA_01
+ [320/500] 002647_CAMERA_09
+ [321/500] 000864_CAMERA_01
+ [322/500] 002358_CAMERA_05
+ [323/500] 001175_CAMERA_06
+ [324/500] 001732_CAMERA_05
+ [325/500] 000112_CAMERA_06
+ [326/500] 003191_CAMERA_01
+ [327/500] 002382_CAMERA_09
+ [328/500] 000290_CAMERA_06
+ [329/500] 000568_CAMERA_01
+ [330/500] 003259_CAMERA_05
+ [331/500] 002091_CAMERA_06
+ [332/500] 002788_CAMERA_09
+ [333/500] 003881_CAMERA_01
+ [334/500] 003725_CAMERA_05
+ [335/500] 003497_CAMERA_05
+ [336/500] 002809_CAMERA_06
+ [337/500] 002945_CAMERA_01
+ [338/500] 002770_CAMERA_05
+ [339/500] 003192_CAMERA_05
+ [340/500] 001966_CAMERA_09
+ [341/500] 003366_CAMERA_09
+ [342/500] 003940_CAMERA_01
+ [343/500] 002831_CAMERA_05
+ [344/500] 001995_CAMERA_09
+ [345/500] 002649_CAMERA_01
+ [346/500] 000939_CAMERA_05
+ [347/500] 001142_CAMERA_06
+ [348/500] 000998_CAMERA_06
+ [349/500] 003856_CAMERA_09
+ [350/500] 003175_CAMERA_01
+ [351/500] 000949_CAMERA_05
+ [352/500] 002674_CAMERA_06
+ [353/500] 000050_CAMERA_01
+ [354/500] 000065_CAMERA_06
+ [355/500] 001497_CAMERA_09
+ [356/500] 000439_CAMERA_06
+ [357/500] 000826_CAMERA_09
+ [358/500] 001953_CAMERA_06
+ [359/500] 002549_CAMERA_06
+ [360/500] 003004_CAMERA_01
+ [361/500] 000258_CAMERA_01
+ [362/500] 001654_CAMERA_06
+ [363/500] 001913_CAMERA_05
+ [364/500] 002137_CAMERA_01
+ [365/500] 003300_CAMERA_01
+ [366/500] 001151_CAMERA_05
+ [367/500] 002896_CAMERA_01
+ [368/500] 001969_CAMERA_01
+ [369/500] 001488_CAMERA_01
+ [370/500] 003243_CAMERA_05
+ [371/500] 000886_CAMERA_05
+ [372/500] 003344_CAMERA_05
+ [373/500] 003821_CAMERA_05
+ [374/500] 001201_CAMERA_06
+ [375/500] 002291_CAMERA_09
+ [376/500] 000100_CAMERA_01
+ [377/500] 003792_CAMERA_09
+ [378/500] 003171_CAMERA_05
+ [379/500] 000930_CAMERA_05
+ [380/500] 002269_CAMERA_09
+ [381/500] 000757_CAMERA_05
+ [382/500] 003001_CAMERA_09
+ [383/500] 000016_CAMERA_05
+ [384/500] 000309_CAMERA_05
+ [385/500] 000717_CAMERA_01
+ [386/500] 002188_CAMERA_01
+ [387/500] 000148_CAMERA_05
+ [388/500] 001565_CAMERA_01
+ [389/500] 000432_CAMERA_05
+ [390/500] 000547_CAMERA_01
+ [391/500] 003624_CAMERA_06
+ [392/500] 000564_CAMERA_01
+ [393/500] 002013_CAMERA_06
+ [394/500] 001071_CAMERA_05
+ [395/500] 003256_CAMERA_01
+ [396/500] 002925_CAMERA_06
+ [397/500] 001275_CAMERA_05
+ [398/500] 003606_CAMERA_05
+ [399/500] 001630_CAMERA_05
+ [400/500] 002052_CAMERA_01
+ [401/500] 002419_CAMERA_06
+ [402/500] 001632_CAMERA_01
+ [403/500] 003522_CAMERA_09
+ [404/500] 000458_CAMERA_01
+ [405/500] 002223_CAMERA_01
+ [406/500] 001892_CAMERA_01
+ [407/500] 000321_CAMERA_06
+ [408/500] 000348_CAMERA_05
+ [409/500] 002422_CAMERA_09
+ [410/500] 002478_CAMERA_09
+ [411/500] 000335_CAMERA_09
+ [412/500] 002819_CAMERA_01
+ [413/500] 002193_CAMERA_09
+ [414/500] 002988_CAMERA_05
+ [415/500] 002437_CAMERA_06
+ [416/500] 003048_CAMERA_01
+ [417/500] 003053_CAMERA_05
+ [418/500] 003466_CAMERA_05
+ [419/500] 001348_CAMERA_05
+ [420/500] 001043_CAMERA_01
+ [421/500] 001327_CAMERA_06
+ [422/500] 000998_CAMERA_09
+ [423/500] 002130_CAMERA_01
+ [424/500] 002506_CAMERA_05
+ [425/500] 003248_CAMERA_06
+ [426/500] 002439_CAMERA_05
+ [427/500] 002360_CAMERA_05
+ [428/500] 003893_CAMERA_05
+ [429/500] 002378_CAMERA_05
+ [430/500] 001823_CAMERA_09
+ [431/500] 000318_CAMERA_09
+ [432/500] 001564_CAMERA_01
+ [433/500] 000602_CAMERA_01
+ [434/500] 001518_CAMERA_06
+ [435/500] 001090_CAMERA_06
+ [436/500] 001177_CAMERA_06
+ [437/500] 000494_CAMERA_05
+ [438/500] 001501_CAMERA_05
+ [439/500] 000247_CAMERA_09
+ [440/500] 001701_CAMERA_09
+ [441/500] 001085_CAMERA_05
+ [442/500] 003943_CAMERA_01
+ [443/500] 002406_CAMERA_05
+ [444/500] 003164_CAMERA_05
+ [445/500] 001984_CAMERA_05
+ [446/500] 003332_CAMERA_01
+ [447/500] 000866_CAMERA_05
+ [448/500] 000438_CAMERA_05
+ [449/500] 002583_CAMERA_06
+ [450/500] 002629_CAMERA_06
+ [451/500] 000657_CAMERA_05
+ [452/500] 002141_CAMERA_09
+ [453/500] 002413_CAMERA_06
+ [454/500] 001953_CAMERA_09
+ [455/500] 000094_CAMERA_06
+ [456/500] 000095_CAMERA_06
+ [457/500] 001733_CAMERA_01
+ [458/500] 000541_CAMERA_09
+ [459/500] 001172_CAMERA_05
+ [460/500] 001757_CAMERA_05
+ [461/500] 001248_CAMERA_06
+ [462/500] 002000_CAMERA_05
+ [463/500] 000593_CAMERA_05
+ [464/500] 000130_CAMERA_01
+ [465/500] 003158_CAMERA_09
+ [466/500] 000829_CAMERA_05
+ [467/500] 001834_CAMERA_05
+ [468/500] 002416_CAMERA_06
+ [469/500] 002626_CAMERA_06
+ [470/500] 000849_CAMERA_05
+ [471/500] 002450_CAMERA_06
+ [472/500] 003770_CAMERA_01
+ [473/500] 003017_CAMERA_09
+ [474/500] 001345_CAMERA_01
+ [475/500] 003552_CAMERA_09
+ [476/500] 003183_CAMERA_09
+ [477/500] 000718_CAMERA_01
+ [478/500] 001999_CAMERA_05
+ [479/500] 003817_CAMERA_06
+ [480/500] 001420_CAMERA_06
+ [481/500] 003027_CAMERA_09
+ [482/500] 000548_CAMERA_06
+ [483/500] 002001_CAMERA_09
+ [484/500] 001506_CAMERA_01
+ [485/500] 000311_CAMERA_05
+ [486/500] 003026_CAMERA_06
+ [487/500] 003868_CAMERA_05
+ [488/500] 000207_CAMERA_09
+ [489/500] 001950_CAMERA_05
+ [490/500] 001531_CAMERA_09
+ [491/500] 000586_CAMERA_09
+ [492/500] 003510_CAMERA_01
+ [493/500] 000559_CAMERA_06
+ [494/500] 001995_CAMERA_05
+ [495/500] 003759_CAMERA_05
+ [496/500] 001168_CAMERA_09
+ [497/500] 003762_CAMERA_06
+ [498/500] 000598_CAMERA_06
+ [499/500] 001434_CAMERA_01
+ [500/500] 002774_CAMERA_09
+
+ Done! Results saved to /home/ywan0794/MoGe/vis_output
+ Structure:
+ /home/ywan0794/MoGe/vis_output/
+ KITTI/
+ rgb/, gt/, gt_reverse/, da2_dpt/, da2_sdt/, da3_dpt/, da3_sdt/, da3_dualdpt/
+ DDAD/
+ rgb/, gt/, gt_reverse/, da2_dpt/, da2_sdt/, da3_dpt/, da3_sdt/, da3_dualdpt/
+ Visualization completed!
vis_gt_8719.log ADDED
@@ -0,0 +1,12 @@
+ Processing KITTI GT...
+ KITTI: 50/200
+ KITTI: 100/200
+ KITTI: 150/200
+ KITTI: 200/200
+ Processing DDAD GT...
+ DDAD: 50/200
+ DDAD: 100/200
+ DDAD: 150/200
+ DDAD: 200/200
+ Done!
+ GT visualization completed!
vis_gt_8722.log ADDED
@@ -0,0 +1,22 @@
+ Starting GT visualization at Wed Jan 14 11:22:04 PM AEDT 2026
+ ==================================================
+ GT Depth Visualization (both versions)
+ ==================================================
+
+ Collecting KITTI samples...
+ Processing 200 KITTI GT samples...
+
+
+ Collecting DDAD samples...
+ Processing 200 DDAD GT samples...
+
+
+ ==================================================
+ Done!
+ Output:
+ /home/ywan0794/MoGe/vis_output/KITTI/gt/ (non-reverse)
+ /home/ywan0794/MoGe/vis_output/KITTI/gt_reverse/ (reverse)
+ /home/ywan0794/MoGe/vis_output/DDAD/gt/ (non-reverse)
+ /home/ywan0794/MoGe/vis_output/DDAD/gt_reverse/ (reverse)
+ ==================================================
+ GT visualization completed at Wed Jan 14 11:23:23 PM AEDT 2026
vis_gt_8725.log ADDED
@@ -0,0 +1,22 @@
+ Starting GT visualization at Thu Jan 15 01:58:52 AM AEDT 2026
+ ==================================================
+ GT Depth Visualization (both versions)
+ ==================================================
+
+ Collecting KITTI samples...
+ Processing 10 KITTI GT samples...
+
+
+ Collecting DDAD samples...
+ Processing 10 DDAD GT samples...
+
+
+ ==================================================
+ Done!
+ Output:
+ /home/ywan0794/MoGe/vis_output/KITTI/gt/ (non-reverse)
+ /home/ywan0794/MoGe/vis_output/KITTI/gt_reverse/ (reverse)
+ /home/ywan0794/MoGe/vis_output/DDAD/gt/ (non-reverse)
+ /home/ywan0794/MoGe/vis_output/DDAD/gt_reverse/ (reverse)
+ ==================================================
+ GT visualization completed at Thu Jan 15 01:59:22 AM AEDT 2026
visualize_depth.py ADDED
@@ -0,0 +1,387 @@
+ """
+ Visualize depth predictions from different decoders on KITTI and DDAD datasets
+ """
+ import os
+ import sys
+ import argparse
+ import numpy as np
+ import torch
+ import torch.nn.functional as F
+ import torchvision.transforms as T
+ import torchvision.transforms.functional as TF
+ from PIL import Image
+ import matplotlib.pyplot as plt
+ import matplotlib
+ matplotlib.use('Agg')
+
+ # Paths
+ DA2_REPO = '/home/ywan0794/Depth-Anything-V2'
+ DA3_REPO = '/home/ywan0794/Depth-Anything-3'
+
+ # Checkpoints
+ CHECKPOINTS = {
+     'da2_dpt': '/home/ywan0794/Depth-Anything-V2/training/exp/dpt_vitb_both/epoch_007.pth',
+     'da2_sdt': '/home/ywan0794/Depth-Anything-V2/training/exp/sdt_vitb_both/epoch_008.pth',
+     'da3_dpt': '/home/ywan0794/Depth-Anything-3/training/exp/da3_dpt_vitl_both/epoch_010.pth',
+     'da3_sdt': '/home/ywan0794/Depth-Anything-3/training/exp/da3_sdt_vitl_both/epoch_010.pth',
+     'da3_dualdpt': '/home/ywan0794/Depth-Anything-3/training/exp/da3_dualdpt_vitl_both/epoch_010.pth',
+ }
+
+ # Dataset paths
+ KITTI_BASE = '/home/ywan0794/datasets/eval/moge_style_eval/KITTI'
+ DDAD_BASE = '/home/ywan0794/datasets/eval/moge_style_eval/DDAD/val'
+
+
+ # ============================================
+ # DA2 Model Loading (same as da2_custom.py)
+ # ============================================
+ def load_da2_model(checkpoint_path, encoder='vitb', decoder='dpt'):
+     """Load DA2 model with DPT or SDT decoder"""
+     repo_path = DA2_REPO
+     training_path = os.path.join(repo_path, 'training')
+
+     if repo_path not in sys.path:
+         sys.path.insert(0, repo_path)
+     if training_path not in sys.path:
+         sys.path.insert(0, training_path)
+
+     # Model configurations (same as training)
+     model_configs = {
+         'vits': {'encoder': 'vits', 'features': 64, 'out_channels': [48, 96, 192, 384]},
+         'vitb': {'encoder': 'vitb', 'features': 128, 'out_channels': [96, 192, 384, 768]},
+         'vitl': {'encoder': 'vitl', 'features': 256, 'out_channels': [256, 512, 1024, 1024]},
+         'vitg': {'encoder': 'vitg', 'features': 384, 'out_channels': [1536, 1536, 1536, 1536]}
+     }
+
+     # Build model based on decoder type
+     if decoder == 'dpt':
+         from depth_anything_v2.dpt import DepthAnythingV2
+         model = DepthAnythingV2(**model_configs[encoder])
+     elif decoder == 'sdt':
+         from depth_anything_v2.sdt import DepthAnythingV2SDT
+         model = DepthAnythingV2SDT(
+             encoder=encoder,
+             features=model_configs[encoder]['features'],
+             out_channels=model_configs[encoder]['out_channels'],
+             use_clstoken=True,
+             upsampler='dysample'
+         )
+     else:
+         raise ValueError(f"Unknown decoder: {decoder}")
+
+     # Load checkpoint
+     ckpt = torch.load(checkpoint_path, map_location='cpu')
+     if 'model' in ckpt:
+         state_dict = ckpt['model']
+     else:
+         state_dict = ckpt
+     state_dict = {k.replace('module.', ''): v for k, v in state_dict.items()}
+     missing, unexpected = model.load_state_dict(state_dict, strict=False)
+     print(f"Loaded DA2 {decoder} from {checkpoint_path}")
+     if missing:
+         print(f" Missing keys: {len(missing)}")
+     if unexpected:
+         print(f" Unexpected keys: {len(unexpected)}")
+
+     return model
+
+
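The checkpoint cleanup step in `load_da2_model` (and again in `load_da3_model`) exists because checkpoints saved from a `torch.nn.parallel.DistributedDataParallel`-wrapped model prefix every parameter name with `module.`. A minimal standalone sketch of that key stripping (function name `strip_module_prefix` is hypothetical, not part of the script):

```python
def strip_module_prefix(state_dict):
    """Remove a leading 'module.' from each key, as saved by DDP-wrapped models."""
    # count=1 restricts the replacement to the first occurrence, which is slightly
    # safer than the unrestricted replace used in the script above.
    return {k.replace('module.', '', 1): v for k, v in state_dict.items()}

ddp_style = {'module.encoder.weight': 1, 'module.head.bias': 2}
clean = strip_module_prefix(ddp_style)  # {'encoder.weight': 1, 'head.bias': 2}
```

Loading with `strict=False` afterwards, as the script does, tolerates any remaining key mismatches but makes the printed missing/unexpected counts worth checking.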
+ # ============================================
+ # DA3 Model Loading (same as da3_custom.py)
+ # ============================================
+ class DA3Wrapper(torch.nn.Module):
+     def __init__(self, model):
+         super().__init__()
+         self.model = model
+
+     def forward(self, x):
+         # x: [B, 3, H, W]
+         x = x.unsqueeze(1)  # [B, 1, 3, H, W]
+         output = self.model(x)
+         depth = output.depth.squeeze(1)  # [B, H, W]
+         return depth
+
+
+ def load_da3_model(checkpoint_path, decoder='dpt'):
+     """Load DA3 model with DPT, SDT, or DualDPT decoder"""
+     repo_path = DA3_REPO
+     src_path = os.path.join(repo_path, 'src')
+     training_path = os.path.join(repo_path, 'training')
+
+     if src_path not in sys.path:
+         sys.path.insert(0, src_path)
+     if training_path not in sys.path:
+         sys.path.insert(0, training_path)
+
+     # Config paths
+     config_dir = os.path.join(repo_path, 'src', 'depth_anything_3', 'configs')
+     if decoder == 'dpt':
+         config_path = os.path.join(config_dir, 'da3dpt-large.yaml')
+     elif decoder == 'sdt':
+         config_path = os.path.join(config_dir, 'da3sdt-large.yaml')
+     elif decoder == 'dualdpt':
+         config_path = os.path.join(config_dir, 'da3dualdpt-large.yaml')
+     else:
+         raise ValueError(f"Unknown decoder: {decoder}")
+
+     from depth_anything_3.cfg import load_config, create_object
+
+     # Build model
+     cfg = load_config(config_path)
+     base_model = create_object(cfg)
+     model = DA3Wrapper(base_model)
+
+     # Load checkpoint
+     ckpt = torch.load(checkpoint_path, map_location='cpu')
+     if 'model' in ckpt:
+         state_dict = ckpt['model']
+     else:
+         state_dict = ckpt
+     state_dict = {k.replace('module.', ''): v for k, v in state_dict.items()}
+     missing, unexpected = model.load_state_dict(state_dict, strict=False)
+     print(f"Loaded DA3 {decoder} from {checkpoint_path}")
+     if missing:
+         print(f" Missing keys: {len(missing)}")
+     if unexpected:
+         print(f" Unexpected keys: {len(unexpected)}")
+
+     return model
+
+
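`DA3Wrapper` exists because the DA3 model consumes multi-view batches shaped `[B, N, 3, H, W]`, while this script feeds single images shaped `[B, 3, H, W]`; the wrapper inserts a view axis of size 1 on the way in and drops it from the returned depth. A shape-level sketch of that adapter pattern (helper names are illustrative, not from the script):

```python
import numpy as np

def add_view_axis(batch):
    # [B, 3, H, W] -> [B, 1, 3, H, W]: one "view" per image
    return batch[:, None]

def drop_view_axis(depth):
    # [B, 1, H, W] -> [B, H, W]: remove the singleton view axis
    return depth.squeeze(1)

x = np.zeros((2, 3, 4, 4))
assert add_view_axis(x).shape == (2, 1, 3, 4, 4)
```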
+ # ============================================
+ # Inference Wrapper
+ # ============================================
+ class ModelWrapper:
+     def __init__(self, model, device, use_amp=True):
+         self.model = model.to(device).eval()
+         self.device = device
+         self.use_amp = use_amp
+
+     @torch.inference_mode()
+     def predict(self, image):
+         """image: PIL Image, returns disparity numpy array"""
+         # Convert to tensor
+         img = TF.to_tensor(image).unsqueeze(0)  # [1, 3, H, W]
+         original_height, original_width = img.shape[-2:]
+
+         # Resize to multiple of 14
+         resize_factor = 518 / min(original_height, original_width)
+         expected_width = round(original_width * resize_factor / 14) * 14
+         expected_height = round(original_height * resize_factor / 14) * 14
+
+         img = TF.resize(img, (expected_height, expected_width), interpolation=T.InterpolationMode.BICUBIC, antialias=True)
+         img = TF.normalize(img, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
+         img = img.to(self.device)
+
+         # Forward
+         if self.use_amp:
+             with torch.cuda.amp.autocast(dtype=torch.bfloat16):
+                 disp = self.model(img)
+         else:
+             disp = self.model(img)
+
+         # Resize back
+         disp = F.interpolate(disp[:, None], size=(original_height, original_width), mode='bilinear', align_corners=False)[:, 0]
+         disp = disp.squeeze().cpu().numpy()
+
+         return disp
+
+
190
+ def colorize_depth(depth, cmap='Spectral', reverse=False, mask_invalid=False):
191
+ """Convert depth/disparity to colorized image using Spectral colormap"""
192
+ depth = depth.copy()
193
+
194
+ # Create mask for invalid (zero) regions
195
+ if mask_invalid:
196
+ invalid_mask = depth <= 0
197
+
198
+ # Only use valid values for percentile calculation
199
+ if mask_invalid:
200
+ valid_depth = depth[~invalid_mask]
201
+ if len(valid_depth) > 0:
202
+ vmin = np.percentile(valid_depth, 2)
203
+ vmax = np.percentile(valid_depth, 98)
204
+ else:
205
+ vmin, vmax = 0, 1
206
+ else:
207
+ vmin = np.percentile(depth, 2)
208
+ vmax = np.percentile(depth, 98)
209
+
210
+ depth = (depth - vmin) / (vmax - vmin + 1e-8)
211
+ depth = np.clip(depth, 0, 1)
212
+
213
+ # Reverse if needed
214
+ if reverse:
215
+ depth = 1 - depth
216
+
217
+ cm = plt.get_cmap(cmap)
218
+ colored = cm(depth)[:, :, :3]
219
+ colored = (colored * 255).astype(np.uint8)
220
+
221
+ # Set invalid regions to black
222
+ if mask_invalid:
223
+ colored[invalid_mask] = 0
224
+
225
+ return colored
226
+
227
+
228
+ def load_gt_depth(depth_path):
229
+ """Load ground truth depth from PNG"""
230
+ depth = np.array(Image.open(depth_path))
231
+ if depth.dtype == np.uint16:
232
+ depth = depth.astype(np.float32) / 256.0
233
+ elif depth.dtype == np.uint8:
234
+ depth = depth.astype(np.float32)
235
+ return depth
236
+
237
+
238
+ def main():
239
+ parser = argparse.ArgumentParser()
240
+ parser.add_argument('--output-dir', type=str, default='/home/ywan0794/MoGe/vis_output')
241
+ parser.add_argument('--num-samples', type=int, default=10)
242
+ parser.add_argument('--device', type=str, default='cuda')
243
+ args = parser.parse_args()
244
+
245
+ device = torch.device(args.device)
246
+
247
+ # Create output directories
248
+ datasets = ['KITTI', 'DDAD']
249
+ subfolders = ['rgb', 'gt', 'gt_reverse', 'da2_dpt', 'da2_sdt', 'da3_dpt', 'da3_sdt', 'da3_dualdpt']
250
+
251
+ for dataset in datasets:
252
+ for subfolder in subfolders:
253
+ os.makedirs(os.path.join(args.output_dir, dataset, subfolder), exist_ok=True)
254
+
255
+ print("Loading models...")
256
+ models = {}
257
+
258
+ # Load DA2 models
259
+ print(" Loading DA2-DPT...")
260
+ da2_dpt = load_da2_model(CHECKPOINTS['da2_dpt'], encoder='vitb', decoder='dpt')
261
+ models['da2_dpt'] = ModelWrapper(da2_dpt, device, use_amp=False)
262
+
263
+ print(" Loading DA2-SDT...")
264
+ da2_sdt = load_da2_model(CHECKPOINTS['da2_sdt'], encoder='vitb', decoder='sdt')
265
+ models['da2_sdt'] = ModelWrapper(da2_sdt, device, use_amp=False)
266
+
267
+ # Load DA3 models
268
+ print(" Loading DA3-DPT...")
269
+ da3_dpt = load_da3_model(CHECKPOINTS['da3_dpt'], decoder='dpt')
270
+ models['da3_dpt'] = ModelWrapper(da3_dpt, device, use_amp=True)
271
+
272
+ print(" Loading DA3-SDT...")
273
+ da3_sdt = load_da3_model(CHECKPOINTS['da3_sdt'], decoder='sdt')
274
+ models['da3_sdt'] = ModelWrapper(da3_sdt, device, use_amp=True)
275
+
276
+ print(" Loading DA3-DualDPT...")
277
+ da3_dualdpt = load_da3_model(CHECKPOINTS['da3_dualdpt'], decoder='dualdpt')
278
+ models['da3_dualdpt'] = ModelWrapper(da3_dualdpt, device, use_amp=True)
279
+
280
+ print("All models loaded!")
281
+
282
+ # Get KITTI samples
283
+ kitti_samples = []
284
+ for drive in os.listdir(KITTI_BASE):
285
+ drive_path = os.path.join(KITTI_BASE, drive, 'image_02')
286
+ if os.path.isdir(drive_path):
287
+ for frame in sorted(os.listdir(drive_path)):
288
+ sample_dir = os.path.join(drive_path, frame)
289
+ img_path = os.path.join(sample_dir, 'image.jpg')
290
+ gt_path = os.path.join(sample_dir, 'depth.png')
291
+ if os.path.exists(img_path) and os.path.exists(gt_path):
292
+ kitti_samples.append({
293
+ 'image': img_path,
294
+ 'gt': gt_path,
295
+ 'name': f"{drive}_{frame}"
296
+ })
297
+
298
+ # Get DDAD samples
299
+ ddad_samples = []
300
+ for scene in sorted(os.listdir(DDAD_BASE)):
301
+ scene_path = os.path.join(DDAD_BASE, scene)
302
+ if os.path.isdir(scene_path):
303
+ for cam in sorted(os.listdir(scene_path)):
304
+ sample_dir = os.path.join(scene_path, cam)
305
+ img_path = os.path.join(sample_dir, 'image.jpg')
306
+ gt_path = os.path.join(sample_dir, 'depth.png')
307
+ if os.path.exists(img_path) and os.path.exists(gt_path):
308
+ ddad_samples.append({
309
+ 'image': img_path,
310
+ 'gt': gt_path,
311
+ 'name': f"{scene}_{cam}"
312
+ })
313
+
314
+ # Select random samples
315
+ np.random.seed(42)
316
+ kitti_selected = np.random.choice(len(kitti_samples), min(args.num_samples, len(kitti_samples)), replace=False)
317
+ ddad_selected = np.random.choice(len(ddad_samples), min(args.num_samples, len(ddad_samples)), replace=False)
318
+
319
+ # Process KITTI
320
+ print(f"\nProcessing {len(kitti_selected)} KITTI samples...")
321
+ for idx, i in enumerate(kitti_selected):
322
+ sample = kitti_samples[i]
323
+ print(f" [{idx+1}/{len(kitti_selected)}] {sample['name']}")
324
+
325
+ # Load image
326
+ image = Image.open(sample['image']).convert('RGB')
327
+
328
+ # Save RGB
329
+ image.save(os.path.join(args.output_dir, 'KITTI', 'rgb', f"{idx:03d}.png"))
330
+
331
+ # Load and save GT (both versions)
332
+ gt_depth = load_gt_depth(sample['gt'])
333
+ gt_colored = colorize_depth(gt_depth, reverse=False, mask_invalid=True)
334
+ gt_colored_rev = colorize_depth(gt_depth, reverse=True, mask_invalid=True)
335
+ Image.fromarray(gt_colored).save(os.path.join(args.output_dir, 'KITTI', 'gt', f"{idx:03d}.png"))
336
+ Image.fromarray(gt_colored_rev).save(os.path.join(args.output_dir, 'KITTI', 'gt_reverse', f"{idx:03d}.png"))
337
+
338
+ # Predict and save for each model
339
+ for model_name, wrapper in models.items():
340
+ pred = wrapper.predict(image)
341
+ # DA3 DPT needs reverse
342
+ need_reverse = (model_name == 'da3_dpt')
343
+ pred_colored = colorize_depth(pred, reverse=need_reverse)
344
+ Image.fromarray(pred_colored).save(
345
+ os.path.join(args.output_dir, 'KITTI', model_name, f"{idx:03d}.png")
346
+ )
347
+
348
+ # Process DDAD
349
+ print(f"\nProcessing {len(ddad_selected)} DDAD samples...")
350
+ for idx, i in enumerate(ddad_selected):
351
+ sample = ddad_samples[i]
352
+ print(f" [{idx+1}/{len(ddad_selected)}] {sample['name']}")
353
+
354
+ # Load image
355
+ image = Image.open(sample['image']).convert('RGB')
356
+
357
+ # Save RGB
358
+ image.save(os.path.join(args.output_dir, 'DDAD', 'rgb', f"{idx:03d}.png"))
359
+
360
+ # Load and save GT (both versions)
361
+ gt_depth = load_gt_depth(sample['gt'])
362
+ gt_colored = colorize_depth(gt_depth, reverse=False, mask_invalid=True)
363
+ gt_colored_rev = colorize_depth(gt_depth, reverse=True, mask_invalid=True)
364
+ Image.fromarray(gt_colored).save(os.path.join(args.output_dir, 'DDAD', 'gt', f"{idx:03d}.png"))
365
+ Image.fromarray(gt_colored_rev).save(os.path.join(args.output_dir, 'DDAD', 'gt_reverse', f"{idx:03d}.png"))
366
+
367
+ # Predict and save for each model
368
+ for model_name, wrapper in models.items():
369
+ pred = wrapper.predict(image)
370
+ # DA3 DPT needs reverse
371
+ need_reverse = (model_name == 'da3_dpt')
372
+ pred_colored = colorize_depth(pred, reverse=need_reverse)
373
+ Image.fromarray(pred_colored).save(
374
+ os.path.join(args.output_dir, 'DDAD', model_name, f"{idx:03d}.png")
375
+ )
376
+
377
+ print(f"\nDone! Results saved to {args.output_dir}")
378
+ print(f"Structure:")
379
+ print(f" {args.output_dir}/")
380
+ print(f" KITTI/")
381
+ print(f" rgb/, gt/, gt_reverse/, da2_dpt/, da2_sdt/, da3_dpt/, da3_sdt/, da3_dualdpt/")
382
+ print(f" DDAD/")
383
+ print(f" rgb/, gt/, gt_reverse/, da2_dpt/, da2_sdt/, da3_dpt/, da3_sdt/, da3_dualdpt/")
384
+
385
+
386
+ if __name__ == '__main__':
387
+ main()
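The patch-grid resize inside `ModelWrapper.predict` (scale the short side toward 518, then round both sides to multiples of 14) can be sketched in isolation. `snap_to_patch_grid` is a hypothetical helper name; the constants 518 and 14 come from the script above.

```python
def snap_to_patch_grid(height, width, short_side=518, patch=14):
    """Scale so the short side lands near `short_side`, then round
    each side to the nearest multiple of the ViT patch size."""
    factor = short_side / min(height, width)
    new_h = round(height * factor / patch) * patch
    new_w = round(width * factor / patch) * patch
    return new_h, new_w

# A KITTI-sized frame (375 x 1242) maps to (518, 1722)
print(snap_to_patch_grid(375, 1242))
```

Rounding (rather than flooring) each side independently keeps the aspect ratio close to the original while satisfying the patch-size constraint the ViT backbone requires.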
visualize_gt_only.py ADDED
@@ -0,0 +1,156 @@
+ """
+ Visualize GT depth - both reverse and non-reverse versions
+ Must match the sample selection in visualize_depth.py exactly
+ """
+ import os
+ import sys
+ import numpy as np
+ from PIL import Image
+ import matplotlib
+ matplotlib.use('Agg')
+ import matplotlib.pyplot as plt
+ from tqdm import tqdm
+
+ # Dataset paths - MUST match visualize_depth.py
+ KITTI_BASE = '/home/ywan0794/datasets/eval/moge_style_eval/KITTI'
+ DDAD_BASE = '/home/ywan0794/datasets/eval/moge_style_eval/DDAD/val'
+ OUTPUT_DIR = '/home/ywan0794/MoGe/vis_output'
+
+
+ def colorize_depth(depth, cmap='Spectral', reverse=False, mask_invalid=False):
+     depth = depth.copy()
+     if mask_invalid:
+         invalid_mask = depth <= 0
+         valid_depth = depth[~invalid_mask]
+         if len(valid_depth) > 0:
+             vmin = np.percentile(valid_depth, 2)
+             vmax = np.percentile(valid_depth, 98)
+         else:
+             vmin, vmax = 0, 1
+     else:
+         vmin = np.percentile(depth, 2)
+         vmax = np.percentile(depth, 98)
+
+     depth = (depth - vmin) / (vmax - vmin + 1e-8)
+     depth = np.clip(depth, 0, 1)
+
+     if reverse:
+         depth = 1 - depth
+
+     cm = plt.get_cmap(cmap)
+     colored = cm(depth)[:, :, :3]
+     colored = (colored * 255).astype(np.uint8)
+
+     if mask_invalid:
+         colored[invalid_mask] = 0
+
+     return colored
+
+
+ def load_gt_depth(depth_path):
+     depth = np.array(Image.open(depth_path))
+     if depth.dtype == np.uint16:
+         depth = depth.astype(np.float32) / 256.0
+     elif depth.dtype == np.uint8:
+         depth = depth.astype(np.float32)
+     return depth
+
+
+ def main():
+     print("=" * 50, flush=True)
+     print("GT Depth Visualization (both versions)", flush=True)
+     print("=" * 50, flush=True)
+
+     # Create output directories
+     for dataset in ['KITTI', 'DDAD']:
+         os.makedirs(os.path.join(OUTPUT_DIR, dataset, 'gt'), exist_ok=True)
+         os.makedirs(os.path.join(OUTPUT_DIR, dataset, 'gt_reverse'), exist_ok=True)
+
+     # ============================================
+     # KITTI - MUST match visualize_depth.py exactly
+     # ============================================
+     print("\nCollecting KITTI samples...", flush=True)
+     kitti_samples = []
+     for drive in os.listdir(KITTI_BASE):
+         drive_path = os.path.join(KITTI_BASE, drive, 'image_02')
+         if os.path.isdir(drive_path):
+             for frame in sorted(os.listdir(drive_path)):
+                 sample_dir = os.path.join(drive_path, frame)
+                 img_path = os.path.join(sample_dir, 'image.jpg')
+                 gt_path = os.path.join(sample_dir, 'depth.png')
+                 # MUST check that both img and gt exist - same as visualize_depth.py
+                 if os.path.exists(img_path) and os.path.exists(gt_path):
+                     kitti_samples.append({
+                         'image': img_path,
+                         'gt': gt_path,
+                         'name': f"{drive}_{frame}"
+                     })
+
+     # Same random seed and selection as visualize_depth.py
+     np.random.seed(42)
+     num_samples = 10  # Must match --num-samples in visualize_depth.py
+     kitti_selected = np.random.choice(len(kitti_samples), min(num_samples, len(kitti_samples)), replace=False)
+
+     print(f"Processing {len(kitti_selected)} KITTI GT samples...", flush=True)
+     for idx, i in tqdm(enumerate(kitti_selected), total=len(kitti_selected), desc="KITTI GT", file=sys.stdout):
+         sample = kitti_samples[i]
+         gt_depth = load_gt_depth(sample['gt'])
+
+         # Non-reverse version
+         gt_colored = colorize_depth(gt_depth, reverse=False, mask_invalid=True)
+         Image.fromarray(gt_colored).save(os.path.join(OUTPUT_DIR, 'KITTI', 'gt', f"{idx:03d}.png"))
+
+         # Reverse version
+         gt_colored_rev = colorize_depth(gt_depth, reverse=True, mask_invalid=True)
+         Image.fromarray(gt_colored_rev).save(os.path.join(OUTPUT_DIR, 'KITTI', 'gt_reverse', f"{idx:03d}.png"))
+
+     # ============================================
+     # DDAD - MUST match visualize_depth.py exactly
+     # ============================================
+     print("\nCollecting DDAD samples...", flush=True)
+     ddad_samples = []
+     for scene in sorted(os.listdir(DDAD_BASE)):
+         scene_path = os.path.join(DDAD_BASE, scene)
+         if os.path.isdir(scene_path):
+             for cam in sorted(os.listdir(scene_path)):
+                 sample_dir = os.path.join(scene_path, cam)
+                 img_path = os.path.join(sample_dir, 'image.jpg')
+                 gt_path = os.path.join(sample_dir, 'depth.png')
+                 # MUST check that both img and gt exist - same as visualize_depth.py
+                 if os.path.exists(img_path) and os.path.exists(gt_path):
+                     ddad_samples.append({
+                         'image': img_path,
+                         'gt': gt_path,
+                         'name': f"{scene}_{cam}"
+                     })
+
+     # Same random seed and selection as visualize_depth.py.
+     # Note: the seed was set to 42 above and the KITTI draw consumed random
+     # numbers, so the DDAD draw below only reproduces visualize_depth.py if
+     # the KITTI draw before it was identical (same population size and count).
+     ddad_selected = np.random.choice(len(ddad_samples), min(num_samples, len(ddad_samples)), replace=False)
+
+     print(f"Processing {len(ddad_selected)} DDAD GT samples...", flush=True)
+     for idx, i in tqdm(enumerate(ddad_selected), total=len(ddad_selected), desc="DDAD GT", file=sys.stdout):
+         sample = ddad_samples[i]
+         gt_depth = load_gt_depth(sample['gt'])
+
+         # Non-reverse version
+         gt_colored = colorize_depth(gt_depth, reverse=False, mask_invalid=True)
+         Image.fromarray(gt_colored).save(os.path.join(OUTPUT_DIR, 'DDAD', 'gt', f"{idx:03d}.png"))
+
+         # Reverse version
+         gt_colored_rev = colorize_depth(gt_depth, reverse=True, mask_invalid=True)
+         Image.fromarray(gt_colored_rev).save(os.path.join(OUTPUT_DIR, 'DDAD', 'gt_reverse', f"{idx:03d}.png"))
+
+     print("\n" + "=" * 50, flush=True)
+     print("Done!", flush=True)
+     print("Output:", flush=True)
+     print(f"  {OUTPUT_DIR}/KITTI/gt/ (non-reverse)", flush=True)
+     print(f"  {OUTPUT_DIR}/KITTI/gt_reverse/ (reverse)", flush=True)
+     print(f"  {OUTPUT_DIR}/DDAD/gt/ (non-reverse)", flush=True)
+     print(f"  {OUTPUT_DIR}/DDAD/gt_reverse/ (reverse)", flush=True)
+     print("=" * 50, flush=True)
+
+
+ if __name__ == '__main__':
+     main()
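The caveat in the comment about consumed random state is worth making concrete: with the legacy `np.random.seed` API, the second `np.random.choice` call depends on everything drawn since the seed, so both scripts must issue the calls in the same order over the same population sizes. A minimal sketch (the population sizes 100 and 80 are made up for illustration):

```python
import numpy as np

# Same seed and same call order reproduce both selections.
np.random.seed(42)
kitti_a = np.random.choice(100, 10, replace=False)
ddad_a = np.random.choice(80, 10, replace=False)

np.random.seed(42)
kitti_b = np.random.choice(100, 10, replace=False)
ddad_b = np.random.choice(80, 10, replace=False)

assert (kitti_a == kitti_b).all() and (ddad_a == ddad_b).all()

# Skipping the KITTI draw would leave the generator in a different state,
# so the DDAD indices would almost surely differ. Independent Generators
# avoid this order coupling entirely:
rng_kitti = np.random.default_rng(42)
rng_ddad = np.random.default_rng(43)
kitti_c = rng_kitti.choice(100, 10, replace=False)
ddad_c = rng_ddad.choice(80, 10, replace=False)
```

With per-dataset `default_rng` generators, each selection is reproducible on its own and does not depend on whether the other dataset was processed first.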
visualize_gt_slurm.sh ADDED
@@ -0,0 +1,22 @@
+ #!/bin/bash
+ #SBATCH --job-name=vis-gt
+ #SBATCH --output=vis_gt_%j.log
+ #SBATCH --error=vis_gt_%j.log
+ #SBATCH --open-mode=append
+ #SBATCH --ntasks=1
+ #SBATCH --cpus-per-task=4
+ #SBATCH --time=0:30:00
+ #SBATCH --mem=16G
+
+ # Disable Python output buffering
+ export PYTHONUNBUFFERED=1
+
+ # Initialize conda
+ source /home/ywan0794/miniconda3/etc/profile.d/conda.sh
+ conda activate da3
+
+ cd /home/ywan0794/MoGe
+
+ echo "Starting GT visualization at $(date)"
+ python visualize_gt_only.py
+ echo "GT visualization completed at $(date)"
visualize_slurm.sh ADDED
@@ -0,0 +1,22 @@
+ #!/bin/bash
+ #SBATCH --job-name=vis-depth
+ #SBATCH --output=vis_depth_%j.log
+ #SBATCH --error=vis_depth_%j.log
+ #SBATCH --ntasks=1
+ #SBATCH --cpus-per-task=8
+ #SBATCH --gres=gpu:1
+ #SBATCH --time=1:00:00
+ #SBATCH --mem=40G
+
+ # Initialize conda
+ source /home/ywan0794/miniconda3/etc/profile.d/conda.sh
+ conda activate da3
+
+ cd /home/ywan0794/MoGe
+
+ python visualize_depth.py \
+     --output-dir /home/ywan0794/MoGe/vis_output \
+     --num-samples 500 \
+     --device cuda
+
+ echo "Visualization completed!"
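For reference, the robust normalization that `colorize_depth` applies in both Python scripts above (clip to the 2nd-98th percentile range, rescale to [0, 1]) can be isolated as below; `robust_normalize` is an illustrative name, not a function from the repo.

```python
import numpy as np

def robust_normalize(depth, lo=2, hi=98):
    """Percentile-clipped rescaling to [0, 1], mirroring the math inside
    colorize_depth; the 1e-8 guards against a flat (vmin == vmax) input."""
    vmin = np.percentile(depth, lo)
    vmax = np.percentile(depth, hi)
    out = (depth - vmin) / (vmax - vmin + 1e-8)
    return np.clip(out, 0.0, 1.0)

d = np.linspace(0.0, 100.0, 101)  # 0, 1, ..., 100
n = robust_normalize(d)           # the extreme 2% at each end saturates to 0 / 1
```

Using percentiles instead of the raw min/max keeps a handful of extreme depth values (sensor noise, sky pixels) from compressing the color range of the whole image.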