---
library_name: pytorch
---

# DepthAnythingV2

DepthAnythingV2 is a lightweight monocular depth estimation model that predicts an accurate per-pixel depth map from a single RGB image, optimized for efficient inference across diverse scenes.

Original paper: Depth Anything V2

## DepthAnythingV2-Small

This model uses the DepthAnythingV2-Small variant, which balances model size and inference speed while maintaining strong depth estimation accuracy. It is well suited to applications such as AR/VR, robotics, scene reconstruction, and real-time 3D perception on edge devices.
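As a rough illustration of the input handling such a deployment typically needs, the sketch below prepares an RGB image for the network. The specifics are assumptions taken from the public DepthAnythingV2 reference code (a 518-pixel target size, spatial dimensions snapped to multiples of the ViT backbone's 14-pixel patch size, and ImageNet normalization); an on-device SDK pipeline may perform these steps internally.

```python
import numpy as np

def preprocess(image: np.ndarray, target: int = 518) -> np.ndarray:
    """Prepare an HxWx3 uint8 RGB image as a 1x3xH'xW' float32 tensor.

    H' and W' are multiples of 14 because the DINOv2-style backbone
    consumes 14x14 patches. This is an illustrative helper, not part
    of the shipped model package.
    """
    h, w = image.shape[:2]
    # Scale so the shorter side is ~target, then round to multiples of 14.
    scale = target / min(h, w)
    nh = max(14, int(round(h * scale / 14)) * 14)
    nw = max(14, int(round(w * scale / 14)) * 14)
    # Nearest-neighbour resize in pure NumPy (a real pipeline would use
    # bilinear interpolation via OpenCV or PIL).
    ys = (np.arange(nh) * h / nh).astype(int)
    xs = (np.arange(nw) * w / nw).astype(int)
    resized = image[ys][:, xs].astype(np.float32) / 255.0
    # Standard ImageNet normalization used by the backbone.
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    chw = ((resized - mean) / std).transpose(2, 0, 1)  # HWC -> CHW
    return chw[None]  # add batch dimension
```

The nearest-neighbour resize keeps the sketch dependency-free; swap in a proper interpolation routine for production use.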

Model Configuration:

| Model | Device | Compression | Model Link |
| --- | --- | --- | --- |
| DepthAnythingV2-Small | N1-655 | Amba_optimized | Model_Link |
| DepthAnythingV2-Small | N1-655 | Activation_fp16 | Model_Link |
| DepthAnythingV2-Small | CV7 | Amba_optimized | Model_Link |
| DepthAnythingV2-Small | CV7 | Activation_fp16 | Model_Link |
| DepthAnythingV2-Small | CV72 | Amba_optimized | Model_Link |
| DepthAnythingV2-Small | CV72 | Activation_fp16 | Model_Link |
| DepthAnythingV2-Small | CV75 | Amba_optimized | Model_Link |
| DepthAnythingV2-Small | CV75 | Activation_fp16 | Model_Link |
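Whichever variant is deployed, the network outputs a relative depth map rather than metric distances. Below is a minimal sketch of turning that raw output into a viewable 8-bit image; the min-max normalization is an assumption matching common visualization practice, not part of the shipped model.

```python
import numpy as np

def depth_to_image(depth: np.ndarray) -> np.ndarray:
    """Convert a raw relative-depth map to an 8-bit grayscale image."""
    d = depth.astype(np.float32)
    d_min, d_max = d.min(), d.max()
    if d_max - d_min < 1e-6:
        # Degenerate (flat) prediction: return an all-black image.
        return np.zeros(d.shape, dtype=np.uint8)
    # Min-max normalize to [0, 1], then scale to [0, 255].
    norm = (d - d_min) / (d_max - d_min)
    return (norm * 255.0).round().astype(np.uint8)
```

For color renderings, the resulting grayscale array can be passed through any colormap (e.g. Matplotlib's) in the same way.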