---
extra_gated_fields:
  First Name: text
  Last Name: text
  Date of birth: date_picker
  Country: country
  Affiliation: text
  Job title:
    type: select
    options:
      - Student
      - Research Graduate
      - AI researcher
      - AI developer/engineer
      - Reporter
      - Other
  geo: ip_location
  By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected, stored, processed and shared in accordance with the Meta Privacy Policy: checkbox
extra_gated_description: >-
  The information you provide will be collected, stored, processed and shared in
  accordance with the [Meta Privacy
  Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
language:
- en
tags:
- chmv2
- dinov3
license_name: dinov3-license
license_link: https://ai.meta.com/resources/models-and-libraries/dinov3-license
base_model: facebook/dinov3-vitl16-chmv2-dpt-head
pipeline_tag: depth-estimation
library_name: transformers
---
# Model Card for CHMv2

The Canopy Height Maps v2 (CHMv2) model is a DPT-based decoder that estimates canopy height from satellite imagery, using DINOv3 as its backbone. Building on our original high-resolution canopy height maps released in 2024, CHMv2 delivers substantial improvements in accuracy, detail, and global consistency.
## Model Details

The CHMv2 model was developed with the satellite DINOv3 ViT-L as a frozen backbone. It is released together with world-scale canopy height maps generated from it; these maps can help researchers and governments measure and understand every tree, gap, and canopy edge, enabling smarter biodiversity support and land-management decisions.
## Usage With Transformers
Run inference on an image with the following code:
```python
from PIL import Image
import torch
from transformers import CHMv2ForDepthEstimation, CHMv2ImageProcessorFast

processor = CHMv2ImageProcessorFast.from_pretrained("facebook/dinov3-vitl16-chmv2-dpt-head")
model = CHMv2ForDepthEstimation.from_pretrained("facebook/dinov3-vitl16-chmv2-dpt-head")

image = Image.open("image.tif")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

depth = processor.post_process_depth_estimation(
    outputs, target_sizes=[(image.height, image.width)]
)[0]["predicted_depth"]
```
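The `predicted_depth` returned above is a 2-D tensor of per-pixel canopy height estimates. A minimal sketch of downstream handling, using a small NumPy array as a stand-in for real model output (the meter units and the 5 m tree threshold below are assumptions for illustration, not values from this card):

```python
import numpy as np

# Dummy stand-in for the model's predicted_depth output:
# an H x W array of per-pixel canopy heights (units assumed to be meters).
heights = np.array([[0.0, 2.5],
                    [10.0, 30.0]])

# Simple summary statistics over the map
mean_height = float(heights.mean())
max_height = float(heights.max())

# Pixels likely containing trees (5 m threshold is an illustrative assumption)
tree_mask = heights > 5.0
n_tree_pixels = int(tree_mask.sum())
```

In practice you would call `.numpy()` on the tensor first and write the result back to a georeferenced raster with your GIS tooling of choice.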
## Model Description
- Developed by: Meta AI
- Model type: DPT head
- License: DINOv3 License
## Model Sources
- Repository: https://github.com/facebookresearch/dinov3
- Paper: https://arxiv.org/abs/2603.06382
## Direct Use
The model can be used without fine-tuning to obtain competitive results on various satellite datasets (paper link).