# HHDC-2m — Diffusion Model for Satellite LiDAR Reconstruction

Pretrained checkpoint for the paper:

**Diffusion-Based Joint Recovery, Denoising, and Super-Resolution of Compressed-Sensing Satellite LiDAR Data**
Andres Ramirez-Jaime, Nestor Porras-Diaz, Mark Stephen, Guangning Yang, Gonzalo R. Arce
University of Delaware · NASA Goddard Space Flight Center
## What this model does
A Gaussian Diffusion U-Net trained to jointly reconstruct, denoise, and super-resolve 3D canopy volume data from compressed-sensing satellite LiDAR acquisitions (HHDC instrument). The inference pipeline uses Diffusion Posterior Sampling (DPS) with a physics-based Poisson forward imaging model as the data-consistency constraint.
- **Resolution:** 2× super-resolution (`model2.pt`)
- **Architecture:** `Unet(dim=128, dim_mults=(8, 16, 16, 16), flash_attn=True, channels=128)`
- **Diffusion:** Gaussian diffusion, 1000 training timesteps, DDIM sampling (250 steps by default)
- **Guidance:** DPS gradient-based guidance enforcing a Poisson log-likelihood
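The DPS guidance step can be sketched in a few lines. For a linear forward operator `A` and Poisson photon counts `y`, the gradient of the Poisson log-likelihood, `Aᵀ(y / Ax − 1)`, pulls each diffusion iterate toward measurement consistency. This is a generic toy sketch, not the repo's actual physics-based forward model:

```python
import numpy as np

def poisson_loglik_grad(x, y, A):
    """Gradient of the Poisson log-likelihood log p(y | A x) w.r.t. x.
    With rate = A x:  d/dx sum(y*log(rate) - rate) = A^T (y/rate - 1)."""
    rate = np.maximum(A @ x, 1e-8)  # guard against zero rates
    return A.T @ (y / rate - 1.0)

def dps_step(x_t, x0_hat, y, A, step_size=0.05):
    # Nudge the current diffusion iterate toward agreement with the
    # measurements (the data-consistency part of a DPS update).
    return x_t + step_size * poisson_loglik_grad(x0_hat, y, A)

# Toy problem: random linear forward model and simulated photon counts
rng = np.random.default_rng(0)
A = rng.uniform(0.5, 1.5, size=(16, 8))
x_true = rng.uniform(1.0, 2.0, size=8)
y = rng.poisson(A @ x_true).astype(float)
x_t = rng.uniform(0.5, 2.5, size=8)
x_next = dps_step(x_t, x0_hat=x_t.copy(), y=y, A=A)
print(x_next.shape)
```

In the actual pipeline this correction is interleaved with DDIM denoising steps, and the forward operator models the HHDC photon-counting acquisition rather than a dense matrix.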
## Quickstart

```bash
# 1. Clone the inference repo
git clone https://github.com/Anfera/DenoisSuperResOfCSHHDC.git
cd DenoisSuperResOfCSHHDC

# 2. Install dependencies (Python 3.10+ recommended)
pip install -r requirements.txt

# 3. Download the checkpoint
mkdir -p results
hf download anfera236/HHDC-2m model2.pt --local-dir results/

# 4. Place test data in data/TestCube/ (gt2.npy is provided in the repo)

# 5. Run inference
python SingleLikelihood.py
```
Outputs are saved to `resultCubes/` (final reconstructions) and `intermediateCubesTest/` (intermediate DDIM snapshots).
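Saved cubes are `.npy` arrays, so a quick sanity check against the ground truth is easy. A generic PSNR comparison (with synthetic stand-in arrays here; in practice load `data/TestCube/gt2.npy` and the matching cube from `resultCubes/`, whose exact filename depends on the run):

```python
import numpy as np

def psnr(gt, rec, peak=None):
    """Peak signal-to-noise ratio between ground-truth and reconstructed cubes."""
    peak = gt.max() if peak is None else peak
    mse = np.mean((gt - rec) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Synthetic stand-ins for a ground-truth cube and a noisy reconstruction
rng = np.random.default_rng(0)
gt = rng.uniform(0.0, 1.0, size=(64, 64, 100))
rec = gt + 0.01 * rng.standard_normal(gt.shape)
print(f"PSNR: {psnr(gt, rec):.1f} dB")
```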
## Configuration

All tunable parameters live in `src/config.py`: resolution, DDIM steps, mask type (`blue_noise` / `random` / `bayer`), sampling ratio, physics-model parameters (footprint diameter, background rate, readout noise), and output paths.
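As an illustration only, a tweak might look like the fragment below. The variable names and values are hypothetical; consult `src/config.py` for the actual identifiers:

```python
# Hypothetical names and values — check src/config.py for the real ones.
DDIM_STEPS = 250            # fewer steps trade reconstruction quality for speed
MASK_TYPE = "blue_noise"    # one of: "blue_noise", "random", "bayer"
SAMPLING_RATIO = 0.25       # fraction of compressed-sensing measurements kept
BACKGROUND_RATE = 1.0       # physics model: background photon rate
```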
## Dataset
Test data and the full dataset: `anfera236/HHDC`
## Funding
Supported by U.S. National Science Foundation Grant No. 2404740 and NASA Grant No. 80NSSC25K7395.