---
license: mit
pipeline_tag: image-to-3d
---
# Sat3DGen: Comprehensive Street-Level 3D Scene Generation from Single Satellite Image
Sat3DGen is a framework for generating street-level 3D scenes from a single satellite image. It uses a geometry-first methodology to bridge the extreme viewpoint gap between satellite and street views, achieving high geometric fidelity and photorealism.
[**Paper**](https://arxiv.org/abs/2605.14984) | [**Project Page**](https://qianmingduowan.github.io/Sat3DGen_project_page/) | [**GitHub**](https://github.com/qianmingduowan/Sat3DGen) | [**Demo**](https://huggingface.co/spaces/qian43/Sat3DGen)
## Sample Usage
To use this model, you will need the code from the [official repository](https://github.com/qianmingduowan/Sat3DGen).
```python
from source.generator import Sat3DGen

# Class-level flag: skip loading backbone weights separately when restoring the checkpoint
Sat3DGen._skip_backbone_weights = True

# Load the pretrained model and move it to the GPU in eval mode
model = Sat3DGen.from_pretrained("qian43/Sat3DGen")
model = model.to("cuda:0").eval()

# Proceed with inference as described in the repository
```
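The full inference pipeline (satellite image preprocessing, scene generation, and rendering) lives in the official repository. As a minimal sketch only: the method name `generate`, the input resolution, and the preprocessing below are illustrative assumptions, not the repository's actual API.

```python
import torch
from PIL import Image
from torchvision import transforms

# Hypothetical sketch: the real entry point and its arguments are defined
# in the official repository; `generate` and the 256x256 input resolution
# used here are assumptions for illustration only.
preprocess = transforms.Compose([
    transforms.Resize((256, 256)),  # assumed input resolution
    transforms.ToTensor(),
])

sat = Image.open("satellite.png").convert("RGB")
sat = preprocess(sat).unsqueeze(0).to("cuda:0")  # shape (1, 3, 256, 256)

with torch.no_grad():
    scene = model.generate(sat)  # hypothetical method name
```

Consult the repository's scripts for the actual entry point and the expected output format of the generated 3D scene.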
## Citation
If you find this work useful for your research, please cite:
```bibtex
@inproceedings{qian2026satdgen,
  title={Sat3{DG}en: Comprehensive Street-Level 3D Scene Generation from Single Satellite Image},
  author={Ming Qian and Zimin Xia and Changkun Liu and Shuailei Ma and Wen Wang and Zeran Ke and Bin Tan and Hang Zhang and Gui-Song Xia},
  booktitle={The Fourteenth International Conference on Learning Representations},
  year={2026},
  url={https://openreview.net/forum?id=E7JzkZCofa}
}

@article{Qian_2026_Sat2Densitypp,
  author={Qian, Ming and Tan, Bin and Wang, Qiuyu and Zheng, Xianwei and Xiong, Hanjiang and Xia, Gui-Song and Shen, Yujun and Xue, Nan},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  title={Seeing Through Satellite Images at Street Views},
  year={2026},
  volume={48},
  number={5},
  pages={5692-5709},
  doi={10.1109/TPAMI.2026.3652860}
}

@inproceedings{Qian_2023_Sat2Density,
  author={Qian, Ming and Xiong, Jincheng and Xia, Gui-Song and Xue, Nan},
  title={Sat2Density: Faithful Density Learning from Satellite-Ground Image Pairs},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month={October},
  year={2023},
  pages={3683-3692}
}
```