---
pipeline_tag: image-to-3d
---

# AnyRecon: Arbitrary-View 3D Reconstruction with Video Diffusion Model

Yutian Chen, Shi Guo, Renbiao Jin, Tianshuo Yang, Xin Cai, Yawen Luo, Mingxin Yang, Mulin Yu, Linning Xu, Tianfan Xue


## 🌟 Abstract

Sparse-view 3D reconstruction is essential for modeling scenes from casual captures, but it remains challenging for non-generative reconstruction methods. Existing diffusion-based approaches mitigate this issue by synthesizing novel views, but they often condition on only one or two capture frames, which restricts geometric consistency and limits scalability to large or diverse scenes. We propose AnyRecon, a scalable framework for reconstruction from arbitrary and unordered sparse inputs that preserves explicit geometric control while supporting flexible conditioning cardinality. To support long-range conditioning, our method constructs a persistent global scene memory via a prepended capture view cache, and removes temporal compression to maintain frame-level correspondence under large viewpoint changes.

## 🛠️ Environment Setup

```bash
git clone https://github.com/OpenImagingLab/AnyRecon.git
cd AnyRecon
conda create -n anyrecon python=3.10 -y
conda activate anyrecon
pip install torch==2.4.1 torchvision==0.19.1 torchaudio==2.4.1 --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt
```

## 🚀 Quick Start

### Inference

You can run inference with the provided Python script (make sure you have downloaded the required weights and placed them in the `./checkpoints` folder):

```bash
python run_AnyRecon.py \
    --root_dir example/valley \
    --output_dir example/valley \
    --lora_path full_attention.ckpt
```

## 🔗 Citation

If you find our work helpful, please cite it:

```bibtex
@article{chen2026anyrecon,
  title={AnyRecon: Arbitrary-View 3D Reconstruction with Video Diffusion Model},
  author={Chen, Yutian and Guo, Shi and Jin, Renbiao and Yang, Tianshuo and Cai, Xin and Luo, Yawen and Yang, Mingxin and Yu, Mulin and Xu, Linning and Xue, Tianfan},
  journal={arXiv preprint arXiv:2604.19747},
  year={2026}
}
```

## 💗 Acknowledgments

Thanks to these great repositories: [Wan2.1](https://github.com/Wan-Video/Wan2.1) and
[DiffSynth-Studio](https://github.com/modelscope/DiffSynth-Studio).
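
---

To make the conditioning scheme from the abstract concrete, here is a minimal sketch of the idea of a prepended capture view cache with no temporal compression: an arbitrary, unordered set of capture frames is stacked in front of the target slots so each frame keeps its own latent position. The function name, array shapes, and mask convention are hypothetical illustrations, not the actual AnyRecon implementation.

```python
import numpy as np

def build_conditioning_sequence(capture_frames, num_target, frame_shape=(3, 64, 64)):
    """Hypothetical sketch: prepend an arbitrary number of capture views as a
    persistent cache ahead of the target frames to be synthesized.

    capture_frames: list of (C, H, W) arrays, any count, in any order.
    num_target: number of novel-view slots to generate.
    Returns (sequence, cond_mask), where cond_mask marks capture frames with 1.
    """
    cond = np.stack(capture_frames, axis=0)             # (N_cond, C, H, W) cache
    targets = np.zeros((num_target, *frame_shape))      # empty target slots
    # No temporal compression: one latent position per frame, so every capture
    # view keeps a frame-level correspondence to its position in the sequence.
    sequence = np.concatenate([cond, targets], axis=0)
    cond_mask = np.array([1] * len(capture_frames) + [0] * num_target)
    return sequence, cond_mask
```

For example, four capture views plus eight target slots yield a sequence of twelve frame positions, with the mask telling the model which positions are fixed conditions and which are to be generated.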