HY-World 2.0: A Multi-Modal World Model for Reconstructing, Generating and Simulating 3D Worlds

English | ็ฎ€ไฝ“ไธญๆ–‡

HY-World-2.0 Teaser


"What Is Now Proved Was Once Only Imagined"

๐ŸŽฅ Video

๐Ÿ”ฅ News

  • [April 15, 2026]: 🚀 Released the HY-World 2.0 technical report & partial code!
  • [April 15, 2026]: 🤗 Open-sourced WorldMirror 2.0 inference code and model weights!
  • [Coming Soon]: Release the full HY-World 2.0 (World Generation) inference code.
  • [Coming Soon]: Release Panorama Generation (HY-Pano 2.0) model weights & code.
  • [Coming Soon]: Release Trajectory Planning (WorldNav) code.
  • [Coming Soon]: Release World Expansion (WorldStereo 2.0) model weights & inference code.

๐Ÿ“‹ Table of Contents

๐Ÿ“– Introduction

HY-World 2.0 is a multi-modal world model framework for world generation and world reconstruction. It accepts diverse input modalities (text, single-view images, multi-view images, and videos) and produces 3D world representations (meshes / 3D Gaussian Splatting). It offers two core capabilities:

  • World Generation (text / single image → 3D world): synthesizes high-fidelity, navigable 3D scenes through a four-stage pipeline: a) Panorama Generation with HY-Pano 2.0, b) Trajectory Planning with WorldNav, c) World Expansion with WorldStereo 2.0, and d) World Composition with WorldMirror 2.0 & 3DGS learning.
  • World Reconstruction (multi-view images / video โ†’ 3D): Powered by WorldMirror 2.0, a unified feed-forward model that simultaneously predicts depth, surface normals, camera parameters, 3D point clouds, and 3DGS attributes in a single forward pass.

HY-World 2.0 is the first open-source 3D world model to deliver results comparable to closed-source methods such as Marble. We will release all model weights, code, and technical details to facilitate reproducibility and advance research in this field.

Why 3D World Models?

Existing world models, such as Genie 3, Cosmos, and HY-World 1.5 (WorldPlay+WorldCompass), generate pixel-level videos: essentially "watching a movie" that vanishes once playback ends. HY-World 2.0 takes a fundamentally different approach: it directly produces editable, persistent 3D assets (meshes / 3DGS) that can be imported into tools and engines such as Blender, Unity, Unreal Engine, and Isaac Sim, which makes it more like "building a playable game" than recording a clip. This paradigm shift natively resolves many long-standing pain points of video world models:

| | Video World Models | 3D World Model (HY-World 2.0) |
|---|---|---|
| Output | Pixel videos (non-editable) | Real 3D assets: meshes / 3DGS (fully editable) |
| Playable Duration | Limited (typically < 1 min) | Unlimited: assets persist permanently |
| 3D Consistency | Poor (flickering, artifacts across views) | Native: inherently consistent in 3D |
| Real-Time Rendering | Requires per-frame inference; high latency | Real time on consumer GPUs |
| Controllability | Weak (imprecise character control, no real physics) | Precise: zero-error control, real physics collision, accurate lighting |
| Inference Cost | Accumulates with every interaction | One-time generation; rendering cost ≈ 0 |
| Engine Compatibility | ✗ Video files only | ✓ Directly importable into Blender / UE / Isaac Sim |
| | $\color{IndianRed}{\textsf{Watch a video, then it's gone}}$ | $\color{RoyalBlue}{\textbf{Build a world, keep it forever}}$ |

All of the above are real 3D assets (not generated videos), created entirely by HY-World 2.0 and captured from live real-time interaction.
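The inference-cost row of the comparison can be made concrete with a toy amortization model; all numbers here are illustrative assumptions, not measured benchmarks:

```python
# Toy amortized-cost model (illustrative numbers only, not benchmarks).
# A video world model pays inference cost on every interaction step;
# a 3D world model pays once at generation time, then renders cheaply.

def video_model_cost(steps, cost_per_step=1.0):
    """Total cost grows linearly with interaction length."""
    return steps * cost_per_step

def world_model_cost(steps, generation_cost=100.0, render_cost=0.01):
    """One-time generation plus near-zero per-frame rendering."""
    return generation_cost + steps * render_cost

for steps in (10, 100, 1000, 10000):
    v, w = video_model_cost(steps), world_model_cost(steps)
    print(steps, round(v, 2), round(w, 2),
          "3D cheaper" if w < v else "video cheaper")
```

With these assumed costs, the 3D world model breaks even after roughly 100 interaction steps and dominates thereafter.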

โœจ Highlights

  • Real 3D Worlds, Not Just Videos

    Unlike video-only world models (e.g., Genie 3, HY-World 1.5), HY-World 2.0 generates real 3D assets — 3DGS, meshes, and point clouds — that are freely explorable, editable, and directly importable into Unity / Unreal Engine / Isaac Sim. From a single text prompt or image, create navigable 3D worlds in diverse styles: realistic, cartoon, game, and more.

  • Instant 3D Reconstruction from Photos & Videos

    Powered by WorldMirror 2.0, a unified feed-forward model that predicts dense point clouds, depth maps, surface normals, camera parameters, and 3DGS from multi-view images or casual videos in a single forward pass. Supports flexible-resolution inference (50Kโ€“500K pixels) with SOTA accuracy. Capture a video, get a digital twin.

  • Interactive Character Exploration

    Go beyond viewing: play inside your generated worlds. HY-World 2.0 supports first-person navigation and a third-person character mode, enabling users to freely explore AI-generated streets, buildings, and landscapes with physics-based collision. Visit our product page to try it for free.
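For intuition about the flexible-resolution budget mentioned above (50K–500K pixels), the sketch below rescales an input resolution into that range while preserving aspect ratio; the function and its rounding policy are our own, not part of the WorldMirror 2.0 API:

```python
import math

# Flexible-resolution budget quoted in this model card.
PIXEL_MIN, PIXEL_MAX = 50_000, 500_000

def fit_to_pixel_budget(width, height, pmin=PIXEL_MIN, pmax=PIXEL_MAX):
    """Scale (width, height) so total pixels land inside [pmin, pmax],
    preserving aspect ratio. Illustrative helper, not the real API."""
    pixels = width * height
    if pmin <= pixels <= pmax:
        return width, height
    target = pmax if pixels > pmax else pmin
    scale = math.sqrt(target / pixels)
    return max(1, round(width * scale)), max(1, round(height * scale))

# A 4K frame (~8.3M pixels) gets downscaled into the budget:
w, h = fit_to_pixel_budget(3840, 2160)
print(w, h, w * h)
```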

๐Ÿงฉ Architecture

  • Refer to our tech report for more details

    A systematic pipeline of HY-World 2.0 โ€” Panorama Generation (HY-Pano-2.0) โ†’ Trajectory Planning (WorldNav) โ†’ World Expansion (WorldStereo 2.0) โ†’ World Composition (WorldMirror 2.0 + 3DGS) โ€” that automatically transforms text or a single image into a high-fidelity, navigable 3D world (3DGS/mesh outputs).
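The four-stage flow above can be sketched as plain function composition. Every function below is a stub with an assumed name; it only illustrates the data flow between stages, not the real interfaces:

```python
# Stage stubs with assumed names -- they illustrate the data flow of
# the HY-World 2.0 generation pipeline, not its actual interfaces.

def panorama_generation(prompt):          # a) HY-Pano 2.0
    return {"panorama": f"360deg view of: {prompt}"}

def trajectory_planning(world):           # b) WorldNav
    world["trajectory"] = ["start", "waypoint_1", "goal"]
    return world

def world_expansion(world):               # c) WorldStereo 2.0
    world["expanded_views"] = len(world["trajectory"])
    return world

def world_composition(world):             # d) WorldMirror 2.0 + 3DGS learning
    world["assets"] = ["3dgs", "mesh"]
    return world

def generate_world(prompt):
    """Text or single image -> navigable 3D world (stages a-d in order)."""
    world = panorama_generation(prompt)
    for stage in (trajectory_planning, world_expansion, world_composition):
        world = stage(world)
    return world

print(generate_world("a rainy cyberpunk alley"))
```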

๐Ÿ“ Open-Source Plan

  • โœ… Technical Report
  • โœ… WorldMirror 2.0 Code & Model Checkpoints
  • โฌœ Full Inference Code for World Generation (WorldNav + World Composition)
  • โฌœ Panorama Generation (HY-Pano 2.0) Model & Code โ€” HunyuanWorld 1.0 available as interim alternative
  • โฌœ World Expansion (WorldStereo 2.0) Model & Code โ€” WorldStereo available as interim alternative

๐ŸŽ Model Zoo

World Reconstruction โ€” WorldMirror Series

| Model | Description | Params | Date | Hugging Face |
|---|---|---|---|---|
| WorldMirror 2.0 | Multi-view / video → 3D reconstruction | ~1.2B | 2026 | Download |
| WorldMirror 1.0 | Multi-view / video → 3D reconstruction (legacy) | ~1.2B | 2025 | Download |

Panorama Generation

| Model | Description | Params | Date | Hugging Face |
|---|---|---|---|---|
| HY-PanoGen | Text / image → 360° panorama | — | Coming Soon | — |

World Generation

| Model | Description | Params | Date | Hugging Face |
|---|---|---|---|---|
| WorldStereo 2.0 | Panorama → navigable 3DGS world | — | Coming Soon | — |

We recommend referring to our previous works, WorldStereo and WorldMirror, for background knowledge on world generation and reconstruction.

๐Ÿค— Get Started

Install Requirements

We recommend CUDA 12.4 for installation.

```shell
# 1. Clone the repository
git clone https://github.com/Tencent-Hunyuan/HY-World-2.0
cd HY-World-2.0

# 2. Create conda environment
conda create -n hyworld2 python=3.10
conda activate hyworld2

# 3. Install PyTorch (CUDA 12.4)
pip install torch==2.4.0 torchvision==0.19.0 --index-url https://download.pytorch.org/whl/cu124

# 4. Install dependencies
pip install -r requirements.txt

# 5. Install FlashAttention
# (Recommended) FlashAttention-3, built from the hopper/ subdirectory
git clone https://github.com/Dao-AILab/flash-attention.git
cd flash-attention/hopper
python setup.py install
cd ../../
rm -rf flash-attention

# Alternatively, for a simpler installation, use FlashAttention-2:
pip install flash-attn --no-build-isolation
```
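After installation, a quick sanity check (our own generic snippet, not part of the repo) can confirm that the key packages import:

```python
# Generic post-install sanity check (not part of the HY-World 2.0 repo):
# report whether each key package can be imported in this environment.
def check_install(modules=("torch", "torchvision", "flash_attn")):
    status = {}
    for mod in modules:
        try:
            __import__(mod)
            status[mod] = "ok"
        except ImportError:
            status[mod] = "missing"
    return status

print(check_install())
```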

Code Usage — Panorama Generation (HY-Pano 2.0)

Coming soon.

Code Usage — World Generation (WorldNav, WorldStereo 2.0, and 3DGS)

Coming soon.

We recommend referring to our previous work, WorldStereo, which serves as an open-source preview of WorldStereo 2.0.

Code Usage โ€” WorldMirror 2.0

WorldMirror 2.0 can be used through a Python API, a command-line interface, or a Gradio web demo.

We provide a diffusers-like Python API for WorldMirror 2.0. Model weights are automatically downloaded from Hugging Face on the first run.

```python
from hyworld2.worldrecon.pipeline import WorldMirrorPipeline

pipeline = WorldMirrorPipeline.from_pretrained('tencent/HY-World-2.0')
result = pipeline('path/to/images')
```
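The pipeline call above points at a directory of images. When preparing inputs it can help to enumerate views deterministically; the helper below is our own convenience code, not part of the hyworld2 API:

```python
from pathlib import Path

# Common image extensions; adjust to match your capture setup.
IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".webp"}

def collect_image_paths(directory):
    """Return image files in a directory, sorted by filename so the
    view order is deterministic across runs. Illustrative helper only."""
    return sorted(p for p in Path(directory).iterdir()
                  if p.suffix.lower() in IMAGE_EXTS)
```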

With Prior Injection (Camera & Depth):

```python
result = pipeline(
    'path/to/images',
    prior_cam_path='path/to/prior_camera.json',
    prior_depth_path='path/to/prior_depth/',
)
```

For the detailed structure of camera/depth priors and how to prepare them, see Prior Preparation Guide.

CLI:

```shell
# Single GPU
python -m hyworld2.worldrecon.pipeline --input_path path/to/images

# Multi-GPU
torchrun --nproc_per_node=2 -m hyworld2.worldrecon.pipeline \
    --input_path path/to/images \
    --use_fsdp --enable_bf16
```

Important: In multi-GPU mode, the number of input images must be >= the number of GPUs. For example, with --nproc_per_node=8, provide at least 8 images.
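That constraint can be enforced up front with a small pre-flight check; the helper below is illustrative, not part of the released CLI:

```python
def check_multi_gpu_inputs(num_images, nproc_per_node):
    """Enforce the documented constraint: in multi-GPU mode the number
    of input images must be >= the number of GPU processes."""
    if nproc_per_node > 1 and num_images < nproc_per_node:
        raise ValueError(
            f"Got {num_images} images for {nproc_per_node} GPUs; "
            f"provide at least {nproc_per_node} images."
        )

check_multi_gpu_inputs(num_images=8, nproc_per_node=8)   # OK
check_multi_gpu_inputs(num_images=12, nproc_per_node=2)  # OK
```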

Gradio App โ€” WorldMirror 2.0

We provide an interactive Gradio web demo for WorldMirror 2.0. Upload images or videos and visualize 3DGS, point clouds, depth maps, normal maps, and camera parameters in your browser.

```shell
# Single GPU
python -m hyworld2.worldrecon.gradio_app

# Multi-GPU
torchrun --nproc_per_node=2 -m hyworld2.worldrecon.gradio_app \
    --use_fsdp --enable_bf16
```

For the full list of Gradio app arguments (port, share, local checkpoints, etc.), see DOCUMENTATION.md.

๐Ÿ”ฎ Performance

For full benchmark results, please refer to the technical report.

WorldStereo 2.0 โ€” Camera Control

| Methods | RotErr ↓ | TransErr ↓ | ATE ↓ | Q-Align ↑ | CLIP-IQA+ ↑ | Laion-Aes ↑ | CLIP-I ↑ |
|---|---|---|---|---|---|---|---|
| SEVA | 1.690 | 1.578 | 2.879 | 3.232 | 0.479 | 4.623 | 77.16 |
| Gen3C | 0.944 | 1.580 | 2.789 | 3.353 | 0.489 | 4.863 | 82.33 |
| WorldStereo | 0.762 | 1.245 | 2.141 | 4.149 | 0.547 | 5.257 | 89.05 |
| WorldStereo 2.0 | 0.492 | 0.968 | 1.768 | 4.205 | 0.544 | 5.266 | 89.43 |

RotErr / TransErr / ATE are camera metrics; the remaining columns measure visual quality.

WorldStereo 2.0 โ€” Single-View-Generated Reconstruction

The first four metric columns are on Tanks-and-Temples; the last four on MipNeRF360.

| Methods | Precision ↑ | Recall ↑ | F1-Score ↑ | AUC ↑ | Precision ↑ | Recall ↑ | F1-Score ↑ | AUC ↑ |
|---|---|---|---|---|---|---|---|---|
| SEVA | 33.59 | 35.34 | 36.73 | 51.03 | 22.38 | 55.63 | 28.75 | 46.81 |
| Gen3C | 46.73 | 25.51 | 31.24 | 42.44 | 23.28 | 75.37 | 35.26 | 52.10 |
| Lyra | 50.38 | 28.67 | 32.54 | 43.05 | 30.02 | 58.60 | 36.05 | 49.89 |
| FlashWorld | 26.58 | 20.72 | 22.29 | 30.45 | 35.97 | 53.77 | 42.60 | 53.86 |
| WorldStereo 2.0 | 43.62 | 41.02 | 41.43 | 58.19 | 43.19 | 65.32 | 51.27 | 65.79 |
| WorldStereo 2.0 (DMD) | 40.41 | 44.41 | 43.16 | 60.09 | 42.34 | 64.83 | 50.52 | 65.64 |

WorldMirror 2.0 โ€” Point Map Reconstruction

Point Map Reconstruction on 7-Scenes, NRGBD, and DTU. We report the mean Accuracy and Completeness of WorldMirror under different input configurations. Bold results are best. "L / M / H" denote low / medium / high inference resolution. "+ all priors" denotes injection of camera extrinsics, camera intrinsics, and depth priors.

| Method | 7-Scenes Acc. ↓ | 7-Scenes Comp. ↓ | NRGBD Acc. ↓ | NRGBD Comp. ↓ | DTU Acc. ↓ | DTU Comp. ↓ |
|---|---|---|---|---|---|---|
| **WorldMirror 1.0** | | | | | | |
| L | 0.043 | 0.055 | 0.046 | 0.049 | 1.476 | 1.768 |
| L + all priors | 0.021 | 0.026 | 0.022 | 0.020 | 1.347 | 1.392 |
| M | 0.043 | 0.049 | 0.041 | 0.045 | 1.017 | 1.780 |
| M + all priors | 0.018 | 0.023 | 0.016 | 0.014 | 0.735 | 0.935 |
| H | 0.079 | 0.087 | 0.077 | 0.093 | 2.271 | 2.113 |
| H + all priors | 0.042 | 0.041 | 0.078 | 0.082 | 1.773 | 1.478 |
| **WorldMirror 2.0** | | | | | | |
| L | 0.041 | 0.052 | 0.047 | 0.058 | 1.352 | 2.009 |
| L + all priors | 0.019 | 0.024 | 0.017 | 0.015 | 1.100 | 1.201 |
| M | 0.033 | 0.046 | 0.039 | 0.047 | 1.005 | 1.892 |
| M + all priors | **0.013** | 0.017 | **0.013** | **0.013** | 0.690 | 0.876 |
| H | 0.037 | 0.040 | 0.046 | 0.053 | 0.845 | 1.904 |
| H + all priors | **0.012** | **0.016** | 0.015 | 0.016 | **0.554** | **0.771** |
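Using the WorldMirror 2.0 medium-resolution numbers from the table above (values copied directly from it), a quick calculation shows how much prior injection reduces the Accuracy error:

```python
# Accuracy values for WorldMirror 2.0 at medium (M) resolution, copied
# from the point-map reconstruction table above.
acc = {
    "7-Scenes": {"M": 0.033, "M+priors": 0.013},
    "NRGBD":    {"M": 0.039, "M+priors": 0.013},
    "DTU":      {"M": 1.005, "M+priors": 0.690},
}

for dataset, v in acc.items():
    reduction = 100 * (v["M"] - v["M+priors"]) / v["M"]
    print(f"{dataset}: priors cut Acc. error by {reduction:.1f}%")
```

Prior injection roughly halves the error on the scene-level datasets, with a smaller but still notable gain on object-level DTU.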

WorldMirror 2.0 โ€” Prior Comparison

Comparison with Pow3R and MapAnything under Different Prior Conditions. Results are averaged on 7-Scenes, NRGBD, and DTU datasets. Pow3R (pro) refers to the original Pow3R with Procrustes alignment.

๐ŸŽฌ More Examples

๐Ÿ“– Documentation

For detailed usage guides, parameter references, output format specifications, and prior injection instructions, see DOCUMENTATION.md.

๐Ÿ“š Citation

If you find HY-World 2.0 useful for your research, please cite:

```bibtex
@article{hyworld22026,
  title={HY-World 2.0: A Multi-Modal World Model for Reconstructing, Generating and Simulating 3D Worlds},
  author={Tencent HY-World Team},
  journal={arXiv preprint},
  year={2026}
}

@article{hunyuanworld2025tencent,
  title={HunyuanWorld 1.0: Generating Immersive, Explorable, and Interactive 3D Worlds from Words or Pixels},
  author={Team HunyuanWorld},
  journal={arXiv preprint},
  year={2025}
}
```

๐Ÿ“ง Contact

For questions or feedback, please email tengfeiwang12@gmail.com.

๐Ÿ™ Acknowledgements

We would like to thank the authors of HunyuanWorld 1.0, WorldMirror, WorldPlay, WorldStereo, and HunyuanImage for their great work.
