---
title: NeAR
emoji: π§
colorFrom: blue
colorTo: indigo
sdk: gradio
sdk_version: 6.9.0
python_version: '3.10'
app_file: app.py
pinned: false
license: apache-2.0
short_description: 'Relightable 3D from one image: SLAT, neural renderer, HDRI'
---
# NeAR
NeAR is a relightable 3D generation and rendering project built on top of TRELLIS-style Structured Latents (SLAT) and a lighting-aware neural renderer. Given a casually lit input image, NeAR estimates relightable neural assets and renders them under novel environment lighting and viewpoints.
This repository combines:
- a TRELLIS-derived latent pipeline for image-conditioned SLAT prediction,
- a lighting-aware neural renderer conditioned on HDR environment maps,
- an optional geometry frontend based on Hunyuan3D-2.1,
- tools for single-view relighting, novel-view relighting, HDRI rotation videos, and GLB export.
## Release Status

- [x] Checkpoints / model weights
- [x] Inference code
- [x] Hugging Face demo (Space)
- [ ] Data release
- [ ] Training code
Space: huggingface.co/spaces/luh0502/NeAR; see `DEPLOY_HF_SPACE.md` for push steps and GPU/CUDA notes. Data and training code are coming soon.
## ZeroGPU Runtime Notes

- The Space entry script is `app_file` in the YAML header at the very top of this README (currently `app.py`). Use `app.py` for the full NeAR app, `app_gsplat.py` for the gsplat demo, or `app_hyshape.py` for HyShape-only diagnostics.
- Space still runs the old entry? (1) Open the Space URL: huggingface.co/spaces/luh0502/NeAR (not huggingface.co/luh0502/NeAR, which is the model repo). (2) In Space Settings → App file, ensure it matches the README (the dashboard can override or lag). (3) Restart the Space (or trigger a new build) after pushing.
- `app_hyshape.py` (when used as entry) defaults to `NEAR_HYSHAPE_GEOMETRY_CPU_PRELOAD_AT_START=1`: Hunyuan is loaded on CPU in the background at startup, and Generate Mesh pays the GPU move plus inference inside `@spaces.GPU`.
- The full `app.py` Space keeps page-load image defaults and the HDRI preview on lightweight CPU paths, so the first page visit does not spend the first ZeroGPU allocation on model initialization. `app.py` supports an optional background CPU preload of Hunyuan + NeAR (`NEAR_MODEL_CPU_PRELOAD_AT_START`); `@spaces.GPU` callbacks move each pipeline to CUDA once, then run inference. gsplat is used when the pipeline renders (there is no separate app-level warmup pass).
- Binary wheels and mirrored auxiliary assets are stored separately:
  - `luh0502/near-wheels`: prebuilt wheels such as `nvdiffrast` and optional future `gsplat` wheels
  - `luh0502/near-assets`: torch.hub-compatible mirrors of auxiliary assets such as the DINOv2 repo used by NeAR/TRELLIS image conditioning
- See `DEPLOY_HF_SPACE.md` for the recommended ZeroGPU environment variable matrix and the `hf upload` workflow.
## Teaser
Relightable 3D generative rendering results. Columns from left to right depict the target illumination, the casually lit input image, Blender-rendered results from Trellis 3D, Hunyuan 3D-2.1 (with PBR materials), our method's estimated multi-view PBR materials back-projected onto the given mesh, our neural rendering results, and ground truth.
## Example Relighting / Material Videos
The following videos are produced by the local NeAR example pipeline and are useful for quickly previewing:
- Novel-view relighting video: camera moves while the illumination stays fixed.
- HDRI rotation preview: environment map rotates while the camera stays fixed.
- Relighting under rotating HDRI: material response changes under time-varying illumination.
If these local videos are not present, you can generate them with `example.py` and `--video_frames > 0`.
## Overview
NeAR couples asset representation and renderer design:
- Asset side: from an input image, a structured latent representation stores geometry-aware and material-aware information in a compact sparse latent.
- Renderer side: a neural renderer takes the latent, view parameters, and an HDR environment map, then predicts relightable outputs such as color, base color, metallic, roughness, and shadow.
Compared with a standard image-to-3D pipeline, NeAR focuses on:
- relighting under novel HDR illumination,
- view-consistent rendering,
- fast feed-forward inference, and
- material-aware rendering outputs.
## Repository Structure
Key files and directories:
- `example.py` – minimal end-to-end inference example.
- `app.py` – full NeAR Gradio app; set `app_file: app.py` in the README YAML header to run it on the Space.
- `app_gsplat.py` – gsplat image-fitting Gradio demo (ZeroGPU); set the README YAML `app_file` to this when you want this entry (see top of file).
- `app_hyshape.py` – HyShape-only diagnostic; set `app_file: app_hyshape.py` for Hunyuan geometry in isolation.
- `setup.sh` – environment setup helper.
- `checkpoints/` – local pipeline configuration and model checkpoints.
- `trellis/pipelines/near_image_to_relightable_3d.py` – main NeAR inference pipeline.
- `trellis/utils/render_utils_rl.py` – relighting rendering utilities.
- `trellis/datasets/hdri_processer.py` – HDRI preprocessing and rotation helpers.
- `hy3dshape/` – local Hunyuan3D code used for geometry generation.
## Installation
### Requirements
- Linux
- NVIDIA GPU
- Python 3.10+ recommended
- CUDA-compatible PyTorch environment
NeAR inherits many dependencies from TRELLIS and additionally uses relighting-related packages such as `pyexr`, `simple_ocio`, `open3d`, and the local `hy3dshape` module.
### Setup
Use the provided setup script as a starting point:
```bash
cd /root/code/3diclight/NeAR
. ./setup.sh --help
```
A typical TRELLIS-style setup may look like:
```bash
. ./setup.sh --new-env --basic --xformers --flash-attn --diffoctreerast --spconv --mipgaussian --kaolin --nvdiffrast
```
Depending on your environment, you may still need to manually install extra packages used by NeAR, for example:
```bash
pip install pyexr simple-ocio open3d rembg imageio easydict
```
If you use Hunyuan3D geometry generation, make sure the `hy3dshape` dependencies are also installed.
## Checkpoints
The local pipeline configuration is defined in:
`checkpoints/pipeline.yaml`
It references the main model components used by NeAR, including:
- `decoder`
- `hdri_encoder`
- `neural_basis`
- `renderer`
- `slat_flow_model`
The geometry model is currently run separately in `example.py` via `tencent/Hunyuan3D-2.1`.
## Inference Modes
NeAR currently supports two practical inference modes.
### 1. From image to relightable result
Pipeline:
- preprocess the image,
- generate geometry using Hunyuan3D,
- convert geometry to sparse coordinates,
- predict SLAT from the image and geometry,
- render under a target HDRI.
### 2. From an existing SLAT to relightable result

If you already have a saved `.npz` SLAT file, NeAR can skip geometry and latent generation and render directly under a target HDRI.
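In code, this mode corresponds to the `load_slat` / `load_hdri` / `render_view` methods listed under Core API; the sketch below is hedged (exact signatures and return types are assumptions — check `example.py` for the real calls).

```python
def relight_saved_slat(pipeline, slat_path, hdri_path):
    """Skip geometry + latent generation: load a saved SLAT and re-render it.

    Method names follow the Core API section of the README; the argument
    lists and return value shown here are illustrative assumptions.
    """
    slat = pipeline.load_slat(slat_path)   # .npz produced with --save_slat
    hdri = pipeline.load_hdri(hdri_path)   # HDR environment map (.exr)
    return pipeline.render_view(slat, hdri)  # one relit view of the asset
```

The same loaded SLAT can then be passed to `render_camera_path_video(...)` or `render_hdri_rotation_video(...)` without redoing any generation work.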
## Minimal Example
The main entry point is `example.py`.
### Single-image relighting

```bash
python example.py \
  --image assets/example_image/T.png \
  --hdri assets/hdris/studio_small_03_1k.exr \
  --out_dir relight_out
```
### Rotate the environment light

```bash
python example.py \
  --image assets/example_image/T.png \
  --hdri assets/hdris/studio_small_03_1k.exr \
  --hdri_rot 90 \
  --out_dir relight_out
```
### Render from an existing SLAT

```bash
python example.py \
  --slat /path/to/sample_slat.npz \
  --hdri assets/hdris/studio_small_03_1k.exr \
  --out_dir relight_out
```
### Generate camera-path and HDRI-rotation videos

```bash
python example.py \
  --image assets/example_image/T.png \
  --hdri assets/hdris/studio_small_03_1k.exr \
  --video_frames 40 \
  --out_dir relight_out
```
## Example Outputs
Running `example.py` typically produces:
- `relight_out/initial_3d_shape.glb` – geometry generated by Hunyuan3D
- `relight_out/relight_color.png` – relit color result
- `relight_out/base_color.png` – estimated base color
- `relight_out/metallic.png` – metallic map visualization
- `relight_out/roughness.png` – roughness map visualization
- `relight_out/shadow.png` – shadow map visualization
- `relight_out/relight_camera_path.mp4` – novel-view relighting video
- `relight_out/hdri_roll.mp4` – rotating HDRI preview
- `relight_out/relight_hdri_rotation.mp4` – fixed-view relighting under rotating HDRI
If `--save_slat` is specified, the inferred SLAT will also be saved as an `.npz` file.
## Important Notes
### 1. Geometry is run outside the main NeAR pipeline
To avoid coupling geometry inference too tightly with the relighting pipeline, the current codebase runs Hunyuan3D separately in `example.py`, then passes the generated mesh into:

`pipeline.run_with_shape(...)`
This design keeps the relighting pipeline cleaner and makes geometry easier to swap out.
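The two-stage flow can be sketched as follows. `geometry_model` stands in for the separately loaded Hunyuan3D pipeline, and the call signatures are illustrative assumptions, not the exact ones used in `example.py`.

```python
def image_to_relightable(pipeline, geometry_model, image, hdri):
    """Geometry first (outside NeAR), relighting second (inside NeAR).

    `geometry_model` is a hypothetical callable wrapping Hunyuan3D; the
    `hdri` keyword on run_with_shape is an assumed argument name.
    """
    image = pipeline.preprocess_image(image)  # e.g. background removal / crop
    mesh = geometry_model(image)              # Hunyuan3D runs outside the pipeline
    return pipeline.run_with_shape(image, mesh, hdri=hdri)
```

Because the geometry stage is just a callable producing a mesh, swapping Hunyuan3D for another image-to-geometry model only changes `geometry_model`.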
### 2. HDRI rotation
`--hdri_rot` in `example.py` controls the static rotation angle for regular rendering.
For continuous environment rotation, `example.py` also calls:

`pipeline.render_hdri_rotation_video(...)`
which returns both:
- rotated HDRI preview frames, and
- rendered relighting frames.
### 3. Full video rendering
Some video modes concatenate multiple outputs side by side:
- color
- base color
- metallic
- roughness
- shadow
This is useful for debugging and qualitative comparison, but increases video width and storage size.
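The side-by-side layout is a plain horizontal concatenation of the per-output frames before video encoding; a small sketch, assuming all outputs share the same height and dtype:

```python
import numpy as np

def concat_outputs(frames: list[np.ndarray]) -> np.ndarray:
    """Stack color / base color / metallic / roughness / shadow frames
    side by side along the width axis to form one debug video frame."""
    heights = {f.shape[0] for f in frames}
    if len(heights) != 1:
        raise ValueError("all frames must share the same height")
    return np.concatenate(frames, axis=1)
```

With five outputs, the resulting video is roughly five times the width of a single render, which is where the extra storage cost comes from.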
### 4. Resolution and rendering options
The renderer is configured inside the pipeline via:
- `setup_renderer(...)`
- `render_view(...)`
- `render_camera_path_video(...)`
- `render_hdri_rotation_video(...)`
You can adjust:
- output resolution,
- camera FOV,
- camera radius,
- background color,
- HDRI rotation,
- video frame count.
## Core API
Main NeAR pipeline methods include:
- `preprocess_image(image)`
- `run_with_shape(image, mesh, ...)`
- `run_with_coords(image_list, coords, ...)`
- `load_slat(path)`
- `load_hdri(path)`
- `render_view(...)`
- `render_camera_path_video(...)`
- `render_hdri_rotation_video(...)`
- `export_glb_from_slat(...)`
## Typical Workflow
A practical workflow is:
- start from an image,
- generate geometry with Hunyuan3D,
- infer SLAT with `run_with_shape`,
- save the SLAT,
- reuse the SLAT for different HDRIs, different HDRI rotations, and different camera paths.
This avoids recomputing geometry and latent generation every time you want to test a new lighting setup.
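The reuse step can be sketched as a loop over lighting setups with the SLAT loaded once. Method names come from the Core API section; the `hdri_rot` keyword mirrors the `--hdri_rot` CLI flag but is an assumed argument name.

```python
def sweep_lighting(pipeline, slat_path, hdri_paths, rotations=(0, 90, 180)):
    """Render one saved SLAT under several HDRIs and rotation angles
    without re-running geometry or latent generation."""
    slat = pipeline.load_slat(slat_path)  # pay the loading cost once
    results = {}
    for hdri_path in hdri_paths:
        hdri = pipeline.load_hdri(hdri_path)
        for rot in rotations:
            # kwarg name is hypothetical; check render_view(...) for the real one
            results[(hdri_path, rot)] = pipeline.render_view(slat, hdri, hdri_rot=rot)
    return results
```

Only the cheap render calls repeat; the expensive Hunyuan3D and SLAT stages run zero times inside the loop.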
## Related Projects
## Acknowledgements
This repository builds on and adapts ideas, codebases, and problem settings from several recent works on structured 3D latents, relighting, inverse rendering, and PBR-aware 3D generation, including:
- TRELLIS for structured latent generation and sparse 3D asset representations,
- Hunyuan3D 2.1 for image-to-geometry generation,
- DiLightNet and Neural Gaffer for diffusion-based lighting control and object relighting,
- DiffusionRenderer for neural inverse / forward rendering under complex appearance and illumination,
- MeshGen for PBR textured mesh generation,
- RGB↔X for material- and lighting-aware decomposition and synthesis.
We thank the authors of these projects for releasing their papers, code, models, and project pages. If you use this repository, please also check the licenses and terms of the upstream dependencies and models.
## BibTeX
If you find this project useful, please consider citing our paper:
```bibtex
@inproceedings{li2025near,
  title={NeAR: Coupled Neural Asset-Renderer Stack},
  author={Li, Hong and Ye, Chongjie and Chen, Houyuan and Xiao, Weiqing and Yan, Ziyang and Xiao, Lixing and Chen, Zhaoxi and Xiang, Jianfeng and Xu, Shaocong and Liu, Xuhui and Wang, Yikai and Zhang, Baochang and Han, Xiaoguang and Yang, Jiaolong and Zhao, Hao},
  booktitle={CVPR},
  year={2026}
}
```