NAKA-GS
This pipeline was built on top of VGGT and gsplat; thanks to the authors for their excellent work.
The paper is available at https://arxiv.org/abs/2604.11142, or as a PDF at https://arxiv.org/pdf/2604.11142.
NAKA-GS is an end-to-end pipeline for low-light 3D scene reconstruction and novel-view synthesis:
- **Naka** enhances the low-light training images.
- **VGGT** reconstructs sparse cameras and geometry from the enhanced images.
- **gsplat** performs Gaussian Splatting training, with optional **PPM** dense-point preprocessing.

Qualitative results (visual comparisons on RealX3D) can be found in the `asset/` folder.
1. What The Pipeline Expects
Each scene directory should look like this before the first run:
```
data/
└── Scene1/
    ├── train/                  # low-light training images
    ├── transforms_train.json   # training camera poses
    ├── transforms_test.json    # render trajectory / test poses
    └── test/                   # optional GT test images for metrics
```
After the pipeline runs, it will automatically create:
```
data/
└── Scene/
    ├── images/            # Naka-enhanced images
    ├── sparse/            # VGGT reconstruction outputs
    │   ├── cameras.bin
    │   ├── images.bin
    │   ├── points3D.bin
    │   └── points.ply
    └── gsplat_results/    # rendering results, stats, checkpoints
```
Notes:
- `images/`, `sparse/`, and `gsplat_results/` do not need to exist before the first run.
- `sparse/points.ply` is produced by the VGGT stage and then reused by the PPM stage.
- If a scene does not contain ground-truth test images, the pipeline still renders novel views but skips reference-image metrics.
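Before the first run, the required layout can be sanity-checked with a few lines of Python (a minimal sketch; `check_scene` is an illustrative helper, not part of the repository):

```python
from pathlib import Path

def check_scene(scene_dir: str) -> list[str]:
    """Return a list of problems with the scene layout; empty means ready."""
    root = Path(scene_dir)
    problems = []
    if not (root / "train").is_dir():
        problems.append("missing train/ (low-light training images)")
    for name in ("transforms_train.json", "transforms_test.json"):
        if not (root / name).is_file():
            problems.append(f"missing {name}")
    if not (root / "test").is_dir():
        # Not fatal: the pipeline renders anyway but skips metrics.
        problems.append("note: no test/ directory, metrics will be skipped")
    return problems
```

For a fully prepared scene, `check_scene("data/Scene1")` returns an empty list.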
2. System Requirements
- Linux
- NVIDIA GPU
- CUDA-compatible PyTorch environment
- A working CUDA toolkit with `nvcc` visible to the environment, for compiling the `gsplat` CUDA extension
All experiments and internal validation for this repository were tested on an NVIDIA RTX A6000 GPU.
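A quick way to confirm the toolchain before installing is a small probe like the one below (illustrative only; `nvcc` simply needs to be on `PATH` for the `gsplat` extension build, and the `torch` check is skipped if PyTorch is not installed yet):

```python
import shutil

def cuda_toolchain_summary() -> dict:
    """Report whether nvcc is on PATH and whether PyTorch can see a GPU."""
    summary = {"nvcc": shutil.which("nvcc") is not None}
    try:
        import torch  # only present once the environment is set up
        summary["torch_cuda"] = torch.cuda.is_available()
    except ImportError:
        summary["torch_cuda"] = None  # PyTorch not installed yet
    return summary
```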
3. Install The Environment
We recommend Conda for reproducibility.
If the unified environment in this README does not solve cleanly on your machine, use the original environment setup procedures from the two upstream components instead:
- `vggt/README.md`
- `gsplat/README.md`
In that fallback workflow, configure the VGGT and gsplat environments separately first, then return to this repository and run the unified pipeline script.
Option A: Conda
From the repository root:
```shell
conda env create -f environment.yaml
conda activate naka-gs
pip install git+https://github.com/rahul-goel/fused-ssim@328dc9836f513d00c4b5bc38fe30478b4435cbb5
pip install git+https://github.com/harry7557558/fused-bilagrid@90f9788e57d3545e3a033c1038bb9986549632fe
pip install git+https://github.com/nerfstudio-project/nerfview@4538024fe0d15fd1a0e4d760f3695fc44ca72787
pip install "ppisp @ git+https://github.com/nv-tlabs/ppisp@v1.0.0"
```
If your Conda solver is slow, you can use:
```shell
conda env create -f environment.yaml --solver=libmamba
```
Option B: Pip
If you already have a matching CUDA PyTorch installation:
```shell
pip install -r requirements.txt
```
4. Download The VGGT Checkpoint
The repository does not include the VGGT model weights. Download the official checkpoint and place it at:

```
vggt/checkpoint/model.pt
```
Official model page: https://huggingface.co/facebook/VGGT-1B
Direct checkpoint URL: https://huggingface.co/facebook/VGGT-1B/resolve/main/model.pt
Example:
```shell
mkdir -p vggt/checkpoint
wget -O vggt/checkpoint/model.pt \
  https://huggingface.co/facebook/VGGT-1B/resolve/main/model.pt
```
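After downloading, it is worth verifying that the file actually landed at the expected path and is not an empty or truncated download (a simple sketch; `checkpoint_ready` is an illustrative helper):

```python
from pathlib import Path

def checkpoint_ready(path: str = "vggt/checkpoint/model.pt") -> bool:
    """True if the checkpoint file exists and is non-empty."""
    p = Path(path)
    return p.is_file() and p.stat().st_size > 0
```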
5. Naka Checkpoint
By default, the pipeline looks for the Naka checkpoint at:
```
outputs/naka/checkpoints/latest.pth
```
6. Prepare The Scene
Put your scene under `data/` or any other location you prefer. The important part is that `--scene_dir` points to the scene root.
Example:
```
/path/to/naka-gs/data/Scene/
├── train/
├── transforms_train.json
├── transforms_test.json
└── test/          # optional
```
- `train/` is required.
- `transforms_train.json` is required when using `--pose-source replace`.
- `transforms_test.json` is required when using `--render-traj-path testjson`.
7. Reproduce The Unified Pipeline Command
From the repository root, run:
```shell
python run_lowlight_reconstruction.py \
    --scene_dir /path/to/naka-gs/data/Your_Scene \
    --pose-source replace \
    --render-traj-path testjson \
    --disable-viewer \
    --ppm-enable \
    --ppm-dense-points-path sparse/points.ply \
    --ppm-align-mode none \
    --ppm-voxel-size 0.01 \
    --ppm-tau0 0.005 \
    --ppm-beta 0.01 \
    --ppm-iters 6
```
This command runs the full pipeline:
- Low-light `train/` images are enhanced into `images/`.
- VGGT reconstructs the scene and writes `sparse/` plus `sparse/points.ply`.
- gsplat uses PPM to preprocess `sparse/points.ply`, then trains and renders the target trajectory from `transforms_test.json`.
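The `--ppm-voxel-size 0.01` option sets the resolution at which the dense cloud is resampled. The underlying idea of voxel-grid downsampling can be sketched with NumPy (an illustration of the general technique only, not the repository's PPM implementation):

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Collapse an (N, 3) point cloud to one centroid per occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()  # guard against shape changes across NumPy versions
    n_voxels = inverse.max() + 1
    sums = np.zeros((n_voxels, 3))
    counts = np.zeros(n_voxels)
    np.add.at(sums, inverse, points)   # accumulate coordinates per voxel
    np.add.at(counts, inverse, 1)      # count points per voxel
    return sums / counts[:, None]
```

With `voxel_size=0.01`, all points that fall inside the same 1 cm cube are merged into their centroid, thinning dense regions while preserving coverage.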
8. Example With A Local Conda Python Path
If you want to use a specific Python interpreter inside a Conda environment, the command is equivalent to:
```shell
/path/to/conda/env/bin/python /path/to/naka-gs/run_lowlight_reconstruction.py \
    --scene_dir /path/to/naka-gs/data/Your_Scene \
    --pose-source replace \
    --render-traj-path testjson \
    --disable-viewer \
    --ppm-enable \
    --ppm-dense-points-path sparse/points.ply \
    --ppm-align-mode none \
    --ppm-voxel-size 0.01 \
    --ppm-tau0 0.005 \
    --ppm-beta 0.01 \
    --ppm-iters 6
```
9. Main Outputs
After a successful run, check:
- `data/Laboratory/images/` for enhanced images
- `data/Laboratory/sparse/` for the VGGT sparse reconstruction
- `data/Laboratory/gsplat_results/` for rendered views, metrics, checkpoints, and logs
- `data/Laboratory/gsplat_results/pipeline_summary.json` for a stage-by-stage summary
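When running many scenes, the stage-by-stage summary can be collected programmatically (a sketch; the exact keys inside `pipeline_summary.json` are not documented here, so the helper simply returns whatever the file contains):

```python
import json
from pathlib import Path

def load_pipeline_summary(scene_dir: str) -> dict:
    """Load gsplat_results/pipeline_summary.json for one scene."""
    path = Path(scene_dir) / "gsplat_results" / "pipeline_summary.json"
    with path.open() as f:
        return json.load(f)
```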
10. Useful Variants
Reuse Existing Enhanced Images
```shell
python run_lowlight_reconstruction.py \
    --scene_dir /path/to/scene \
    --skip_naka
```
Reuse Existing Sparse Reconstruction
```shell
python run_lowlight_reconstruction.py \
    --scene_dir /path/to/scene \
    --skip_naka \
    --skip_vggt
```
Disable PPM
```shell
python run_lowlight_reconstruction.py \
    --scene_dir /path/to/scene \
    --ppm-enable false
```
11. Common Issues
**`FileNotFoundError: Naka checkpoint is required`**
Provide `--naka_ckpt /path/to/latest.pth`, or place the checkpoint at the default path shown above.

**No enhanced images found**
Make sure `train/` contains valid image files and that the Naka stage finished successfully.

**`PPM dense point cloud is missing: .../sparse/points.ply`**
This usually means the VGGT stage did not finish successfully, so `sparse/points.ply` was not generated.

**`torch.cuda.is_available()` is `False`**
The gsplat stage requires a visible CUDA GPU.

**gsplat spends a long time on the first run**
This is expected when the CUDA extension is compiled for the first time.
12. Minimal Checklist Before Running
- Environment created successfully
- `vggt/checkpoint/model.pt` downloaded
- Naka checkpoint available, either at the default path or via `--naka_ckpt`
- Scene directory contains `train/`
- `transforms_train.json` exists for `--pose-source replace`
- `transforms_test.json` exists for `--render-traj-path testjson`
13. Citation
If you find this code useful for your research, please use the following BibTeX entry.
```bibtex
@misc{zhu2026nakagsbionicsinspireddualbranchnaka,
  title={Naka-GS: A Bionics-inspired Dual-Branch Naka Correction and Progressive Point Pruning for Low-Light 3DGS},
  author={Runyu Zhu and SiXun Dong and Zhiqiang Zhang and Qingxia Ye and Zhihua Xu},
  year={2026},
  eprint={2604.11142},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2604.11142},
}
```