LAMP: Large Deep Nets with Automated Model Parallelism for Image Segmentation

LAMP on Head and Neck Dataset

If you use this work in your research, please cite the paper below.

A reimplementation of the LAMP system originally proposed by:

Wentao Zhu, Can Zhao, Wenqi Li, Holger Roth, Ziyue Xu, and Daguang Xu (2020). "LAMP: Large Deep Nets with Automated Model Parallelism for Image Segmentation." MICCAI 2020 (early accept). https://arxiv.org/abs/2006.12575

To run the demo:

Prerequisites

  • Download the MONAI source code and check out the 0.2.0 release:
git clone https://github.com/Project-MONAI/MONAI
cd MONAI
git checkout 0.2.0
pip install -e .  # install from the source code
  • Install torchgpipe: pip install torchgpipe (see the import check below)
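
A quick way to confirm both installs is the snippet below; it should print the two version strings without errors (MONAI should report 0.2.0 after the checkout above):

import monai
import torchgpipe

# Both packages expose a __version__ attribute in these releases.
print(monai.__version__)       # expect 0.2.0
print(torchgpipe.__version__)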

The remaining steps assume that the current working directory is the folder containing this README file.

Data

mkdir ./data;
cd ./data;

Please download the Head and Neck CT dataset (HaN.zip) into the ./data folder created above, then unzip it:

unzip HaN.zip;  # any other archive tool works as well

More details about the dataset are available at https://github.com/wentaozhu/AnatomyNet-for-anatomical-segmentation.git
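
Once the archive is unpacked, a short directory walk helps confirm that the files ended up under ./data where train.py will look for them. This is just an inspection sketch and assumes nothing about the folder layout inside HaN.zip:

import os

# Print each subfolder under ./data together with its file count.
for root, dirs, files in os.walk("./data"):
    print(root, f"({len(files)} files)")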

Minimal hardware requirements for full-image training

  • U-Net (n_feat=32): 2x 16 GB GPUs
  • U-Net (n_feat=64): 4x 16 GB GPUs
  • U-Net (n_feat=128): 2x 32 GB GPUs
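
Before choosing a configuration, it is worth checking how many GPUs PyTorch can see and how much memory each one has; the snippet below uses only standard torch.cuda calls:

import torch

# List every visible GPU with its name and total memory in GB.
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"cuda:{i} {props.name} {props.total_memory / 1024**3:.0f} GB")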

Commands

The number of features in the first block (--n_feat) can be 32, 64, or 128. The three commands below train at progressively larger crop sizes (--crop_size='-1,-1,-1' selects the full image), with each later run initialized from an earlier checkpoint via --pretrain.

mkdir ./log;
python train.py --n_feat=128 --crop_size='64,64,64' --bs=16 --ep=4800 --lr=0.001 > ./log/YOURLOG.log
python train.py --n_feat=128 --crop_size='128,128,128' --bs=4 --ep=1200 --lr=0.001 --pretrain='./HaN_32_16_1200_64,64,64_0.001_*'  > ./log/YOURLOG.log
python train.py --n_feat=128 --crop_size='-1,-1,-1' --bs=1 --ep=300 --lr=0.001 --pretrain='./HaN_32_16_1200_64,64,64_0.001_*' > ./log/YOURLOG.log
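
train.py delegates the model parallelism to torchgpipe, which partitions an nn.Sequential module across GPUs and pipelines micro-batches through the partitions. The sketch below shows the library's basic usage pattern on a toy 3D convolution stack; the layers and the balance=[2, 1] split are illustrative placeholders (not the LAMP U-Net), and running it requires two visible GPUs:

import torch
from torch import nn
from torchgpipe import GPipe

# A toy stand-in for a segmentation network; GPipe requires nn.Sequential.
net = nn.Sequential(
    nn.Conv3d(1, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv3d(32, 1, kernel_size=3, padding=1),
)

# balance=[2, 1] places the first two layers on GPU 0 and the last on GPU 1;
# chunks=4 splits each batch into four micro-batches that are pipelined
# through the two partitions concurrently.
model = GPipe(net, balance=[2, 1], chunks=4)

x = torch.randn(8, 1, 64, 64, 64, device=model.devices[0])
y = model(x)  # the output lives on the last partition's device
print(y.shape, y.device)

What LAMP adds on top of this pattern, per the paper's title, is choosing the partition automatically for large inputs rather than hand-tuning the balance.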