# GLOW-FDG
Generalized cancer LesiOn Whole-body segmentation model for 18F-FDG PET/CT.
GLOW-FDG is an open-source 3D segmentation model for automated whole-body cancer lesion delineation in 18F-FDG PET/CT. It is built on the nnU-Net framework using the ResEnc L architecture and was trained on a curated, multi-institutional corpus of 1,563 FDG-PET/CT scans spanning lung cancer, head and neck cancer, lymphoma, melanoma, soft tissue sarcoma, prostate cancer, and PET-negative controls. The model was evaluated on 185 external scans from independent cohorts covering breast cancer, nonmetastatic and oligometastatic lung cancer, head and neck cancer, and metastatic melanoma.
The release contains the 5-fold cross-validation checkpoints for ensembling.
## Highlights
- Whole-body FDG-PET/CT cancer lesion segmentation across multiple cancer types
- Dual-head design: a primary lesion head and an auxiliary organ-supervision head (spleen, kidneys, liver, urinary bladder, lung, brain, heart, stomach, prostate, parotid and submandibular glands) to suppress physiologic-uptake false positives
- Large-scale multi-modal (CT / MR / PET) MultiTalent-style pretraining followed by task-specific finetuning
- PET/CT misalignment augmentation for robustness to patient motion and registration errors
- Outperforms publicly available FDG-PET/CT segmentation models on lesion detection and segmentation across five external cohorts; performance approaches the inter-observer variability between expert radiation oncologists
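The PET/CT misalignment augmentation listed above can be illustrated as a random whole-voxel translation applied to the PET channel only. This is a minimal sketch under assumed simplifications (integer shifts, zero-filled borders); the training pipeline may well use sub-voxel affine transforms, and the function names here are illustrative:

```python
import numpy as np


def shift_pet(pet, shift):
    """Translate a 3D PET volume by whole voxels, zero-filling exposed borders."""
    out = np.zeros_like(pet)
    src, dst = [], []
    for axis, s in enumerate(shift):
        n = pet.shape[axis]
        if s >= 0:
            src.append(slice(0, n - s))
            dst.append(slice(s, n))
        else:
            src.append(slice(-s, n))
            dst.append(slice(0, n + s))
    out[tuple(dst)] = pet[tuple(src)]
    return out


def misalign_pet_ct(ct, pet, max_shift_vox=2, rng=None):
    """Simulate registration error by shifting only the PET channel; CT is untouched."""
    if rng is None:
        rng = np.random.default_rng()
    shift = tuple(int(s) for s in rng.integers(-max_shift_vox, max_shift_vox + 1, size=3))
    return ct, shift_pet(pet, shift)
```

Applying the shift to only one channel forces the network to tolerate the PET and CT grids disagreeing by a few voxels, which mimics patient motion between the two acquisitions.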
## Intended Use
GLOW-FDG is intended for research use in automated whole-body FDG-PET/CT cancer lesion segmentation and for the extraction of quantitative PET biomarkers such as total tumor burden (TTB) and total lesion glycolysis (TLG). It is not a certified medical device and must not be used as the sole basis for clinical decisions.
## Out-of-scope / Limitations
- Trained on standard-dose FDG-PET/CT; behavior on ultra-low-dose acquisitions has not been validated.
- Trained only with FDG; not applicable to other tracers (e.g., PSMA-targeted tracers or 68Ga-DOTATATE).
- Lesions without clear PET visibility may not be reliably detected.
- Inputs must be a co-registered PET/CT pair with SUV-normalized PET.
## Model Details
| Property | Value |
| --- | --- |
| Framework | nnU-Net (3d_fullres, ResEnc L preset) |
| Inputs | 2 channels: CT (HU) and PET (body-weight SUV) |
| Output | Binary lesion segmentation mask (auxiliary organ head used during training only) |
| Target spacing | 3.0 × 2.04 × 2.04 mm |
| Patch size | 192 × 192 × 192 |
| Training | 1,500 epochs, batch size 3, SGD with Nesterov momentum 0.99, LR 1e-2 with polynomial decay |
| Pretraining | MultiTalent-style multi-dataset pretraining on CT/MR/PET, 4,000 epochs, patch 192³, batch 24 |
| Folds | 5-fold cross-validation checkpoints (intended to be ensembled) |
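The polynomial LR decay in the table can be sketched as follows, assuming nnU-Net's customary exponent of 0.9 (the released `plans.json` and trainer configuration are authoritative):

```python
def poly_lr(epoch, initial_lr=1e-2, max_epochs=1500, exponent=0.9):
    """Polynomial learning-rate decay: lr = lr0 * (1 - epoch/max_epochs)^exponent."""
    return initial_lr * (1 - epoch / max_epochs) ** exponent
```

The schedule starts at 1e-2 and decays smoothly to zero at epoch 1,500, the training length listed above.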
## Repository Contents
```
GLOW-FDG/
├── dataset.json                  # nnU-Net dataset description
├── plans.json                    # nnU-Net plans (architecture, preprocessing, etc.)
├── fold_0/checkpoint_final.pth
├── fold_1/checkpoint_final.pth
├── fold_2/checkpoint_final.pth
├── fold_3/checkpoint_final.pth
└── fold_4/checkpoint_final.pth
```
## Usage
GLOW-FDG runs through the standard nnU-Net v2 inference API.
1. Install nnU-Net
```shell
pip install nnunetv2
```
2. Download the model
```python
from huggingface_hub import snapshot_download

model_dir = snapshot_download(repo_id="<org>/GLOW-FDG")
# model_dir/GLOW-FDG/ now contains dataset.json, plans.json, and fold_0..fold_4
```
3. Prepare your data
Each case must contain two co-registered channels following the nnU-Net naming convention:
```
input_folder/
├── CASE001_0000.nii.gz   # CT (HU)
└── CASE001_0001.nii.gz   # PET (body-weight SUV)
```
PET intensities must be converted to body-weight SUV before inference.
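The SUV conversion can be sketched as below for decay-corrected activity concentrations in Bq/mL. This is illustrative only: DICOM field conventions and decay handling vary across vendors, so a validated PET conversion tool should be used for real data, and the function signature here is an assumption of this sketch:

```python
import numpy as np

F18_HALF_LIFE_S = 6586.2  # F-18 half-life in seconds (~109.77 min)


def suv_bw(activity_bq_ml, injected_dose_bq, weight_kg, delay_s=0.0):
    """Convert activity concentration (Bq/mL) to body-weight SUV (g/mL).

    delay_s is the injection-to-scan delay used to decay-correct the injected
    dose; pass 0 if the dose is already decay-corrected to scan start.
    """
    dose_at_scan = injected_dose_bq * 2.0 ** (-delay_s / F18_HALF_LIFE_S)
    return np.asarray(activity_bq_ml) * (weight_kg * 1000.0) / dose_at_scan
```

A concentration equal to the injected dose spread uniformly over the patient's mass yields SUV = 1 by construction, which is a useful sanity check for any conversion pipeline.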
4. Run inference
```python
import torch
from nnunetv2.inference.predict_from_raw_data import nnUNetPredictor

predictor = nnUNetPredictor(
    tile_step_size=0.5,
    use_gaussian=True,
    use_mirroring=True,
    perform_everything_on_device=True,
    device=torch.device("cuda"),
)

predictor.initialize_from_trained_model_folder(
    f"{model_dir}/GLOW-FDG",
    use_folds=(0, 1, 2, 3, 4),
    checkpoint_name="checkpoint_final.pth",
)

predictor.predict_from_files(
    list_of_lists_or_source_folder="input_folder",
    output_folder_or_list_of_truncated_output_files="output_folder",
    save_probabilities=False,
    overwrite=True,
    num_processes_preprocessing=2,
    num_processes_segmentation_export=2,
)
```
The output is a binary NIfTI mask per case where 1 denotes predicted FDG-avid cancer lesions.
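From the predicted mask and the SUV-normalized PET volume, the quantitative biomarkers mentioned under Intended Use can be derived. A minimal sketch, assuming mask and PET share the same voxel grid and that spacing is given in mm (the function and key names are illustrative, not part of the release):

```python
import numpy as np


def pet_biomarkers(mask, suv, spacing_mm):
    """Compute total tumor burden (TTB, mL) and total lesion glycolysis (TLG, g).

    TTB = lesion voxel count * voxel volume
    TLG = SUVmean * lesion volume = sum of SUV over lesion voxels * voxel volume
    """
    voxel_ml = float(np.prod(spacing_mm)) / 1000.0  # mm^3 per voxel -> mL
    lesion = np.asarray(mask).astype(bool)
    ttb_ml = float(lesion.sum()) * voxel_ml
    tlg_g = float(np.asarray(suv)[lesion].sum()) * voxel_ml  # SUV (g/mL) * mL = g
    suv_max = float(np.asarray(suv)[lesion].max()) if lesion.any() else 0.0
    return {"TTB_mL": ttb_ml, "TLG_g": tlg_g, "SUVmax": suv_max}
```

Loading the NIfTI outputs into arrays (e.g. with nibabel) and reading the spacing from the image header is all that is needed before calling such a helper.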
## Training Data
GLOW-FDG was trained on 1,563 FDG-PET/CT scans pooled from public and institutional sources, including AutoPET, HECKTOR, DEEP-PSMA, ACRIN-HNSCC, HN-PET-CT, NSCLC-RadGen, TCIA-STS, SAKK, and the SINERGIA melanoma cohort. All cases were manually reviewed to verify PET–mask correspondence, lesion completeness, and PET visibility. Organ labels for the auxiliary head were generated with TotalSegmentator.
## Citation
If you use GLOW-FDG in your research, please cite the related evaluation study and the methodological precursor:
```bibtex
@article{fritsak2026generalizing,
  title  = {Generalizing Beyond Training Data: Performance Assessment of the Best AutoPET III Model on Diverse Cancer Types},
  author = {Fritsak, Maksym and Gabrys, Hubert and Rokuss, Maximilian and Christ, Sebastian and Martz, Nicolas and Paunoiu, Alina and Opitz, Isabelle and Stahel, Rolf A. and Huellner, Martin W. and Guckenberger, Matthias and Tanadini-Lang, Stephanie},
  year   = {2026},
  note   = {Available at SSRN},
  doi    = {10.2139/ssrn.6748676},
  url    = {https://ssrn.com/abstract=6748676}
}

@misc{rokuss2024fdgpsmahitchhikersguide,
  title         = {From FDG to PSMA: A Hitchhiker's Guide to Multitracer, Multicenter Lesion Segmentation in PET/CT Imaging},
  author        = {Rokuss, Maximilian and Kovacs, Balint and Kirchhoff, Yannick and Xiao, Shuhan and Ulrich, Constantin and Maier-Hein, Klaus H. and Isensee, Fabian},
  year          = {2024},
  eprint        = {2409.09478},
  archivePrefix = {arXiv},
  primaryClass  = {eess.IV},
  url           = {https://arxiv.org/abs/2409.09478}
}
```
A dedicated GLOW-FDG manuscript is in preparation; this section will be updated once it becomes publicly available.
## License
Released under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license. The model weights may be used, shared, and adapted for non-commercial purposes with attribution; derivative works must be distributed under the same license. Note that some of the underlying training datasets carry their own licenses and data use agreements that may impose additional restrictions.
## Acknowledgments
Developed at the Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, in collaboration with University Hospital Zürich (USZ). Built on nnU-Net; auxiliary organ labels generated with TotalSegmentator.