---

license: cc-by-nc-sa-4.0
tags:
  - medical-imaging
  - pet-ct
  - segmentation
  - oncology
  - nnunet
  - 3d-segmentation
library_name: nnunet
pipeline_tag: image-segmentation
thumbnail: GLOW-FDG/logo.jpg
---


<p align="center">
  <img src="GLOW-FDG/logo.jpg" alt="GLOW-FDG logo" width="240"/>
</p>

# GLOW-FDG

**G**eneralized cancer **L**esi**O**n **W**hole-body segmentation model for **<sup>18</sup>F-FDG PET/CT**.

GLOW-FDG is an open-source 3D segmentation model for automated whole-body cancer lesion delineation in <sup>18</sup>F-FDG PET/CT. It is built on the [nnU-Net](https://github.com/MIC-DKFZ/nnUNet) framework using the **ResEnc L** architecture and was trained on a curated, multi-institutional corpus of **1,563 FDG-PET/CT scans** spanning lung cancer, head and neck cancer, lymphoma, melanoma, soft tissue sarcoma, prostate cancer, and PET-negative controls. The model was evaluated on **185 external scans** from independent cohorts covering breast cancer, nonmetastatic and oligometastatic lung cancer, head and neck cancer, and metastatic melanoma.

The release contains the **5-fold cross-validation checkpoints** for ensembling.

## Highlights

- Whole-body FDG-PET/CT cancer lesion segmentation across multiple cancer types
- Dual-head design: a primary lesion head and an auxiliary organ-supervision head (spleen, kidneys, liver, urinary bladder, lung, brain, heart, stomach, prostate, parotid and submandibular glands) to suppress physiologic-uptake false positives
- Large-scale multi-modal (CT / MR / PET) MultiTalent-style pretraining followed by task-specific finetuning
- PET/CT misalignment augmentation for robustness to patient motion and registration errors
- Outperforms publicly available FDG-PET/CT benchmarks on lesion detection and segmentation across five external cohorts; performance approaches inter-observer variability between expert radiation oncologists
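
The misalignment augmentation listed above can be illustrated with a minimal sketch. Everything here (the function name, the integer shift range, and the use of a circular shift) is an assumption for illustration only; the card does not specify the actual transform, which is presumably a richer sub-voxel spatial perturbation applied during training.

```python
import numpy as np

def misalign_pet(pet, max_shift_vox=2, rng=None):
    """Toy PET/CT misalignment augmentation: circularly shift the PET
    volume by a random integer offset (up to max_shift_vox voxels per
    axis) while the CT channel is left untouched. This only sketches
    the idea of perturbing PET relative to CT during training."""
    rng = np.random.default_rng() if rng is None else rng
    shifts = tuple(int(s) for s in
                   rng.integers(-max_shift_vox, max_shift_vox + 1, size=pet.ndim))
    return np.roll(pet, shifts, axis=tuple(range(pet.ndim)))
```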

## Intended Use

GLOW-FDG is intended for **research use** in automated whole-body FDG-PET/CT cancer lesion segmentation and for the extraction of quantitative PET biomarkers such as total tumor burden (TTB) and total lesion glycolysis (TLG). It is **not** a certified medical device and must not be used as the sole basis for clinical decisions.

### Out-of-scope / Limitations

- Trained on standard-dose FDG-PET/CT; behavior on ultra-low-dose acquisitions has not been validated.
- Trained only with FDG; not applicable to other tracers (e.g. PSMA, <sup>68</sup>Ga-DOTATATE).
- Lesions without clear PET visibility may not be reliably detected.
- Inputs must be a co-registered PET/CT pair with SUV-normalized PET.

## Model Details

| | |
|---|---|
| Framework | nnU-Net (3d_fullres, ResEnc L preset) |
| Inputs | 2 channels: CT (HU) and PET (SUV<sub>BW</sub>) |
| Output | Binary lesion segmentation mask (auxiliary organ head used during training only) |
| Target spacing | 3.0 × 2.04 × 2.04 mm |
| Patch size | 192 × 192 × 192 |
| Training | 1,500 epochs, batch size 3, SGD with Nesterov momentum 0.99, LR 1e-2 with polynomial decay |
| Pretraining | MultiTalent-style multi-dataset pretraining on CT/MR/PET, 4,000 epochs, patch 192³, batch 24 |
| Folds | 5-fold cross-validation checkpoints (intended to be ensembled) |



## Repository Contents

```
GLOW-FDG/
  dataset.json           # nnU-Net dataset description
  plans.json             # nnU-Net plans (architecture, preprocessing, etc.)
  fold_0/checkpoint_final.pth
  fold_1/checkpoint_final.pth
  fold_2/checkpoint_final.pth
  fold_3/checkpoint_final.pth
  fold_4/checkpoint_final.pth
```



## Usage

GLOW-FDG runs through the standard nnU-Net v2 inference API.

### 1. Install nnU-Net

```bash
pip install nnunetv2
```



### 2. Download the model

```python
from huggingface_hub import snapshot_download

model_dir = snapshot_download(repo_id="<org>/GLOW-FDG")
# model_dir/GLOW-FDG/ now contains dataset.json, plans.json and fold_0..fold_4
```



### 3. Prepare your data

Each case must contain two co-registered channels following the nnU-Net naming convention:

```
input_folder/
  CASE001_0000.nii.gz   # CT (HU)
  CASE001_0001.nii.gz   # PET (SUV, body-weight normalized)
```

PET intensities **must** be converted to body-weight SUV before inference.
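
For reference, body-weight SUV is the measured activity concentration divided by the decay-corrected injected dose per gram of body weight. The sketch below applies the standard formula; the function name and defaults are illustrative, and a real pipeline would read dose, weight, and timing from the DICOM headers rather than pass them by hand.

```python
import math

def bq_ml_to_suv_bw(activity_bq_per_ml, injected_dose_bq, patient_weight_kg,
                    minutes_since_injection, half_life_min=109.77):
    """Convert a PET activity concentration (Bq/mL) to body-weight SUV.

    SUV_bw = activity / (decay-corrected dose / body weight in grams).
    half_life_min defaults to 18F (~109.77 min). Assumes the scanner
    reports activity concentration without further decay correction.
    """
    decayed_dose_bq = injected_dose_bq * math.exp(
        -math.log(2) * minutes_since_injection / half_life_min
    )
    return activity_bq_per_ml / (decayed_dose_bq / (patient_weight_kg * 1000.0))
```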



### 4. Run inference

```python
import torch
from nnunetv2.inference.predict_from_raw_data import nnUNetPredictor

predictor = nnUNetPredictor(
    tile_step_size=0.5,
    use_gaussian=True,
    use_mirroring=True,
    perform_everything_on_device=True,
    device=torch.device("cuda"),
)

predictor.initialize_from_trained_model_folder(
    f"{model_dir}/GLOW-FDG",
    use_folds=(0, 1, 2, 3, 4),
    checkpoint_name="checkpoint_final.pth",
)

predictor.predict_from_files(
    list_of_lists_or_source_folder="input_folder",
    output_folder_or_list_of_truncated_output_files="output_folder",
    save_probabilities=False,
    overwrite=True,
    num_processes_preprocessing=2,
    num_processes_segmentation_export=2,
)
```

The output is a binary NIfTI mask per case where `1` denotes predicted FDG-avid cancer lesions.
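
From this mask and the SUV PET volume, the biomarkers mentioned under Intended Use follow directly: total tumor burden (TTB) is the segmented volume, and total lesion glycolysis (TLG) is TTB times the mean lesion SUV. A minimal sketch, assuming both arrays have the same shape and have already been loaded (e.g. with a NIfTI reader); the function name is illustrative:

```python
import numpy as np

def lesion_biomarkers(mask, suv, spacing_mm):
    """Compute TTB (mL), mean lesion SUV, and TLG (= TTB x SUVmean)
    from a binary lesion mask and an SUV-valued PET volume of the same
    shape. spacing_mm is the per-axis voxel spacing in millimeters."""
    voxel_ml = float(np.prod(spacing_mm)) / 1000.0  # mm^3 -> mL
    lesion = mask.astype(bool)
    ttb_ml = lesion.sum() * voxel_ml
    suv_mean = float(suv[lesion].mean()) if lesion.any() else 0.0
    return ttb_ml, suv_mean, ttb_ml * suv_mean
```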

## Training Data

GLOW-FDG was trained on 1,563 FDG-PET/CT scans pooled from public and institutional sources, including AutoPET, HECKTOR, DEEP-PSMA, ACRIN-HNSCC, HN-PET-CT, NSCLC-RadGen, TCIA-STS, SAKK, and the SINERGIA melanoma cohort. All cases were manually reviewed to verify PET–mask correspondence, lesion completeness, and PET visibility. Organ labels for the auxiliary head were generated with [TotalSegmentator](https://github.com/wasserth/TotalSegmentator).

## Citation

If you use GLOW-FDG in your research, please cite the related evaluation study and the methodological precursor:

```bibtex
@article{fritsak2026generalizing,
  title   = {Generalizing Beyond Training Data: Performance Assessment of the Best AutoPET III Model on Diverse Cancer Types},
  author  = {Fritsak, Maksym and Gabrys, Hubert and Rokuss, Maximilian and Christ, Sebastian and Martz, Nicolas and Paunoiu, Alina and Opitz, Isabelle and Stahel, Rolf A. and Huellner, Martin W. and Guckenberger, Matthias and Tanadini-Lang, Stephanie},
  year    = {2026},
  note    = {Available at SSRN},
  doi     = {10.2139/ssrn.6748676},
  url     = {https://ssrn.com/abstract=6748676}
}

@misc{rokuss2024fdgpsmahitchhikersguide,
  title         = {From FDG to PSMA: A Hitchhiker's Guide to Multitracer, Multicenter Lesion Segmentation in PET/CT Imaging},
  author        = {Maximilian Rokuss and Balint Kovacs and Yannick Kirchhoff and Shuhan Xiao and Constantin Ulrich and Klaus H. Maier-Hein and Fabian Isensee},
  year          = {2024},
  eprint        = {2409.09478},
  archivePrefix = {arXiv},
  primaryClass  = {eess.IV},
  url           = {https://arxiv.org/abs/2409.09478}
}
```

A dedicated GLOW-FDG manuscript is in preparation; this section will be updated once it becomes publicly available.

## License

Released under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/) license. The model weights may be used, shared, and adapted for **non-commercial** purposes with attribution; derivative works must be distributed under the same license. Note that some of the underlying training datasets carry their own licenses and data use agreements that may impose additional restrictions.

## Acknowledgments

Developed at the Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, in collaboration with University Hospital Zürich (USZ). Built on [nnU-Net](https://github.com/MIC-DKFZ/nnUNet); auxiliary organ labels generated with [TotalSegmentator](https://github.com/wasserth/TotalSegmentator).