Update README.md

README.md CHANGED

@@ -11,4 +11,9 @@ We offer SAT-Pro, SAT-Nano (both trained on 72 datasets) and another 5 different
Check our [paper](https://arxiv.org/abs/2312.17183) for more details, and the [github repo](https://github.com/zhaoziheng/SAT/tree/main?tab=readme-ov-file) for usage instructions.

⚠️ Each model should be used with its paired checkpoint and text encoder checkpoint.
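Concretely, pairing means loading both files from the same release together. A minimal sketch, assuming a standard PyTorch workflow; the file names below are placeholders, and the actual inference scripts are in the github repo linked above:

```python
import torch

# Hypothetical file names for a single release; both files ship together and
# must come from the same model (e.g. SAT-Pro weights with SAT-Pro's text encoder).
model_ckpt = torch.load("sat_pro.pth", map_location="cpu")
text_encoder_ckpt = torch.load("sat_pro_text_encoder.pth", map_location="cpu")

# Mixing checkpoints across models may give the segmentation head text embeddings
# it was never trained against, so keep each pair together.
```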
In addition, we provide multiple pretrained encoders at ./Pretrain. Enhanced with multi-modal human anatomy knowledge, they significantly boost the segmentation performance and are potentially beneficial for other tasks (a minimal loading sketch follows the list):
- A version pretrained only with textual knowledge (`textual_only.pth`).
- A version further pretrained with [SAT-DS](https://github.com/zhaoziheng/SAT-DS/tree/main) (`multimodal_sat_ds.pth`). It can be used to reproduce results in our [paper](https://arxiv.org/abs/2312.17183).
- A version further pretrained with 10% of the training data from [CVPR 2025: FOUNDATION MODELS FOR TEXT-GUIDED 3D BIOMEDICAL IMAGE SEGMENTATION](https://www.codabench.org/competitions/5651/) (`multimodal_cvpr25.pth`). It is explicitly optimized for that challenge.
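A minimal loading sketch, assuming each `.pth` file stores a plain PyTorch `state_dict` (if it turns out to be a wrapper dict, look for the nested weights); the encoder module itself is whatever you define downstream:

```python
import torch

# The three variants shipped under ./Pretrain (paths assumed relative to the repo root):
#   Pretrain/textual_only.pth       - pretrained with textual knowledge only
#   Pretrain/multimodal_sat_ds.pth  - further pretrained on SAT-DS (matches the paper)
#   Pretrain/multimodal_cvpr25.pth  - tuned for the CVPR 2025 challenge data
state = torch.load("Pretrain/multimodal_sat_ds.pth", map_location="cpu")
print(list(state.keys())[:5])  # inspect parameter names before wiring it into a model

# `your_encoder` is a placeholder for your own nn.Module, not something this repo defines;
# strict=False tolerates keys that your downstream architecture does not use.
# missing, unexpected = your_encoder.load_state_dict(state, strict=False)
```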