Add image-segmentation task category, paper link, and GitHub repository

#2
by nielsr (HF Staff) - opened
Files changed (1)
  1. README.md +30 -13
README.md CHANGED
@@ -1,12 +1,15 @@
 ---
 license: mit
+task_categories:
+- image-segmentation
 ---
 
 # Cryo-Bench 🧊
 
 > **A Benchmark for Evaluating Geospatial Foundation Models on Cryosphere Applications**
 
-[![Paper](https://img.shields.io/badge/Paper-Coming%20Soon-lightgrey?style=flat-square&logo=arxiv)](https://arxiv.org)
+[![Paper](https://img.shields.io/badge/Paper-2603.01576-b31b1b.svg)](https://huggingface.co/papers/2603.01576)
+[![GitHub](https://img.shields.io/badge/GitHub-Repository-black?logo=github)](https://github.com/Sk-2103/Cryo-Bench)
 [![PANGAEA](https://img.shields.io/badge/Built%20on-PANGAEA-blue?style=flat-square)](https://arxiv.org/abs/2412.04204)
 [![License: MIT](https://img.shields.io/badge/License-MIT-green?style=flat-square)](LICENSE)
 
@@ -38,21 +41,24 @@ Cryo-Bench includes five benchmark tasks covering key components of the cryosphere
 
 ---
 
-The dataset contains the exact training, validation, and test splits used in **Cryo-Bench**, covering the **SICD, GLID, GLD, GSDD, and CaFFe** datasets.
+## 📥 Sample Usage (Download Data)
 
-**📥 Download Data**
+The dataset contains the exact training, validation, and test splits used in **Cryo-Bench**, covering the **SICD, GLID, GLD, GSDD, and CaFFe** datasets.
 
 - Install the dependency:
-
+```bash
 pip install huggingface_hub
+```
 
-- Download all datasets at once:
-
+- Download all datasets at once using the script provided in the GitHub repository:
+```bash
 python download_data.py
+```
 
-Download specific datasets only:
-
-- python download_data.py --datasets GLID GLD SICD
+- Download specific datasets only:
+```bash
+python download_data.py --datasets GLID GLD SICD
+```
 
 
 ## 🏆 Benchmark Results
@@ -85,15 +91,26 @@ Table below reports mIoU (↑) for all models evaluated with **frozen encoders**
 <p align="center">
 <img src="Fig.2.png" width="70%">
 </p>
-## 📜 License
 
-This project is licensed under the [MIT License](LICENSE).
+## 📜 Citation
 
----
+If you use this benchmark in your research, please cite:
+
+```bibtex
+@article{kaushik2026cryobench,
+  title={Cryo-Bench: Benchmarking Foundation Models for Cryosphere Applications},
+  author={Kaushik, Saurabh and Maurya, Lalit and Tellman, Beth},
+  journal={arXiv preprint arXiv:2603.01576},
+  year={2026}
+}
+```
 
+## 📜 License
+
+This project is licensed under the [MIT License](LICENSE).
 
 ---
 
 ## 🙏 Acknowledgements
 
-Cryo-Bench builds on the [PANGAEA benchmark](https://github.com/yurujaja/pangaea-bench) and the [RAMEN](https://github.com/nicolashoudre/RAMEN) framework. We thank the developers of DOFA, TerraMind, Prithvi, SatlasNet, and all other foundation models included in this benchmark. We also thank the dataset authors of GSDD, GLID, GLD, SICD, and CaFFe for making their data publicly available.
+Cryo-Bench builds on the [PANGAEA benchmark](https://github.com/yurujaja/pangaea-bench) and the [RAMEN](https://github.com/nicolashoudre/RAMEN) framework. We thank the developers of DOFA, TerraMind, Prithvi, SatlasNet, and all other foundation models included in this benchmark. We also thank the dataset authors of GSDD, GLID, GLD, SICD, and CaFFe for making their data publicly available.
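
The selective-download step in the diff above (`python download_data.py --datasets GLID GLD SICD`) can also be approximated directly with `huggingface_hub`'s `snapshot_download` and its `allow_patterns` filter. This is only a sketch: the `repo_id` and the assumption that each dataset lives in a top-level folder of the same name are not stated in this PR.

```python
def patterns_for(datasets):
    """Map Cryo-Bench dataset names to allow_patterns globs,
    assuming (hypothetically) one top-level folder per dataset."""
    known = {"SICD", "GLID", "GLD", "GSDD", "CaFFe"}
    unknown = set(datasets) - known
    if unknown:
        raise ValueError(f"unknown datasets: {sorted(unknown)}")
    return [f"{name}/*" for name in datasets]


if __name__ == "__main__":
    # Requires: pip install huggingface_hub
    from huggingface_hub import snapshot_download

    # repo_id below is an assumption based on the GitHub badge in this PR.
    snapshot_download(
        repo_id="Sk-2103/Cryo-Bench",
        repo_type="dataset",
        allow_patterns=patterns_for(["GLID", "GLD", "SICD"]),
        local_dir="./cryo-bench",
    )
```

Dropping `allow_patterns` would mirror the download-everything case handled by `download_data.py` without flags.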