Datasets:
Add image-segmentation task category, paper link, and GitHub repository
#2
opened by nielsr (HF Staff)
README.md CHANGED

````diff
@@ -1,12 +1,15 @@
 ---
 license: mit
+task_categories:
+- image-segmentation
 ---
 
 # Cryo-Bench 🧊
 
 > **A Benchmark for Evaluating Geospatial Foundation Models on Cryosphere Applications**
 
-
+[](https://huggingface.co/papers/2603.01576)
+[](https://github.com/Sk-2103/Cryo-Bench)
 [](https://arxiv.org/abs/2412.04204)
 [](LICENSE)
 
@@ -38,21 +41,24 @@ Cryo-Bench includes five benchmark tasks covering key components of the cryosphere.
 
 ---
 
+## 📥 Sample Usage (Download Data)
 
+The dataset contains the exact training, validation, and test splits used in **Cryo-Bench**, covering the **SICD, GLID, GLD, GSDD, and CaFFe** datasets.
 
 - Install the dependency:
+```bash
 pip install huggingface_hub
+```
 
-- Download all datasets at once:
+- Download all datasets at once using the script provided in the GitHub repository:
+```bash
 python download_data.py
+```
 
-Download specific datasets only:
+- Download specific datasets only:
+```bash
+python download_data.py --datasets GLID GLD SICD
+```
 
 
 ## 📊 Benchmark Results
@@ -85,15 +91,26 @@ Table below reports mIoU (↑) for all models evaluated with **frozen encoders**
 <p align="center">
 <img src="Fig.2.png" width="70%">
 </p>
-## 📄 License
 
+## 📚 Citation
 
+If you use this benchmark in your research, please cite:
+
+```bibtex
+@article{kaushik2026cryobench,
+  title={Cryo-Bench: Benchmarking Foundation Models for Cryosphere Applications},
+  author={Kaushik, Saurabh and Maurya, Lalit and Tellman, Beth},
+  journal={arXiv preprint arXiv:2603.01576},
+  year={2026}
+}
+```
+
+## 📄 License
+
+This project is licensed under the [MIT License](LICENSE).
 
 ---
 
 ## 🙏 Acknowledgements
 
 Cryo-Bench builds on the [PANGAEA benchmark](https://github.com/yurujaja/pangaea-bench) and the [RAMEN](https://github.com/nicolashoudre/RAMEN) framework. We thank the developers of DOFA, TerraMind, Prithvi, SatlasNet, and all other foundation models included in this benchmark. We also thank the dataset authors of GSDD, GLID, GLD, SICD, and CaFFe for making their data publicly available.
````
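For readers curious what a script like `download_data.py` might look like, here is a minimal sketch built on `huggingface_hub.snapshot_download`. Everything specific in it is an assumption, not the actual Cryo-Bench script (which lives in the GitHub repository): the repo id `Sk-2103/Cryo-Bench`, the one-top-level-folder-per-dataset layout, and the `--datasets` flag name.

```python
# Hypothetical sketch of a download_data.py-style script.
# Assumptions (not from the actual repository): the repo id, the folder
# layout (one top-level folder per dataset), and the CLI flag names.
import argparse

ALL_DATASETS = ["SICD", "GLID", "GLD", "GSDD", "CaFFe"]


def build_patterns(datasets):
    # Assuming each dataset sits in its own top-level folder of the dataset
    # repo, one glob pattern per folder restricts the snapshot to that subset.
    return [f"{name}/*" for name in datasets]


def main(argv=None):
    parser = argparse.ArgumentParser(description="Download Cryo-Bench splits")
    parser.add_argument(
        "--datasets", nargs="+", choices=ALL_DATASETS, default=ALL_DATASETS,
        help="subset of benchmark datasets to fetch (default: all five)",
    )
    args = parser.parse_args(argv)

    # Imported lazily so that parsing --help works without the dependency.
    from huggingface_hub import snapshot_download

    snapshot_download(
        repo_id="Sk-2103/Cryo-Bench",  # assumed dataset repo id
        repo_type="dataset",
        local_dir="data",
        allow_patterns=build_patterns(args.datasets),
    )
```

Under these assumptions, `main(["--datasets", "GLID", "GLD", "SICD"])` would mirror only those three folders into `data/`, while `main([])` would fetch all five datasets.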