- **Format:** webdataset
- **Size:** 1M–10M
- **Tags:** urban-perception, social-media, weibo, image-text-retrieval, instance-segmentation, computational-urban-studies
- **ArXiv:** [2605.09936](https://arxiv.org/abs/2605.09936)
- **License:** CC BY-NC-SA 4.0
# 🏙️ Urban-ImageNet

**A Large-Scale Multi-Modal Dataset and Evaluation Framework for Urban Space Perception from Social Media Imagery.**

<p align="center">
  <a href="https://arxiv.org/abs/2605.09936"><img src="https://img.shields.io/badge/arXiv-2605.09936-b31b1b.svg" alt="arXiv"/></a>
  <a href="https://github.com/yiasun/dataset-2"><img src="https://img.shields.io/badge/GitHub-yiasun%2Fdataset--2-black?logo=github" alt="GitHub"/></a>
  <a href="https://huggingface.co/datasets/yiasun/urban-imagenet"><img src="https://img.shields.io/badge/🤗%20HuggingFace-Dataset-yellow" alt="HuggingFace"/></a>
</p>

> Urban-ImageNet fills a critical gap between computer vision and urban studies by treating cities not simply as visual scenes, but as lived, socially produced, and experientially activated spaces.

---
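The release format is **webdataset**, in which each sample is a group of files sharing a basename ("key") inside a `.tar` shard, distinguished by extension. A stdlib-only sketch of that grouping convention (the `jpg`/`txt` keys and shard contents here are illustrative assumptions, not the dataset's actual layout):

```python
import io
import tarfile

# Build a tiny WebDataset-style shard in memory: each sample is a group of
# files sharing a basename ("key"), distinguished by extension.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for key, caption in [("000000", "street market at dusk"),
                         ("000001", "riverside park crowd")]:
        for ext, payload in [("jpg", b"\xff\xd8fake-jpeg-bytes"),
                             ("txt", caption.encode("utf-8"))]:
            info = tarfile.TarInfo(name=f"{key}.{ext}")
            info.size = len(payload)
            tar.addfile(info, io.BytesIO(payload))

# Read the shard back, regrouping files into per-key samples.
buf.seek(0)
samples = {}
with tarfile.open(fileobj=buf, mode="r") as tar:
    for member in tar.getmembers():
        key, ext = member.name.rsplit(".", 1)
        samples.setdefault(key, {})[ext] = tar.extractfile(member).read()

captions = {k: v["txt"].decode("utf-8") for k, v in samples.items()}
```

With the real shards you would instead stream them through the `webdataset` library, e.g. `wds.WebDataset(shard_pattern).decode("pil").to_tuple("jpg", "txt")`.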
The corpus contains over **2 million** public Weibo image–text pairs.

| Task | Description | Input → Output |
|------|-------------|----------------|
| **T2** | Cross-modal image–text retrieval | Image ↔ Text (bidirectional) |
| **T3** | Instance segmentation | Image → Object masks + bounding boxes |

<!-- Replace the path below with the actual uploaded framework figure -->
![Urban-ImageNet Framework]()

*Figure 1: The Urban-ImageNet framework — addressing current limitations in urban perception evaluation. The dataset bridges general-purpose vision benchmarks and domain-specific urban research needs through the HUSIC taxonomy and three unified benchmark tasks.*
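Bidirectional retrieval (T2) is commonly scored with Recall@K over an image–text similarity matrix. A minimal sketch of the metric, with random features standing in for a real encoder (illustrative only, not the paper's evaluation code):

```python
import numpy as np

def recall_at_k(sim: np.ndarray, k: int) -> float:
    """Fraction of queries (rows) whose paired item (same index) ranks in the top-k columns."""
    n = sim.shape[0]
    order = np.argsort(-sim, axis=1)                         # columns sorted by descending similarity
    ranks = (order == np.arange(n)[:, None]).argmax(axis=1)  # rank of the ground-truth pair per row
    return float(np.mean(ranks < k))

rng = np.random.default_rng(0)
# Stand-in embeddings: each paired text is a small perturbation of its image,
# mimicking a well-aligned encoder. A real pipeline would use model outputs.
img = rng.normal(size=(100, 64))
txt = img + 0.1 * rng.normal(size=(100, 64))
img /= np.linalg.norm(img, axis=1, keepdims=True)
txt /= np.linalg.norm(txt, axis=1, keepdims=True)

sim = img @ txt.T                 # cosine similarities, image rows vs. text columns
r1_i2t = recall_at_k(sim, 1)      # image -> text Recall@1
r1_t2i = recall_at_k(sim.T, 1)    # text -> image Recall@1
```

Swapping in actual encoder embeddings for `img` and `txt` yields the standard image-to-text and text-to-image Recall@K numbers.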
|
| 548 |
If you use Urban-ImageNet in your research, please cite our paper:
|
| 549 |
|
| 550 |
```bibtex
|
| 551 |
+
@article{ou2026urbanimagenet,
|
| 552 |
title = {Urban-ImageNet: A Large-Scale Multi-Modal Dataset and Evaluation Framework for Urban Space Perception},
|
| 553 |
+
author = {Ou, Yiwei and Cheung, Chung Ching and Ang, Jun Yang and Ren, Xiaobin and Sun, Ronggui and Gao, Guansong and Zhao, Kaiqi and Manfredini, Manfredo},
|
| 554 |
+
journal = {arXiv preprint arXiv:2605.09936},
|
| 555 |
year = {2026},
|
| 556 |
eprint = {2605.09936},
|
| 557 |
archivePrefix = {arXiv},
|
|
|
|
Urban-ImageNet is designed as a **domain-specific complement** to general-purpose vision benchmarks.

## License

The dataset is released under **Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)**.

You are free to use, share, and adapt this dataset for **non-commercial academic research**, provided that you give appropriate credit and distribute any derivative works under the same license. Commercial use of any kind is prohibited.

See [LICENSE](https://creativecommons.org/licenses/by-nc-sa/4.0/) for full terms.