arxiv:2605.09936

Urban-ImageNet: A Large-Scale Multi-Modal Dataset and Evaluation Framework for Urban Space Perception

Published on May 11 · Submitted by Yiwei Ou on May 13

Abstract

AI-generated summary: Urban-ImageNet presents a large-scale multi-modal dataset and evaluation benchmark for urban space perception from social media imagery, organized under a hierarchical taxonomy for scene classification, cross-modal retrieval, and instance segmentation tasks.

We present Urban-ImageNet, a large-scale multi-modal dataset and evaluation benchmark for urban space perception from user-generated social media imagery. The corpus contains over 2 million public social media images and paired textual posts collected from Weibo at 61 urban sites in 24 Chinese cities between 2019 and 2025, with controlled benchmark subsets at 1K, 10K, and 100K scale and a full 2M corpus for large-scale training and evaluation. Urban-ImageNet is organized by HUSIC, a Hierarchical Urban Space Image Classification framework that defines a 10-class taxonomy grounded in urban theory. The taxonomy is designed to distinguish activated and non-activated public spaces, exterior and interior urban environments, accommodation spaces, consumption content, portraits, and non-spatial social-media content. Rather than treating urban imagery as generic scene data, Urban-ImageNet evaluates whether machine perception models can capture the spatial, social, and functional distinctions that are central to urban studies. The benchmark supports three tasks within one standardized library: (T1) urban scene semantic classification, (T2) cross-modal image-text retrieval, and (T3) instance segmentation. Our experiments evaluate representative vision, vision-language, and segmentation models, revealing strong performance on supervised scene classification but markedly weaker results on cross-modal retrieval and instance-level urban object segmentation. A multi-scale study further examines how model performance changes as balanced training data increases from 1K to 10K to 100K images. Urban-ImageNet provides a unified, theory-grounded, multi-city benchmark for evaluating how AI systems perceive and interpret contemporary urban spaces across modalities, scales, and task formulations. The dataset and benchmark are available at huggingface.co/datasets/Yiwei-Ou/Urban-ImageNet and github.com/yiasun/dataset-2.

Community

Paper submitter

We introduce Urban-ImageNet, a large-scale multimodal benchmark for urban space perception built from 2M+ public Weibo image–text pairs collected across 61 commercial sites in 24 Chinese cities between 2019 and 2025.

General-purpose benchmarks like ImageNet and Places365 identify what is visible in a scene. Urban-ImageNet asks how people inhabit, experience, and socially activate urban space. The dataset is organized by HUSIC, a 10-class taxonomy grounded in the urban theories of Lefebvre, Gehl, and Newman, distinguishing socially activated vs. unoccupied spaces, exterior vs. interior environments, consumption content, and social portraits.

The benchmark supports three unified tasks within one standardized library:
🏷️ T1 Urban scene semantic classification
🔍 T2 Cross-modal image–text retrieval (see the scoring sketch below)
🎯 T3 Instance segmentation
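
As a rough illustration of how T2 could be scored, the sketch below computes Recall@1 for image–text retrieval with an off-the-shelf CLIP checkpoint. This is a generic zero-shot baseline, not the paper's standardized evaluation library; the batch-level protocol and the choice of checkpoint are assumptions.

```python
# Hypothetical T2 sketch: zero-shot image–text Recall@1 with a generic CLIP model.
# Scoring within a single batch is an assumption, not the benchmark's official procedure.
import torch
from transformers import CLIPModel, CLIPProcessor

MODEL_ID = "openai/clip-vit-base-patch32"  # generic public checkpoint
model = CLIPModel.from_pretrained(MODEL_ID).eval()
processor = CLIPProcessor.from_pretrained(MODEL_ID)

def recall_at_1(images, texts):
    """images: list of PIL images; texts: list of paired captions (same order).
    Returns the fraction of images whose own caption is ranked first."""
    inputs = processor(text=texts, images=images,
                       return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        out = model(**inputs)
    # logits_per_image: [num_images, num_texts] similarity matrix
    best_text = out.logits_per_image.argmax(dim=-1)
    hits = (best_text == torch.arange(len(images))).float().mean()
    return hits.item()
```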

Balanced 1K / 10K / 100K subsets support controlled scaling-behavior studies, alongside a full 2M-scale corpus for large-scale training. Dataset and code are publicly available on Hugging Face and GitHub.
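
If the dataset follows the usual Hugging Face Hub layout, a subset can be pulled with the `datasets` library roughly as sketched below. The configuration name ("10K"), the split name, and the field names are assumptions; the dataset card at huggingface.co/datasets/Yiwei-Ou/Urban-ImageNet is the authoritative reference.

```python
# Hypothetical loading sketch for one of the balanced subsets.
# Config/split/field names are assumed, not confirmed by the paper page.
from datasets import load_dataset

ds = load_dataset("Yiwei-Ou/Urban-ImageNet", "10K", split="train")
print(ds)              # features and number of rows

example = ds[0]
print(example.keys())  # expected (assumed): image, paired Weibo text, HUSIC label
```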

Figures attached to the post: 01-Overall-Framework, 02-HUSIC-Framework, 06-Data-Collection.
