---
language:
- zh
- en
license: cc-by-nc-4.0
size_categories:
- 1M<n<10M
task_categories:
- image-classification
- image-to-text
- image-segmentation
pretty_name: Urban-ImageNet
tags:
- urban-perception
- social-media
- image-text-retrieval
- instance-segmentation
- computational-urban-studies
- chinese-cities
---
# Urban-ImageNet

[Paper](https://huggingface.co/papers/2605.09936) | [Code](https://github.com/yiasun/dataset-2)

Urban-ImageNet is a large-scale multimodal dataset and benchmark for urban commercial space perception. It contains more than 2 million public Weibo image-text pairs collected from 61 commercial sites in 24 Chinese cities between 2019 and 2025. The dataset is organized by the HUSIC taxonomy, a 10-class framework for urban commercial imagery, and supports three benchmark tasks:

- **T1 Urban scene semantic classification**
- **T2 Cross-modal image-text retrieval**
- **T3 Instance segmentation**

The release provides balanced 1K, 10K, and 100K subsets for reproducible benchmarking, plus the full unbalanced 2M corpus for large-scale training and studies of scaling behavior.
## Dataset Variants

| Variant | Images | Class Balance | Predefined Split | Intended Use |
|---|---:|---|---|---|
| 1K Dataset | 1,000 | 100 images per class | train/val/test | Quick tests, demos, debugging |
| 10K Dataset | 10,000 | 1,000 images per class | train/val/test | Medium-scale experiments |
| 100K Dataset | 100,000 | 10,000 images per class | train/val/test | Main benchmark split |
| Full Dataset-2M | 2M+ | Natural unbalanced distribution | No predefined split | Large-scale training and custom splitting |
The 1K, 10K, and 100K variants share the same three-task structure:

```text
1K Dataset/
  01 Images with labels/
    train/{HUSIC class name}/*.jpg
    val/{HUSIC class name}/*.jpg
    test/{HUSIC class name}/*.jpg
  02 Text-Image Pairs/
    train.xlsx
    val.xlsx
    test.xlsx
  03 Instance Segmentation/
    train.json
    val.json
    test.json
  Visualization of annotation samples/  # optional qualitative examples
```
The full corpus uses a flatter structure:

```text
Full Dataset-2M/
  Images/
    *.jpg
  Labels/
    01 Semantic classification labels.CSV
    02 Text-Image Pairs.CSV
    03 Instance Segmentation labels.CSV
```

All released images are privacy-protected and resized to a maximum long edge of 512 px.
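For the flat 2M layout, the classification labels CSV can be joined to the image files on disk. The sketch below assumes the label file shares the `Image Filename` and `Image Label` column names of the text-image pair schema; verify those names against the released file before relying on them.

```python
from pathlib import Path

import pandas as pd


def load_classification_labels(root: str) -> pd.DataFrame:
    """Read the flat classification label file of the 2M corpus and
    attach the on-disk image path for each row.

    Assumes the CSV has "Image Filename" (file stem) and "Image Label"
    columns, matching the text-image pair schema.
    """
    csv_path = Path(root) / "Labels" / "01 Semantic classification labels.CSV"
    df = pd.read_csv(csv_path)
    # "Image Filename" holds the stem; the image itself lives under Images/.
    df["image_path"] = df["Image Filename"].map(
        lambda stem: str(Path(root) / "Images" / f"{stem}.jpg")
    )
    return df
```

Because the full corpus ships with no predefined split, any train/val/test partition (e.g. by `Post Time` or by user) is left to the user and should be reported alongside results.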
## HUSIC Classes

| ID | Class Label | Group | Meaning |
|---:|---|---|---|
| 0 | Exterior urban spaces with people | Exterior | Outdoor commercial spaces with visible human presence |
| 1 | Exterior urban spaces without people | Exterior | Outdoor architecture or public-realm views without people |
| 2 | Interior urban spaces with people | Interior | Commercial interiors with shoppers, workers, or occupants |
| 3 | Interior urban spaces without people | Interior | Interior commercial spaces focused on design or circulation |
| 4 | Hotel or commercial lodging spaces | Accommodation | Hotel rooms and commercial lodging environments |
| 5 | Private home interiors | Accommodation | Private residential interiors in the broader urban corpus |
| 6 | Food or drink items | Consumption | Food, beverages, dining-table scenes, and restaurant content |
| 7 | Retail products and merchandise | Consumption | Products, merchandise, retail shelves, and display windows |
| 8 | Human-centered portrait | Portrait | Selfies, group photos, and portrait-dominant images |
| 9 | Other non-spatial content | Miscellaneous | Ads, screenshots, memes, maps, animals, and other non-spatial content |
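Since the folder names double as ground-truth labels, it is convenient to keep the ID-to-label mapping from the table above in code (the constant names here are illustrative, not part of the release):

```python
# HUSIC class IDs and labels, transcribed from the taxonomy table.
HUSIC_CLASSES = {
    0: "Exterior urban spaces with people",
    1: "Exterior urban spaces without people",
    2: "Interior urban spaces with people",
    3: "Interior urban spaces without people",
    4: "Hotel or commercial lodging spaces",
    5: "Private home interiors",
    6: "Food or drink items",
    7: "Retail products and merchandise",
    8: "Human-centered portrait",
    9: "Other non-spatial content",
}

# Inverse mapping, useful when class folder names are the ground truth.
HUSIC_LABEL_TO_ID = {label: i for i, label in HUSIC_CLASSES.items()}
```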
## Task 1: Urban Scene Semantic Classification

Task 1 uses the `01 Images with labels` folder. Images are arranged in an ImageFolder-style hierarchy:

```text
train/{class_name}/{image_filename}.jpg
val/{class_name}/{image_filename}.jpg
test/{class_name}/{image_filename}.jpg
```

The folder name is the ground-truth HUSIC label. The same label is also available in the `Image Label` column of the text-image pair files.
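Because the hierarchy is ImageFolder-style, each split should load directly with `torchvision.datasets.ImageFolder`. For contexts without torchvision, a dependency-free sketch of the same traversal (the function name is illustrative):

```python
from pathlib import Path
from typing import Iterator


def iter_labeled_images(split_dir: str) -> Iterator[tuple[Path, str]]:
    """Yield (image_path, husic_label) pairs from one ImageFolder-style split.

    `split_dir` is e.g. "1K Dataset/01 Images with labels/train"; each
    subfolder name is the ground-truth HUSIC class label.
    """
    for class_dir in sorted(Path(split_dir).iterdir()):
        if not class_dir.is_dir():
            continue  # skip stray files at the split root
        for img in sorted(class_dir.glob("*.jpg")):
            yield img, class_dir.name
```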
## Task 2: Cross-Modal Image-Text Retrieval

Task 2 uses the `02 Text-Image Pairs` files. Each Excel file contains one row per image; rows are joined to image files by the `Image Filename` column, whose value is the file stem of the corresponding image in `01 Images with labels`.

For example:

```text
Image Filename = 2668383_2020-01-21_0
Image file     = 2668383_2020-01-21_0.jpg
```

The released dataset preserves the original Chinese Weibo text to avoid translation distortion. English text shown in papers, examples, or documentation is illustrative and should not be treated as released ground truth.
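Combining the join key with the Task 1 folder layout, a spreadsheet row resolves to its image file as follows (the helper name is illustrative; it assumes the row's `Image Label` names the class folder, as described above):

```python
from pathlib import Path


def pair_path(images_root: str, split: str, label: str, image_filename: str) -> Path:
    """Resolve a text-image pair row to its image file.

    `image_filename` is the file stem from the "Image Filename" column;
    `label` is the HUSIC class from the "Image Label" column, which names
    the class folder inside `01 Images with labels/{split}`.
    """
    return Path(images_root) / split / label / f"{image_filename}.jpg"
```

For example, `pair_path("1K Dataset/01 Images with labels", "train", "Exterior urban spaces with people", "2668383_2020-01-21_0")` points at the `.jpg` shown above.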
### Text-Image Pair Columns

| Column | Description | Task Role |
|---|---|---|
| Image Label | HUSIC class label for the image | T1 label and T2 category-level text |
| Image Filename | Join key linking spreadsheet rows to image files | Join key |
| Post ID | Anonymized numerical post identifier | Metadata |
| User ID | Anonymized numerical user identifier; original usernames are not released | Metadata |
| Post Time | Original post timestamp | Metadata |
| Post Text | Original Chinese Weibo post text | T2 post-level text |
| City | City associated with the location tag | Metadata |
| Place Tag | Location hashtag or commercial-site place tag | Metadata |
| Posting Tool | Client or posting-source string after metadata minimization | Metadata |
| Mentioned Users | Anonymized, minimized, or empty mentioned-user field | Metadata |
| Extracted Topics | Topic or hashtag terms extracted from the post text | Metadata |
| Extracted Locations | Location mentions extracted from the post text | Metadata |
| Like Count | Public engagement count at collection time | Metadata |
| Repost Count | Public repost count at collection time | Metadata |
| Comment Count | Public comment count at collection time | Metadata |
### T2 Evaluation Settings

Urban-ImageNet supports two image-text matching settings:

| Setting | Text Query | Ground Truth | Notes |
|---|---|---|---|
| T2-A Category-level retrieval | HUSIC label text, such as `Exterior urban spaces with people` | Images with the same `Image Label` | Easier; structured semantic alignment |
| T2-B Post-level retrieval | Original Chinese `Post Text` | Images attached to the same post | Harder; one post can contain up to 9 images, and the text is not always a literal caption |

Task 2 can be used in either direction:

- Image-to-text: input an image, retrieve the matching HUSIC label or post text.
- Text-to-image: input a HUSIC label or post text, retrieve one or more matching images.

For post-level retrieval, one post may map to multiple images. Evaluation should use multi-positive ground truth rather than assuming a one-to-one caption-image relationship.
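A multi-positive Recall@K for text-to-image retrieval can be sketched as follows: a query counts as a hit if any of its positive images appears in the top K. This is a generic metric sketch, not the official evaluation script of the benchmark.

```python
import numpy as np


def recall_at_k(scores: np.ndarray, positives: list[set[int]], k: int) -> float:
    """Multi-positive Recall@K for text-to-image retrieval.

    scores: (num_queries, num_images) similarity matrix, higher = better.
    positives: for each query (post), the indices of all images attached
    to that post. A query is a hit if any positive lands in its top K.
    """
    # Indices of the K highest-scoring images per query.
    topk = np.argsort(-scores, axis=1)[:, :k]
    hits = [bool(positives[q] & set(topk[q])) for q in range(len(positives))]
    return float(np.mean(hits))
```

For T2-A, the same function applies with `positives` holding all images sharing the query's `Image Label`.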
## Task 3: Instance Segmentation

Task 3 uses the `03 Instance Segmentation` JSON files. The format is COCO-style and includes:

- `info`: split and annotation metadata
- `categories`: the 10 HUSIC classes
- `images`: image ID, file name, width, height, and `classification_label`
- `annotations`: `category_id`, `detected_label`, `bbox`, `area`, COCO RLE `segmentation`, `iscrowd`, and `detection_score`

Instance pseudo-labels were generated with Grounding DINO and SAM2 using class-specific prompt vocabularies. They are model-generated annotations, not exhaustive human pixel-level labels. Users should account for this distinction when training or evaluating segmentation models.
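Since the masks are pseudo-labels, a common first step is to filter annotations on `detection_score` before training. A minimal sketch with the standard library (the 0.5 threshold is illustrative, not an official recommendation):

```python
import json


def load_confident_annotations(json_path: str, min_score: float = 0.5):
    """Load a COCO-style split file and keep annotations whose
    `detection_score` meets the threshold.

    Returns (images_by_id, kept_annotations). Annotations lacking a
    `detection_score` field are kept.
    """
    with open(json_path, encoding="utf-8") as f:
        coco = json.load(f)
    images_by_id = {img["id"]: img for img in coco["images"]}
    kept = [
        ann for ann in coco["annotations"]
        if ann.get("detection_score", 1.0) >= min_score
    ]
    return images_by_id, kept
```

The RLE `segmentation` fields of the kept annotations can then be decoded to binary masks, e.g. with `pycocotools.mask.decode`.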
## Privacy and Responsible Use

Urban-ImageNet is derived from public Weibo posts. Although the source posts were public, the release uses privacy-protected derivatives:

- Original usernames and account names are removed.
- `Post ID` and `User ID` are opaque numerical identifiers after anonymization/pseudonymization.
- Faces, license plates, QR-code-like regions, and other sensitive visual regions are blurred.
- Released images are resized to a maximum long edge of 512 px.
- The raw high-resolution corpus, larger than 4 TB, is not publicly released.
- Metadata is minimized to support research while reducing re-identification risk.

The dataset is intended for non-commercial academic research in urban perception, computational urban studies, multimodal learning, image classification, image-text retrieval, and segmentation.

Prohibited uses include re-identification, account reconstruction, face recognition, surveillance, social scoring, law-enforcement targeting, commercial profiling, and demographic inference about specific individuals.
## Limitations and Biases

- The corpus is China-centered and should not be treated as globally representative.
- Weibo users are not representative of all city users.
- Social-media images overrepresent photogenic, popular, and personally meaningful scenes.
- Post text is original Chinese social-media language and contains slang, hashtags, and loose image-text coupling.
- The full 2M corpus is naturally class-imbalanced.
- The 1K, 10K, and 100K subsets are balanced for benchmarking and therefore do not reflect natural class frequencies.
- Task 3 masks are pseudo-labels generated by Grounding DINO and SAM2.
## Citation

```bibtex
@misc{urbanimagenet2026,
  title  = {Urban-ImageNet: A Large-Scale Multi-Modal Dataset and Evaluation Framework for Urban Space Perception},
  author = {Urban-ImageNet Research Team},
  year   = {2026},
  note   = {Dataset and benchmark for NeurIPS 2026 Evaluations and Datasets Track}
}
```