---
license: cc-by-nc-sa-4.0
pretty_name: Urban-ImageNet
task_categories:
- image-classification
- image-to-text
- text-to-image
- zero-shot-image-classification
- image-segmentation
modalities:
- image
- text
language:
- zh
- en
size_categories:
- 1M<n<10M
tags:
- urban-perception
- social-media
- weibo
- image-text-retrieval
- instance-segmentation
- computational-urban-studies
- urban-ai
- chinese-cities
- husic
- cross-modal-retrieval
- multi-modal
- scene-classification
- urban-space-perception
---

# 🏙️ Urban-ImageNet
A Large-Scale Multi-Modal Dataset and Evaluation Framework for Urban Space Perception from Social Media Imagery.
Urban-ImageNet fills a critical gap between computer vision and urban studies by treating cities not simply as visual scenes, but as lived, socially produced, and experientially activated spaces.
## Overview
> ImageNet taught models to recognise objects. Urban-ImageNet teaches them to understand how people experience cities.
General-purpose benchmarks such as ImageNet and Places365 identify what is in a scene, but they were never designed to answer the question that matters in urban studies: how do people inhabit, narrate, and socially activate urban space? Urban-ImageNet is a domain-specific complement: a 2-million image–text benchmark drawn from real social media, organised by HUSIC (Hierarchical Urban Space Image Classification), a 10-class taxonomy grounded in the urban theories of Lefebvre, Gehl, and Newman that captures the spatial, social, and functional distinctions central to urban research.
The corpus contains over 2 million public Weibo image–text pairs collected from 61 urban commercial sites across 24 Chinese cities spanning 2019–2025, with controlled benchmark subsets at 1K, 10K, and 100K scale, and a full 2M corpus for large-scale training. The benchmark supports three tasks within one standardised library (Urban-ImageNet-lib):
| # | Task | Input → Output |
|---|---|---|
| T1 | Urban scene semantic classification | Image → HUSIC label (0–9) |
| T2 | Cross-modal image–text retrieval | Image ↔ Text (bidirectional) |
| T3 | Instance segmentation | Image → Object masks + bounding boxes |
Figure 1: The Urban-ImageNet framework — addressing current limitations in urban perception evaluation. The dataset bridges general-purpose vision benchmarks and domain-specific urban research needs through the HUSIC taxonomy and three unified benchmark tasks.
## Dataset Variants
Four tiers are released to support model development and scaling-behaviour studies:
| Variant | Total Images | Class Balance | Images per Class | Predefined Split | Storage (512 px) | Primary Use |
|---|---|---|---|---|---|---|
| 1K Dataset | 1,000 | ✅ Balanced | 100 | train / val / test | ~62 MB | Quick tests, demos, debugging |
| 10K Dataset | 10,000 | ✅ Balanced | 1,000 | train / val / test | ~620 MB | Medium-scale experiments |
| 100K Dataset | 100,000 | ✅ Balanced | 10,000 | train / val / test | ~6.15 GB | Main benchmark |
| Full Dataset-2M | 2,000,000+ | ❌ Natural imbalance | Varies | None — custom split | ~120 GB | Large-scale training, scaling studies |
For all balanced tiers the train/val/test split ratio is 80:10:10. All three tasks share identical image files across tiers; only labels and metadata files differ. The 2M corpus provides per-class image counts to support informed use under realistic class imbalance.
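Because the 2M tier ships without a predefined split, users construct their own. Below is a minimal sketch of an 80:10:10 stratified split mirroring the balanced tiers; the label-file column names (`Image Filename`, `Class ID`) are assumptions for illustration, so check the header of the released CSV before use.

```python
# A minimal sketch of a custom 80:10:10 split for the 2M corpus, stratified
# by HUSIC class. NOTE: the column names below ("Image Filename", "Class ID")
# are illustrative assumptions; check the released CSV header first.
import pandas as pd
from sklearn.model_selection import train_test_split

labels = pd.read_csv("Full Dataset-2M/Labels/01 Semantic classification labels.CSV")

train, rest = train_test_split(
    labels, test_size=0.2, stratify=labels["Class ID"], random_state=0
)
val, test = train_test_split(
    rest, test_size=0.5, stratify=rest["Class ID"], random_state=0
)
print(len(train), len(val), len(test))  # ≈ 80% / 10% / 10% of the corpus
```

Stratifying by class keeps the natural imbalance of the 2M corpus identical across splits, so validation and test results remain comparable.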
## File Structure

### Balanced Tiers (1K / 10K / 100K)
```
{Tier} Dataset/
├── 01 Images with labels/              ← Task 1: Scene Classification
│   ├── train/
│   │   ├── Exterior urban spaces with people/
│   │   │   └── *.jpg
│   │   ├── Exterior urban spaces without people/
│   │   │   └── *.jpg
│   │   ├── Food or drink items/
│   │   ├── Hotel or commercial lodging spaces/
│   │   ├── Human-centered portrait/
│   │   ├── Interior urban spaces with people/
│   │   ├── Interior urban spaces without people/
│   │   ├── Other non-spatial content/
│   │   ├── Private home interiors/
│   │   └── Retail products and merchandise/
│   ├── val/   (same structure)
│   └── test/  (same structure)
│
├── 02 Text-Image Pairs/                ← Task 2: Cross-Modal Retrieval
│   ├── train.xlsx
│   ├── val.xlsx
│   └── test.xlsx
│
└── 03 Instance Segmentation/           ← Task 3: Instance Segmentation
    ├── train.json
    ├── val.json
    ├── test.json
    └── Visualization of annotation samples/   (qualitative examples, optional)
        └── *.jpg
```
### Full Corpus (2M)
```
Full Dataset-2M/
├── Images/                             ← All 2M+ images (flat, no subfolders)
│   └── *.jpg
└── Labels/
    ├── 01 Semantic classification labels.CSV
    ├── 02 Text-Image Pairs.CSV
    └── 03 Instance Segmentation labels.json
```
**Image format:** All released images are JPEG, privacy-protected, and resized to a maximum long edge of 512 px (short edge scaled proportionally). Original usernames, faces, licence plates, and QR codes have been removed or blurred.
## The HUSIC Framework
Figure 2: The HUSIC 10-class hierarchical taxonomy. Classes are organised into two primary groups (Spatially Relevant / Non-Spatially Relevant) and five secondary groups. Manual annotation by three trained researchers achieved Cohen's κ = 0.87.
Raw location-tagged social media content is inherently heterogeneous. A user posting under a single hashtag such as #Beijing Sanlitun produces content spanning architectural photography, dining imagery, merchandise displays, selfies, hotel promotion, and noise. Without a principled framework, downstream spatial analyses are confounded by this heterogeneity. HUSIC resolves this by providing a theoretically grounded taxonomy that simultaneously serves as a UGC filtering pipeline and a 10-way classification benchmark.
### Theoretical Grounding
HUSIC class boundaries are defined by domain-expert concepts rather than data-driven frequency, drawing on three complementary bodies of urban theory:
- Lefebvre's Production of Space — The distinction between conceived space (design intent) and lived space (social appropriation through use) motivates the with/without people axis within each spatial group, a distinction absent from all existing vision benchmarks.
- Gehl's Public Life Studies — Gehl's finding that social activity is both an indicator and a self-reinforcing generator of successful public space justifies treating activated and non-activated spaces as analytically distinct categories.
- Newman's Spatial Hierarchy — Newman's defensible-space framework, which conceptualises urban environments along a public-to-private gradient, provides the basis for HUSIC's three-tier spatial hierarchy: publicly accessible spaces, transitional semi-public spaces, and privately controlled spaces.
### HUSIC Class Definitions
| ID | Class Label | Primary Category | Secondary Group | Description |
|---|---|---|---|---|
| 0 | Exterior urban spaces with people | Spatially Relevant | Urban Exterior | Populated plazas, active streetscapes, occupied public spaces with visible human presence |
| 1 | Exterior urban spaces without people | Spatially Relevant | Urban Exterior | Empty building facades, vacant streets, unpopulated plazas focusing on architectural features |
| 2 | Interior urban spaces with people | Spatially Relevant | Urban Public Interior | Active shopping areas, occupied commercial interiors, indoor events, occupied restaurants |
| 3 | Interior urban spaces without people | Spatially Relevant | Urban Public Interior | Empty retail spaces, vacant corridors, interior design and spatial composition views |
| 4 | Hotel or commercial lodging spaces | Spatially Relevant | Accommodation | Hotel rooms, serviced apartments, Airbnb-style lodging interiors |
| 5 | Private home interiors | Spatially Relevant | Accommodation | Private residential interiors posted in association with nearby urban commercial sites |
| 6 | Food or drink items | Non-Spatially Relevant | Consumption | Plated dishes, beverages, dining-table scenes, food presentations |
| 7 | Retail products and merchandise | Non-Spatially Relevant | Consumption | Fashion items, electronics, cosmetics, product displays, store-window arrangements |
| 8 | Human-centered portrait | Non-Spatially Relevant | Social Portrait | Selfies, group photos, portrait-dominant images with urban backgrounds |
| 9 | Other non-spatial content | Non-Spatially Relevant | Miscellaneous | Advertisements, screenshots, memes, maps, infographics, animal photos |
### Rationale: Spatially Relevant Classes (IDs 0–5)
The spatially relevant classes cover the full spectrum of urban environments from public exterior, through public interior, to semi-public and private spaces. The with/without people split within exterior and interior classes is analytically essential:
- **With people** classes enable future research on pedestrian behaviour, social activity patterns, human action recognition, and spatial vitality measurement.
- **Without people** classes isolate the architectural and design features that attract attention and generate place resonance, supporting aesthetic perception and visual-quality studies.
The accommodation classes (IDs 4–5) capture a phenomenon observed in the data: users frequently post hotel and private-home interiors alongside commercial-district imagery, revealing the influence of urban commercial centres on local short-term and long-term rental markets — a research direction not addressed by any existing benchmark.
### Rationale: Non-Spatially Relevant Classes (IDs 6–9)
Non-spatial classes capture the consumption and social dimensions of urban life. IDs 6–7 correspond to the two primary material goods consumed in commercial centres (food and merchandise), ID 8 captures social gathering and self-expression, and ID 9 is a residual class for filtering noise. Together, these classes provide concentrated sub-datasets for consumer-behaviour and social-pattern studies independent of spatial analysis. In the Urban-ImageNet pipeline, they also serve as the primary filtering layer that isolates space-relevant imagery for downstream urban perception tasks.
## Task 1: Urban Scene Semantic Classification
**Goal:** Given an input image, predict its HUSIC label (class ID 0–9).

**File format:** ImageFolder-style hierarchy under `01 Images with labels/`. The subdirectory name is the ground-truth label. Integer labels 0–9 follow lexicographic sort order, making the data directly compatible with PyTorch's `torchvision.datasets.ImageFolder`:
```python
from torchvision.datasets import ImageFolder
from torchvision import transforms

dataset = ImageFolder(
    root="100K Dataset/01 Images with labels/train",
    transform=transforms.Compose([
        transforms.Resize(224),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
    ]),
)

# dataset.classes → ['Exterior urban spaces with people', 'Exterior urban spaces without people', ...]
# dataset.class_to_idx → {'Exterior urban spaces with people': 0, ...}
```
### T1 Baseline Results (100K benchmark, 80K/10K/10K split)
| Model | Top-1 Acc. (%) | Macro-F1 |
|---|---|---|
| ResNet-18 | 75.9 | 0.754 |
| ResNet-50 | 79.7 | 0.799 |
| ResNet-152 | 80.5 | 0.804 |
| ViT-B/16 | 79.0 | 0.790 |
| DeiT-B | 80.3 | 0.802 |
| EfficientNet-B4 | 84.9 | 0.849 |
| CLIP ViT-L/14 (zero-shot) | 37.9 | 0.350 |
| CLIP ViT-L/14 (fine-tuned) | 69.1 | 0.675 |
CLIP zero-shot performs poorly because HUSIC labels such as activated exterior space and non-activated interior space are not standard web image categories. Fine-tuning substantially improves CLIP, but it remains below supervised classifiers. The interior-without-people vs. interior-with-people boundary is the most challenging distinction across all models.
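For reference, the metrics in the table above can be reproduced with a short evaluation loop. A minimal sketch using scikit-learn follows; the checkpoint path `resnet50_husic.pt` is a hypothetical placeholder for your own fine-tuned weights.

```python
# A minimal T1 evaluation sketch: Top-1 accuracy and macro-F1 on the test split.
import torch
from torch.utils.data import DataLoader
from torchvision import models, transforms
from torchvision.datasets import ImageFolder
from sklearn.metrics import accuracy_score, f1_score

tf = transforms.Compose([
    transforms.Resize(224), transforms.CenterCrop(224), transforms.ToTensor(),
])
test_set = ImageFolder("100K Dataset/01 Images with labels/test", transform=tf)
loader = DataLoader(test_set, batch_size=64, num_workers=4)

model = models.resnet50(num_classes=10)                 # 10 HUSIC classes
model.load_state_dict(torch.load("resnet50_husic.pt"))  # hypothetical checkpoint
model.eval()

preds, labels = [], []
with torch.no_grad():
    for x, y in loader:
        preds.extend(model(x).argmax(dim=1).tolist())
        labels.extend(y.tolist())

print(f"Top-1 Acc.: {accuracy_score(labels, preds):.3f}")
print(f"Macro-F1:   {f1_score(labels, preds, average='macro'):.3f}")
```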
## Task 2: Cross-Modal Image–Text Retrieval
**Goal:** Given a text query, retrieve matching images (text-to-image), or given an image, retrieve matching text (image-to-text).

**File format:** Three Excel spreadsheets (`train.xlsx`, `val.xlsx`, `test.xlsx`) under `02 Text-Image Pairs/`. Each row describes one image and its associated Weibo post metadata.

### Metadata Schema
| Column | Type | Description | Task 2 Role |
|---|---|---|---|
| `Image Label` | string | HUSIC class label (e.g., Exterior urban spaces with people) | T2-A query text (category-level retrieval) |
| `Image Filename` | string | Join key in `UserID_PostTime_Index` format | Primary join key linking spreadsheet to image file |
| `Post ID` | integer | Anonymised numerical post identifier | Metadata |
| `User ID` | integer | Anonymised numerical user identifier (original username removed) | Metadata |
| `Post Time` | string | Original post timestamp | Metadata |
| `Post Text` | string | Original Weibo post text (Chinese, unmodified) | T2-B query text (post-level retrieval) |
| `City` | string | City associated with the location tag | Metadata |
| `Place Tag` | string | Location hashtag or commercial-site place tag | Metadata |
| `Posting Tool` | string | Client or posting-source string | Metadata |
| `Mentioned Users` | string | Anonymised or empty mentioned-user field | Metadata |
| `Extracted Topics` | string | Topic or hashtag terms extracted from post text | Metadata |
| `Extracted Locations` | string | Location mentions extracted from post text | Metadata |
| `Like Count` | integer | Public engagement count at collection time | Metadata |
| `Repost Count` | integer | Public repost count at collection time | Metadata |
| `Comment Count` | integer | Public comment count at collection time | Metadata |
### Image–Filename Join Key

Each image filename follows the pattern `{UserID}_{PostTime}_{Index}`, for example:

- `2668383_2020-01-21_0.jpg` → User 2668383, post from 2020-01-21, first image (index 0)
- `2668383_2020-01-21_1.jpg` → Same post, second image
- `2668383_2020-01-21_8.jpg` → Same post, ninth (last) image

The `Image Filename` column in the spreadsheet (without the `.jpg` extension) directly matches the filename stem of the corresponding image in `01 Images with labels/`, so image files can be joined to their text metadata with a simple string match.
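A minimal join sketch with pandas follows (reading `.xlsx` requires `openpyxl`); column names match the metadata schema above.

```python
# A minimal sketch of joining Task 2 metadata to image files by filename stem.
from pathlib import Path

import pandas as pd

pairs = pd.read_excel("100K Dataset/02 Text-Image Pairs/train.xlsx")  # needs openpyxl

# Index every training image by its stem ({UserID}_{PostTime}_{Index}).
image_root = Path("100K Dataset/01 Images with labels/train")
stem_to_path = {p.stem: p for p in image_root.rglob("*.jpg")}

pairs["image_path"] = pairs["Image Filename"].map(stem_to_path)
print(pairs[["Image Filename", "Post Text", "image_path"]].head())
```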
### Example Row (English translation for illustration only; released data retains original Chinese)
| Column | Example Value |
|---|---|
| Image Label | Exterior urban spaces with people |
| Image Filename | 73811347_2023-09-14_2 |
| Post ID | 4945998754351775 |
| User ID | 73811347 |
| Post Time | 2023-09-14 22:24 |
| Post Text | (Original Chinese retained in release) — Illustrative English translation: "Dinner tonight — came to Sanlitun with a group-buying voucher for hot pot skewers. Only ¥58 and we were stuffed! Great atmosphere, good service, tasty food. #BeijingFood #BeijingSanlitun" |
| City | Beijing |
| Place Tag | #Beijing Sanlitun |
| Like Count | 0 |
> **Note:** The released dataset retains original Chinese text in the `Post Text` column to preserve linguistic authenticity and avoid translation distortion, which is scientifically important for Task 2 evaluation. English text in this README is for illustrative purposes only.
### Two Retrieval Sub-Tasks
Urban-ImageNet supports two complementary retrieval configurations that reflect increasing real-world difficulty:
| Sub-task | Query Text Source | Ground Truth | Difficulty | Notes |
|---|---|---|---|---|
| T2-A: Category-level | `Image Label` column — HUSIC class name/definition (e.g., Exterior urban spaces with people) | All images sharing the same `Image Label` | Moderate | Structured semantic alignment; good for zero-shot transfer evaluation |
| T2-B: Post-level | `Post Text` column — original Weibo post narrative | All images attached to the same post (up to 9 images) | Hard | Informal colloquial language; loose image–text coupling; multi-positive ground truth |
**Bidirectional use:** Both sub-tasks support either direction:
- Image → Text: given an image, retrieve its HUSIC label or matching post text.
- Text → Image: given a HUSIC label or post text, retrieve the matching image(s). For T2-B, one post may correspond to up to 9 images, so evaluation must use a multi-positive retrieval protocol rather than assuming one-to-one caption–image correspondence.
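To make the multi-positive protocol concrete, here is a minimal Recall@K sketch (an illustration, not the Urban-ImageNet-lib implementation): a query counts as a hit if any of its positive gallery items appears in the top-K.

```python
# A minimal multi-positive Recall@K sketch for T2-B retrieval evaluation.
import numpy as np

def recall_at_k(sim: np.ndarray, positives: list, k: int) -> float:
    """sim: (num_queries, num_gallery) similarity matrix.
    positives[i]: set of gallery indices that are correct for query i."""
    topk = np.argsort(-sim, axis=1)[:, :k]  # top-K gallery indices per query
    hits = [bool(positives[i] & set(topk[i])) for i in range(sim.shape[0])]
    return float(np.mean(hits))

# Toy example: 2 text queries over a 5-image gallery; query 0 has two positives
# (a post with two attached images), query 1 has one.
sim = np.array([[0.9, 0.1, 0.8, 0.2, 0.0],
                [0.1, 0.2, 0.3, 0.9, 0.4]])
print(recall_at_k(sim, [{0, 2}, {3}], k=1))  # → 1.0 (both queries hit at rank 1)
```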
### T2 Baseline Results (10K test set)
| Setting | Model | R@1 | R@5 | R@10 | mAP | MedR |
|---|---|---|---|---|---|---|
| T2-A Category label | CLIP (zero-shot) | 54.2 | 96.5 | 100.0 | 53.3 | 1.5 |
| | CLIP (fine-tuned) | 92.7 | 99.8 | 100.0 | 90.7 | 1.0 |
| | BLIP (zero-shot) | 14.9 | 43.6 | 80.0 | 19.8 | 6.2 |
| | BLIP (fine-tuned) | 94.2 | 99.8 | 100.0 | 93.3 | 1.0 |
| T2-B Post text | CLIP (zero-shot) | 2.6 | 5.4 | 7.0 | 4.5 | 328 |
| | CLIP (fine-tuned) | 8.1 | 16.9 | 23.5 | 13.2 | 64 |
| | BLIP (zero-shot) | 0.1 | 0.4 | 1.2 | 0.8 | 477 |
| | BLIP (fine-tuned) | 1.9 | 6.8 | 11.6 | 5.5 | 92 |
| T2-B Post + label | CLIP (fine-tuned) | 9.3 | 22.8 | 32.3 | 17.0 | 25 |
Category-label retrieval is near-trivial after fine-tuning (≥92% R@1), confirming that HUSIC descriptions provide strong cross-modal signal. Post-text retrieval is substantially harder: Weibo posts are short informal narratives (median 32 characters) rather than image descriptions, and a single post may accompany images spanning multiple HUSIC classes. Against a random-chance baseline of ~0.1% R@1, fine-tuned CLIP achieves 8.1% (76× chance), establishing a concrete baseline for future urban-domain vision–language models.
Figure 3: T2 retrieval results (avg. T2I + I2T). Category-label retrieval (left) is near-trivial after fine-tuning; post-text retrieval (right) remains genuinely challenging, establishing an important open problem for urban-domain vision–language research.
## Task 3: Instance Segmentation
**Goal:** Detect and delineate urban-domain objects within each image using pixel-level instance masks.

**File format:** Three COCO-compatible JSON files (`train.json`, `val.json`, `test.json`) under `03 Instance Segmentation/`.

### Annotation JSON Structure
Each JSON file follows the COCO format with the following fields:
```json
{
  "info": {
    "description": "Urban-ImageNet Instance Segmentation Annotations",
    "split": "train",
    "version": "1.0"
  },
  "categories": [ {"id": 0, "name": "Exterior urban spaces with people"}, ... ],
  "images": [
    {
      "id": 0,
      "file_name": "2668383_2020-01-21_0.jpg",
      "width": 512,
      "height": 384,
      "classification_label": 0
    }, ...
  ],
  "annotations": [
    {
      "id": 0,
      "image_id": 0,
      "category_id": 0,
      "detected_label": "person",
      "detection_score": 0.8732,
      "bbox": [x, y, width, height],
      "area": 4512,
      "segmentation": { "counts": "...", "size": [384, 512] },
      "iscrowd": 0
    }, ...
  ]
}
```
**Extended fields beyond standard COCO:**

- `classification_label` (in `images`): the HUSIC class ID of the image — enables multi-task joint training and evaluation.
- `detected_label` (in `annotations`): the specific object term detected by Grounding DINO (e.g., `"person"`, `"retail shelf"`, `"escalator"`).
- `detection_score` (in `annotations`): the Grounding DINO confidence score, enabling downstream threshold-based filtering.
- Segmentation masks are stored in COCO RLE format (run-length encoding), directly compatible with `pycocotools`.
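Because the files are COCO-compatible, they can be read directly with `pycocotools`. A minimal sketch that loads one image's annotations, filters pseudo-labels by confidence (0.50 mirrors the evaluation-subset threshold), and decodes each RLE mask:

```python
# A minimal sketch of reading T3 annotations, including the extended fields.
from pycocotools.coco import COCO

coco = COCO("100K Dataset/03 Instance Segmentation/train.json")

img_id = coco.getImgIds()[0]
img = coco.loadImgs(img_id)[0]
print(img["file_name"], "HUSIC class:", img["classification_label"])

anns = coco.loadAnns(coco.getAnnIds(imgIds=img_id))
anns = [a for a in anns if a["detection_score"] >= 0.50]  # drop low-confidence pseudo-labels
for a in anns:
    mask = coco.annToMask(a)  # decodes the COCO RLE into a binary H×W numpy array
    print(a["detected_label"], a["bbox"], int(mask.sum()))
```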
### Annotation Pipeline

Annotations were generated using a two-stage automatic pipeline (detection, then segmentation) with post-processing, followed by human quality control:

1. **Grounding DINO** (text-prompted open-vocabulary object detection) identifies bounding boxes using class-specific vocabulary prompts.
2. **SAM 2** (Segment Anything Model 2) refines each detected box into a pixel-level instance mask.
3. **NMS** (non-maximum suppression) removes overlapping detections.
4. **Area filtering** removes very small (noise) and very large (full-image) detections.
Human review was applied to the evaluation subset, with stricter confidence thresholds (≥0.50 detection score, ≥0.88 IoU), ensuring reliable ground truth for model comparison.
Training pseudo-labels use more permissive thresholds (≥0.35, ≥0.80). Users should account for the pseudo-label nature of annotations when interpreting segmentation performance.
### Per-Class Segmentation Vocabulary
Each HUSIC class uses a tailored vocabulary of 12–20 object terms designed to capture the semantically appropriate instances for that scene type, maximising detection recall while minimising false positives.
| ID | Class | Segmentation Object Terms |
|---|---|---|
| 0 | Exterior urban spaces with people | person · crowd · pedestrian · building façade · lawn · street lamp · glass curtain wall · sky · tree · shrub · fence · road · water · river · vehicle · sculpture · installation · pavement · street signage · fountain |
| 1 | Exterior urban spaces without people | building façade · glass curtain wall · wooden façade · tree · shrub · lawn · sky · pavement · road · water · river · lantern · sculpture · installation · street lamp · signage · fence · bridge · water feature · fountain |
| 2 | Interior urban spaces with people | person · shopper · crowd · retail shelf · escalator · elevator · ceiling · floor tile · glass partition · display case · door · indoor plant · wall · window · handrail · column |
| 3 | Interior urban spaces without people | retail shelf · escalator · indoor corridor · ceiling · floor tile · marble floor · glass partition · display case · wall · column · indoor plant · elevator · door · window · lighting fixture · handrail |
| 4 | Hotel or commercial lodging spaces | hotel bed · furniture · sofa · carpet · marble floor · tile floor · wooden floor · ceiling · bathroom · window · curtain · lamp |
| 5 | Private home interiors | sofa · bed · dining table · floor · ceiling · kitchen · bookshelf · wardrobe · window · lamp · carpet · wall |
| 6 | Food or drink items | food dish · meal plate · dessert · beverage cup · coffee · drink bottle · bowl · chopsticks · spoon · dining table · person · restaurant interior |
| 7 | Retail products and merchandise | fashion clothing · shoes · cosmetics · product package · merchandise · retail shelf · bag · jewelry · electronics · store window · mannequin · person |
| 8 | Human-centered portrait | person · face · building façade · sky · tree · floor · food · animal · vehicle · indoor background |
| 9 | Other non-spatial content | animal · person · vehicle · advertisement poster · text · QR code · screenshot · sculpture · meme · sky · plant · signage · graphic design · logo · map · infographic · chat record |
> **T3 scope note:** Instance segmentation masks are generated for all 10 HUSIC classes. The T3 evaluation benchmark adopts a class-agnostic protocol — treating all detected objects as a single `object` category — to produce conservative, architecture-comparable metrics uncorrupted by class-imbalanced pseudo-labels. Per-class AP results are available in the supplementary material of the paper.
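One simple way to reproduce the class-agnostic setting (a sketch; the official Urban-ImageNet-lib adapter may work differently) is to collapse every annotation to a single `object` category before running standard COCO evaluation:

```python
# A minimal sketch: collapse all annotations to one "object" category for
# class-agnostic evaluation. The official Urban-ImageNet-lib script may differ.
import json

with open("100K Dataset/03 Instance Segmentation/test.json") as f:
    data = json.load(f)

data["categories"] = [{"id": 0, "name": "object"}]
for ann in data["annotations"]:
    ann["category_id"] = 0

with open("test_class_agnostic.json", "w") as f:
    json.dump(data, f)
```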
### T3 Baseline Results (quality-filtered evaluation subset, confidence ≥ 0.50, IoU ≥ 0.88)
| Model | AP | AP₅₀ | AP₇₅ | mIoU | FPS |
|---|---|---|---|---|---|
| Mask R-CNN | 0.267 | 0.472 | 0.276 | 0.629 | 15.4 |
| Cascade Mask R-CNN | 0.290 | 0.495 | 0.299 | 0.635 | 12.7 |
| Mask R-CNN + SAM | 0.373 | 0.563 | 0.378 | — | ~0 |
| Cascade Mask R-CNN + SAM | 0.369 | 0.531 | 0.380 | — | ~0 |
| GT-box SAM (oracle†) | 0.749 | 0.924 | 0.805 | — | — |
†GT-box SAM uses ground-truth bounding boxes as prompts — an oracle upper bound, not a trainable baseline. Adding SAM box-refinement to Mask R-CNN increases AP by ~40% relative (0.267 → 0.373), establishing a strong open-source baseline for future work. The gap between trainable models and the oracle (0.373 vs. 0.749) highlights substantial room for improvement in urban commercial-space instance segmentation.
Figure 4: Task 3 qualitative segmentation examples across HUSIC classes. Colour-coded instance masks from Mask R-CNN, Cascade Mask R-CNN, and Mask R-CNN+SAM. The domain-specific vocabulary enables detection of urban-specific objects (escalators, retail shelves, display cases, street lamps) not well-covered by general segmentation benchmarks.
## Urban-ImageNet-lib
Figure 5: Urban-ImageNet-lib architecture — a unified benchmarking framework supporting all three tasks with standardised cross-dataset comparison adapters.
Urban-ImageNet-lib is a Python benchmarking library providing:
- Modular data loaders for all three tasks and all four dataset tiers.
- Standard fine-tuning pipelines for T1 (classification), T2 (retrieval), and T3 (segmentation) baselines.
- Evaluation scripts with metrics matching established benchmarks (T1 ↔ Places365/SUN; T2 ↔ MS-COCO Captions/Flickr30K; T3 ↔ MS-COCO Instance Seg./Cityscapes).
- Cross-dataset adapters enabling direct performance comparison in a unified table.
See the GitHub repository for full installation instructions and usage examples.
## Scaling Behaviour
Urban-ImageNet's four-tier design enables systematic study of how classification accuracy and computational cost scale with dataset size. All balanced tiers (1K / 10K / 100K) are strictly class-balanced so that performance differences across tiers are attributable to data quantity alone, without confounding from class imbalance. All models were trained separately on each tier and evaluated on a shared held-out 10K test set.
### T1 Scaling: Top-1 Accuracy and Macro-F1
| Model | 1K Acc. (%) | 1K F1 | 10K Acc. (%) | 10K F1 | 100K Acc. (%) | 100K F1 |
|---|---|---|---|---|---|---|
| ResNet-50 | 66.5 | 0.661 | 78.1 | 0.781 | 83.5 | 0.835 |
| ResNet-152 | 67.3 | 0.670 | 79.0 | 0.787 | 83.5 | 0.834 |
| CLIP (fine-tuned) | 70.8 | 0.708 | 78.0 | 0.780 | 82.3 | 0.822 |
| LLaVA-1.5 (fine-tuned) | 76.8 | 0.767 | 81.2 | 0.812 | — † | — † |
† LLaVA-1.5 100K fine-tuning was not completed due to computational constraints (~3,200× slower per sample than ResNet-50; estimated >150 GPU-hours on H100).
All models improve monotonically with scale. The 1K→10K gain (10–12 percentage points) consistently exceeds the 10K→100K gain (~5 points), in line with the diminishing returns predicted by standard scaling laws. LLaVA-1.5's stronger language-grounded priors give it an advantage at small scale (76.8% at 1K vs. 66.5–70.8% for the others), but fine-tuning it at 100K scale is computationally prohibitive.
### Hierarchical T1 Scaling: Coarser Distinctions Are Easier
HUSIC's hierarchical structure means models can be evaluated at three levels of granularity. At 100K, models substantially exceed their 10-class accuracy when evaluated on coarser distinctions:
| Model | Tier | Spatial/Non-spatial Acc. | Exterior/Interior Acc. | 10-class Acc. |
|---|---|---|---|---|
| ResNet-50 | 1K | 88.7% | 86.7% | 66.5% |
| ResNet-50 | 10K | 92.5% | 92.3% | 78.1% |
| ResNet-50 | 100K | 93.9% | 95.0% | 83.5% |
| ResNet-152 | 100K | 94.2% | 94.7% | 83.5% |
| CLIP (FT) | 100K | 94.0% | 87.5% | 82.3% |
| LLaVA-1.5 (FT) | 10K | 91.9% | 85.4% | 81.2% |
At 100K, spatial vs. non-spatial binary accuracy reaches ~94% and exterior vs. interior reaches 95%, confirming that HUSIC captures semantically meaningful hierarchical structure. The gap between coarse (94–95%) and fine-grained (82–84%) accuracy highlights that the activation-level distinctions (e.g., with people vs. without people) remain the hardest sub-problems.
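A minimal sketch of scoring these coarse distinctions from fine-grained predictions follows. The groupings come from the HUSIC class table above; restricting the exterior/interior metric to classes 0–3 is an assumption made here (the paper defines the exact protocol).

```python
# A minimal sketch of deriving coarse HUSIC accuracies from 10-class predictions.
SPATIAL = {0, 1, 2, 3, 4, 5}   # spatially relevant HUSIC IDs (6-9 are non-spatial)
EXTERIOR = {0, 1}              # exterior urban spaces; {2, 3} are interior

def coarse_accuracy(preds, labels):
    # Binary spatial vs. non-spatial: correct if both sides fall in the same group.
    spatial_ok = [(p in SPATIAL) == (y in SPATIAL) for p, y in zip(preds, labels)]
    # Exterior vs. interior, restricted to ground-truth classes 0-3 (assumption).
    ei = [(p, y) for p, y in zip(preds, labels) if y in {0, 1, 2, 3}]
    ei_ok = [p in {0, 1, 2, 3} and (p in EXTERIOR) == (y in EXTERIOR) for p, y in ei]
    return sum(spatial_ok) / len(spatial_ok), sum(ei_ok) / max(len(ei_ok), 1)

# Toy example: 4 predictions vs. ground truth.
print(coarse_accuracy([0, 6, 2, 1], [1, 7, 3, 8]))  # → (0.75, 1.0)
```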
### T2-Post Retrieval Scaling
Post-level retrieval difficulty grows naturally as the candidate gallery expands. Fine-tuned CLIP's average R@1 drops from 39.5% on the 1K split (100-image pool) to 8.1% on the 10K split (1,000-image pool), confirming that T2-B is a scalably challenging benchmark.
| Model | 1K split — Avg. R@1 (%) | 1K split — Avg. mAP | 10K split — Avg. R@1 (%) | 10K split — Avg. mAP |
|---|---|---|---|---|
| CLIP (fine-tuned) | 39.5 | 0.501 | 8.1 | 0.132 |
| BLIP-2 (fine-tuned) | 28.1 | 0.392 | 5.0 | 0.094 |
| BLIP (fine-tuned) | 16.6 | 0.283 | 1.9 | 0.055 |
(Avg. = average of T2I and I2T directions; mAP as a fraction 0–1.)
## Data Collection and Construction Pipeline
Figure 7: Overview of the Urban-ImageNet dataset construction and annotation pipeline — from Weibo crawling through privacy processing, HUSIC annotation, and multi-task organisation.
Urban-ImageNet was constructed through a five-stage pipeline:
1. **Collection** — A Python-based web crawler systematically retrieved all public Weibo posts from location-specific hashtags at 61 major urban commercial sites across 24 Chinese cities, covering 2019–2025. Up to 9 image attachments, post text, and metadata were captured per post, yielding a raw corpus of over 4 TB and 2 million image–text pairs.
2. **Cleaning** — Four-stage deduplication and filtering: (i) near-duplicate removal via perceptual hashing (pHash, Hamming distance ≤ 8; see the sketch after this list); (ii) discarding of images smaller than 256×256 px; (iii) NSFW filtering via a pre-trained classifier; (iv) removal of systematically repeated commercial advertisement posts via post-text hash similarity.
3. **Privacy Protection** — Automated face detection, licence-plate recognition, and QR-code detection were applied to all images, with all detected regions blurred. Original usernames were stripped and replaced with opaque numerical identifiers. Images were resized to a maximum side length of 512 px. The raw 4 TB corpus is retained securely by the authors and will not be publicly released.
4. **HUSIC Annotation (T1 & T2)** — The 100K balanced benchmark set was manually annotated by three trained researchers following a standardised guideline. A shared 3,000-image double-annotation subset yielded Cohen's κ = 0.87 (near-perfect agreement). Disagreements were resolved by majority vote and guideline revision. The annotation process took approximately two years of sustained effort.
5. **Instance Segmentation (T3)** — Pseudo-labels were generated using Grounding DINO + SAM 2 with per-class vocabulary prompts, followed by NMS and area filtering. The evaluation subset was reviewed with stricter thresholds and human spot-checks.
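As referenced in the Cleaning stage, the pHash near-duplicate check can be sketched with the `imagehash` library; this is an illustrative reimplementation, not the authors' crawler code, and the input folder name is hypothetical.

```python
# A minimal sketch of pHash near-duplicate removal (Hamming distance <= 8).
from pathlib import Path

import imagehash
from PIL import Image

seen = []      # perceptual hashes of images kept so far
unique = []    # paths of images that survive deduplication
for path in sorted(Path("raw_images").glob("*.jpg")):  # hypothetical input folder
    h = imagehash.phash(Image.open(path))
    if any(h - prev <= 8 for prev in seen):            # h - prev = Hamming distance
        continue                                       # near-duplicate: drop it
    seen.append(h)
    unique.append(path)

print(f"Kept {len(unique)} of the scanned images")
```

This pairwise scan is quadratic; at 2M-image scale an indexed structure (e.g., a BK-tree over hashes) would be needed, but the thresholding logic is the same.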
## Geographic and Site Coverage
Urban-ImageNet covers 61 urban commercial sites across 24 Chinese cities spanning 8 macro-regions, including leading first-tier cities (Beijing, Shanghai, Chengdu, Guangzhou, Shenzhen), second-tier cities, and regional centres. Sites include both enclosed shopping malls and open-block mixed-use commercial precincts. The full site list and geographic distribution map are provided in the paper appendix.
## Privacy and Responsible Use
Urban-ImageNet is derived from public Weibo posts — posts whose visibility was explicitly set to "open to all" by the account holder at the time of collection. Although source posts were public, the released dataset applies multiple layers of privacy protection in line with the practice of large-scale street-level datasets (e.g., Google Street View):
| Protection Measure | Implementation |
|---|---|
| Username removal | All original Weibo usernames stripped; User ID is an opaque numerical pseudonym |
| Post identity | Post ID is an anonymised numerical identifier; no account URL or profile data is included |
| Face blurring | Automated face detection applied to all images; detected face regions blurred |
| Licence plate blurring | Automated licence-plate recognition; all plates blurred |
| QR code blurring | Automated QR-code detection; all QR codes blurred; supplemented by manual spot-checks |
| Image resolution | Released at ≤ 512 px long edge; original-resolution corpus (4 TB) not publicly released |
| Text retention | Post Text retains original Chinese to preserve linguistic authenticity for T2; contains no directly identifying information beyond what the original public post disclosed |
| Data minimisation | Only fields necessary for the three benchmark tasks are included in the release |
**Data-use agreement:** Researchers accessing Urban-ImageNet must agree to a data-use agreement restricting use to non-commercial academic research and prohibiting:

- Re-identification of individuals
- Facial recognition or biometric profiling
- Account or identity reconstruction
- Surveillance or social scoring
- Law-enforcement targeting
- Commercial profiling or demographic inference
**Research purpose:** Urban-ImageNet is designed to advance evidence-based urban design and planning through improved AI perception of public spaces — serving a clear public good. The authors will monitor dataset use and reserve the right to retract access in cases of misuse.
## Limitations and Known Biases
- Geographic bias: The corpus is entirely China-sourced and should not be treated as globally representative of urban commercial spaces.
- Platform bias: Weibo users are not representative of all city residents; the dataset over-represents younger, urban, mobile-connected demographics.
- Visual selection bias: Social media images over-represent photogenic, popular, and personally meaningful scenes; empty or mundane spaces are systematically underrepresented.
- Linguistic bias: Post text is original Chinese social-media language containing slang, emoji, hashtags, and frequently loose image–text coupling.
- Class imbalance in 2M corpus: The full corpus reflects natural posting frequencies and is significantly class-imbalanced; the balanced 1K/10K/100K tiers do not reflect natural class distributions.
- T3 pseudo-labels: Task 3 annotations are model-generated pseudo-labels (Grounding DINO + SAM 2), not exhaustive human pixel-level labels; users should account for this when training or evaluating segmentation models.
- Temporal scope: Posts span 2019–2025; urban commercial environments evolve over time and some sites may have changed significantly.
## Related Work
Urban-ImageNet is designed as a domain-specific complement to the following general-purpose benchmarks:
| Benchmark | Task Covered | Relation to Urban-ImageNet |
|---|---|---|
| Places365 | Scene classification | Urban-ImageNet provides theory-grounded, activation-aware sub-categories of Places365 classes |
| SUN Database | Scene classification | Complementary focus on commercial urban spaces with social context |
| MS-COCO Captions | Image–text retrieval | Urban-ImageNet provides authentic first-person social media narratives vs. COCO's objective third-person captions |
| Flickr30K | Image–text retrieval | Urban-ImageNet provides Chinese-language, domain-specific, multi-positive retrieval ground truth |
| MS-COCO Instance Seg. | Instance segmentation | Urban-ImageNet provides domain-specific commercial-space vocabulary (retail shelves, escalators, hotel beds, etc.) |
| Cityscapes | Semantic/instance segmentation | Urban-ImageNet focuses on commercial interior and mixed exterior spaces vs. Cityscapes' driving-scene focus |
## Citation
If you use Urban-ImageNet in your research, please cite our paper:
```bibtex
@article{ou2026urbanimagenet,
  title         = {Urban-ImageNet: A Large-Scale Multi-Modal Dataset and Evaluation Framework for Urban Space Perception},
  author        = {Ou, Yiwei and Cheung, Chung Ching and Ang, Jun Yang and Ren, Xiaobin and Sun, Ronggui and Gao, Guansong and Zhao, Kaiqi and Manfredini, Manfredo},
  journal       = {arXiv preprint arXiv:2605.09936},
  year          = {2026},
  eprint        = {2605.09936},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CV},
  url           = {https://arxiv.org/abs/2605.09936}
}
```
- **Paper:** [arXiv:2605.09936](https://arxiv.org/abs/2605.09936)
- **Dataset:** [huggingface.co/datasets/Yiwei-Ou/Urban-ImageNet](https://huggingface.co/datasets/Yiwei-Ou/Urban-ImageNet)
- **Benchmark code:** [github.com/yiasun/dataset-2](https://github.com/yiasun/dataset-2)
## License
The dataset is released under Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).
You are free to use, share, and adapt this dataset for non-commercial academic research, provided that you give appropriate credit and distribute any derivative works under the same license. Commercial use of any kind is prohibited.
See LICENSE for full terms.