---
license: cc-by-nc-sa-4.0
pretty_name: Urban-ImageNet
task_categories:
  - image-classification
  - image-to-text
  - text-to-image
  - zero-shot-image-classification
  - image-segmentation
modalities:
  - image
  - text
language:
  - zh
  - en
size_categories:
  - 1M<n<10M
tags:
  - urban-perception
  - social-media
  - weibo
  - image-text-retrieval
  - instance-segmentation
  - computational-urban-studies
  - urban-ai
  - chinese-cities
  - husic
  - cross-modal-retrieval
  - multi-modal
  - scene-classification
  - urban-space-perception
---


# 🏙️ Urban-ImageNet

**A Large-Scale Multi-Modal Dataset and Evaluation Framework for Urban Space Perception from Social Media Imagery.**

<p align="center">
  <a href="https://arxiv.org/abs/2605.09936"><img src="https://img.shields.io/badge/arXiv-2605.09936-b31b1b.svg" alt="arXiv"/></a>
  <a href="https://github.com/yiasun/dataset-2"><img src="https://img.shields.io/badge/GitHub-yiasun%2Fdataset--2-black?logo=github" alt="GitHub"/></a>
  <a href="https://huggingface.co/datasets/yiasun/urban-imagenet"><img src="https://img.shields.io/badge/🤗%20HuggingFace-Dataset-yellow" alt="HuggingFace"/></a>
</p>

> Urban-ImageNet fills a critical gap between computer vision and urban studies by treating cities not simply as visual scenes, but as lived, socially produced, and experientially activated spaces.

---


## Overview

**ImageNet** taught models to recognise objects. **Urban-ImageNet** teaches them to understand how people *experience* cities.

General-purpose benchmarks such as ImageNet and Places365 identify *what is in a scene*, but they were never designed to answer the question that matters in urban studies: *how do people inhabit, narrate, and socially activate urban space?* Urban-ImageNet is a domain-specific complement — a 2-million image–text benchmark drawn from real social media, organised by **HUSIC** (Hierarchical Urban Space Image Classification), a 10-class taxonomy grounded in the urban theories of Lefebvre, Gehl, and Newman, to capture the spatial, social, and functional distinctions that matter in urban research.

The corpus contains over **2 million** public Weibo image–text pairs collected from **61 urban commercial sites** across **24 Chinese cities** spanning **2019–2025**, with controlled benchmark subsets at 1K, 10K, and 100K scale, and a full 2M corpus for large-scale training. The benchmark supports three tasks within one standardised library (**Urban-ImageNet-lib**):

| #      | Task                                | Input → Output                        |
| ------ | ----------------------------------- | ------------------------------------- |
| **T1** | Urban scene semantic classification | Image → HUSIC label (0–9)             |
| **T2** | Cross-modal image–text retrieval    | Image ↔ Text (bidirectional)          |
| **T3** | Instance segmentation               | Image → Object masks + bounding boxes |

![Urban-ImageNet Framework Overview](Figures/01-Overall-Framework.jpg)
*Figure 1: The Urban-ImageNet framework — addressing current limitations in urban perception evaluation. The dataset bridges general-purpose vision benchmarks and domain-specific urban research needs through the HUSIC taxonomy and three unified benchmark tasks.*

---

## Dataset Variants

Four tiers are released to support model development and scaling-behaviour studies:

| Variant             | Total Images | Class Balance       | Images per Class | Predefined Split    | Storage (512 px) | Primary Use                           |
| ------------------- | -----------: | ------------------- | ---------------: | ------------------- | ---------------- | ------------------------------------- |
| **1K Dataset**      |        1,000 | ✅ Balanced          |              100 | train / val / test  | ~62 MB           | Quick tests, demos, debugging         |
| **10K Dataset**     |       10,000 | ✅ Balanced          |            1,000 | train / val / test  | ~620 MB          | Medium-scale experiments              |
| **100K Dataset**    |      100,000 | ✅ Balanced          |           10,000 | train / val / test  | ~6.15 GB         | **Main benchmark**                    |
| **Full Dataset-2M** |   2,000,000+ | ❌ Natural imbalance |           Varies | None — custom split | ~120 GB          | Large-scale training, scaling studies |

For all balanced tiers the train/val/test split ratio is **80:10:10**. All three tasks share **identical image files** across tiers; only labels and metadata files differ. The 2M corpus provides per-class image counts to support informed use under realistic class imbalance.

---

## File Structure

### Balanced Tiers (1K / 10K / 100K)

```
{Tier} Dataset/
├── 01 Images with labels/          ← Task 1: Scene Classification
│   ├── train/
│   │   ├── Exterior urban spaces with people/
│   │   │   └── *.jpg
│   │   ├── Exterior urban spaces without people/
│   │   │   └── *.jpg
│   │   ├── Food or drink items/
│   │   ├── Hotel or commercial lodging spaces/
│   │   ├── Human-centered portrait/
│   │   ├── Interior urban spaces with people/
│   │   ├── Interior urban spaces without people/
│   │   ├── Other non-spatial content/
│   │   ├── Private home interiors/
│   │   └── Retail products and merchandise/
│   ├── val/   (same structure)
│   └── test/  (same structure)

├── 02 Text-Image Pairs/            ← Task 2: Cross-Modal Retrieval
│   ├── train.xlsx
│   ├── val.xlsx
│   └── test.xlsx

└── 03 Instance Segmentation/       ← Task 3: Instance Segmentation
    ├── train.json
    ├── val.json
    ├── test.json
    └── Visualization of annotation samples/   (qualitative examples, optional)
        └── *.jpg
```

### Full Corpus (2M)

```
Full Dataset-2M/
├── Images/                         ← All 2M+ images (flat, no subfolders)
│   └── *.jpg
└── Labels/
    ├── 01 Semantic classification labels.CSV
    ├── 02 Text-Image Pairs.CSV
    └── 03 Instance Segmentation labels.json
```

> **Image format:** All released images are JPEG, privacy-protected, and resized to a maximum long edge of **512 px** (short edge scaled proportionally). Original usernames, faces, licence plates, and QR codes have been removed or blurred.

---

## The HUSIC Framework

<!-- Replace with actual HUSIC taxonomy figure -->
![HUSIC 10-Class Taxonomy](Figures/02-HUSIC-Framework.png)
*Figure 2: The HUSIC 10-class hierarchical taxonomy. Classes are organised into two primary groups (Spatially Relevant / Non-Spatially Relevant) and five secondary groups. Manual annotation by three trained researchers achieved Cohen's κ = 0.87.*

Raw location-tagged social media content is inherently heterogeneous. A user posting under a single hashtag such as `#Beijing Sanlitun` produces content spanning architectural photography, dining imagery, merchandise displays, selfies, hotel promotion, and noise. Without a principled framework, downstream spatial analyses are confounded by this heterogeneity. **HUSIC** resolves this by providing a theoretically grounded taxonomy that simultaneously serves as a UGC filtering pipeline and a 10-way classification benchmark.

### Theoretical Grounding

HUSIC class boundaries are defined by domain-expert concepts rather than data-driven frequency, drawing on three complementary bodies of urban theory:

- **Lefebvre's Production of Space** — The distinction between *conceived space* (design intent) and *lived space* (social appropriation through use) motivates the *with/without people* axis within each spatial group, a distinction absent from all existing vision benchmarks.
- **Gehl's Public Life Studies** — Gehl's finding that social activity is both an indicator and a self-reinforcing generator of successful public space justifies treating activated and non-activated spaces as analytically distinct categories.
- **Newman's Spatial Hierarchy** — Newman's defensible-space framework, which conceptualises urban environments along a public-to-private gradient, provides the basis for HUSIC's three-tier spatial hierarchy: publicly accessible spaces, transitional semi-public spaces, and privately controlled spaces.

### HUSIC Class Definitions

|   ID | Class Label                              | Primary Category       | Secondary Group       | Description                                                  |
| ---: | ---------------------------------------- | ---------------------- | --------------------- | ------------------------------------------------------------ |
|    0 | **Exterior urban spaces with people**    | Spatially Relevant     | Urban Exterior        | Populated plazas, active streetscapes, occupied public spaces with visible human presence |
|    1 | **Exterior urban spaces without people** | Spatially Relevant     | Urban Exterior        | Empty building facades, vacant streets, unpopulated plazas focusing on architectural features |
|    2 | **Interior urban spaces with people**    | Spatially Relevant     | Urban Public Interior | Active shopping areas, occupied commercial interiors, indoor events, occupied restaurants |
|    3 | **Interior urban spaces without people** | Spatially Relevant     | Urban Public Interior | Empty retail spaces, vacant corridors, interior design and spatial composition views |
|    4 | **Hotel or commercial lodging spaces**   | Spatially Relevant     | Accommodation         | Hotel rooms, serviced apartments, Airbnb-style lodging interiors |
|    5 | **Private home interiors**               | Spatially Relevant     | Accommodation         | Private residential interiors posted in association with nearby urban commercial sites |
|    6 | **Food or drink items**                  | Non-Spatially Relevant | Consumption           | Plated dishes, beverages, dining-table scenes, food presentations |
|    7 | **Retail products and merchandise**      | Non-Spatially Relevant | Consumption           | Fashion items, electronics, cosmetics, product displays, store-window arrangements |
|    8 | **Human-centered portrait**              | Non-Spatially Relevant | Social Portrait       | Selfies, group photos, portrait-dominant images with urban backgrounds |
|    9 | **Other non-spatial content**            | Non-Spatially Relevant | Miscellaneous         | Advertisements, screenshots, memes, maps, infographics, animal photos |

### Design Philosophy: A Foundation Framework, Not a Closed Taxonomy

HUSIC intentionally operates at a **10-class foundation level** rather than providing exhaustive fine-grained subcategories. This is a deliberate design choice: the taxonomy is meant to act as a foundation that diverse downstream urban studies can extend with their own finer-grained categories.

The primary purpose of these 10 classes is to serve as a **universal filtering and routing layer** for social media imagery: they cleanly partition the semantic space of urban UGC so that researchers can isolate the specific subset relevant to their question, discard noise, and then apply domain-specific analysis on a purified corpus. The downstream subcategorisation — which varies enormously across research questions — is intentionally left open for the research community to define.

Filtering to a single HUSIC class typically eliminates **up to 90% of irrelevant content** before any domain-specific analysis begins, dramatically improving the signal-to-noise ratio regardless of the downstream task. To further support deeper analysis within each class, HUSIC is complemented by **Task 3 instance segmentation**, which provides 12–20 class-specific object labels per HUSIC category. This gives researchers both a clean high-level routing layer and a set of object-level semantic anchors without requiring exhaustive fine-grained annotation at the dataset level.

Some examples of how individual HUSIC classes can seed specialised downstream research:

| HUSIC Class | Research Direction | Possible Next Step |
|-------------|-------------------|--------------------|
| ID 0 — Exterior *with people* | Human behaviour in public space | Pose estimation, action recognition, pedestrian counting; age/gender distribution; temporal activity mapping |
| ID 1 — Exterior *without people* | Architectural perception and design quality | Further classify by style (classical / modernist / parametric), space type (plaza / park / streetscape), or façade material |
| ID 2 — Interior *with people* | Commercial interior vitality | Crowd density, dwell behaviour, wayfinding; how spatial layout influences visitor flow |
| ID 3 — Interior *without people* | Retail design and aesthetic perception | Classify by interior style, lighting, or spatial configuration |
| ID 4 — Hotel / lodging | Urban tourism and short-term rental market | How commercial district proximity influences lodging aesthetics and pricing signals |
| ID 5 — Private home interiors | Urban housing and long-term rental market | How proximity to commercial hubs shapes residential presentation and rental marketing |
| ID 6 — Food or drink | F&B consumption trends | Classify by cuisine, price tier, format; UGC posting frequency as a revealed-preference signal of popularity |
| ID 7 — Retail products | Consumer behaviour and market analysis | Category trends (fashion / electronics / cosmetics); temporal trend tracking |
| ID 8 — Human-centered portrait | Social behaviour and place attachment | Which spatial settings motivate photographic self-documentation; social gathering pattern analysis |
| ID 9 — Other non-spatial | Noise filtering | Exclude from virtually all downstream spatial or commercial analyses |
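
As a minimal illustration of this filtering role, the sketch below isolates a single HUSIC class from the 2M corpus label file before any downstream analysis. The column names (`Image Label`, `Image Filename`) are assumed to mirror the Task 2 spreadsheet schema; adjust them to the actual CSV header if it differs.

```python
# Minimal filtering sketch: isolate one HUSIC class (ID 1, exterior spaces
# without people) from the 2M corpus before any downstream analysis.
# Column names are assumed to mirror the Task 2 schema.
import pandas as pd
from pathlib import Path

labels = pd.read_csv("Full Dataset-2M/Labels/01 Semantic classification labels.CSV")

target_class = "Exterior urban spaces without people"   # HUSIC ID 1
subset = labels[labels["Image Label"] == target_class]

# Resolve the retained rows to image paths in the flat Images/ folder.
image_dir = Path("Full Dataset-2M/Images")
image_paths = [image_dir / f"{stem}.jpg" for stem in subset["Image Filename"]]

print(f"{len(image_paths)} of {len(labels)} images retained for downstream analysis")
```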

### Rationale: Spatially Relevant Classes (IDs 0–5)

The spatially relevant classes cover the full spectrum of urban environments from public exterior, through public interior, to semi-public and private spaces. The **with/without people** split within exterior and interior classes is analytically essential:

- *With people* classes (IDs 0, 2) enable research on pedestrian behaviour, social activity patterns, human action recognition, and spatial vitality measurement. With pose estimation and action recognition techniques, these images can reveal how different user groups occupy public space and what kinds of activities different spatial configurations support.
- *Without people* classes (IDs 1, 3) isolate architectural and design elements independent of human activity — supporting aesthetic perception studies, visual quality assessment, and design feature analysis without the confound of human occlusion.

The accommodation classes (IDs 4–5) capture a phenomenon consistently observed in the corpus: users post hotel and private-home interiors alongside commercial-district imagery, reflecting how urban commercial centres influence both short-term (hotel / Airbnb) and long-term (residential) rental markets in their surroundings — a research direction not addressed by any existing benchmark.

### Rationale: Non-Spatially Relevant Classes (IDs 6–9)

Non-spatial classes capture the consumption and social dimensions of urban life that are equally present in social media posts about commercial districts, even when the images contain no spatial content.

- **ID 6 — Food or drink items:** UGC food images carry implicit preference signals — frequency of posting reflects popularity and memorability without requiring explicit ratings. Useful for F&B market research and restaurant analytics.
- **ID 7 — Retail products and merchandise:** Product images reveal consumer preferences and brand visibility. Because posts originate from people who visited specific commercial sites, the corpus carries an implicit spatial anchor useful for retail market analysis.
- **ID 8 — Human-centered portrait:** Self-portraits and group photos document the social occasions that spaces enable — revealing which settings activate gathering, self-expression, and place attachment. Analysing when and where people choose to photograph themselves is itself evidence of spatial vitality.
- **ID 9 — Other non-spatial content:** A residual noise class — advertisements, screenshots, memes, infographics. Minimal relevance for urban or commercial research; its isolation as a discrete class makes downstream filtering reliable and auditable.

---

## Task 1: Urban Scene Semantic Classification

**Goal:** Given an input image, predict its HUSIC label (class ID 0–9).

**File format:** ImageFolder-style hierarchy under `01 Images with labels/`. The subdirectory name is the ground-truth label, and integer labels 0–9 are assigned by lexicographic sort of the class directory names, so the layout is directly compatible with PyTorch `torchvision.datasets.ImageFolder`.

```python
from torchvision.datasets import ImageFolder
from torchvision import transforms

dataset = ImageFolder(
    root="100K Dataset/01 Images with labels/train",
    transform=transforms.Compose([
        transforms.Resize(224),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
    ])
)
# dataset.classes → ['Exterior urban spaces with people', 'Exterior urban spaces without people', ...]
# dataset.class_to_idx → {'Exterior urban spaces with people': 0, ...}
```

### T1 Baseline Results (100K benchmark, 80K/10K/10K split)

| Model                      | Top-1 Acc. (%) |  Macro-F1 |
| -------------------------- | -------------: | --------: |
| ResNet-18                  |           75.9 |     0.754 |
| ResNet-50                  |           79.7 |     0.799 |
| ResNet-152                 |           80.5 |     0.804 |
| ViT-B/16                   |           79.0 |     0.790 |
| DeiT-B                     |           80.3 |     0.802 |
| EfficientNet-B4            |       **84.9** | **0.849** |
| CLIP ViT-L/14 (zero-shot)  |           37.9 |     0.350 |
| CLIP ViT-L/14 (fine-tuned) |           69.1 |     0.675 |

> CLIP zero-shot performs poorly because HUSIC labels such as *activated exterior space* and *non-activated interior space* are not standard web image categories. Fine-tuning substantially improves CLIP, but it remains below supervised classifiers. The interior-without-people vs. interior-with-people boundary is the most challenging distinction across all models.
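
For reference, here is a minimal zero-shot sketch in the spirit of the CLIP baseline above, using Hugging Face `transformers`; the checkpoint, prompt template, and image path shown are illustrative assumptions and may differ from the paper's exact configuration.

```python
# Minimal zero-shot classification sketch with CLIP ViT-L/14.
# The prompt template and image path are illustrative assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

# Class names in lexicographic order, matching ImageFolder label indices 0-9.
husic_classes = [
    "Exterior urban spaces with people", "Exterior urban spaces without people",
    "Food or drink items", "Hotel or commercial lodging spaces",
    "Human-centered portrait", "Interior urban spaces with people",
    "Interior urban spaces without people", "Other non-spatial content",
    "Private home interiors", "Retail products and merchandise",
]
prompts = [f"a photo of {c.lower()}" for c in husic_classes]

image = Image.open("2668383_2020-01-21_0.jpg")
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(**inputs).logits_per_image          # shape: (1, 10)
pred = husic_classes[logits.softmax(dim=-1).argmax().item()]
print(pred)
```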

---

## Task 2: Cross-Modal Image–Text Retrieval

**Goal:** Given a text query, retrieve matching images (text-to-image), or given an image, retrieve matching text (image-to-text).

**File format:** Three Excel spreadsheets (`train.xlsx`, `val.xlsx`, `test.xlsx`) under `02 Text-Image Pairs/`. Each row describes one image and its associated Weibo post metadata.

### Metadata Schema

| Column                | Type    | Description                                                  | Task 2 Role                                            |
| --------------------- | ------- | ------------------------------------------------------------ | ------------------------------------------------------ |
| `Image Label`         | string  | HUSIC class label (e.g., `Exterior urban spaces with people`) | **T2-A query text** (category-level retrieval)         |
| `Image Filename`      | string  | Join key in `UserID_PostTime_Index` format                   | **Primary join key** linking spreadsheet to image file |
| `Post ID`             | integer | Anonymised numerical post identifier                         | Metadata                                               |
| `User ID`             | integer | Anonymised numerical user identifier (original username removed) | Metadata                                               |
| `Post Time`           | string  | Original post timestamp                                      | Metadata                                               |
| `Post Text`           | string  | Original Weibo post text (Chinese, unmodified)               | **T2-B query text** (post-level retrieval)             |
| `City`                | string  | City associated with the location tag                        | Metadata                                               |
| `Place Tag`           | string  | Location hashtag or commercial-site place tag                | Metadata                                               |
| `Posting Tool`        | string  | Client or posting-source string                              | Metadata                                               |
| `Mentioned Users`     | string  | Anonymised or empty mentioned-user field                     | Metadata                                               |
| `Extracted Topics`    | string  | Topic or hashtag terms extracted from post text              | Metadata                                               |
| `Extracted Locations` | string  | Location mentions extracted from post text                   | Metadata                                               |
| `Like Count`          | integer | Public engagement count at collection time                   | Metadata                                               |
| `Repost Count`        | integer | Public repost count at collection time                       | Metadata                                               |
| `Comment Count`       | integer | Public comment count at collection time                      | Metadata                                               |

### Image–Filename Join Key

Each image filename follows the pattern `{UserID}_{PostTime}_{Index}`, for example:

```
2668383_2020-01-21_0.jpg   →  User 2668383, post from 2020-01-21, first image (index 0)
2668383_2020-01-21_1.jpg   →  Same post, second image
2668383_2020-01-21_8.jpg   →  Same post, ninth (last) image
```

The `Image Filename` column in the spreadsheet (without the `.jpg` extension) directly matches the filename stem of the corresponding image in `01 Images with labels/`. This allows joining image files to their associated text metadata using a simple string match.
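
A minimal join sketch, assuming the 100K tier layout described above (reading the spreadsheets requires `pandas` with `openpyxl`):

```python
# Joining Task 2 metadata to Task 1 image files via the filename stem.
from pathlib import Path
import pandas as pd

pairs = pd.read_excel("100K Dataset/02 Text-Image Pairs/train.xlsx")

# Index every image in the Task 1 train split by its filename stem
# (e.g. "2668383_2020-01-21_0"), regardless of class subfolder.
image_root = Path("100K Dataset/01 Images with labels/train")
stem_to_path = {p.stem: p for p in image_root.rglob("*.jpg")}

pairs["image_path"] = pairs["Image Filename"].map(stem_to_path)
matched = pairs.dropna(subset=["image_path"])
print(f"Matched {len(matched)} / {len(pairs)} rows to image files")
```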

### Example Row (English translation for illustration only; released data retains original Chinese)

| Column         | Example Value                                                |
| -------------- | ------------------------------------------------------------ |
| Image Label    | `Exterior urban spaces with people`                          |
| Image Filename | `73811347_2023-09-14_2`                                      |
| Post ID        | `4945998754351775`                                           |
| User ID        | `73811347`                                                   |
| Post Time      | `2023-09-14 22:24`                                           |
| Post Text      | *(Original Chinese retained in release.)* *Illustrative English translation: "Dinner tonight — came to Sanlitun with a group-buying voucher for hot pot skewers. Only ¥58 and we were stuffed! Great atmosphere, good service, tasty food. #BeijingFood #BeijingSanlitun"* |
| City           | `Beijing`                                                    |
| Place Tag      | `#Beijing Sanlitun`                                         |
| Like Count     | `0`                                                          |

> **Note:** The released dataset retains **original Chinese text** in the `Post Text` column to preserve linguistic authenticity and avoid translation distortion, which is scientifically important for Task 2 evaluation. English text in this README is for illustrative purposes only.

### Two Retrieval Sub-Tasks

Urban-ImageNet supports two complementary retrieval configurations that reflect increasing real-world difficulty:

| Sub-task                 | Query Text Source                                            | Ground Truth                                          | Difficulty | Notes                                                        |
| ------------------------ | ------------------------------------------------------------ | ----------------------------------------------------- | ---------- | ------------------------------------------------------------ |
| **T2-A: Category-level** | `Image Label` column — HUSIC class name/definition (e.g., `Exterior urban spaces with people`) | All images sharing the same `Image Label`             | Moderate   | Structured semantic alignment; good for zero-shot transfer evaluation |
| **T2-B: Post-level**     | `Post Text` column — original Weibo post narrative           | All images attached to the same post (up to 9 images) | Hard       | Informal colloquial language; loose image–text coupling; multi-positive ground truth |

**Bidirectional use:** Both sub-tasks support either direction:

- **Image → Text**: given an image, retrieve its HUSIC label or matching post text.
- **Text → Image**: given a HUSIC label or post text, retrieve the matching image(s). For T2-B, one post may correspond to up to **9 images**, so evaluation must use a multi-positive retrieval protocol rather than assuming one-to-one caption–image correspondence.
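
Below is a minimal sketch of one possible multi-positive Recall@K protocol for text-to-image retrieval, in which a query post counts as a hit if any of its attached images appears in the top-K results; the paper's exact evaluation script may differ.

```python
# Multi-positive Recall@K for text-to-image retrieval (T2-B).
# `sim` is a (num_posts, num_images) similarity matrix from any encoder;
# `positives[i]` is the set of gallery indices of images attached to post i.
import numpy as np

def recall_at_k(sim: np.ndarray, positives: list[set[int]], k: int) -> float:
    top_k = np.argsort(-sim, axis=1)[:, :k]            # indices of best K images per post
    hits = [len(positives[i] & set(top_k[i])) > 0 for i in range(sim.shape[0])]
    return float(np.mean(hits))

# Toy example: 2 posts, 4 gallery images; post 0 has two attached images.
sim = np.array([[0.9, 0.2, 0.8, 0.1],
                [0.1, 0.7, 0.3, 0.6]])
positives = [{0, 2}, {3}]
print(recall_at_k(sim, positives, k=1))   # 0.5 — post 0 is a hit, post 1 is missed
```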

### T2 Baseline Results (10K test set)

| Setting                 | Model             |      R@1 |      R@5 |      R@10 |      mAP |    MedR |
| ----------------------- | ----------------- | -------: | -------: | --------: | -------: | ------: |
| **T2-A Category label** | CLIP (zero-shot)  |     54.2 |     96.5 |     100.0 |     53.3 |     1.5 |
|                         | CLIP (fine-tuned) |     92.7 |     99.8 |     100.0 |     90.7 |     1.0 |
|                         | BLIP (zero-shot)  |     14.9 |     43.6 |      80.0 |     19.8 |     6.2 |
|                         | BLIP (fine-tuned) | **94.2** | **99.8** | **100.0** | **93.3** | **1.0** |
| **T2-B Post text**      | CLIP (zero-shot)  |      2.6 |      5.4 |       7.0 |      4.5 |     328 |
|                         | CLIP (fine-tuned) |  **8.1** | **16.9** |  **23.5** | **13.2** |  **64** |
|                         | BLIP (zero-shot)  |      0.1 |      0.4 |       1.2 |      0.8 |     477 |
|                         | BLIP (fine-tuned) |      1.9 |      6.8 |      11.6 |      5.5 |      92 |
| **T2-B Post + label**   | CLIP (fine-tuned) |  **9.3** | **22.8** |  **32.3** | **17.0** |  **25** |

> Category-label retrieval is near-trivial after fine-tuning (≥92% R@1), confirming that HUSIC descriptions provide strong cross-modal signal. Post-text retrieval is substantially harder: Weibo posts are short informal narratives (median 32 characters) rather than image descriptions, and a single post may accompany images spanning multiple HUSIC classes. Against a random-chance baseline of ~0.1% R@1, fine-tuned CLIP achieves 8.1% (76× chance), establishing a concrete baseline for future urban-domain vision–language models.


![T2 Retrieval Results](Figures/03-Task2_Results.jpg)
*Figure 3: T2 retrieval results (avg. T2I + I2T). Category-label retrieval (left) is near-trivial after fine-tuning; post-text retrieval (right) remains genuinely challenging, establishing an important open problem for urban-domain vision–language research.*

---

## Task 3: Instance Segmentation

**Goal:** Detect and delineate urban-domain objects within each image using pixel-level instance masks.

**File format:** Three COCO-compatible JSON files (`train.json`, `val.json`, `test.json`) under `03 Instance Segmentation/`.

### Annotation JSON Structure

Each JSON file follows the COCO format with the following fields:

```json
{
  "info": {
    "description": "Urban-ImageNet Instance Segmentation Annotations",
    "split": "train",
    "version": "1.0"
  },
  "categories": [ {"id": 0, "name": "Exterior urban spaces with people"}, ... ],
  "images": [
    {
      "id": 0,
      "file_name": "2668383_2020-01-21_0.jpg",
      "width": 512,
      "height": 384,
      "classification_label": 0
    }, ...
  ],
  "annotations": [
    {
      "id": 0,
      "image_id": 0,
      "category_id": 0,
      "detected_label": "person",
      "detection_score": 0.8732,
      "bbox": [x, y, width, height],
      "area": 4512,
      "segmentation": { "counts": "...", "size": [384, 512] },
      "iscrowd": 0
    }, ...
  ]
}
```

**Extended fields beyond standard COCO:**

- `classification_label` (in `images`): the HUSIC class ID of the image — enables multi-task joint training and evaluation.
- `detected_label` (in `annotations`): the specific object term detected by Grounding DINO (e.g., `"person"`, `"retail shelf"`, `"escalator"`).
- `detection_score` (in `annotations`): Grounding DINO confidence score, enabling downstream threshold-based filtering.
- Segmentation masks are stored in **COCO RLE format** (run-length encoding), directly compatible with `pycocotools`.
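
A minimal loading sketch using `pycocotools`, showing the extended fields and RLE mask decoding; the 0.50 score threshold mirrors the evaluation-subset filtering described below and can be relaxed for the more permissive training pseudo-labels.

```python
# Loading Task 3 annotations with pycocotools and decoding RLE masks.
from pycocotools.coco import COCO

coco = COCO("100K Dataset/03 Instance Segmentation/val.json")

img_id = coco.getImgIds()[0]
img_info = coco.loadImgs(img_id)[0]
print(img_info["file_name"], "HUSIC class:", img_info["classification_label"])

anns = coco.loadAnns(coco.getAnnIds(imgIds=img_id))
for ann in anns:
    if ann["detection_score"] < 0.50:          # extended field beyond standard COCO
        continue
    mask = coco.annToMask(ann)                 # (H, W) binary mask decoded from RLE
    print(ann["detected_label"], ann["bbox"], int(mask.sum()), "mask pixels")
```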

### Annotation Pipeline

Annotations were generated using a two-stage automatic pipeline followed by human quality control:

1. **Grounding DINO** (text-prompted open-vocabulary object detection) identifies bounding boxes using class-specific vocabulary prompts.

2. **SAM 2** (Segment Anything Model 2) refines each detected box into a pixel-level instance mask.

3. **NMS** (Non-Maximum Suppression) removes overlapping detections.

4. **Area filtering** removes very small (noise) and very large (full-image) detections.

5. **Human review** was applied to the evaluation subset, with stricter confidence thresholds (≥0.50 detection score, ≥0.88 IoU), ensuring reliable ground truth for model comparison.

   Training pseudo-labels use more permissive thresholds (detection score ≥ 0.35, IoU ≥ 0.80). Users should account for the pseudo-label nature of the annotations when interpreting segmentation performance.

### Per-Class Segmentation Vocabulary

Each HUSIC class uses a tailored vocabulary of 12–20 object terms designed to capture the semantically appropriate instances for that scene type, maximising detection recall while minimising false positives.

|   ID | Class                                | Segmentation Object Terms                                    |
| ---: | ------------------------------------ | ------------------------------------------------------------ |
|    0 | Exterior urban spaces with people    | person · crowd · pedestrian · building façade · lawn · street lamp · glass curtain wall · sky · tree · shrub · fence · road · water · river · vehicle · sculpture · installation · pavement · street signage · fountain |
|    1 | Exterior urban spaces without people | building façade · glass curtain wall · wooden façade · tree · shrub · lawn · sky · pavement · road · water · river · lantern · sculpture · installation · street lamp · signage · fence · bridge · water feature · fountain |
|    2 | Interior urban spaces with people    | person · shopper · crowd · retail shelf · escalator · elevator · ceiling · floor tile · glass partition · display case · door · indoor plant · wall · window · handrail · column |
|    3 | Interior urban spaces without people | retail shelf · escalator · indoor corridor · ceiling · floor tile · marble floor · glass partition · display case · wall · column · indoor plant · elevator · door · window · lighting fixture · handrail |
|    4 | Hotel or commercial lodging spaces   | hotel bed · furniture · sofa · carpet · marble floor · tile floor · wooden floor · ceiling · bathroom · window · curtain · lamp |
|    5 | Private home interiors               | sofa · bed · dining table · floor · ceiling · kitchen · bookshelf · wardrobe · window · lamp · carpet · wall |
|    6 | Food or drink items                  | food dish · meal plate · dessert · beverage cup · coffee · drink bottle · bowl · chopsticks · spoon · dining table · person · restaurant interior |
|    7 | Retail products and merchandise      | fashion clothing · shoes · cosmetics · product package · merchandise · retail shelf · bag · jewelry · electronics · store window · mannequin · person |
|    8 | Human-centered portrait              | person · face · building façade · sky · tree · floor · food · animal · vehicle · indoor background |
|    9 | Other non-spatial content            | animal · person · vehicle · advertisement poster · text · QR code · screenshot · sculpture · meme · sky · plant · signage · graphic design · logo · map · infographic · chat record |

> **T3 scope note:** Instance segmentation masks are generated for all 10 HUSIC classes. The T3 evaluation benchmark adopts a **class-agnostic protocol** — treating all detected objects as a single `object` category — to produce conservative, architecture-comparable metrics uncorrupted by class-imbalanced pseudo-labels. Per-class AP results are available in the supplementary material of the paper.
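
One way to reproduce the class-agnostic protocol is to collapse all annotations into a single `object` category before running standard COCO evaluation; the sketch below illustrates the remapping and is not necessarily the paper's exact script.

```python
# Collapse per-class annotations into a single class-agnostic "object"
# category before standard COCO evaluation.
import json

with open("100K Dataset/03 Instance Segmentation/test.json") as f:
    coco_dict = json.load(f)

coco_dict["categories"] = [{"id": 1, "name": "object"}]
for ann in coco_dict["annotations"]:
    ann["category_id"] = 1

with open("test_class_agnostic.json", "w") as f:
    json.dump(coco_dict, f)
```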

### T3 Baseline Results (quality-filtered evaluation subset, confidence ≥ 0.50, IoU ≥ 0.88)

| Model                    |        AP |      AP₅₀ |      AP₇₅ |  mIoU |  FPS |
| ------------------------ | --------: | --------: | --------: | ----: | ---: |
| Mask R-CNN               |     0.267 |     0.472 |     0.276 | 0.629 | 15.4 |
| Cascade Mask R-CNN       |     0.290 |     0.495 |     0.299 | 0.635 | 12.7 |
| **Mask R-CNN + SAM**     | **0.373** |     0.563 | **0.378** |     — |   ~0 |
| Cascade Mask R-CNN + SAM |     0.369 | **0.531** |     0.380 |     — |   ~0 |
| GT-box SAM (oracle†)     |     0.749 |     0.924 |     0.805 |     — |    — |

> †GT-box SAM uses ground-truth bounding boxes as prompts — an oracle upper bound, not a trainable baseline. Adding SAM box-refinement to Mask R-CNN increases AP by ~40% relative (0.267 → 0.373), establishing a strong open-source baseline for future work. The gap between trainable models and the oracle (0.373 vs. 0.749) highlights substantial room for improvement in urban commercial-space instance segmentation.

<!-- Replace with actual T3 qualitative figure -->
![T3 Qualitative Segmentation Examples](Figures/04-Task3_Examples.jpg)
*Figure 4: Task 3 qualitative segmentation examples across HUSIC classes. Colour-coded instance masks from Mask R-CNN, Cascade Mask R-CNN, and Mask R-CNN+SAM. The domain-specific vocabulary enables detection of urban-specific objects (escalators, retail shelves, display cases, street lamps) not well-covered by general segmentation benchmarks.*

---

## Urban-ImageNet-lib

<!-- Replace with actual lib architecture figure -->
![Urban-ImageNet-lib Architecture](Figures/05-Benchmark.jpg)
*Figure 5: Urban-ImageNet-lib architecture — a unified benchmarking framework supporting all three tasks with standardised cross-dataset comparison adapters.*

**Urban-ImageNet-lib** is a Python benchmarking library providing:

- Modular data loaders for all three tasks and all four dataset tiers.
- Standard fine-tuning pipelines for T1 (classification), T2 (retrieval), and T3 (segmentation) baselines.
- Evaluation scripts with metrics matching established benchmarks (T1 ↔ Places365/SUN; T2 ↔ MS-COCO Captions/Flickr30K; T3 ↔ MS-COCO Instance Seg./Cityscapes).
- Cross-dataset adapters enabling direct performance comparison in a unified table.


See the [GitHub repository](https://github.com/yiasun/dataset-2) for full installation instructions and usage examples.

---

## Scaling Behaviour

Urban-ImageNet's four-tier design enables systematic study of how classification accuracy and computational cost scale with dataset size. All balanced tiers (1K / 10K / 100K) are strictly class-balanced so that performance differences across tiers are attributable to data quantity alone, without confounding from class imbalance. All models were trained separately on each tier and evaluated on a **shared held-out 10K test set**.

### T1 Scaling: Top-1 Accuracy and Macro-F1

| Model | 1K Acc. (%) | 1K F1 | 10K Acc. (%) | 10K F1 | 100K Acc. (%) | 100K F1 |
|-------|------------:|------:|-------------:|-------:|--------------:|--------:|
| ResNet-50 | 66.5 | 0.661 | 78.1 | 0.781 | 83.5 | 0.835 |
| ResNet-152 | 67.3 | 0.670 | 79.0 | 0.787 | 83.5 | 0.834 |
| CLIP (fine-tuned) | 70.8 | 0.708 | 78.0 | 0.780 | 82.3 | 0.822 |
| LLaVA-1.5 (fine-tuned) | 76.8 | 0.767 | 81.2 | 0.812 | — † | — † |

> † LLaVA-1.5 100K fine-tuning was not completed due to computational constraints (~3,200× slower per sample than ResNet-50; estimated >150 GPU-hours on H100).

All models improve monotonically with scale. The **1K→10K gain (10–12%) consistently exceeds the 10K→100K gain (5%)**, consistent with standard scaling laws. LLaVA-1.5's stronger language-grounded priors give it an advantage at small scales (76.8% at 1K vs. 66.5–70.8% for others) but it is computationally prohibitive at 100K.

### Hierarchical T1 Scaling: Coarser Distinctions Are Easier

HUSIC's hierarchical structure means models can be evaluated at three levels of granularity. At 100K, models substantially exceed their 10-class accuracy when evaluated on coarser distinctions:

| Model | Tier | Spatial/Non-spatial Acc. | Exterior/Interior Acc. | 10-class Acc. |
|-------|------|-------------------------:|-----------------------:|--------------:|
| ResNet-50 | 1K | 88.7% | 86.7% | 66.5% |
| ResNet-50 | 10K | 92.5% | 92.3% | 78.1% |
| ResNet-50 | 100K | 93.9% | **95.0%** | 83.5% |
| ResNet-152 | 100K | 94.2% | 94.7% | 83.5% |
| CLIP (FT) | 100K | 94.0% | 87.5% | 82.3% |
| LLaVA-1.5 (FT) | 10K | 91.9% | 85.4% | 81.2% |

At 100K, spatial vs. non-spatial binary accuracy reaches 94% and exterior vs. interior reaches 95%, confirming that HUSIC captures semantically meaningful hierarchical structure. The gap between coarse (94–95%) and fine-grained (83–85%) accuracy highlights that the activation-level distinctions (e.g., *with people* vs. *without people*) remain the hardest sub-problems.
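
Coarse-level accuracy can be derived from 10-class predictions by mapping each HUSIC ID to its parent group, as in the sketch below; restricting the exterior/interior metric to exterior/interior ground truth is an assumption about the evaluation protocol.

```python
# Deriving coarse hierarchical accuracy from 10-class predictions by mapping
# each HUSIC ID to its parent group (per the class-definition table above).
SPATIAL = {0, 1, 2, 3, 4, 5}                      # spatially relevant IDs
EXTERIOR, INTERIOR = {0, 1}, {2, 3}               # exterior/interior subsets

def spatial_vs_nonspatial_acc(preds, labels):
    return sum((p in SPATIAL) == (t in SPATIAL) for p, t in zip(preds, labels)) / len(labels)

def exterior_vs_interior_acc(preds, labels):
    # Evaluated only on images whose ground truth is an exterior/interior class.
    pairs = [(p, t) for p, t in zip(preds, labels) if t in EXTERIOR | INTERIOR]
    return sum((p in EXTERIOR) == (t in EXTERIOR) for p, t in pairs) / len(pairs)

preds, labels = [2, 6, 3, 9], [1, 6, 2, 8]        # toy example
print(spatial_vs_nonspatial_acc(preds, labels))   # 1.0 — all coarse groups correct
print(exterior_vs_interior_acc(preds, labels))    # 0.5 — one exterior/interior swap
```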

### T2-Post Retrieval Scaling

Post-level retrieval difficulty grows naturally as the candidate gallery expands. Fine-tuned CLIP's average R@1 drops from **39.5%** on the 1K split (100-image pool) to **8.1%** on the 10K split (1,000-image pool), confirming that T2-B is a scalably challenging benchmark.

| Model | 1K split — Avg. R@1 (%) | 1K split — Avg. mAP | 10K split — Avg. R@1 (%) | 10K split — Avg. mAP |
|-------|------------------------:|--------------------:|-------------------------:|---------------------:|
| CLIP (fine-tuned) | 39.5 | 0.501 | 8.1 | 0.132 |
| BLIP-2 (fine-tuned) | 28.1 | 0.392 | 5.0 | 0.094 |
| BLIP (fine-tuned) | 16.6 | 0.283 | 1.9 | 0.055 |

*(Avg. = average of T2I and I2T directions; mAP as a fraction 0–1.)*

---

## Data Collection and Construction Pipeline

<!-- Replace with actual pipeline figure -->
![Dataset Construction Pipeline](Figures/06-Data-Collection.png)
*Figure 6: Overview of the Urban-ImageNet dataset construction and annotation pipeline — from Weibo crawling through privacy processing, HUSIC annotation, and multi-task organisation.*

Urban-ImageNet was constructed through a five-stage pipeline:

1. **Collection** — A Python-based web crawler systematically retrieved all public Weibo posts from location-specific hashtags at 61 major urban commercial sites across 24 Chinese cities, covering 2019–2025. Up to 9 image attachments, post text, and metadata were captured per post, yielding a raw corpus of over **4 TB** and **2 million** image–text pairs.

2. **Cleaning** — Four-stage deduplication and filtering: (i) near-duplicate removal via perceptual hashing (pHash, Hamming distance ≤ 8); (ii) discard of images smaller than 256×256 px; (iii) NSFW filtering via pre-trained classifier; (iv) removal of systematically repeated commercial advertisement posts via post-text hash similarity. A minimal pHash deduplication sketch follows this list.

3. **Privacy Protection** — Automated face detection, licence-plate recognition, and QR-code detection were applied to all images with all detected regions blurred. Original usernames were stripped and replaced with opaque numerical identifiers. Images were resized to a maximum side length of **512 px**. The raw 4 TB corpus is retained securely by the authors and will not be publicly released.

4. **HUSIC Annotation (T1 & T2)** — The 100K balanced benchmark set was manually annotated by three trained researchers following a standardised guideline. A shared 3,000-image double-annotation subset yielded **Cohen's κ = 0.87** (near-perfect agreement). Disagreements were resolved by majority vote and guideline revision. The annotation process took approximately **two years** of sustained effort.

5. **Instance Segmentation (T3)** — Pseudo-labels were generated using Grounding DINO + SAM 2 with per-class vocabulary prompts, followed by NMS and area filtering. The evaluation subset was reviewed with stricter thresholds and human spot-checks.
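
Here is a minimal sketch of the stage-2 near-duplicate check using the `imagehash` library; the folder name and iteration order are illustrative, and the production crawler's implementation may differ.

```python
# Near-duplicate removal via perceptual hashing: two images whose pHash
# Hamming distance is <= 8 are treated as duplicates and one is dropped.
from pathlib import Path
from PIL import Image
import imagehash

kept_hashes: list[imagehash.ImageHash] = []
kept_paths: list[Path] = []

for path in sorted(Path("raw_images").glob("*.jpg")):    # hypothetical raw folder
    h = imagehash.phash(Image.open(path))
    if any(h - prev <= 8 for prev in kept_hashes):       # Hamming distance <= 8
        continue                                          # near-duplicate: skip
    kept_hashes.append(h)
    kept_paths.append(path)

print(f"Kept {len(kept_paths)} unique images")
```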

---

## Geographic and Site Coverage

Urban-ImageNet covers **61 urban commercial sites** across **24 Chinese cities** spanning **8 macro-regions**, including all four first-tier cities (Beijing, Shanghai, Guangzhou, Shenzhen), leading new first-tier cities (Chengdu, Hangzhou, Nanjing, Wuhan, Xi'an, Chongqing), and a range of second-tier regional centres. Sites span three spatial typologies: enclosed malls, open-air pedestrian precincts, and mixed-typology developments. The full site list is provided in the [paper appendix](https://arxiv.org/abs/2605.09936). 

<!-- Replace with your actual geographic distribution figure -->
![Geographic Distribution of Urban-ImageNet Collection Cities](Figures/07-Geo-distribution.png)
*Figure 7: Geographic distribution of Urban-ImageNet's 24 collection cities. Marker size is proportional to the number of collected image–text pairs per city; colour encodes macro-region.*

---

## Privacy and Responsible Use

Urban-ImageNet is derived from **public Weibo posts** — posts whose visibility was explicitly set to "open to all" by the account holder at the time of collection. Although source posts were public, the released dataset applies multiple layers of privacy protection in line with the practice of large-scale street-level datasets (e.g., Google Street View):

| Protection Measure     | Implementation                                               |
| ---------------------- | ------------------------------------------------------------ |
| Username removal       | All original Weibo usernames stripped; `User ID` is an opaque numerical pseudonym |
| Post identity          | `Post ID` is an anonymised numerical identifier; no account URL or profile data is included |
| Face blurring          | Automated face detection applied to all images; detected face regions blurred |
| Licence plate blurring | Automated licence-plate recognition; all plates blurred      |
| QR code blurring       | Automated QR-code detection; all QR codes blurred; supplemented by manual spot-checks |
| Image resolution       | Released at ≤ 512 px long edge; original-resolution corpus (4 TB) not publicly released |
| Text retention         | `Post Text` retains original Chinese to preserve linguistic authenticity for T2; contains no directly identifying information beyond what the original public post disclosed |
| Data minimisation      | Only fields necessary for the three benchmark tasks are included in the release |

**Data-use agreement:** Researchers accessing Urban-ImageNet must agree to a data-use agreement restricting use to **non-commercial academic research** and prohibiting:

- Re-identification of individuals

- Facial recognition or biometric profiling

- Account or identity reconstruction

- Surveillance or social scoring

- Law-enforcement targeting

- Commercial profiling or demographic inference

**Research purpose:** Urban-ImageNet is designed to advance evidence-based urban design and planning through improved AI perception of public spaces — serving a clear public good. The authors will monitor dataset use and reserve the right to retract access in cases of misuse.

---

## Limitations and Known Biases

- **Geographic bias:** The corpus is entirely China-sourced and should not be treated as globally representative of urban commercial spaces.
- **Platform bias:** Weibo users are not representative of all city residents; the dataset over-represents younger, urban, mobile-connected demographics.
- **Visual selection bias:** Social media images over-represent photogenic, popular, and personally meaningful scenes; empty or mundane spaces are systematically underrepresented.
- **Linguistic bias:** Post text is original Chinese social-media language containing slang, emoji, hashtags, and frequently loose image–text coupling.
- **Class imbalance in 2M corpus:** The full corpus reflects natural posting frequencies and is significantly class-imbalanced; the balanced 1K/10K/100K tiers do not reflect natural class distributions.
- **T3 pseudo-labels:** Task 3 annotations are model-generated pseudo-labels (Grounding DINO + SAM 2), not exhaustive human pixel-level labels; users should account for this when training or evaluating segmentation models.
- **Temporal scope:** Posts span 2019–2025; urban commercial environments evolve over time and some sites may have changed significantly.

---

## Related Work

Urban-ImageNet is designed as a **domain-specific complement** to the following general-purpose benchmarks:

| Benchmark                                                    | Task Covered                   | Relation to Urban-ImageNet                                   |
| ------------------------------------------------------------ | ------------------------------ | ------------------------------------------------------------ |
| [Places365](http://places2.csail.mit.edu/)                   | Scene classification           | Urban-ImageNet provides theory-grounded, activation-aware sub-categories of Places365 classes |
| [SUN Database](https://3dvision.princeton.edu/projects/2010/SUN/) | Scene classification           | Complementary focus on commercial urban spaces with social context |
| [MS-COCO Captions](https://cocodataset.org/)                 | Image–text retrieval           | Urban-ImageNet provides authentic first-person social media narratives vs. COCO's objective third-person captions |
| [Flickr30K](http://shannon.cs.illinois.edu/DenotationGraph/) | Image–text retrieval           | Urban-ImageNet provides Chinese-language, domain-specific, multi-positive retrieval ground truth |
| [MS-COCO Instance Seg.](https://cocodataset.org/)            | Instance segmentation          | Urban-ImageNet provides domain-specific commercial-space vocabulary (retail shelves, escalators, hotel beds, etc.) |
| [Cityscapes](https://www.cityscapes-dataset.com/)            | Semantic/instance segmentation | Urban-ImageNet focuses on commercial interior and mixed exterior spaces vs. Cityscapes' driving-scene focus |

---


## Citation

If you use Urban-ImageNet in your research, please cite our paper:

```bibtex
@article{ou2026urbanimagenet,
  title   = {Urban-ImageNet: A Large-Scale Multi-Modal Dataset and Evaluation Framework for Urban Space Perception},
  author  = {Ou, Yiwei and Cheung, Chung Ching and Ang, Jun Yang and Ren, Xiaobin and Sun, Ronggui and Gao, Guansong and Zhao, Kaiqi and Manfredini, Manfredo},
  journal = {arXiv preprint arXiv:2605.09936},
  year    = {2026},
  eprint  = {2605.09936},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CV},
  url     = {https://arxiv.org/abs/2605.09936}
}
```

**Paper:** [arXiv:2605.09936](https://arxiv.org/abs/2605.09936)  
**Dataset:** [huggingface.co/datasets/Yiwei-Ou/Urban-ImageNet](https://huggingface.co/datasets/Yiwei-Ou/Urban-ImageNet)  
**Benchmark code:** [github.com/yiasun/dataset-2](https://github.com/yiasun/dataset-2)

---

## License

The dataset is released under **Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)**.

You are free to use, share, and adapt this dataset for **non-commercial academic research**, provided that you give appropriate credit and distribute any derivative works under the same license. Commercial use of any kind is prohibited.

See [LICENSE](https://creativecommons.org/licenses/by-nc-sa/4.0/) for full terms.