---
license: cc-by-4.0
task_categories:
- video-classification
- text-generation
language:
- en
tags:
- human-activity-recognition
- multimodal
- sensor-data
- privacy-preserving
- IMU
- depth
- infrared
- thermal
- skeleton
- radar
- mmwave
- HAR
pretty_name: "CUHK-S"
size_categories:
- 100K<n<1M
viewer: false
---
# CUHK-S: A Privacy-Preserving Multimodal Dataset for Human Action Recognition
[Paper](https://www.arxiv.org/abs/2512.07136)
[Project Page](https://siyang-jiang.github.io/CUHK-X/)
## Dataset Description
CUHK-S is a **privacy-preserving subset** of the [CUHK-X](https://siyang-jiang.github.io/CUHK-X/) dataset, a large-scale multimodal benchmark for Human Action Recognition (HAR), Understanding (HAU), and Reasoning (HARn). CUHK-X was accepted at **MobiSys 2026**.
Compared to the full CUHK-X dataset, CUHK-S:
- **Removes all RGB video** to prevent facial identification
- **Downscales** all visual modalities to 320 × 240
- **Selects 18 out of 30** participants while preserving full action coverage (40 categories)
## Dataset Summary
| Attribute | Value |
|-------------------|------------------------------------------------|
| Participants | 18 (selected from 30 in CUHK-X) |
| Action Categories | 40 |
| Modalities | 6 (Depth, IR, Thermal, IMU, Radar, Skeleton) |
| Visual Resolution | 320 × 240 |
| Total Size | ~146 GB (18 zip files, one per participant) |
## Modalities
| Modality | Format | Description |
|------------|-------------|-------------------------------------------------|
| Depth | PNG (color) | Colorized depth maps from Vzense NYX 650 |
| IR | PNG | Infrared images, robust to lighting changes |
| Thermal | PNG | Heat signature from thermal camera |
| IMU        | CSV         | Five body-worn IMUs (accelerometer/gyroscope/magnetometer) |
| Radar      | CSV         | mmWave radar point cloud (TI radar)             |
| Skeleton | JSON/CSV | 3D joint positions from pose estimation |
> **Note**: RGB video is intentionally excluded from CUHK-S to protect participant privacy.
## Dataset Structure
Each participant's data is packaged as a zip file: `CUHK-S_userN-userN.zip`
```
CUHK-S/
├── HAR/ # Human Action Recognition task
│ └── data/
│ ├── Depth_Color/ # Colorized depth frames (.png)
│ ├── IR/ # Infrared frames (.png)
│ ├── Thermal/ # Thermal imaging frames (.png)
│ ├── Skeleton/ # Skeleton pose data
│ │ └── {action}/{user}/{session}/
│ │ ├── predictions/ # Keypoint JSON (.json) + overlay images (.jpg)
│ │ └── visualizations/
│ ├── IMU/ # IMU sensor data (CSV)
│ │ └── {action}/{user}/{session}/
│ │ ├── up(LA+RA+C).csv # Upper-body IMU (Left Arm + Right Arm + Chest)
│ │ └── down(LL+RL).csv # Lower-body IMU (Left Leg + Right Leg)
│ └── Radar/ # mmWave radar data (CSV)
│ └── {action}/{user}/{session}/
│ └── radar_output_T{timestamp}.csv
│
├── HAU/ # Human Action Understanding task
│ └── data/
│ ├── Depth/ # Visual modality clips as .mp4 video
│ ├── IR/
│ └── Thermal/
│ └── {user}/{session}/
│ └── {Modality}.mp4
│
├── HARn/ # Human Action next-step Reasoning task
│ └── data/
│ ├── Depth/ # Video clips as .mp4
│ └── IR/
│ └── {action}/{user}/{session}/
│           └── {Modality}.mp4
│
└── source_data/ # Raw source frames (with timestamps)
└── data/
├── Depth_Color/ # Timestamped raw frames (.png)
├── IR/
├── Thermal/
├── Skeleton/
├── IMU/
└── Radar/
└── {user}/{session}/
└── {Modality}_{timestamp}_{frameId}.png
```
**Path naming convention:**
| Level | Meaning | Example |
|-------|---------|---------|
| `{action}` | Action category with numeric prefix | `10_Stir_drinks` |
| `{user}` | Participant ID | `user1` |
| `{session}` | Scene–Environment–Trial index | `2-1-1` (Scene 2, Indoor, Trial 1) |
- **HAR**: Singular well-defined actions organized by action category, for traditional classification tasks
- **HAU**: Sequential action clips organized by user/session, for temporal and contextual understanding
- **HARn**: Sequential action clips organized by action/user/session, for next-action reasoning
- **source_data**: Original raw frames with full timestamps, before any task-level processing
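The naming convention above is regular enough to parse mechanically. A small sketch (field names are ours; the encoding follows the table above):

```python
def parse_session(action, user, session):
    """Split a CUHK-S path triple into its documented fields.

    `action` carries a numeric prefix (e.g. "10_Stir_drinks"), `user` a
    numeric ID ("user1"), and `session` encodes Scene-Environment-Trial
    (e.g. "2-1-1").
    """
    action_id, _, action_name = action.partition("_")
    scene, environment, trial = (int(x) for x in session.split("-"))
    return {
        "action_id": int(action_id),
        "action_name": action_name,
        "user_id": int(user.removeprefix("user")),
        "scene": scene,
        "environment": environment,
        "trial": trial,
    }
```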
## IMU Sensor Layout
Five IMU sensors are placed on the body:
| Sensor | Position | Channels (per sensor) |
|--------|------------|-------------------------------------------|
| WTLA | Left Arm | Acc(X/Y/Z), Gyro(X/Y/Z), Mag(X/Y/Z) |
| WTC | Chest | Acc(X/Y/Z), Gyro(X/Y/Z), Mag(X/Y/Z) |
| WTRA | Right Arm | Acc(X/Y/Z), Gyro(X/Y/Z), Mag(X/Y/Z) |
| WTRL | Right Leg | Acc(X/Y/Z), Gyro(X/Y/Z), Mag(X/Y/Z) |
| WTLL | Left Leg | Acc(X/Y/Z), Gyro(X/Y/Z), Mag(X/Y/Z) |
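With five sensors and nine channels each, a full IMU sample has 45 channels, split 27/18 across the upper-body and lower-body CSVs. A sketch of how the channels enumerate (the column-name scheme here is our assumption; check the CSV headers in your download for the exact labels):

```python
# File grouping follows the tree above: up(LA+RA+C).csv and down(LL+RL).csv
SENSORS_UP = ["WTLA", "WTRA", "WTC"]
SENSORS_DOWN = ["WTLL", "WTRL"]
MEASUREMENTS = ["Acc", "Gyro", "Mag"]
AXES = ["X", "Y", "Z"]

def channel_names(sensors):
    """Enumerate the 3 measurements x 3 axes = 9 channels per sensor."""
    return [f"{s}_{m}{a}" for s in sensors for m in MEASUREMENTS for a in AXES]

# 5 sensors x 9 channels = 45 channels in total
all_channels = channel_names(SENSORS_UP) + channel_names(SENSORS_DOWN)
```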
## Benchmarks & Tasks
| Task | Type | Metrics |
|-------------------------|-----------------|----------------------------------|
| Action Recognition | Classification | Accuracy, F1, Precision, Recall |
| Action Selection | Multiple Choice | Accuracy |
| Action Captioning | Generation | BLEU, METEOR |
| Emotion Analysis | Classification | Accuracy |
| Sequential Reordering | Ordering | Accuracy |
| Next Action Reasoning | Reasoning | Accuracy |
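For the classification-style tasks above, accuracy and macro-averaged F1 are the headline numbers. A dependency-free sketch of those two metrics (not the official evaluation code):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground-truth label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred):
    """F1 computed per label, then averaged uniformly over labels."""
    labels = set(y_true) | set(y_pred)
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```

In practice you would reach for `sklearn.metrics`, but the definitions above make the reported numbers unambiguous.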
## Citation
If you use CUHK-S in your research, please cite:
```bibtex
@inproceedings{jiang2026cuhkx,
title={CUHK-X: A Large-Scale Multimodal Dataset and Benchmark for Human Action Recognition, Understanding and Reasoning},
author={Jiang, Siyang and others},
booktitle={Proceedings of ACM MobiSys},
year={2026}
}
```
## Ethics & Privacy
We obtained approval from an Institutional Review Board (IRB) to conduct this study and collect data from human subjects.
**Privacy measures in CUHK-S:**
- No RGB video is included to prevent facial identification
- All visual modalities are downscaled to 320 × 240
- Participants are identified only by numeric IDs (e.g., user1, user2)
- No personally identifiable information is linked to individual records
- IMU, Radar, and Skeleton modalities do not capture visual appearance
## License
Code is released under the MIT License. The dataset is available for non-commercial research under a Data Use Agreement (DUA) and is not redistributable. Our derived annotations/splits are released under CC BY 4.0.
**Note**: This dataset is designed for research and educational purposes. Please ensure compliance with your institution's ethics guidelines when using human activity data.
## Contact
- **Email**: syjiang [AT] ie.cuhk.edu.hk
- **Project Page**: [https://siyang-jiang.github.io/CUHK-X/](https://siyang-jiang.github.io/CUHK-X/)
- **Lab**: [CUHK AIoT Lab](https://aiot.ie.cuhk.edu.hk)