leohocs committed
Commit 152ba83 · verified · 1 Parent(s): 9bbe692

Upload 5 files

Files changed (6)
  1. .gitattributes +1 -0
  2. README.md +154 -0
  3. actors.db +0 -0
  4. quickstart.ipynb +259 -0
  5. requirements.txt +2 -0
  6. scenarios.db +3 -0
.gitattributes CHANGED
@@ -77,3 +77,4 @@ bvhs_retarget/20231126_007_324.bvh filter=lfs diff=lfs merge=lfs -text
  bvhs_retarget/20231126_007_325.bvh filter=lfs diff=lfs merge=lfs -text
  bvhs_retarget/20231126_007_326.bvh filter=lfs diff=lfs merge=lfs -text
  bvhs_retarget/20231126_007_328.bvh filter=lfs diff=lfs merge=lfs -text
+ scenarios.db filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,154 @@
---
license: cc-by-nc-sa-4.0
language:
- en
pretty_name: "InterAct Dataset: Two-Person Multimodal"
tags:
- motion-capture
- motion-generation
- motion-models
- social-robotics
- computer-vision
size_categories:
- 1K<n<10K
---
# InterAct Dataset

InterAct is a multi-modal two-person interaction dataset for research in human motion, facial expressions, and speech. For details, please refer to [our webpage](https://hku-cg.github.io/interact/).

## Quick Start

A Quick Start Jupyter notebook is provided at `quickstart.ipynb`. It covers examples for:

1. Querying the scenario and actor databases
2. Finding actor pairs for a recording session
3. Loading performance data (BVH, face parameters, audio)
4. Loading both actors in a two-person interaction
5. Visualizing face blendshapes over time

## Repository Structure

### Database Files

#### `scenarios.db`
SQLite database containing scenario metadata with the following tables:

- **scenarios**: Contains scenario definitions
  - `id` (INTEGER): Scenario ID (used in filenames)
  - `relationship_id` (INTEGER): FK to relationships table
  - `primary_emotion_id` (INTEGER): FK to emotions table
  - `character_setup` (TEXT): Character context description
  - `scenario` (TEXT): Scenario description

- **relationships**: Relationship types between actors (e.g., "architect / contractor", "boss / subordinate")
  - `id` (INTEGER): Relationship ID
  - `name` (VARCHAR): Relationship description

- **emotions**: Primary emotion categories (e.g., "admiration", "anger", "amusement")
  - `id` (INTEGER): Emotion ID
  - `name` (VARCHAR): Emotion name
+
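The schema above can be exercised with Python's built-in `sqlite3` module. A minimal sketch against an in-memory copy of the schema (the inserted rows are invented placeholders, not records from the actual database; with the real file you would connect to `scenarios.db` instead):

```python
import sqlite3

# Sketch of querying the documented schema. The tables mirror the README;
# the inserted rows are made-up placeholders, not real dataset records.
# With the real file, use sqlite3.connect('scenarios.db') instead.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE relationships (id INTEGER PRIMARY KEY, name VARCHAR);
CREATE TABLE emotions (id INTEGER PRIMARY KEY, name VARCHAR);
CREATE TABLE scenarios (
    id INTEGER PRIMARY KEY,
    relationship_id INTEGER,
    primary_emotion_id INTEGER,
    character_setup TEXT,
    scenario TEXT
);
INSERT INTO relationships VALUES (1, 'boss / subordinate');
INSERT INTO emotions VALUES (1, 'anger');
INSERT INTO scenarios VALUES (51, 1, 1, 'An office.', 'A heated performance review.');
""")

# Resolve a scenario's relationship and emotion names via the FK columns.
rows = db.execute("""
    SELECT s.id, r.name, e.name, s.scenario
    FROM scenarios s
    JOIN relationships r ON s.relationship_id = r.id
    JOIN emotions e ON s.primary_emotion_id = e.id
    WHERE e.name = 'anger'
""").fetchall()
print(rows)
db.close()
```

The same join (against the real database) appears in `quickstart.ipynb`.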
#### `actors.db`
SQLite database containing actor and session information:

- **actors**: Actor metadata
  - `actor_id` (TEXT): Three-digit actor ID (e.g., "001", "002")
  - `gender` (TEXT): "male" or "female"

- **sessions**: Recording session information
  - `date` (TEXT): Session date in YYYYMMDD format
  - `male_id` (TEXT): Actor ID of the male participant
  - `female_id` (TEXT): Actor ID of the female participant

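A small sketch of the date-to-pair lookup the `sessions` table enables (the session row here is a made-up placeholder; with the real file you would connect to `actors.db` instead):

```python
import sqlite3

# Sketch of the sessions lookup described above. The schema mirrors the
# README; the session row is a made-up placeholder, not a real record.
# With the real file, use sqlite3.connect('actors.db') instead.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE actors (actor_id TEXT, gender TEXT);
CREATE TABLE sessions (date TEXT, male_id TEXT, female_id TEXT);
INSERT INTO sessions VALUES ('20231119', '001', '002');
""")

def get_actor_pair(db, date):
    """Return (male_id, female_id) for a recording date, or None."""
    return db.execute(
        "SELECT male_id, female_id FROM sessions WHERE date = ?", (date,)
    ).fetchone()

print(get_actor_pair(db, "20231119"))  # ('001', '002') for the placeholder row
```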
---

### Data Directories

Motion and facial data are provided here at **30 fps**. The performance data files follow this naming convention:
```
<date>_<actor_id>_<scenario_id>.<extension>
```
Example: `20231119_001_051.bvh` = recorded on 2023-11-19, actor 001, scenario 51

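A quick sketch of splitting a performance filename back into its parts, per the convention above (`parse_performance_name` is a hypothetical helper, not part of the dataset's scripts):

```python
import os

# Sketch: split a performance filename into its parts, per the
# <date>_<actor_id>_<scenario_id>.<extension> convention above.
# parse_performance_name is a hypothetical helper for illustration.
def parse_performance_name(path):
    stem, ext = os.path.splitext(os.path.basename(path))
    date, actor_id, scenario_id = stem.split("_")
    return {"date": date, "actor_id": actor_id,
            "scenario_id": scenario_id, "extension": ext.lstrip(".")}

info = parse_performance_name("bvhs/20231119_001_051.bvh")
print(info["date"], info["actor_id"], info["scenario_id"])  # 20231119 001 051
```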
#### `bvhs/`
BVH motion capture files of the performances.

#### `bvhs_retarget/`
Retargeted BVH files for use in `body_to_render.blend`.

#### `face_ict/`
Facial blendshape parameters in ICT-FaceKit format (shape: `(N, 55)`). Suitable for training models and rendering with `face_ict_to_render.blend`.

#### `face_arkit/`
Facial blendshape parameters in ARKit format (shape: `(N, 51)`). Used in `body_to_render.blend` for full-body visualization.

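As a sketch of what these per-performance arrays look like when loaded, using a random stand-in of the documented ICT shape instead of a real `.npy` file:

```python
import numpy as np

# Sketch of what a loaded face parameter array looks like. In practice you
# would call np.load('face_ict/<date>_<actor_id>_<scenario_id>.npy'); here a
# random stand-in of the documented shape (N frames x 55 ICT blendshapes).
face_ict = np.random.rand(300, 55).astype(np.float32)

n_frames = face_ict.shape[0]
duration_sec = n_frames / 30  # data is provided at 30 fps
print(face_ict.shape, f"{duration_sec:.1f} s")  # (300, 55) 10.0 s
```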
#### `face_ict_templates/`
Base mesh templates in ICT-FaceKit topology, named by actor ID (e.g., `001.obj`). Useful for training models.

#### `wav/`
Audio recordings from each actor in each performance.

#### `body_renders/`
Pre-rendered full-body visualizations (body + face + audio) as MP4 videos. These files use a different naming convention since they contain both actors:
```
<date>_<scenario_id>.mp4
```
Example: `20231119_051.mp4` = scenario 51 recorded on 2023-11-19

#### `lip_acc/`
An additional one-hour facial dataset recorded with attention to the accuracy of lip shapes and pronunciation. Only one actor (006) was captured in this dataset, and the `scenario_id` of these files corresponds to the order of the sentences in `lip_acc_sentences.txt`. Useful for fine-tuning.

---

### Scripts (`scripts/`)

#### Blender Files

- **`body_to_render.blend`**: Blender project for rendering full-body (face + body) visualizations. Contains pre-configured character rigs mapped to actor IDs. The "composite scene in dataset" script reads job files and composites both actors, combining BVH body motion from `bvhs_retarget/` with ARKit face blendshapes from `face_arkit/`. The "render all scenes" script renders MKV videos to `body_renders_noaudio/`.

- **`face_ict_to_render.blend`**: Blender project for rendering face-only visualizations using ICT-FaceKit topology. Contains pre-configured actor mesh scenes (`mesh-001`, `mesh-002`, etc.) and a "composite scenes and render" script that reads job files, loads blendshape animations from `face_ict/`, and renders 1080x1080 PNG sequences at 30 fps using EEVEE. Output goes to `face_renders_noaudio/`.

#### Conversion Scripts

- **`face_ict_to_arkit.py`**: Converts ICT-FaceKit blendshape parameters (55 blendshapes) to ARKit format (51 blendshapes). Merges certain blendshape pairs and removes unused indices.

- **`face_ict_to_vertices.py`**: Converts ICT blendshape parameters to vertex sequences using the blendshape basis matrix. Outputs per-frame vertex positions as numpy arrays with shape `(N, V*3)`, where coordinates are packed contiguously per vertex: `[v1x, v1y, v1z, v2x, v2y, v2z, ...]`.

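The packed `(N, V*3)` layout can be unpacked into per-frame `(V, 3)` vertex positions with a single reshape; a sketch using a random stand-in array:

```python
import numpy as np

# Sketch: unpacking the (N, V*3) layout described above into per-frame
# (V, 3) vertex positions. The array is a random stand-in for the output
# of face_ict_to_vertices.py; N and V are arbitrary illustrative sizes.
N, V = 4, 5
packed = np.random.rand(N, V * 3)   # rows: [v1x, v1y, v1z, v2x, v2y, v2z, ...]
vertices = packed.reshape(N, V, 3)  # axes: (frame, vertex, xyz)

# Row-major reshape keeps each vertex's coordinates contiguous:
assert np.allclose(vertices[0, 1], packed[0, 3:6])
print(vertices.shape)  # (4, 5, 3)
```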
#### Render Utilities

- **`render_add_audio.py`**: Combines rendered video with audio tracks. Supports both face renders (single actor) and body renders (mixed audio from both actors).

#### Data Files

- **`blendshape_ict.npy`**: ICT-FaceKit blendshape basis matrix used by `face_ict_to_vertices.py` to convert blendshape parameters to vertex offsets.

#### Job Files

We recommend using a job file and splitting the rendering into batches rather than rendering all scenarios in one go.

- **`example_body_render_job.txt`**: Example job file listing scenes to render in body format (`<date>_<scenario_id>`).
- **`example_face_render_job.txt`**: Example job file listing scenes to render in face format (`<date>_<actor_id>_<scenario_id>`).

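A minimal sketch of deriving body-render job lines from performance filenames (the filenames and the `job_lines` helper are illustrative; `example_body_render_job.txt` shows the real expected format):

```python
# Sketch: deriving body-render job lines (<date>_<scenario_id>) from
# performance filenames. The filenames and job_lines helper are
# illustrative; see example_body_render_job.txt for the expected format.
def job_lines(bvh_names):
    lines = set()
    for name in bvh_names:
        date, _actor, scenario = name.rsplit(".", 1)[0].split("_")
        lines.add(f"{date}_{scenario}")  # both actors collapse to one scene
    return sorted(lines)

print(job_lines(["20231119_001_051.bvh", "20231119_002_051.bvh"]))  # ['20231119_051']
```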
## Errata

- The face files for `20240126_006_034` are unavailable due to a conversion issue. When rendering this scene in `body_to_render.blend`, the female face blendshape animations are not applied.

## Acknowledgements

`body_to_render.blend` is based on the visualization Blender project kindly provided by the [BEAT dataset](https://pantomatrix.github.io/BEAT/) authors.

If you use InterAct as part of your research, please cite it as follows:

```bibtex
@article{ho2025interact,
  title={InterAct: A Large-Scale Dataset of Dynamic, Expressive and Interactive Activities between Two People in Daily Scenarios},
  author={Ho, Leo and Huang, Yinghao and Qin, Dafei and Shi, Mingyi and Tse, Wangpok and Liu, Wei and Yamagishi, Junichi and Komura, Taku},
  journal={Proceedings of the ACM on Computer Graphics and Interactive Techniques},
  volume={8},
  number={4},
  pages={1--27},
  year={2025},
  publisher={ACM New York, NY},
  doi={10.1145/3747871}
}
```
actors.db ADDED
Binary file (20.5 kB).
 
quickstart.ipynb ADDED
@@ -0,0 +1,259 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# InterAct Dataset - Quick Start\n",
"\n",
"This notebook demonstrates how to load and explore the InterAct dataset."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import sqlite3\n",
"import numpy as np\n",
"import os"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1. Loading the Databases\n",
"\n",
"The dataset includes two SQLite databases:\n",
"- `scenarios.db` - scenario metadata (relationships, emotions, descriptions)\n",
"- `actors.db` - actor info and recording sessions"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Connect to databases\n",
"scenarios_db = sqlite3.connect('scenarios.db')\n",
"actors_db = sqlite3.connect('actors.db')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# View available relationships\n",
"relationships = scenarios_db.execute('SELECT * FROM relationships').fetchall()\n",
"print(f\"Total relationships: {len(relationships)}\")\n",
"print(\"Sample relationships:\")\n",
"for r in relationships[:5]:\n",
"    print(f\"  {r[0]}: {r[1]}\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# View available emotions\n",
"emotions = scenarios_db.execute('SELECT * FROM emotions').fetchall()\n",
"print(f\"Total emotions: {len(emotions)}\")\n",
"print(\"Sample emotions:\")\n",
"for e in emotions[:5]:\n",
"    print(f\"  {e[0]}: {e[1]}\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Query scenarios by emotion (e.g., find all \"anger\" scenarios)\n",
"anger_scenarios = scenarios_db.execute('''\n",
"    SELECT s.id, r.name, e.name, s.scenario\n",
"    FROM scenarios s\n",
"    JOIN relationships r ON s.relationship_id = r.id\n",
"    JOIN emotions e ON s.primary_emotion_id = e.id\n",
"    WHERE e.name = 'anger'\n",
"    LIMIT 3\n",
"''').fetchall()\n",
"\n",
"print(\"Scenarios with 'anger' emotion:\")\n",
"for s in anger_scenarios:\n",
"    print(f\"\\nScenario {s[0]} ({s[1]} / {s[2]}):\")\n",
"    print(f\"  {s[3][:100]}...\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. Finding Actor Pairs\n",
"\n",
"Each recording session has one male and one female actor. The `sessions` table maps dates to actor pairs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# View all sessions\n",
"sessions = actors_db.execute('SELECT * FROM sessions').fetchall()\n",
"print(\"Recording sessions:\")\n",
"print(\"Date | Male | Female\")\n",
"print(\"-\" * 28)\n",
"for s in sessions:\n",
"    print(f\"{s[0]} | {s[1]} | {s[2]}\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# View all actors\n",
"actors = actors_db.execute('SELECT * FROM actors').fetchall()\n",
"print(\"Actors:\")\n",
"for a in actors:\n",
"    print(f\"  {a[0]}: {a[1]}\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def get_actor_pair(date):\n",
"    \"\"\"Get the male and female actor IDs for a given recording date.\"\"\"\n",
"    result = actors_db.execute(\n",
"        'SELECT male_id, female_id FROM sessions WHERE date = ?',\n",
"        (date,)\n",
"    ).fetchone()\n",
"    return result\n",
"\n",
"# Example: get actors for a specific date\n",
"date = '20231119'\n",
"male_id, female_id = get_actor_pair(date)\n",
"print(f\"Session {date}: male={male_id}, female={female_id}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3. Loading Performance Data\n",
"\n",
"Performance files follow the naming convention: `<date>_<actor_id>_<scenario_id>.<ext>`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Example performance\n",
"date = '20231119'\n",
"actor_id = '001'\n",
"scenario_id = '051'\n",
"\n",
"# File paths\n",
"bvh_path = f'bvhs/{date}_{actor_id}_{scenario_id}.bvh'\n",
"face_ict_path = f'face_ict/{date}_{actor_id}_{scenario_id}.npy'\n",
"face_arkit_path = f'face_arkit/{date}_{actor_id}_{scenario_id}.npy'\n",
"wav_path = f'wav/{date}_{actor_id}_{scenario_id}.wav'\n",
"\n",
"print(f\"BVH: {bvh_path}\")\n",
"print(f\"Face ICT: {face_ict_path}\")\n",
"print(f\"Face ARKit: {face_arkit_path}\")\n",
"print(f\"Audio: {wav_path}\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Load face blendshape parameters\n",
"face_ict = np.load(face_ict_path)\n",
"face_arkit = np.load(face_arkit_path)\n",
"\n",
"print(f\"Face ICT shape: {face_ict.shape}\")  # (N, 55) - N frames, 55 ICT blendshapes\n",
"print(f\"Face ARKit shape: {face_arkit.shape}\")  # (N, 51) - N frames, 51 ARKit blendshapes\n",
"\n",
"n_frames = face_ict.shape[0]\n",
"duration_sec = n_frames / 30  # 30 fps\n",
"print(f\"\\nFrames: {n_frames}\")\n",
"print(f\"Duration: {duration_sec:.1f} seconds\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": "## 4. Loading Both Actors in an Interaction\n\nFor two-person interaction research, load data from both actors in a scene."
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": "def load_interaction(date, scenario_id):\n    \"\"\"Load face and audio data for both actors in an interaction.\"\"\"\n    male_id, female_id = get_actor_pair(date)\n\n    data = {}\n    for actor_id, role in [(male_id, 'male'), (female_id, 'female')]:\n        prefix = f'{date}_{actor_id}_{scenario_id}'\n        data[role] = {\n            'actor_id': actor_id,\n            'face_ict': np.load(f'face_ict/{prefix}.npy'),\n            'face_arkit': np.load(f'face_arkit/{prefix}.npy'),\n            'wav_path': f'wav/{prefix}.wav',\n            'bvh_path': f'bvhs/{prefix}.bvh',\n        }\n\n    return data\n\n# Load an interaction\ninteraction = load_interaction('20231119', '051')\n\nprint(\"Male actor:\", interaction['male']['actor_id'])\nprint(f\"  Face shape: {interaction['male']['face_ict'].shape}\")\nprint(\"\\nFemale actor:\", interaction['female']['actor_id'])\nprint(f\"  Face shape: {interaction['female']['face_ict'].shape}\")"
},
{
"cell_type": "markdown",
"metadata": {},
"source": "## 5. Basic Visualization\n\nPlot face blendshape values over time."
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": "import matplotlib.pyplot as plt\n\n# Plot jawOpen blendshape over time\ntime = np.arange(n_frames) / 30  # Convert to seconds\n\nfig, ax = plt.subplots(figsize=(12, 3))\n\n# ARKit blendshape index (see body_to_render.blend for full list)\nax.plot(time, face_arkit[:, 24])  # 24 = jawOpen\nax.set_ylabel('jawOpen')\nax.set_ylim(0, 1)\nax.set_xlabel('Time (seconds)')\nax.set_title(f'Face Blendshape: {date}_{actor_id}_{scenario_id}')\n\nplt.tight_layout()\nplt.show()"
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": "# Compare jaw movement between both actors\nfig, ax = plt.subplots(figsize=(12, 4))\n\nn_frames = interaction['male']['face_arkit'].shape[0]\ntime = np.arange(n_frames) / 30\n\nax.plot(time, interaction['male']['face_arkit'][:, 24], label='Male jawOpen', alpha=0.7)\nax.plot(time, interaction['female']['face_arkit'][:, 24], label='Female jawOpen', alpha=0.7)\n\nax.set_xlabel('Time (seconds)')\nax.set_ylabel('jawOpen')\nax.legend()\nax.set_title('Jaw Movement Comparison - Two-Person Interaction')\nplt.tight_layout()\nplt.show()"
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Clean up\n",
"scenarios_db.close()\n",
"actors_db.close()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python",
"version": "3.10.0"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
requirements.txt ADDED
@@ -0,0 +1,2 @@
numpy
matplotlib
scenarios.db ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8c785bc85833127894f33b6c2b6b0855d4230a79cb2d4c099fddf21a6445fcd4
size 294912