---
license: cc-by-nc-4.0
configs:
- config_name: default
  data_files:
  - split: train
    path: viewer/train_metadata.csv.gz
  - split: test_seen
    path: viewer/test_seen_metadata.csv
  - split: test_unseen
    path: viewer/test_unseen_metadata.csv
---
# GestureHYDRA: Semantic Co-speech Gesture Synthesis via Hybrid Modality Diffusion Transformer and Cascaded-Synchronized Retrieval-Augmented Generation (ICCV 2025)

## Dataset description
- The proposed Streamer dataset consists of 281 anchors and 20,969 clips.
- The training set contains 19,051 clips from 269 anchors.
- The seen test set contains 920 clips whose anchor IDs appear in the training set.
- The unseen test set contains 998 clips whose anchor IDs never appear in the training set.
## Dataset structure

The dataset contains three folders: train, test_seen, and test_unseen. Let's take train as an example.
```
train/
├── audios/
│   └── {anchor_id}/
│       └── {video_name_md5}/
│           └── {start_time}_{end_time}.wav
└── gestures/
    └── {anchor_id}/
        └── {video_name_md5}/
            └── {start_time}_{end_time}.pkl
```
The audio data in the audios folder corresponds one-to-one with the human motion data in the gestures folder.
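Because the two folders mirror each other, a clip's audio path can be derived from its gesture path by swapping the top-level folder and the file extension. A minimal sketch (the helper name and the example paths are illustrative, not part of the dataset tooling):

```python
from pathlib import PurePosixPath

def gesture_to_audio_path(pkl_path: str) -> str:
    """Map a gesture .pkl path to its paired audio .wav path.

    Relies on the mirrored directory layout: only the top-level
    folder (gestures -> audios) and the extension differ.
    """
    parts = list(PurePosixPath(pkl_path).parts)
    parts[parts.index("gestures")] = "audios"
    return str(PurePosixPath(*parts).with_suffix(".wav"))

print(gesture_to_audio_path("train/gestures/anchor01/abcd1234/12.0_18.5.pkl"))
# train/audios/anchor01/abcd1234/12.0_18.5.wav
```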
For the Hugging Face Dataset Viewer, this repository also includes lightweight CSV manifests under `viewer/`. They index each clip in the zip archives so the viewer can render a table without scanning the full raw payload.
Data contained in each pkl file:
- width, height: the video width and height
- center: the center point of the video
- batch_size: the sequence length (bs in the shapes below)
- camera_transl: the displacement of the camera
- focal_length: the focal length of the camera in pixels
- body_pose_axis: (bs, 21x3)
- jaw_pose: (bs,3)
- betas: (1,10)
- global_orient: (bs,3)
- transl: (bs,3)
- left_hand_pose: (bs,15x3)
- right_hand_pose: (bs,15x3)
- leye_pose: (bs,3)
- reye_pose: (bs,3)
- pose_embedding: (bs,32)
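As a sanity check when loading a clip, the arrays can be validated against the shapes listed above. This is a sketch under the assumption that each field is a plain array in the unpickled dict; the `load_gesture` helper and the dummy payload are illustrative, not part of the dataset tooling.

```python
import pickle
import numpy as np

def load_gesture(path):
    # Load one clip's parameter dict from a .pkl file.
    with open(path, "rb") as f:
        return pickle.load(f)

def check_shapes(data):
    # Verify each motion field against the shapes documented in this card.
    bs = data["batch_size"]
    expected = {
        "body_pose_axis": (bs, 21 * 3),
        "jaw_pose": (bs, 3),
        "betas": (1, 10),
        "global_orient": (bs, 3),
        "transl": (bs, 3),
        "left_hand_pose": (bs, 15 * 3),
        "right_hand_pose": (bs, 15 * 3),
        "leye_pose": (bs, 3),
        "reye_pose": (bs, 3),
        "pose_embedding": (bs, 32),
    }
    for key, shape in expected.items():
        assert np.asarray(data[key]).shape == shape, f"bad shape for {key}"

# Dummy payload with batch_size=4, standing in for a real clip:
dummy = {"batch_size": 4}
dummy.update({k: np.zeros(s) for k, s in {
    "body_pose_axis": (4, 63), "jaw_pose": (4, 3), "betas": (1, 10),
    "global_orient": (4, 3), "transl": (4, 3), "left_hand_pose": (4, 45),
    "right_hand_pose": (4, 45), "leye_pose": (4, 3), "reye_pose": (4, 3),
    "pose_embedding": (4, 32)}.items()})
check_shapes(dummy)
print("all shapes ok")
```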