| landmarks (list, length 0–150) | label (string, length 1–16) |
|---|---|
| [[[0.49682939…, 0.24751250…, -0.03121474…], [0.46381783…, 0.20899578…, 0.02372199…], …], …] | abdomen |
| [[[0.49008250…, 0.25326803…, -0.03134879…], [0.44883200…, 0.20841419…, …], …], …] | able |
This dataset contains video samples from the WLASL (Word-Level American Sign Language) dataset, processed to extract pose, facial, and hand landmarks using MediaPipe and structured for use with the PoseFormer model.
The dataset is derived from the "Voxel51/WLASL" dataset available on the Hugging Face Hub. The original dataset consists of videos of individuals performing sign language gestures.
This processed version focuses on extracting and organizing skeletal and facial landmark data from these videos. Each sample in this dataset is represented by:
- `landmarks`: A NumPy array (converted to a list for the Hugging Face `datasets` library) containing the extracted landmarks. The structure of this array is designed to be compatible with PoseFormer, typically of shape `(T, J, 3)`, where `T` is the number of frames (capped at 150 per video), `J` is the total number of landmarks extracted (a combination of pose, face, and hand landmarks), and `3` represents the (x, y, z) coordinates of each landmark.
  - Face Landmarks: Specific key facial landmarks are extracted.
  - Pose Landmarks: Upper-body pose landmarks are included.
  - Hand Landmarks: Landmarks for both the left and right hands are captured.
- `label`: The corresponding sign language gesture label for each video sample. If a label is not available in the original dataset, it defaults to `"unknown"`.
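As a minimal sketch of how a sample's `landmarks` column maps back to the `(T, J, 3)` layout described above (the helper name and the toy sample are illustrative, not part of the dataset's API):

```python
import numpy as np

def to_poseformer_array(landmarks, max_frames=150):
    """Convert the nested `landmarks` list from a sample back into a
    (T, J, 3) float32 NumPy array, truncating to max_frames since the
    dataset caps T at 150 frames per video."""
    arr = np.asarray(landmarks, dtype=np.float32)  # shape (T, J, 3)
    return arr[:max_frames]

# Toy sample: 2 frames with 3 landmarks each. In the real data, J is the
# combined count of pose, face, and hand landmarks, and each triple is
# an (x, y, z) coordinate as produced by MediaPipe.
sample = [
    [[0.49, 0.24, -0.03], [0.46, 0.21, 0.02], [0.48, 0.21, 0.01]],
    [[0.50, 0.25, -0.03], [0.47, 0.22, 0.02], [0.49, 0.22, 0.01]],
]
arr = to_poseformer_array(sample)
print(arr.shape)  # (2, 3, 3)
```

Converting back to a NumPy array this way gives a tensor that can be batched and fed to a PoseFormer-style model after any per-model padding or normalization.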
This dataset is derived from the WLASL dataset, which is licensed under the Computational Use of Data Agreement (C-UDA). It is intended for academic and non-commercial computational use only.