How2Sign

A Large-scale Multimodal Dataset for Continuous American Sign Language


Publication


How2Sign: A Large-scale Multimodal Dataset for Continuous American Sign Language
Amanda Duarte, Shruti Palaskar, Lucas Ventura, Deepti Ghadiyaram, Kenneth DeHaan,
Florian Metze, Jordi Torres, and Xavier Giró-i-Nieto
CVPR, 2021
@inproceedings{Duarte_CVPR2021,
  author    = {Duarte, Amanda and Palaskar, Shruti and Ventura, Lucas and Ghadiyaram, Deepti and DeHaan, Kenneth and Metze, Florian and Torres, Jordi and Giro-i-Nieto, Xavier},
  title     = {How2Sign: A Large-scale Multimodal Dataset for Continuous American Sign Language},
  booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2021}
}

About


We introduce How2Sign, a multimodal and multiview continuous American Sign Language (ASL) dataset, consisting of a parallel corpus of more than 80 hours of sign language videos and a set of corresponding modalities including speech, English transcripts, and depth.
A three-hour subset was further recorded in the Panoptic Studio, enabling detailed 3D pose estimation.
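
To make the modality pairing concrete, below is a minimal sketch of how one clip and its parallel modalities could be represented in code. The folder layout, field names, and the load_sample helper are hypothetical illustrations (the official release format had not yet been published), not the dataset's actual API.

# Hypothetical sketch of a single How2Sign sample and its modalities.
# All paths, field names, and the folder layout below are assumptions
# made for illustration; they are not the official release format.
from dataclasses import dataclass
from pathlib import Path
from typing import Optional

@dataclass
class How2SignSample:
    video_path: Path                 # RGB sign language video clip
    transcript: str                  # aligned English transcript
    speech_path: Optional[Path]      # corresponding speech audio
    depth_path: Optional[Path]       # depth recording, when available
    keypoints_path: Optional[Path]   # 3D pose (Panoptic Studio subset only)

def load_sample(root: Path, clip_id: str) -> How2SignSample:
    """Assemble the parallel modalities for one clip from a hypothetical layout."""
    depth = root / "depth" / f"{clip_id}.mp4"
    keypoints = root / "keypoints3d" / f"{clip_id}.json"
    return How2SignSample(
        video_path=root / "videos" / f"{clip_id}.mp4",
        transcript=(root / "transcripts" / f"{clip_id}.txt").read_text().strip(),
        speech_path=root / "speech" / f"{clip_id}.wav",
        depth_path=depth if depth.exists() else None,
        keypoints_path=keypoints if keypoints.exists() else None,
    )

# Example usage: sample = load_sample(Path("How2Sign"), "clip_00001")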


The How2Sign Dataset


Samples of the data found in the How2Sign dataset.



Download

Dataset Release Timeline: The dataset is expected to be released in June of 2021.
If you would like to receive a notification when the data is released, please contact the authors at: amanda.duarte[AT]upc.edu.

Disclaimer

The How2Sign dataset was collected as a tool for research. It is worth noting, however, that the dataset may have unintended biases (including those of a societal, gender, or racial nature).


Contact

Contact email for any queries: amanda.duarte[AT]upc.edu.