We introduce the Bystander Affect Detection (BAD) dataset – a collection of videos of bystanders reacting to failure videos. The dataset includes 2,452 human reactions to failure, collected in contexts that approximate “in-the-wild” data collection – including natural variance in webcam quality, lighting, and background.
Our video dataset may be requested for use in related research projects. Because the dataset contains facial video of our participants, requesters must present a research protocol or data use agreement that protects participants.
This project is part of a collaborative research effort between Cornell Tech (PI: Associate Professor Wendy Ju) and Accenture Labs.
Read our paper here: link.
Request access to the BAD dataset here: link.
The BAD Dataset covers:
We provide a preview of the dataset by sharing the individual reaction videos to one stimulus video. This sample contains 53 reactions to stimulus QID106, a video of a man playing guitar in which a part of the guitar breaks off. The full BAD dataset contains the reaction videos for all 46 stimulus videos.
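For researchers working with the preview or the full dataset, a common first step is grouping reaction files by the stimulus they respond to. The sketch below assumes filenames embed the stimulus ID (e.g. QID106); this naming scheme is an assumption for illustration, and the dataset's actual file layout may differ:

```python
import re
from collections import defaultdict

def group_reactions_by_stimulus(filenames):
    """Group reaction-video filenames by the stimulus ID they contain.

    Assumes each filename embeds an ID like 'QID106' (hypothetical
    naming scheme); files without an ID are skipped.
    """
    groups = defaultdict(list)
    for name in filenames:
        match = re.search(r"(QID\d+)", name)
        if match:
            groups[match.group(1)].append(name)
    return dict(groups)

# Example with hypothetical filenames:
sample = [
    "QID106_reaction_01.mp4",
    "QID106_reaction_02.mp4",
    "QID107_reaction_01.mp4",
]
groups = group_reactions_by_stimulus(sample)
print(sorted(groups))          # stimulus IDs present in the sample
print(len(groups["QID106"]))   # number of reactions to QID106
```

In the full dataset, the same grouping would yield 46 keys (one per stimulus video), with 53 entries under QID106.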
Download the dataset preview here: link.
We are currently creating the Robot Fail Database, a collection of robot failure videos, as a resource for human-robot interaction research. If you are an HRI researcher and think you may have materials to contribute, please follow this link to the submission form. Contributing researchers will receive access to the dataset, full credit for their videos, and an HRI Transparency Champion digital diploma.