MoCap recordings for the INGREDIBLE database were carried out with two professional actresses from the theatre company Derezo. The actresses were located in different rooms and could see each other only through their avatars. To make the database more widely usable, data were collected with two different MoCap suits and systems: Art-track and Moven. We need a MoCap database for two main reasons: first, it will be used to develop feature-analysis tools able to recognize users’ gestures; second, MoCap recordings are necessary to animate a virtual agent. We also recorded synchronised videos of the two actresses while they interacted, in order to annotate their movements and find cues of dynamic coupling.
The database contains two types of recordings: non-interactive and interactive. In the non-interactive recordings, the actresses did not interact, so no avatar was displayed in front of them. Instead, they were asked to perform a series of predefined gestures, repeating each one with variations along three dimensions: amplitude (narrow, medium, wide), speed (slow, medium, fast), and fluidity (staccato, medium, fluid). In the interactive recordings, the actresses communicated with each other through human-size avatars displayed on screens in front of them. They were introduced to this environment by being encouraged to interact freely for as long as they wished. These first recordings often provided very interesting and spontaneous data, but without a prescribed task the actresses rapidly grew bored. To add artistic, gestural, and expressive detail to the interactions (in line with the requirements of the project), we therefore defined two interaction situations: (1) imitation and (2) bodily emotional dialogue.
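The three variation dimensions, each with three levels, yield 27 conditions per gesture. A minimal sketch of how these conditions could be enumerated (the constant and function names are illustrative, not part of the database):

```python
from itertools import product

# Hypothetical enumeration of the non-interactive recording conditions:
# each predefined gesture is repeated under every combination of the
# three variation dimensions (3 x 3 x 3 = 27 variants per gesture).
AMPLITUDE = ("narrow", "medium", "wide")
SPEED = ("slow", "medium", "fast")
FLUIDITY = ("staccato", "medium", "fluid")

def variation_conditions():
    """Return every (amplitude, speed, fluidity) combination."""
    return list(product(AMPLITUDE, SPEED, FLUIDITY))

conditions = variation_conditions()
print(len(conditions))  # 27 conditions per gesture
```

Enumerating the conditions this way also gives a natural labelling scheme for the recordings, e.g. `wave_narrow_slow_staccato`.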
The resulting dataset consists of 114 recordings (see the table), 57 captured with each suit, totalling more than 150 hours of recording and 27 GB of data. The database stores the Art-track and Moven recordings converted to the .bvh format, as well as the raw Art-track data (.txt files) and the raw Moven data (.mvn files).
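Since the converted recordings are distributed as .bvh files, a few lines suffice to read basic timing information from them. A minimal sketch, assuming a standard BVH MOTION section (the sample header below is illustrative, not taken from the actual database):

```python
# Minimal sketch of reading the frame count and frame time from the
# MOTION section of a .bvh file. The sample text stands in for the
# contents of a real file; replace it with open(path).read() in practice.
SAMPLE_BVH_MOTION = """MOTION
Frames: 120
Frame Time: 0.008333
"""

def bvh_motion_info(text):
    """Return (frame_count, frame_time_seconds) parsed from BVH text."""
    frames, frame_time = None, None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("Frames:"):
            frames = int(line.split(":", 1)[1])
        elif line.startswith("Frame Time:"):
            frame_time = float(line.split(":", 1)[1])
    return frames, frame_time

n, dt = bvh_motion_info(SAMPLE_BVH_MOTION)
print(n, dt)
```

The frame time gives the capture rate directly (here 0.008333 s, i.e. roughly 120 Hz), which is useful when aligning the MoCap streams with the synchronised videos.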
The database has some limitations (e.g. only two participants, both female, both professional actresses), so recording additional, more diverse participants is part of our future work. At present, a team of psychologists is annotating the videos in order to extract cues of dynamic coupling.
The database will be available for download soon.