Recognizing Complex Mental States with Deep Hierarchical Features for Human-Robot Interaction
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),
pages 4065--4070,
doi: 10.1109/IROS.2015.7353951,
Sep 2015
The use of emotional states for Human-Robot Interaction (HRI) has attracted considerable attention in recent years. One of the most challenging tasks is recognizing spontaneously expressed emotions, especially in an HRI scenario. Every person expresses emotions differently, and the task is further complicated by interaction with different subjects, multimodal information, and varying environments. We propose a deep neural model that can deal with these characteristics and apply it to the recognition of complex mental states. Our system learns and extracts deep spatial and temporal features and uses them to classify emotions in video sequences. To evaluate the system, we use the CAM3D corpus, which consists of videos recorded from different subjects in different indoor environments. Each video shows the upper body of a subject expressing one of twelve complex mental states. Our system recognizes spontaneous complex mental states from different subjects and can therefore be used in such an HRI scenario.
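The abstract describes learning joint spatial and temporal features from short upper-body video clips and classifying them into twelve mental states. As a rough, minimal sketch of that general idea (not the authors' actual architecture), the following PyTorch snippet stacks 3D convolutions over a clip tensor; the layer sizes, clip length, frame resolution, and class names are assumptions chosen for illustration only.

# Illustrative sketch only: a small spatio-temporal classifier for short video
# clips. Shapes and hyperparameters are assumptions, not the paper's model.
import torch
import torch.nn as nn

class SpatioTemporalClassifier(nn.Module):
    def __init__(self, num_classes: int = 12):
        super().__init__()
        # 3D convolutions learn spatial (H, W) and temporal (T) features jointly.
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),
            nn.Conv3d(16, 32, kernel_size=(3, 3, 3), padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # collapse T, H, W into one feature vector
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, channels, frames, height, width)
        x = self.features(clip).flatten(1)
        return self.classifier(x)  # logits over the twelve mental states

# Example: one 16-frame RGB clip of a subject's upper body at 64x64 pixels.
model = SpatioTemporalClassifier()
logits = model(torch.randn(1, 3, 16, 64, 64))
print(logits.shape)  # torch.Size([1, 12])

In practice such a model would be trained on labeled clips (here, from the CAM3D corpus) with a standard cross-entropy loss; the sketch only shows how spatial and temporal structure can be captured in a single hierarchical feature extractor.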
@InProceedings{BW15,
  author    = {Barros, Pablo and Wermter, Stefan},
  title     = {Recognizing Complex Mental States with Deep Hierarchical Features for Human-Robot Interaction},
  booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems},
  pages     = {4065--4070},
  year      = {2015},
  month     = {Sep},
  doi       = {10.1109/IROS.2015.7353951},
}