A Deep Neural Model for Emotion-Driven Multimodal Attention

German I. Parisi, Pablo Barros, Haiyan Wu, Guochun Yang, Zhenghan Li, Xun Liu, Stefan Wermter
AAAI Spring Symposium Series, pages 482--485, Mar 2017
The ability of the brain to integrate multimodal information is crucial for providing a coherent perceptual experience, with perception being modulated by the interplay of different cortical and subcortical regions. Recent research has shown that affective stimuli play an important role in attentional mechanisms, with behavioral studies indicating that attention to a region of the visual field increases when affective stimuli are present. This work proposes a deep neural model that learns to locate and recognize emotional expressions modulated by an attentional mechanism. Our model consists of two hierarchies of convolutional neural networks: one learns the location of emotional expressions in a cluttered scene, and the other recognizes the type of emotion. We present experimental results on facial and body motion cues, showing that our model for emotion-driven attention improves the accuracy of emotion expression recognition.
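To make the two-stream architecture described in the abstract more concrete, the sketch below shows one possible reading in Python/PyTorch: a convolutional attention stream that estimates where an emotional expression is located, and a recognition stream whose input is modulated by that spatial map before classification. All layer sizes, the multiplicative modulation, and the number of emotion classes are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of emotion-driven attention:
# one CNN hierarchy predicts a spatial attention map, the other
# classifies the emotion from the attention-modulated input.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionStream(nn.Module):
    """Locates emotional expressions: outputs a 1-channel spatial map."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=5, padding=2)
        self.conv2 = nn.Conv2d(16, 1, kernel_size=5, padding=2)

    def forward(self, x):
        h = F.relu(self.conv1(x))
        return torch.sigmoid(self.conv2(h))  # attention weights in [0, 1]


class RecognitionStream(nn.Module):
    """Recognizes the type of emotion from attention-modulated input."""
    def __init__(self, num_classes=6):  # assumed number of emotion classes
        super().__init__()
        self.conv1 = nn.Conv2d(3, 32, kernel_size=5, padding=2)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=5, padding=2)
        self.fc = nn.Linear(64, num_classes)

    def forward(self, x, attention):
        x = x * attention                       # emotion-driven modulation
        h = F.max_pool2d(F.relu(self.conv1(x)), 2)
        h = F.max_pool2d(F.relu(self.conv2(h)), 2)
        h = h.mean(dim=(2, 3))                  # global average pooling
        return self.fc(h)


class EmotionAttentionModel(nn.Module):
    """Combines both streams: locate first, then recognize."""
    def __init__(self, num_classes=6):
        super().__init__()
        self.attention = AttentionStream()
        self.recognition = RecognitionStream(num_classes)

    def forward(self, x):
        att = self.attention(x)
        return self.recognition(x, att)


if __name__ == "__main__":
    model = EmotionAttentionModel()
    frame = torch.randn(1, 3, 96, 96)           # e.g. one RGB frame
    logits = model(frame)
    print(logits.shape)                         # torch.Size([1, 6])
```

In this reading, the attention map simply reweights the input before recognition; the paper's actual coupling between the two hierarchies and its use of body motion cues may differ.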


@InProceedings{PBWYLLW17,
  author    = {Parisi, German I. and Barros, Pablo and Wu, Haiyan and Yang, Guochun and Li, Zhenghan and Liu, Xun and Wermter, Stefan},
  title     = {A Deep Neural Model for Emotion-Driven Multimodal Attention},
  booktitle = {AAAI Spring Symposium Series},
  pages     = {482--485},
  year      = {2017},
  month     = {Mar}
}