Emotion-modulated attention improves expression recognition: A deep learning model

Neurocomputing, Volume 253, pages 104--114, Mar 2017, doi: 10.1016/j.neucom.2017.01.096 (Open Access)
Spatial attention in humans and animals involves the visual pathway and the superior colliculus, which integrate multimodal information. Recent research has shown that affective stimuli play an important role in attentional mechanisms, and behavioral studies show that attention to a given region of the visual field increases when affective stimuli are present. This work proposes a neurocomputational model that learns to attend to emotional expressions and to modulate emotion recognition. Our model consists of a deep architecture which implements convolutional neural networks to learn the location of emotional expressions in a cluttered scene. We performed a number of experiments for detecting regions of interest based on emotional stimuli, and show that the attention model improves emotion expression recognition when used as an emotional attention modulator. Finally, we analyze the internal representations of the learned neural filters and discuss their role in the performance of our model.
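The modulation idea described in the abstract — a learned attention map gating the recognition stream's features — can be illustrated with a minimal sketch. This is not the authors' architecture; the function names, kernel shapes, and the elementwise sigmoid gating are illustrative assumptions only, using plain NumPy for clarity.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 2-D valid cross-correlation, used here for both streams."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def emotion_modulated_features(image, feat_kernel, attn_kernel):
    """Hypothetical sketch: recognition features are gated elementwise
    by a learned spatial attention map (sigmoid-normalized)."""
    features = np.maximum(conv2d_valid(image, feat_kernel), 0.0)  # ReLU feature map
    attn_logits = conv2d_valid(image, attn_kernel)
    attention = 1.0 / (1.0 + np.exp(-attn_logits))                # values in (0, 1)
    return features * attention                                   # modulated features

# Usage with random data: a 3x3 kernel over an 8x8 input yields a 6x6 map.
rng = np.random.default_rng(0)
img = rng.normal(size=(8, 8))
out = emotion_modulated_features(img, rng.normal(size=(3, 3)), rng.normal(size=(3, 3)))
```

In the sketch, regions where the attention map is near zero are suppressed before recognition, mirroring the claim that attention to emotionally salient regions modulates expression recognition.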


@Article{BPWW17,
  author  = {Barros, Pablo and Parisi, German I. and Wermter, Stefan and Weber, Cornelius},
  title   = {Emotion-modulated attention improves expression recognition: A deep learning model},
  journal = {Neurocomputing},
  volume  = {253},
  pages   = {104--114},
  month   = mar,
  year    = {2017},
  doi     = {10.1016/j.neucom.2017.01.096}
}