Facial Expression Editing with Continuous Emotion Labels

2019 14th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2019), pages 1--8, doi: 10.1109/FG.2019.8756558 - May 2019
Recently, deep generative models have achieved impressive results in the field of automated facial expression editing. However, the approaches presented so far presume a discrete representation of human emotions and are therefore limited in the modelling of non-discrete emotional expressions. To overcome this limitation, we explore how continuous emotion representations can be used to control automated expression editing. We propose a deep generative model that can be used to manipulate facial expressions in facial images according to continuous two-dimensional emotion labels. One dimension represents an emotion's valence, the other represents its degree of arousal. We demonstrate the functionality of our model with a quantitative analysis using classifier networks as well as with a qualitative analysis.
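To illustrate the core idea of conditioning a generative model on a continuous two-dimensional emotion label, the sketch below shows a minimal encoder-decoder generator that receives a face image together with a (valence, arousal) vector. This is an illustrative assumption, not the paper's actual architecture: the class name `ConditionalGenerator`, the layer sizes, and the broadcast-and-concatenate conditioning scheme are all hypothetical choices made only to demonstrate continuous-label conditioning.

```python
# Minimal sketch (not the paper's architecture): an encoder-decoder generator
# conditioned on a continuous 2-D (valence, arousal) label. Layer sizes and
# names are illustrative assumptions.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, emotion_dim: int = 2):
        super().__init__()
        # Encoder: 64x64 RGB face crop -> 16x16 feature map
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16x16
        )
        # Decoder: feature map + broadcast emotion label -> edited 64x64 image
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128 + emotion_dim, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, image: torch.Tensor, emotion: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(image)
        # Broadcast the (valence, arousal) vector over the spatial dimensions
        # and concatenate it with the encoded image features.
        label_map = emotion[:, :, None, None].expand(-1, -1, *feats.shape[2:])
        return self.decoder(torch.cat([feats, label_map], dim=1))

# Usage: push a batch of face images towards high valence, moderate arousal.
gen = ConditionalGenerator()
images = torch.randn(4, 3, 64, 64)           # placeholder face crops
labels = torch.tensor([[0.8, 0.3]] * 4)      # (valence, arousal), e.g. in [-1, 1]
edited = gen(images, labels)                 # -> (4, 3, 64, 64)
```

Because the emotion label is continuous rather than a one-hot class vector, intermediate expressions can be requested simply by interpolating the (valence, arousal) input; in the actual model this conditioning would be trained adversarially together with reconstruction objectives.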


@InProceedings{LBSW19,
	 author = {Lindt, Alexandra and Barros, Pablo and Siqueira, Henrique and Wermter, Stefan},
	 title = {Facial Expression Editing with Continuous Emotion Labels},
	 booktitle = {2019 14th IEEE International Conference on Automatic Face \& Gesture Recognition (FG 2019)},
	 pages = {1--8},
	 year = {2019},
	 month = {May},
	 doi = {10.1109/FG.2019.8756558}
 }