Multimodal Target Speech Separation with Voice and Face References

INTERSPEECH 2020, arXiv:2005.08335 - Oct 2020, Open Access
Target speech separation refers to isolating target speech from a multi-speaker mixture signal by conditioning on auxiliary information about the target speaker. Unlike mainstream audio-visual approaches, which usually require a simultaneous visual stream as additional input, e.g. the corresponding lip movement sequence, our approach proposes the novel use of a single face profile of the target speaker to separate the expected clean speech. We exploit the fact that the image of a face contains information about the person's speech sound. Compared to a simultaneous visual sequence, a face image is easier to obtain, by pre-enrollment or from websites, which enables the system to generalize to devices without cameras. To this end, we incorporate face embeddings extracted from a pretrained face recognition model into the speech separation system, where they guide the prediction of a target speaker mask in the time-frequency domain. The experimental results show that a pre-enrolled face image benefits the separation of the expected speech signals. Additionally, face information is complementary to the voice reference, and we show that further improvement can be achieved when combining both face and voice embeddings.
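
The following is a minimal sketch of the conditioning idea the abstract describes: a separation network predicts a time-frequency mask for the target speaker, guided by a reference embedding (a face embedding, a voice embedding, or a fusion of both) appended to every frame of the mixture spectrogram. All layer sizes, names, and the fusion-by-concatenation choice are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class ReferenceConditionedMasker(nn.Module):
    """Predicts a T-F mask for the target speaker, conditioned on a
    reference embedding of that speaker (hypothetical architecture)."""

    def __init__(self, n_freq=257, ref_dim=512, hidden=256):
        super().__init__()
        # BLSTM over mixture spectrogram frames, each frame augmented
        # with the fixed reference embedding of the target speaker.
        self.blstm = nn.LSTM(n_freq + ref_dim, hidden, num_layers=2,
                             batch_first=True, bidirectional=True)
        self.mask_head = nn.Sequential(
            nn.Linear(2 * hidden, n_freq),
            nn.Sigmoid(),  # mask value in [0, 1] per T-F bin
        )

    def forward(self, mix_mag, ref_emb):
        # mix_mag: (batch, frames, n_freq) mixture magnitude spectrogram
        # ref_emb: (batch, ref_dim) face embedding, voice embedding,
        #          or a fusion (e.g. concatenation) of both
        T = mix_mag.size(1)
        ref = ref_emb.unsqueeze(1).expand(-1, T, -1)  # repeat per frame
        h, _ = self.blstm(torch.cat([mix_mag, ref], dim=-1))
        mask = self.mask_head(h)
        return mask * mix_mag  # estimated target magnitude spectrogram

# Usage: fuse a face embedding (e.g. from a pretrained face-recognition
# encoder) with a voice embedding (e.g. from a speaker-verification
# model) before conditioning; the embeddings below are random stand-ins.
face_emb = torch.randn(1, 256)
voice_emb = torch.randn(1, 256)
model = ReferenceConditionedMasker(ref_dim=512)
mixture = torch.randn(1, 100, 257).abs()  # 100 frames, 257 freq bins
target_est = model(mixture, torch.cat([face_emb, voice_emb], dim=-1))
```

Conditioning by concatenating the reference embedding to each frame is one common way to inject speaker identity into a mask estimator; the complementarity result in the abstract corresponds to passing the concatenated face-plus-voice embedding rather than either one alone.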


@InProceedings{QWW20,
   author = {Qu, Leyuan and Weber, Cornelius and Wermter, Stefan},
   title = {Multimodal Target Speech Separation with Voice and Face References},
   booktitle = {INTERSPEECH 2020},
   year = {2020},
   month = {Oct},
   publisher = {ISCA},
   note = {arXiv:2005.08335},
 }