Variational autoencoder for speech enhancement with a noise-aware encoder

IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Jun 2021. doi: 10.1109/ICASSP39728.2021.9414060
Recently, a generative variational autoencoder (VAE) has been proposed for speech enhancement to model speech statistics. However, this approach uses only clean speech in the training phase, making the estimation particularly sensitive to the presence of noise, especially at low signal-to-noise ratios (SNRs). To increase the robustness of the VAE, we propose to include noise information in the training phase by using a noise-aware encoder trained on noisy-clean speech pairs. We evaluate our approach on real recordings of different noisy environments and acoustic conditions using two different noise datasets. We show that our proposed noise-aware VAE outperforms the standard VAE in terms of overall distortion without increasing the number of model parameters. At the same time, we demonstrate that our model generalizes to unseen noise conditions better than a supervised feedforward deep neural network (DNN). Furthermore, we demonstrate that the model's performance is robust to a reduction in the size of the noisy-clean speech training data.
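The sketch below is a minimal, hypothetical illustration (not the authors' code) of the idea described in the abstract: a VAE whose encoder receives noisy log-power spectra while the decoder models the clean speech variance, trained on noisy-clean pairs. All layer sizes, names, and the Itakura-Saito-style reconstruction term are assumptions chosen to keep the example short and self-contained.

# Minimal sketch of a noise-aware VAE for spectral speech enhancement (PyTorch).
# Assumption: inputs are per-frame log-power spectra with n_freq bins.
import torch
import torch.nn as nn


class NoiseAwareVAE(nn.Module):
    def __init__(self, n_freq=257, n_latent=16, n_hidden=128):
        super().__init__()
        # Encoder q(z | noisy speech): maps noisy log-power spectra to latent statistics.
        self.enc = nn.Sequential(nn.Linear(n_freq, n_hidden), nn.Tanh())
        self.enc_mu = nn.Linear(n_hidden, n_latent)
        self.enc_logvar = nn.Linear(n_hidden, n_latent)
        # Decoder p(clean speech | z): outputs the log-variance of the clean speech spectra.
        self.dec = nn.Sequential(nn.Linear(n_latent, n_hidden), nn.Tanh())
        self.dec_logvar = nn.Linear(n_hidden, n_freq)

    def forward(self, noisy_logpow):
        h = self.enc(noisy_logpow)
        mu, logvar = self.enc_mu(h), self.enc_logvar(h)
        # Reparameterization trick.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        clean_logvar = self.dec_logvar(self.dec(z))
        return clean_logvar, mu, logvar


def elbo_loss(clean_pow, clean_logvar, mu, logvar):
    # Reconstruction term for a zero-mean Gaussian speech model with
    # variance exp(clean_logvar) (Itakura-Saito form, up to constants),
    # plus the standard KL divergence to a unit Gaussian prior.
    rec = torch.sum(clean_pow / torch.exp(clean_logvar) + clean_logvar)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl


# Hypothetical usage on one batch of paired (noisy, clean) power-spectrogram frames.
model = NoiseAwareVAE()
noisy_pow, clean_pow = torch.rand(8, 257), torch.rand(8, 257)
clean_logvar, mu, logvar = model(torch.log(noisy_pow + 1e-8))
loss = elbo_loss(clean_pow, clean_logvar, mu, logvar)
loss.backward()

In contrast, the standard VAE baseline would feed the clean log-power spectra to the same encoder; the noise-aware variant changes only the encoder input, which is consistent with the abstract's statement that the number of model parameters does not increase.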

 

@InProceedings{FCWG21,
  author    = {Fang, Huajian and Carbajal, Guillaume and Wermter, Stefan and Gerkmann, Timo},
  title     = {Variational autoencoder for speech enhancement with a noise-aware encoder},
  booktitle = {IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year      = {2021},
  month     = {Jun},
  doi       = {10.1109/ICASSP39728.2021.9414060},
}