Integrating Statistical Uncertainty into Neural Network-based Speech Enhancement

IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Singapore, pages 386-390, doi: 10.1109/ICASSP43922.2022.9747642 - May 2022
Speech enhancement in the time-frequency domain is often performed by estimating a multiplicative mask to extract clean speech. However, most neural network-based methods perform point estimation, i.e., their output consists of a single mask. In this paper, we study the benefits of modeling uncertainty in neural network-based speech enhancement. For this, our neural network is trained to map a noisy spectrogram to the Wiener filter and its associated variance, which quantifies uncertainty, based on the maximum a posteriori (MAP) inference of spectral coefficients. By estimating the distribution instead of a point estimate, one can model the uncertainty associated with each estimate. We further propose to use the estimated Wiener filter and its uncertainty to build an approximate MAP (A-MAP) estimator of spectral magnitudes, which in turn is combined with the MAP inference of spectral coefficients to form a hybrid loss function that jointly reinforces the estimation. Experimental results on different datasets show that the proposed method not only captures the uncertainty associated with the estimated filters, but also yields higher enhancement performance than comparable models that do not take uncertainty into account.
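The core idea of predicting a filter together with a variance can be sketched as a heteroscedastic Gaussian negative log-likelihood: the masked noisy spectrogram is treated as the mean of a Gaussian over the clean coefficients, and the predicted variance quantifies the uncertainty of that estimate. The snippet below is a minimal NumPy illustration of such a loss; the function name, toy data, and the exact parameterization (a log-variance output) are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def gaussian_nll(clean, mask, log_var, noisy):
    """Negative log-likelihood of the clean spectral magnitudes under a
    Gaussian whose mean is the masked noisy spectrogram (Wiener-style
    estimate) and whose predicted variance quantifies uncertainty.
    Parameterizing the variance as log_var keeps it positive."""
    estimate = mask * noisy                # filtered point estimate
    var = np.exp(log_var)                  # predicted variance per bin
    # Up to constants, -log N(clean; estimate, var):
    return np.mean(log_var + (clean - estimate) ** 2 / var)

# Toy spectrogram magnitudes (frequency bins x time frames).
rng = np.random.default_rng(0)
clean = rng.random((4, 8))
noisy = clean + 0.1 * rng.random((4, 8))

mask = np.clip(clean / noisy, 0.0, 1.0)   # oracle-like mask for the demo
log_var = np.full_like(clean, -4.0)       # network would predict this
loss = gaussian_nll(clean, mask, log_var, noisy)
```

A network trained with such a loss is penalized both for a poor filter (large residual) and for being overconfident (small variance where the residual is large), which is what lets the variance act as a calibrated uncertainty measure.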

 

@InProceedings{FPWG22a,
  author    = {Fang, Huajian and Peer, Tal and Wermter, Stefan and Gerkmann, Timo},
  title     = {Integrating Statistical Uncertainty into Neural Network-based Speech Enhancement},
  booktitle = {IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  address   = {Singapore},
  pages     = {386--390},
  year      = {2022},
  month     = {May},
  doi       = {10.1109/ICASSP43922.2022.9747642},
}