Replay to Remember: Continual Layer-Specific Fine-Tuning for German Speech Recognition

Artificial Neural Networks and Machine Learning – ICANN 2023, pages 489–500, doi: 10.1007/978-3-031-44195-0_40, Sep 2023, Open Access
While Automatic Speech Recognition (ASR) models have shown significant advances with the introduction of unsupervised or self-supervised training techniques, these improvements are still limited to a subset of languages and speakers. Transfer learning enables the adaptation of large-scale multilingual models not only to low-resource languages but also to more specific speaker groups. However, fine-tuning on data from new domains is usually accompanied by a decrease in performance on the original domain. Therefore, in our experiments, we examine how well the performance of large-scale ASR models can be approximated for smaller domains, with our own dataset of German Senior Voice Commands (SVC-de), and how much of the general speech recognition performance can be preserved by selectively freezing parts of the model during training. To further increase the robustness of the ASR model to vocabulary and speakers outside of the fine-tuned domain, we apply Experience Replay [20] for continual learning. By adding only a fraction of data from the original domain, we reach Word Error Rates (WERs) below 5% on the new domain, while stabilizing general speech recognition performance at acceptable WERs.
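To illustrate the two mechanisms the abstract combines, the following Python sketch shows one way to implement layer-specific freezing and Experience Replay via data mixing with the Hugging Face Transformers API. It is a minimal sketch, assuming a multilingual wav2vec 2.0 checkpoint as a stand-in; the checkpoint name, the number of trainable layers, and the replay fraction are illustrative assumptions, not the paper's exact configuration.

    """Sketch: layer-specific fine-tuning + Experience Replay by data mixing.
    Model choice, freezing split, and replay fraction are assumptions."""
    import random
    from transformers import Wav2Vec2ForCTC

    model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-xlsr-53-german")

    # Layer-specific freezing: freeze the whole model, then unfreeze only the
    # top transformer layers and the CTC output head. The paper compares
    # several such splits; this particular choice is illustrative.
    N_TRAINABLE_TOP_LAYERS = 4  # assumption, not the paper's value
    for param in model.parameters():
        param.requires_grad = False
    for layer in model.wav2vec2.encoder.layers[-N_TRAINABLE_TOP_LAYERS:]:
        for param in layer.parameters():
            param.requires_grad = True
    for param in model.lm_head.parameters():
        param.requires_grad = True

    def replay_mixture(new_domain, original_domain, replay_fraction=0.1):
        """Experience Replay via rehearsal: append a small random subset of
        original-domain utterances (e.g. general German speech) to the
        new-domain training set (e.g. SVC-de), then shuffle."""
        n_replay = min(int(replay_fraction * len(new_domain)), len(original_domain))
        mixed = list(new_domain) + random.sample(list(original_domain), n_replay)
        random.shuffle(mixed)
        return mixed

The resulting mixed dataset would then be passed to an ordinary fine-tuning loop; the replay samples act as a rehearsal signal that counteracts catastrophic forgetting on the original domain.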


@InProceedings{PW23,
  author    = {Pekarek-Rosin, Theresa and Wermter, Stefan},
  title     = {Replay to Remember: Continual Layer-Specific Fine-Tuning for German Speech Recognition},
  booktitle = {Artificial Neural Networks and Machine Learning -- ICANN 2023},
  pages     = {489--500},
  year      = {2023},
  month     = {Sep},
  publisher = {Springer},
  doi       = {10.1007/978-3-031-44195-0_40},
}