Emphasizing unseen words: New vocabulary acquisition for end-to-end speech recognition
Neural Networks,
Volume 161,
pages 494--504,
doi: 10.1016/j.neunet.2023.01.027
- Feb 2023
Due to the dynamic nature of human language, automatic speech recognition (ASR) systems need to continuously acquire new vocabulary. Out-of-vocabulary (OOV) words, such as trending words and new named entities, pose problems for modern ASR systems, which require long training times to adapt their large numbers of parameters. Unlike most previous research, which focuses on language model post-processing, we tackle this problem at an earlier processing level and eliminate the bias in acoustic modeling so that OOV words can be recognized acoustically. We propose to generate OOV words using text-to-speech systems and to rescale losses to encourage neural networks to pay more attention to OOV words. Specifically, when fine-tuning a previously trained model on synthetic audio, we enlarge the classification loss for utterances containing OOV words (sentence-level), or rescale the gradient used for back-propagation for OOV words (word-level). To overcome catastrophic forgetting, we also explore combining loss rescaling with model regularization, i.e. L2 regularization and elastic weight consolidation (EWC). Compared with previous methods that simply fine-tune on synthetic audio with EWC, the experimental results on the LibriSpeech benchmark reveal that our proposed loss rescaling approach achieves a significant improvement in recall rate with only a slight decrease in word error rate. Moreover, word-level rescaling is more stable than utterance-level rescaling and leads to higher recall and precision rates on OOV word recognition. Furthermore, our proposed combination of loss rescaling and weight consolidation supports continual learning in an ASR system.
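The sentence-level variant described above can be illustrated with a minimal sketch (not the authors' code): per-utterance losses are scaled up by a factor before averaging whenever the utterance contains an OOV word, so the gradient update emphasizes those utterances. The factor `alpha` and the toy loss values are illustrative assumptions, not values from the paper.

```python
def rescaled_batch_loss(losses, contains_oov, alpha=2.0):
    """Average per-utterance losses, upweighting OOV-containing utterances.

    losses       -- per-utterance loss values (floats)
    contains_oov -- booleans, True if the utterance contains an OOV word
    alpha        -- rescaling factor; alpha > 1 emphasizes OOV utterances
    """
    scaled = [l * alpha if oov else l
              for l, oov in zip(losses, contains_oov)]
    return sum(scaled) / len(scaled)

# Toy batch of two utterances, the second containing a new word:
print(rescaled_batch_loss([1.0, 1.0], [False, True], alpha=3.0))  # 2.0
```

The word-level variant acts one level lower, rescaling only the gradient contributions of the OOV tokens themselves rather than the whole utterance loss, which is what the abstract reports as the more stable option.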
@Article{QWW23,
  author    = {Qu, Leyuan and Weber, Cornelius and Wermter, Stefan},
  title     = {Emphasizing unseen words: New vocabulary acquisition for end-to-end speech recognition},
  journal   = {Neural Networks},
  volume    = {161},
  pages     = {494--504},
  year      = {2023},
  month     = {Feb},
  publisher = {Elsevier},
  doi       = {10.1016/j.neunet.2023.01.027},
}