Interactive Language Understanding with Multiple Timescale Recurrent Neural Networks

Artificial Neural Networks and Machine Learning - ICANN 2014, Editors: Stefan Wermter, Cornelius Weber, Włodzisław Duch, Timo Honkela, Petia Koprinkova-Hristova, Sven Magg, Günther Palm, Alessandro E.P. Villa, Volume 8681, pages 193-200, doi: 10.1007/978-3-319-11179-7_25 - Sep 2014
Natural language processing in the human brain is complex and dynamic. Models for understanding how the brain's architecture acquires language need to take into account the temporal dynamics of verbal utterances as well as of action and embodied visual perception. We propose an architecture based on three Multiple Timescale Recurrent Neural Networks (MTRNNs) interlinked in a cell assembly that learns verbal utterances grounded in dynamic proprioceptive and visual information. Results show that the architecture is able to describe novel dynamic actions with correct novel utterances, and they indicate that multi-modal integration allows for disambiguation of concepts.
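
To illustrate the timescale mechanism the abstract refers to, below is a minimal sketch (not the authors' code) of the standard leaky-integrator update commonly used in MTRNNs: each unit integrates its input with a time constant tau, so units with large tau change slowly (context) and units with small tau change quickly (input/output). All names and parameter values are illustrative assumptions.

    import numpy as np

    def mtrnn_step(u, y, W, tau):
        """One MTRNN-style update with per-unit time constants.

        u   : (N,) internal potentials
        y   : (N,) activations from the previous step
        W   : (N, N) recurrent weight matrix
        tau : (N,) time constants; large tau -> slow context units,
              small tau -> fast input/output units
        """
        u_next = (1.0 - 1.0 / tau) * u + (1.0 / tau) * (W @ y)
        y_next = np.tanh(u_next)  # activation of the updated potentials
        return u_next, y_next

    # Illustrative example: 10 fast units (tau = 2) and 4 slow units (tau = 70)
    tau = np.concatenate([np.full(10, 2.0), np.full(4, 70.0)])
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(14, 14))
    u = np.zeros(14)
    y = np.tanh(u)
    for _ in range(5):
        u, y = mtrnn_step(u, y, W, tau)

In the proposed architecture, three such networks (for verbal, proprioceptive, and visual sequences, as stated in the abstract) would each apply this kind of update; how they are interlinked into a cell assembly is described in the paper itself.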

 

@InProceedings{HW14, 
 	 author =  {Heinrich, Stefan and Wermter, Stefan},  
 	 title = {Interactive Language Understanding with Multiple Timescale Recurrent Neural Networks}, 
 	 booktitle = {Artificial Neural Networks and Machine Learning - ICANN 2014},
 	 editor = {Stefan Wermter and Cornelius Weber and W{\l}odzis{\l}aw Duch and Timo Honkela and Petia Koprinkova-Hristova and Sven Magg and G{\"u}nther Palm and Alessandro E.P. Villa},
 	 volume = {8681},
 	 pages = {193--200},
 	 year = {2014},
 	 month = {Sep},
 	 publisher = {Springer International Publishing, CH},
 	 doi = {10.1007/978-3-319-11179-7_25}, 
 }