Crossmodal language grounding, learning, and teaching

Proceedings of the NIPS2016 Workshop on Cognitive Computation (CoCo@NIPS2016), pages 62--68, Dec 2016
The human brain, as one of the most complex dynamic systems, enables us to communicate and externalise information via natural language. Despite extensive research, human-like communication with interactive robots is not yet possible, because we have not yet fully understood the mechanistic characteristics of the crossmodal binding between language, actions, and visual sensation that enables humans to acquire and use natural language. In this position paper we present vision- and action-embodied language learning research as part of a project investigating multimodal learning. Our research endeavour includes developing a) a cortical neural-network model that learns to ground language in crossmodal embodied perception and b) a knowledge-based teaching framework to bootstrap and scale up language acquisition to the level of language development in children up to two years of age. We embed this approach of internally grounding embodied experience and externally teaching abstract experience into the developmental robotics paradigm by developing and employing a neurorobot that is capable of multisensory perception and interaction. The proposed research contributes to designing neuroscientific experiments on discovering crossmodal integration, particularly in language processing, and to constructing future robotic companions capable of natural communication.
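To make part a) concrete, the following is a minimal, hypothetical sketch in PyTorch of crossmodal language grounding: a word sequence, a visual feature vector, and an action feature vector are fused into a shared hidden representation that predicts the next word. All class names, dimensions, and the concatenation-based fusion scheme are illustrative assumptions, not the architecture described in the paper.

import torch
import torch.nn as nn

class CrossmodalGrounding(nn.Module):
    """Toy model binding word sequences to visual and action
    (proprioceptive) features via a shared hidden space.
    Hypothetical sketch; not the authors' actual architecture."""

    def __init__(self, vocab_size=100, embed_dim=32,
                 vision_dim=64, action_dim=16, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.language_rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.vision_proj = nn.Linear(vision_dim, hidden_dim)
        self.action_proj = nn.Linear(action_dim, hidden_dim)
        # Fuse the three modalities into one shared representation,
        # then predict the next word as a simple grounding objective.
        self.fusion = nn.Linear(3 * hidden_dim, hidden_dim)
        self.word_out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, words, vision, action):
        # words: (batch, seq_len) token ids
        # vision: (batch, vision_dim); action: (batch, action_dim)
        _, h = self.language_rnn(self.embed(words))  # h: (1, batch, hidden)
        fused = torch.cat([h[-1],
                           torch.relu(self.vision_proj(vision)),
                           torch.relu(self.action_proj(action))], dim=-1)
        shared = torch.tanh(self.fusion(fused))
        return self.word_out(shared)  # next-word logits: (batch, vocab)

model = CrossmodalGrounding()
logits = model(torch.randint(0, 100, (2, 5)),
               torch.randn(2, 64), torch.randn(2, 16))
print(logits.shape)  # torch.Size([2, 100])

Training such a model on paired utterance/percept data would yield word representations shaped by what the robot sees and does; the paper's teaching framework (part b) would then supply the abstract, non-groundable vocabulary that perception alone cannot provide.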

 

@InProceedings{HWWXLL16,
  author    = {Heinrich, Stefan and Weber, Cornelius and Wermter, Stefan and Xie, Ruobing and Lin, Yankai and Liu, Zhiyuan},
  title     = {Crossmodal language grounding, learning, and teaching},
  booktitle = {Proceedings of the NIPS2016 Workshop on Cognitive Computation (CoCo@NIPS2016)},
  pages     = {62--68},
  year      = {2016},
  month     = {Dec},
}