Grounding Neural Robot Language in Action

Stefan Wermter, Cornelius Weber, Mark I. Elshaw, Vittorio Gallese, F. Pulvermüller
Biomimetic Neural Learning for Intelligent Robots, pages 162--181, doi: 10.1007/11521082_10, Jan 2005
In this paper we describe two models for the neural grounding of robotic language processing in action. The models are inspired by concepts of the mirror neuron system and learn by imitation, combining high-level vision, language and motor command inputs. They learn to perform and recognise three behaviours: 'go', 'pick' and 'lift'. The first, single-layer model uses an adapted Helmholtz machine wake-sleep algorithm so that it acts like a Kohonen self-organising network receiving all inputs into a single layer. In contrast, the second, hierarchical model has two layers: in the lower hidden layer the Helmholtz machine wake-sleep algorithm is used to learn the relationship between action and vision, while the upper layer uses the Kohonen self-organising approach to combine the output of the lower hidden layer with the language input. On the hidden layer of the single-layer model, the action words are represented in non-overlapping regions, and each neuron within a region accounts for a corresponding sensory-motor binding. In the hierarchical model, largely separate sensory and motor representations on the lower level are bound into corresponding sensory-motor pairings via the top level, which organises according to the language input.
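To make the self-organising component of the abstract concrete, the following is a minimal sketch of a Kohonen update rule of the kind the single-layer model draws on: a flat map of units receives one concatenated input vector (vision + motor + language), the best-matching unit is found, and that unit and its grid neighbours are pulled toward the input. All dimensions and parameter values here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a concatenated vision + motor + language vector,
# e.g. with a one-hot language part for the words 'go', 'pick' and 'lift'.
n_inputs = 12
grid = 4                                   # 4x4 map of hidden units
weights = rng.random((grid * grid, n_inputs))

def som_step(x, weights, lr=0.1, sigma=1.0):
    """One Kohonen update: find the best-matching unit (BMU),
    then move it and its grid neighbours toward the input x."""
    bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
    # 2-D grid coordinates of every unit, and squared distance to the winner
    coords = np.array([(i // grid, i % grid) for i in range(grid * grid)])
    d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
    h = np.exp(-d2 / (2 * sigma ** 2))          # Gaussian neighbourhood
    weights += lr * h[:, None] * (x - weights)  # pull units toward the input
    return bmu

# Train on random inputs; over time, nearby units come to respond
# to similar inputs, so distinct input classes occupy distinct map regions.
for _ in range(200):
    som_step(rng.random(n_inputs), weights)
```

The non-overlapping word regions described in the abstract correspond, in this toy picture, to disjoint neighbourhoods of the map winning for inputs that share the same language component.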


@InCollection{WWEGP05,
  author    = {Wermter, Stefan and Weber, Cornelius and Elshaw, Mark I. and Gallese, Vittorio and Pulvermüller, F.},
  title     = {Grounding Neural Robot Language in Action},
  booktitle = {Biomimetic Neural Learning for Intelligent Robots},
  pages     = {162--181},
  year      = {2005},
  month     = {Jan},
  publisher = {Springer},
  doi       = {10.1007/11521082_10},
}