Towards Multimodal Neural Robot Learning

Stefan Wermter, Cornelius Weber, Mark I. Elshaw, Christo Panchev, Harry Erwin, F. Pulvermüller
Robotics and Autonomous Systems Journal, Volume 47, Number 2–3, pages 171–175, doi: 10.1016/S0921-8890(04)00047-8, June 2004
Learning by multimodal observation of vision and language offers a potentially powerful paradigm for robot learning. Recent experiments have shown that 'mirror' neurons are activated when an action is performed, perceived, or verbally referred to, and that different input modalities are processed by distributed cortical neuron ensembles for leg, arm and head actions. In this overview paper we build on this evidence from mirror neurons by integrating motor, vision and language representations in a learning robot.


@Article{WWEPEP04,
  author    = {Wermter, Stefan and Weber, Cornelius and Elshaw, Mark I. and Panchev, Christo and Erwin, Harry and Pulverm{\"u}ller, F.},
  title     = {Towards Multimodal Neural Robot Learning},
  journal   = {Robotics and Autonomous Systems Journal},
  volume    = {47},
  number    = {2--3},
  pages     = {171--175},
  month     = jun,
  year      = {2004},
  publisher = {Elsevier},
  doi       = {10.1016/S0921-8890(04)00047-8}
}