Using Natural Language Feedback in a Neuro-inspired Integrated Multimodal Robotic Architecture

Johannes Twiefel , Xavier Hinaut , Marcelo Borghetti , Erik Strahl , Stefan Wermter
Proceedings of the 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pages 52--57, Aug 2016
In this paper we present a multi-modal human-robot interaction architecture which is able to combine information coming from different sensory inputs, and which can generate feedback that implicitly teaches the user how to interact with the robot. The system combines vision, speech and language with inference and feedback. The system environment consists of a Nao robot which has to learn objects situated on a table only by understanding absolute and relative object locations uttered by the user, and which afterwards points at a desired object to show what it has learned. The results of a user study and performance test show the usefulness of the feedback produced by the system and also justify the use of the system in real-world applications, as its classification accuracy on multi-modal input is around 80.8%. In the experiments, the system was able to detect inconsistent input coming from different sensory modules in all cases and could generate useful feedback for the user from this information.


@InProceedings{THBSW16, 
 	 author =  {Twiefel, Johannes and Hinaut, Xavier and Borghetti, Marcelo and Strahl, Erik and Wermter, Stefan},  
 	 title = {Using Natural Language Feedback in a Neuro-inspired Integrated Multimodal Robotic Architecture}, 
 	 booktitle = {Proceedings of the 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)},
 	 pages = {52--57},
 	 year = {2016},
 	 month = {Aug},
 }