Multi-modal Feedback for Affordance-driven Interactive Reinforcement Learning

International Joint Conference on Neural Networks (IJCNN), pages 5115--5122, doi: 10.1109/IJCNN.2018.8489237 - Jul 2018
Interactive reinforcement learning (IRL) extends traditional reinforcement learning (RL) by allowing an agent to interact with parent-like trainers during a task. In this paper, we present an IRL approach using dynamic audio-visual input, i.e., vocal commands and hand gestures, as feedback. Our architecture integrates multi-modal information to provide robust commands from multiple sensory cues along with a confidence value indicating the trustworthiness of the feedback. The integration process also considers the case in which the two modalities convey incongruent information. Additionally, we modulate the influence of sensory-driven feedback in the IRL task using goal-oriented knowledge in terms of contextual affordances. We implement a neural network architecture to predict the effect of performed actions with different objects to avoid failed states, i.e., states from which it is not possible to accomplish the task. In our experimental setup, we explore the interplay of multi-modal feedback and task-specific affordances in a robot cleaning scenario. We compare the learning performance of the agent under four different conditions: traditional RL, multi-modal IRL, and each of these two setups with the use of contextual affordances. Our experiments show that the best performance is obtained by using audio-visual feedback with affordance-modulated IRL. The obtained results demonstrate the importance of multi-modal sensory processing integrated with goal-oriented knowledge in IRL tasks.
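
To illustrate the general idea (this is a minimal sketch under assumed toy dynamics, not the authors' implementation), the snippet below combines Q-learning with two of the ingredients described in the abstract: a trainer whose advice comes with a confidence value, and a contextual-affordance filter that masks actions predicted to lead to failed states. The environment, the trainer model, and the hand-coded affordance rule are all illustrative assumptions.

    import random
    from collections import defaultdict

    ACTIONS = ["left", "right", "pick", "drop"]
    GOAL = 9  # terminal position in a toy 1-D world

    def step(state, action):
        # Toy environment: move along a line; "pick"/"drop" are no-ops here.
        if action == "left":
            state = max(0, state - 1)
        elif action == "right":
            state = min(10, state + 1)
        reward = 1.0 if state == GOAL else -0.01
        return state, reward, state == GOAL

    def affordance_allows(state, action):
        # Contextual-affordance filter: block actions assumed to lead to a
        # failed state (a hand-coded rule standing in for the paper's
        # learned effect-prediction network).
        return not (state == 0 and action == "left")

    def trainer_advice(state, confidence=0.8):
        # Parent-like trainer: with probability `confidence`, suggest the
        # action that moves toward the goal; the confidence value mimics the
        # trustworthiness of the integrated audio-visual feedback.
        return "right" if random.random() < confidence else None

    Q = defaultdict(float)
    alpha, gamma, epsilon = 0.3, 0.95, 0.1

    for episode in range(200):
        state, done = 0, False
        while not done:
            allowed = [a for a in ACTIONS if affordance_allows(state, a)]
            advice = trainer_advice(state)
            if advice in allowed:
                action = advice                  # follow multi-modal feedback
            elif random.random() < epsilon:
                action = random.choice(allowed)  # explore
            else:
                action = max(allowed, key=lambda a: Q[(state, a)])
            next_state, reward, done = step(state, action)
            best_next = max(Q[(next_state, a)] for a in ACTIONS)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state

In this sketch the advice is simply taken when available and permitted by the affordance filter; the paper's architecture additionally integrates vocal and gestural cues into a single command before it influences the learner.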

 

@InProceedings{CPW18,
	author    = {Cruz, Francisco and Parisi, German I. and Wermter, Stefan},
	title     = {Multi-modal Feedback for Affordance-driven Interactive Reinforcement Learning},
	booktitle = {International Joint Conference on Neural Networks (IJCNN)},
	pages     = {5115--5122},
	year      = {2018},
	month     = {Jul},
	doi       = {10.1109/IJCNN.2018.8489237}
}