From Semantics to Execution: Integrating Action Planning With Reinforcement Learning for Robotic Causal Problem-Solving

Frontiers in Robotics and AI, Volume 6, page 123, doi: 10.3389/frobt.2019.00123, Nov 2019, Open Access
Reinforcement learning is generally accepted to be an appropriate and successful method to learn robot control. Symbolic action planning is useful to resolve causal dependencies and to break a causally complex problem down into a sequence of simpler high-level actions. A problem with integrating both approaches is that action planning operates on discrete high-level action and state spaces, whereas reinforcement learning is usually driven by a continuous reward function. Recent advances in model-free reinforcement learning, specifically universal value function approximators and hindsight experience replay, have focused on goal-independent methods based on sparse rewards that are only given at the end of a rollout, and only if the goal has been fully achieved. In this article, we build on these novel methods to facilitate the integration of action planning with model-free reinforcement learning. Specifically, the paper demonstrates how reward sparsity can serve as a bridge between the high-level and low-level state and action spaces. As a result, we demonstrate that the integrated method is able to solve robotic tasks that involve non-trivial causal dependencies under noisy conditions, exploiting both data and knowledge.
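The sparse-reward setup and hindsight experience replay mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the distance tolerance, and the "future" goal-relabeling strategy are assumptions made for the example.

```python
import numpy as np

def sparse_reward(achieved_goal, desired_goal, tol=0.05):
    # Sparse reward: 0 only if the goal is reached within a tolerance,
    # -1 otherwise. The tolerance value is illustrative.
    dist = np.linalg.norm(np.asarray(achieved_goal) - np.asarray(desired_goal))
    return 0.0 if dist < tol else -1.0

def her_relabel(episode, k=4, rng=None):
    # Hindsight experience replay, "future"-style sketch: for each transition,
    # sample up to k goals actually achieved later in the episode, and relabel
    # the transition as if that goal had been the desired one, recomputing
    # the sparse reward. This turns failed rollouts into useful training data.
    rng = rng or np.random.default_rng(0)
    relabeled = []
    for t, (obs, action, achieved, desired) in enumerate(episode):
        future_idx = rng.integers(t, len(episode), size=min(k, len(episode) - t))
        for i in future_idx:
            new_goal = episode[i][2]  # goal achieved at a future step
            relabeled.append((obs, action, new_goal,
                              sparse_reward(achieved, new_goal)))
    return relabeled
```

A transition relabeled with a goal it already achieved receives reward 0, so even episodes that never reached the original goal yield positive learning signal under the sparse reward.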

 

@Article{ENW19, 
 	 author =  {Eppe, Manfred and Nguyen, D.H. Phuong and Wermter, Stefan},  
 	 title = {From Semantics to Execution: Integrating Action Planning With Reinforcement Learning for Robotic Causal Problem-Solving}, 
 	 journal = {Frontiers in Robotics and AI},
 	 volume = {6},
 	 pages = {123},
 	 year = {2019},
 	 month = {Nov},
 	 publisher = {Frontiers},
 	 doi = {10.3389/frobt.2019.00123}, 
 }