Mixed-Reality Deep Reinforcement Learning for a Reach-to-grasp Task

Artificial Neural Networks and Machine Learning – ICANN 2019, pp. 611–623, doi: 10.1007/978-3-030-30487-4_47, Sep 2019
Deep Reinforcement Learning (DRL) has become successful across various robotic applications. However, DRL methods are not sample-efficient and require long learning times. We present an approach for online continuous deep reinforcement learning for a reach-to-grasp task in a mixed-reality environment: a human places targets for the robot in the physical environment, and DRL for reaching these targets is carried out in simulation before the resulting actions are executed in the physical environment. We extend previous work on a modified Deep Deterministic Policy Gradient (DDPG) algorithm with an architecture for online learning and evaluate different strategies to accelerate learning while ensuring learning stability. Our approach provides a neural inverse kinematics solution whose execution-time performance improves over time while focusing on those areas of Cartesian space where the human operator frequently places targets, thus enabling efficient learning. We evaluate reward shaping and augmented targets as strategies for accelerating deep reinforcement learning and analyze learning stability.
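
As an illustration of the reward-shaping idea mentioned in the abstract (not the exact formulation used in the paper), a dense, distance-based reward for a reaching task might look like the sketch below; the function name, constants, and success threshold are assumptions chosen for illustration only.

```python
import numpy as np

def shaped_reward(end_effector_pos, target_pos, success_threshold=0.02):
    """Dense, distance-based reward for a reaching task (illustrative only).

    Returns a negative reward proportional to the remaining Euclidean
    distance to the target, plus a sparse bonus once the end effector
    is within the success threshold. Constants are assumed values.
    """
    distance = np.linalg.norm(np.asarray(end_effector_pos) - np.asarray(target_pos))
    reward = -distance                    # shaping term: closer is better
    done = distance < success_threshold   # target reached within tolerance
    if done:
        reward += 10.0                    # sparse success bonus
    return reward, done

# Example: end effector 5 cm away from the target in Cartesian space
r, done = shaped_reward([0.30, 0.10, 0.25], [0.30, 0.15, 0.25])
```

A shaped reward like this gives the DDPG agent a learning signal at every step instead of only on success, which is one common way to reduce the long learning times noted above.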


@InProceedings{BZKW19,
  author    = {Beik-Mohammadi, Hadi and Zamani, Mohammad Ali and Kerzel, Matthias and Wermter, Stefan},
  title     = {Mixed-Reality Deep Reinforcement Learning for a Reach-to-grasp Task},
  booktitle = {Artificial Neural Networks and Machine Learning – ICANN 2019},
  pages     = {611--623},
  year      = {2019},
  month     = {Sep},
  doi       = {10.1007/978-3-030-30487-4_47},
}