Action Selection Methods in a Robotic Reinforcement Learning Scenario

2018 IEEE Latin American Conference on Computational Intelligence (LA-CCI), pages 1-6, doi: 10.1109/LA-CCI.2018.8625243 - Nov 2018
Reinforcement learning allows an agent to learn a new task while autonomously exploring its environment. To this end, the agent chooses an action to perform from those available in a given state. Nonetheless, a common problem for a reinforcement learning agent is finding a proper balance between exploration and exploitation of actions in order to achieve optimal behavior. This paper compares multiple approaches to the exploration/exploitation dilemma in reinforcement learning and, moreover, implements an exemplary reinforcement learning task in the domain of domestic robotics to show the performance of different exploration policies on it. We perform the domestic task using ε-greedy, softmax, VDBE, and VDBE-Softmax with online and offline temporal-difference learning. The obtained results show that the agent collects larger rewards faster when using the VDBE-Softmax exploration strategy with both Q-learning and SARSA.
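
For reference, a minimal sketch of two of the compared action-selection rules, ε-greedy and softmax (Boltzmann) selection over a vector of Q-values, is given below. The function names and the NumPy-based implementation are illustrative and are not taken from the paper; the state-dependent exploration rates used by VDBE and VDBE-Softmax are not shown.

import numpy as np

def epsilon_greedy(q_values, epsilon, rng=np.random.default_rng()):
    # With probability epsilon pick a uniformly random action (exploration),
    # otherwise pick the action with the highest Q-value (exploitation).
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

def softmax_action(q_values, temperature, rng=np.random.default_rng()):
    # Sample an action from a Boltzmann distribution over Q-values;
    # higher temperature means more uniform (exploratory) behavior.
    prefs = np.asarray(q_values, dtype=float) / temperature
    prefs -= prefs.max()                      # subtract max for numerical stability
    probs = np.exp(prefs) / np.exp(prefs).sum()
    return int(rng.choice(len(q_values), p=probs))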

 

@InProceedings{CWFWW18,
  author    = {Cruz, Francisco and Wueppen, Peter and Fazrie, Alvin and Weber, Cornelius and Wermter, Stefan},
  title     = {Action Selection Methods in a Robotic Reinforcement Learning Scenario},
  booktitle = {2018 IEEE Latin American Conference on Computational Intelligence (LA-CCI)},
  pages     = {1--6},
  year      = {2018},
  month     = {Nov},
  doi       = {10.1109/LA-CCI.2018.8625243},
}