Training Agents with Interactive Reinforcement Learning and Contextual Affordances
IEEE Transactions on Cognitive and Developmental Systems, Volume 8, Number 4, pages 271--284, Dec 2016. doi: 10.1109/TCDS.2016.2543839
In the future, robots will be used more extensively as assistants in home scenarios and must be able to acquire expertise from trainers by learning through interaction.
One promising approach is interactive reinforcement learning (IRL), in which an external trainer advises an apprentice agent on which actions to take, thereby speeding up the learning process.
In this paper, we present an IRL approach for the domestic task of cleaning a table and compare three learning methods using simulated robots: reinforcement learning (RL) alone, RL with contextual affordances to avoid failed states, and IRL in which a previously trained robot serves as trainer for a second apprentice robot.
We then demonstrate how IRL performance varies with the level of interaction and the consistency of the feedback.
Our results show that the simulated robot completes the task with RL alone, albeit slowly and with a low success rate.
With RL and contextual affordances, the robot needs fewer actions and reaches higher success rates. For good performance with IRL, it is essential to consider the consistency of the feedback, since inconsistent advice can considerably delay the learning process.
In general, we demonstrate that interactive feedback provides an advantage for the robot in most of the learning cases.
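As a rough illustration of the IRL loop described in the abstract, the sketch below shows tabular Q-learning in which a trainer occasionally overrides the agent's action choice. The environment, the parameter names (`INTERACTION_PROB`, `CONSISTENCY`), and the simple probabilistic trainer model are assumptions made for illustration, not the paper's implementation; the advised action could come from a human or, as in the paper's third condition, from a previously trained agent's greedy policy.

```python
import random
from collections import defaultdict

# Q-table and hyperparameters; the values are illustrative, not from the paper.
Q = defaultdict(float)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
INTERACTION_PROB = 0.3   # chance the trainer offers advice on a step (assumed)
CONSISTENCY = 0.9        # chance that offered advice is correct (assumed)

def select_action(state, actions, advised_action=None):
    """Epsilon-greedy action choice, possibly overridden by trainer advice."""
    if advised_action is not None and random.random() < INTERACTION_PROB:
        if random.random() < CONSISTENCY:
            return advised_action                      # consistent advice
        others = [a for a in actions if a != advised_action]
        return random.choice(others or actions)        # inconsistent advice
    if random.random() < EPSILON:
        return random.choice(actions)                  # explore
    return max(actions, key=lambda a: Q[(state, a)])   # exploit

def update(state, action, reward, next_state, actions):
    """Standard one-step Q-learning backup."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

With `INTERACTION_PROB = 0` this reduces to plain RL; lowering `CONSISTENCY` models the inconsistent feedback that the abstract identifies as a source of considerable delay in learning.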
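Contextual affordances, the paper's mechanism for avoiding failed states, can be read as a state-conditioned filter applied before action selection. The predicate below (`is_afforded`) and the table-cleaning example are hypothetical, for illustration only.

```python
def afforded_actions(state, actions, is_afforded):
    """Keep only actions the current context affords, i.e. actions that
    are not predicted to lead to a failed state."""
    allowed = [a for a in actions if is_afforded(state, a)]
    return allowed or list(actions)  # fall back if nothing is afforded

# Hypothetical affordance predicate for a table-cleaning context:
# pouring is only afforded while the robot is actually holding the cup.
def is_afforded(state, action):
    if action == "pour":
        return state.get("holding") == "cup"
    return True
```

Passing `afforded_actions(state, actions, is_afforded)` instead of the raw action set into `select_action` above keeps the agent from even sampling actions that would fail, which matches the abstract's report that this condition needs fewer actions and reaches higher success rates.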
@Article{CMWW16,
  author    = {Cruz, Francisco and Magg, Sven and Weber, Cornelius and Wermter, Stefan},
  title     = {Training Agents with Interactive Reinforcement Learning and Contextual Affordances},
  journal   = {IEEE Transactions on Cognitive and Developmental Systems},
  volume    = {8},
  number    = {4},
  pages     = {271--284},
  year      = {2016},
  month     = {Dec},
  publisher = {IEEE},
  doi       = {10.1109/TCDS.2016.2543839},
}