Robot Docking Based on Omnidirectional Vision and Reinforcement Learning
Knowledge-Based Systems,
Volume 19,
Number 5,
pages 324--332,
doi: 10.1016/j.knosys.2005.11.018
- Sep 2006
We present a system for visual robotic docking using an
omnidirectional camera coupled with the actor-critic reinforcement
learning algorithm. The system enables a PeopleBot robot to locate
and approach a table so that it can pick up an object from it using the
pan-tilt camera mounted on the robot. We use a staged approach to
solve this problem, as there are distinct sub-tasks and different
sensors involved. The robot starts by wandering randomly until the
table is located via a landmark; a network trained via reinforcement
learning then allows the robot to turn towards and approach the table.
Once at the table, the robot picks up the object. We argue that
our approach has considerable potential, as learning robot
control for navigation removes the need for internal maps of the
environment. This is achieved by allowing the robot to learn
couplings between motor actions and the position of a landmark.
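The actor-critic algorithm referred to above can be illustrated with a minimal sketch. The toy 1-D world, reward scheme, and learning rates below are illustrative assumptions only; they do not reproduce the paper's omnidirectional-vision setup, in which the state is derived from the landmark's position in the image:

```python
import random
import math

# Minimal actor-critic sketch on a toy 1-D world: the "robot" occupies an
# integer position and must reach the landmark at position 0. All constants
# here are illustrative assumptions, not values from the paper.
N = 10                      # positions 0..N-1; landmark at position 0
ACTIONS = (-1, +1)          # move left or right
ALPHA, BETA, GAMMA = 0.1, 0.1, 0.9

V = [0.0] * N                           # critic: state-value estimates
H = [[0.0, 0.0] for _ in range(N)]      # actor: action preferences

def policy(s):
    """Sample an action from a softmax over the actor's preferences."""
    e = [math.exp(h) for h in H[s]]
    r = random.random() * sum(e)
    return 0 if r < e[0] else 1

def run_episode():
    s = N - 1
    while s != 0:
        a = policy(s)
        s2 = min(max(s + ACTIONS[a], 0), N - 1)
        r = 1.0 if s2 == 0 else 0.0     # reward only on reaching the landmark
        # TD error: the critic's one-step prediction mismatch
        delta = r + GAMMA * (0.0 if s2 == 0 else V[s2]) - V[s]
        V[s] += ALPHA * delta           # critic update
        H[s][a] += BETA * delta         # actor update: reinforce if delta > 0
        s = s2

random.seed(0)
for _ in range(300):
    run_episode()

# After training, the actor should prefer moving left (toward the landmark)
# in most states, and the critic should value states near the landmark higher.
```

The key point, as in the paper's docking task, is that no map is built: the policy is learned purely from the coupling between actions and the resulting change in the landmark's perceived position.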
@Article{MWW06,
  author    = {Muse, David and Weber, Cornelius and Wermter, Stefan},
  title     = {Robot Docking Based on Omnidirectional Vision and Reinforcement Learning},
  journal   = {Knowledge-Based Systems},
  volume    = {19},
  number    = {5},
  pages     = {324--332},
  year      = {2006},
  month     = {Sep},
  publisher = {Elsevier},
  doi       = {10.1016/j.knosys.2005.11.018},
}