Robot Docking Based on Omnidirectional Vision and Reinforcement Learning
Research and Development in Intelligent Systems XXII - International Conference on Innovative Techniques and Applications of Artificial Intelligence
Editors: Macintosh, Ann; Ellis, Richard; Allen, Tony
Pages: 23--26
Year: 2005
We present a system for visual robotic docking using an
omnidirectional camera coupled with the actor-critic reinforcement
learning algorithm. The system enables a PeopleBot robot to locate
and approach a table so that it can pick up an object from it using the
pan-tilt camera mounted on the robot. We use a staged approach to
solve this problem, since it involves distinct sub-tasks and different
sensors. The robot first wanders randomly until the table is located
via a landmark; a network trained via reinforcement learning then
allows the robot to turn towards and approach the table. Once at the
table, the robot picks up the object. We argue that our approach has
considerable potential for learning robot control for navigation,
removing the need for internal maps of the environment. This is
achieved by allowing the robot to learn couplings between motor
actions and the position of a landmark.
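The coupling between landmark position and motor actions described above can be illustrated with a minimal tabular actor-critic sketch. This is a generic illustration only, not the paper's network-based implementation: the state discretization (horizontal landmark position in the image), the three actions (turn left, go forward, turn right), the toy environment, and all learning-rate values are assumptions introduced here for demonstration.

```python
import numpy as np

# Hypothetical sketch: tabular actor-critic for landmark-based approach.
# State: discretized horizontal position of the landmark in the image
# (9 bins, far-left to far-right); goal: landmark centred in the image.
# None of these sizes or constants come from the paper.

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 9, 3          # 3 actions: turn left / forward / turn right
GAMMA, ALPHA_V, ALPHA_PI = 0.95, 0.1, 0.1

V = np.zeros(N_STATES)                   # critic: estimated state values
prefs = np.zeros((N_STATES, N_ACTIONS))  # actor: action preferences

def policy(s):
    """Softmax over the actor's preferences for state s."""
    p = np.exp(prefs[s] - prefs[s].max())
    return p / p.sum()

def step(s, a):
    """Toy environment: turning shifts the landmark one bin left or
    right in the image; reward 1 once the landmark is centred."""
    s2 = int(np.clip(s + (a - 1), 0, N_STATES - 1))
    done = (s2 == N_STATES // 2)
    return s2, (1.0 if done else 0.0), done

for episode in range(500):
    s = int(rng.integers(N_STATES))
    for _ in range(50):
        a = int(rng.choice(N_ACTIONS, p=policy(s)))
        s2, r, done = step(s, a)
        # The TD error drives both the critic and the actor updates.
        delta = r + (0.0 if done else GAMMA * V[s2]) - V[s]
        V[s] += ALPHA_V * delta
        prefs[s, a] += ALPHA_PI * delta
        s = s2
        if done:
            break
```

After training, the learned policy turns the robot so as to centre the landmark: from a far-left landmark position it prefers the rightward turn, and vice versa, which is the learned action-landmark coupling in miniature.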
@InProceedings{MWW05,
  author    = {Muse, David and Weber, Cornelius and Wermter, Stefan},
  title     = {Robot Docking Based on Omnidirectional Vision and Reinforcement Learning},
  booktitle = {Research and Development in Intelligent Systems XXII - International Conference on Innovative Techniques and Applications of Artificial Intelligence},
  editor    = {Macintosh, Ann and Ellis, Richard and Allen, Tony},
  pages     = {23--26},
  year      = {2005},
  publisher = {Springer},
}