Learning Robot Vision for Assisted Living
Robotic Vision: Technologies for Machine Learning and Vision Applications,
Editors: Garcia-Rodriguez, Jose; Cazorla, Miguel,
pages 257--280,
doi: 10.4018/978-1-4666-2672-0.ch015
IGI Global, 2013
This chapter presents an overview of a typical scenario of Ambient Assisted Living (AAL) in which a
robot navigates to a person for conveying information. Indoor robot navigation is a challenging task
due to the complexity of real-home environments and the need for online learning abilities to adjust to
dynamic conditions. A comparison of systems with different sensor typologies shows that vision-based systems promise good performance and a wide scope of usage at reasonable cost.
Moreover, vision-based systems can perform different tasks simultaneously by applying different algorithms to the input data stream, thus enhancing the flexibility of the system. The authors introduce the
state of the art of several computer vision methods for realizing indoor robotic navigation to a person
and human-robot interaction. A case study has been conducted in which a robot that is part of an
AAL system navigates to a person and interacts with her. The authors evaluate this test case and give
an outlook on the potential of learning robot vision in ambient homes.
@InCollection{YTVMWCW13,
  author    = {Yan, Wenjie and Torta, Elena and van der Pol, David and Meins, Nils and Weber, Cornelius and Cuijpers, Raymond H. and Wermter, Stefan},
  title     = {Learning Robot Vision for Assisted Living},
  booktitle = {Robotic Vision: Technologies for Machine Learning and Vision Applications},
  editor    = {Garcia-Rodriguez, Jose and Cazorla, Miguel},
  pages     = {257--280},
  year      = {2013},
  publisher = {IGI Global},
  doi       = {10.4018/978-1-4666-2672-0.ch015},
}