A Camera-Direction Dependent Visual-Motor Coordinate Transformation for a Visually Guided Neural Robot
Knowledge-Based Systems, Volume 19, Number 5, pages 348--355, Sep 2006
doi: 10.1016/j.knosys.2005.11.020
Objects of interest are represented in the brain simultaneously in different frames of reference. Knowing the positions of one's head and eyes, for example, one can compute the body-centred position of an object from its perceived coordinates on the retinae. We propose a simple and fully trained attractor network which computes head-centred coordinates given eye position and a perceived retinal object position. We demonstrate this system on artificial data and then apply it within a fully neurally implemented control system which visually guides a simulated robot to a table for grasping an object. The integrated system has as input a primitive visual system with a 'what/where' pathway which localises the target object in the visual field. The coordinate transform network considers the visually perceived object position and the camera pan-tilt angle and computes the target position in a body-centred frame of reference. This position is used by a reinforcement-trained network to dock a simulated PeopleBot robot at a table for reaching the object. Hence, computing coordinate transformations neurally with an attractor network has both biological relevance and technical use for this important class of computations.
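For intuition, the transformation that the attractor network learns can also be written down geometrically. The following is a minimal sketch, not the paper's neural implementation: it assumes a pinhole camera, and the function name and all parameter values (focal length, principal point) are illustrative. It maps a perceived pixel position plus the camera's pan and tilt angles to a body-centred direction vector.

```python
import numpy as np

def retinal_to_body_centred(px, py, pan, tilt,
                            focal_px=300.0, cx=160.0, cy=120.0):
    """Map a pixel position (px, py) on the camera image to a
    body-centred unit direction vector, given the camera's pan and
    tilt angles in radians.

    Assumes a pinhole camera with focal length `focal_px` (in pixels)
    and principal point (cx, cy); these values are illustrative, not
    taken from the paper.
    """
    # Direction of the pixel in camera coordinates
    # (z forward, x right, y down).
    d_cam = np.array([(px - cx) / focal_px,
                      (py - cy) / focal_px,
                      1.0])
    d_cam /= np.linalg.norm(d_cam)

    # Rotate by tilt (about the camera's x axis), then by pan
    # (about the body's vertical axis).
    ct, st = np.cos(tilt), np.sin(tilt)
    cp, sp = np.cos(pan), np.sin(pan)
    R_tilt = np.array([[1,  0,   0],
                       [0, ct, -st],
                       [0, st,  ct]])
    R_pan = np.array([[ cp, 0, sp],
                      [  0, 1,  0],
                      [-sp, 0, cp]])
    return R_pan @ R_tilt @ d_cam  # body-centred unit direction

# Example: object seen 40 px right of image centre while the camera
# is panned 20 degrees to the left.
d = retinal_to_body_centred(200.0, 120.0, np.deg2rad(-20.0), 0.0)
print(np.rad2deg(np.arctan2(d[0], d[2])))  # body-frame azimuth, ~ -12.4 deg
```

The example illustrates the key point of the abstract: the same retinal position corresponds to different body-centred positions depending on the camera direction, which is why the camera pan-tilt angle must enter the transformation.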
@Article{WMEW06,
  author    = {Weber, Cornelius and Muse, David and Elshaw, Mark I. and Wermter, Stefan},
  title     = {A Camera-Direction Dependent Visual-Motor Coordinate Transformation for a Visually Guided Neural Robot},
  journal   = {Knowledge-Based Systems},
  volume    = {19},
  number    = {5},
  pages     = {348--355},
  year      = {2006},
  month     = {Sep},
  publisher = {Elsevier},
  doi       = {10.1016/j.knosys.2005.11.020},
}