Learning Coordinated Eye and Head Movements: Unifying Principles and Architectures
Front. Comput. Neurosci. Conference Abstract: Bernstein Conference on Computational Neuroscience
doi: 10.3389/conf.fncom.2010.51.00065
Sep 2010
Various optimality principles have been proposed to explain the characteristics of coordinated eye and head movements during visual orienting behavior. The minimum-variance principle suggested by Harris and Wolpert [1] assumes additive white noise on the neural command signal, with a variance proportional to the power of the signal; minimizing the variance of the final eye position driven by such a command signal leads to biologically plausible eye movement characteristics (a schematic form is sketched after this paragraph). A second optimality principle, based on a minimum-effort rule, has also been suggested and likewise leads to realistic behavior [2]. In both studies, however, the neural substrate of the underlying computations is left unspecified.
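In schematic form (our notation, and a simplification of the formulation in [1] that assumes a linear plant and signal-dependent noise whose variance grows with the squared command), the principle selects the command u(t) that minimizes the positional variance accumulated over a post-movement period of duration T_f:

  \min_{u(\cdot)} \int_{T}^{T+T_f} \mathrm{Var}[x(t)]\, dt
  \quad \text{subject to} \quad
  \dot{x} = A x + B\,(u + \varepsilon), \qquad \varepsilon \sim \mathcal{N}(0,\, k\,u^{2}),

so that larger commands inject proportionally more noise, which in turn penalizes fast, large-amplitude movements.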
At the same time, several models of the neural substrate underlying the generation of such movements have been proposed. These include feedback models, which use a gaze error signal in a closed control loop to drive the gaze to the desired orientation [3,4], and feedforward models, which rely on the dynamics of their burst generator neurons to shape the control signal [5,6] (an illustrative feedback scheme is sketched below). Neither class of models includes a biologically plausible learning rule as a mechanism of optimization.
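To make the feedback scheme concrete, the following minimal sketch implements a generic proportional gaze-error controller for a one-dimensional eye-head system; it is an illustrative reduction rather than the circuitry of [3,4], and the gains, plant dynamics, and parameter values are assumptions.

def simulate_gaze_shift(target=30.0, dt=0.001, t_max=0.5, k_eye=80.0, k_head=20.0):
    """Drive eye and head with a shared gaze-error signal until the error
    is nulled (proportional feedback; all values are illustrative)."""
    eye, head = 0.0, 0.0
    trace = []
    for step in range(int(t_max / dt)):
        gaze = eye + head               # gaze = eye-in-head + head-in-space
        error = target - gaze           # common gaze error drives both plants
        eye += k_eye * error * dt       # eye velocity command (fast plant)
        head += k_head * error * dt     # head velocity command (lower gain)
        trace.append((step * dt, eye, head, gaze))
    return trace

trace = simulate_gaze_shift()
print("final gaze: %.2f deg" % trace[-1][3])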
Here, we unify the two methodologies by introducing an open-loop neural controller with a biologically plausible adaptation mechanism that minimizes a proposed cost function. The cost function consists of two terms: one accounts for the visual error integrated over time, and the other penalizes large control signals through the weight values of the neural architecture, reflecting the effect of signal-dependent noise [1]. The adaptation mechanism uses local weight-update rules to gradually optimize the behavior with respect to this cost function (a schematic form follows).
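A schematic form of such a cost function (our notation; e(t) denotes the visual error, w_{ij} the controller weights, and \lambda the trade-off parameter, none of which are spelled out in this abstract) is

  J = \int_{0}^{T} e(t)^{2}\, dt \; + \; \lambda \sum_{i,j} w_{ij}^{2},

and a local, gradient-following adaptation rule would then take the form \Delta w_{ij} \propto -\partial J / \partial w_{ij}, evaluated from quantities available at the corresponding synapse.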
Simulations show that the characteristics of the coordinated eye and head movements generated by our model match experimental data in many respects, including the relationships between amplitude, duration, and peak velocity in the head-restrained condition, and the relative contributions of eye and head to the total gaze shift in the head-free condition. The proposed model is not restricted to eye and head movements and may also be suitable for learning other ballistic movements.
References
[1] Harris, C. M., and Wolpert, D. M. Signal-dependent noise determines motor planning. Nature 394: 780-784, 1998.
[2] Kardamakis, A. A., and Moschovakis, A. K. Optimal control of gaze shifts. J Neurosci 29: 7723-7730, 2009.
[3] Guitton, D., Munoz, D. P., and Galiana, H. L. Gaze control in the cat: studies and modeling of the coupling between orienting eye and head movements in different behavioral tasks. J Neurophysiol 64: 509-531, 1990.
[4] Goossens, H. H. L. M., and Van Opstal, A. J. Human eye-head coordination in two dimensions under different sensorimotor conditions. Exp Brain Res 114: 542-560, 1997.
[5] Freedman, E. G. Interactions between eye and head control signals can account for movement kinematics. Biol Cybern 84: 453-462, 2001.
[6] Kardamakis, A. A., Grantyn, A., and Moschovakis, A. K. Neural network simulations of the primate oculomotor system. V. Eye-head gaze shifts. Biol Cybern 102: 209-225, 2010.
@InProceedings{SWT10,
  author    = {Saeb, Sohrab and Weber, Cornelius and Triesch, Jochen},
  title     = {Learning Coordinated Eye and Head Movements: Unifying Principles and Architectures},
  booktitle = {Front. Comput. Neurosci. Conference Abstract: Bernstein Conference on Computational Neuroscience},
  year      = {2010},
  month     = {Sep},
  publisher = {Frontiers},
  doi       = {10.3389/conf.fncom.2010.51.00065},
}