A self-organizing neural network architecture for learning human-object interactions
Neurocomputing, Volume 307, pages 14--24, May 2018.
doi: 10.1016/j.neucom.2018.04.015
The visual recognition of transitive actions comprising human-object interactions is a key component for
artificial systems operating in natural environments. This challenging task jointly requires the recognition of articulated body actions and the extraction of semantic elements from the scene, such as
the identity of the manipulated objects. In this paper, we present a self-organizing neural network for
the recognition of human-object interactions from RGB-D videos. Our model consists of a hierarchy of
Grow-When-Required (GWR) networks that learn prototypical representations of body motion patterns
and objects, accounting for the development of action-object mappings in an unsupervised fashion. We
report experimental results on a dataset of daily activities collected for the purpose of this study as well
as on a publicly available benchmark dataset. In line with neurophysiological studies, our self-organizing
architecture exhibits higher neural activation for congruent action-object pairs learned during training
sessions than for synthetically created incongruent ones. We show that our unsupervised model
achieves competitive classification results on the benchmark dataset with respect to strictly supervised approaches.
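The sketch below illustrates the Grow-When-Required (GWR) learning step that underlies the networks in this architecture. It is a minimal, simplified rendition of the standard GWR algorithm; the parameter names and values are illustrative assumptions rather than the authors' settings, and the paper's hierarchical stacking of networks for body motion, objects, and action-object mappings is not reproduced here.

# Minimal GWR update step (sketch; parameters are illustrative assumptions,
# not the settings used in the paper).
import numpy as np

class GWR:
    def __init__(self, dim, act_thresh=0.85, hab_thresh=0.3,
                 eps_b=0.1, eps_n=0.01, tau_b=0.3, tau_n=0.1, max_age=50):
        # Start with two random prototype nodes, as in the original GWR algorithm.
        self.w = [np.random.rand(dim), np.random.rand(dim)]  # prototype weights
        self.h = [1.0, 1.0]                                   # habituation counters
        self.edges = {}                                       # (i, j) -> age
        self.act_thresh, self.hab_thresh = act_thresh, hab_thresh
        self.eps_b, self.eps_n = eps_b, eps_n
        self.tau_b, self.tau_n = tau_b, tau_n
        self.max_age = max_age

    def _neighbors(self, i):
        return [v for (u, v) in list(self.edges) if u == i] + \
               [u for (u, v) in list(self.edges) if v == i]

    def step(self, x):
        # 1. Find the best (b) and second-best (s) matching nodes.
        dists = [np.linalg.norm(x - w) for w in self.w]
        b, s = np.argsort(dists)[:2]
        # 2. Connect them and reset the edge age.
        self.edges[tuple(sorted((int(b), int(s))))] = 0
        # 3. Activation of the best-matching node.
        act = np.exp(-dists[b])
        if act < self.act_thresh and self.h[b] < self.hab_thresh:
            # 4a. Insert a new node halfway between the input and the winner.
            r = len(self.w)
            self.w.append(0.5 * (self.w[b] + x))
            self.h.append(1.0)
            self.edges.pop(tuple(sorted((int(b), int(s)))), None)
            self.edges[tuple(sorted((int(b), r)))] = 0
            self.edges[tuple(sorted((r, int(s))))] = 0
        else:
            # 4b. Adapt the winner and its neighbours, modulated by habituation.
            self.w[b] += self.eps_b * self.h[b] * (x - self.w[b])
            for n in self._neighbors(int(b)):
                self.w[n] += self.eps_n * self.h[n] * (x - self.w[n])
        # 5. Habituate (reduce firing counters) and age the winner's edges.
        self.h[b] += self.tau_b * 1.05 * (1.0 - self.h[b]) - self.tau_b
        for n in self._neighbors(int(b)):
            self.h[n] += self.tau_n * 1.05 * (1.0 - self.h[n]) - self.tau_n
        for e in list(self.edges):
            if int(b) in e:
                self.edges[e] += 1
                if self.edges[e] > self.max_age:
                    del self.edges[e]

Feeding per-frame feature vectors (e.g., body pose or object descriptors) to step() grows a graph of prototype nodes only where the existing prototypes represent the input poorly, which is the property that lets the model learn action and object representations without supervision.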
@Article{MPW18a,
  author    = {Mici, Luiza and Parisi, German I. and Wermter, Stefan},
  title     = {A self-organizing neural network architecture for learning human-object interactions},
  journal   = {Neurocomputing},
  volume    = {307},
  pages     = {14--24},
  year      = {2018},
  month     = {May},
  publisher = {Elsevier},
  doi       = {10.1016/j.neucom.2018.04.015},
}