Crossmodal Pattern Discrimination in Humans and Robots: A Visuo-Tactile Case Study
The quality of crossmodal perception hinges on two factors: the accuracy of the
independent unimodal perception and the ability to integrate information from different
sensory systems. In humans, the capacity for cognitively demanding crossmodal perception
diminishes from young to old age. Here, we propose a new approach to investigating
the degree to which these factors contribute to crossmodal processing and its
age-related decline by replicating a medical study on visuo-tactile crossmodal pattern
discrimination, utilizing state-of-the-art tactile sensing technology and artificial neural
networks (ANNs). We implemented two ANN models to focus specifically on the relevance
of early integration of sensory information in the crossmodal processing stream, a
mechanism proposed for efficient processing in the human brain. Applying an adaptive
staircase procedure, we approached comparable unimodal classification performance
for both modalities in the human participants as well as the ANNs. This allowed us
to compare crossmodal performance between and within the systems, independent
of the underlying unimodal processes. Our data show that unimodal classification
accuracies of the tactile sensing technology are comparable to those of humans. For crossmodal
discrimination in the ANNs, the integration of high-level unimodal features at earlier
stages of the crossmodal processing stream yields higher accuracies than
the late integration of independent unimodal classifications. In comparison to humans,
the ANNs show higher accuracies than older participants in both the unimodal and the
crossmodal condition, but lower accuracies than younger participants in the crossmodal
task. Taken together, we show that state-of-the-art tactile sensing technology is
able to perform a complex tactile recognition task at levels comparable to humans.
For crossmodal processing, human-inspired early sensory integration seems to improve
the performance of artificial neural networks. Still, younger participants seem to employ
more efficient crossmodal integration mechanisms than those modeled in the proposed ANNs.
Our work demonstrates how collaborative research on neuroscience and embodied
artificial neurocognitive models can help to inform the design of future
neurocomputational architectures.
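To make the two integration schemes contrasted above concrete, the following minimal sketch shows an early-fusion network (high-level unimodal features are concatenated before a joint classification stage) next to a late-fusion network (independent unimodal classifiers whose class scores are combined at the decision level). This is a hypothetical PyTorch illustration: the module names, layer sizes, and toy visuo-tactile inputs are our own assumptions and do not reproduce the architectures, data, or training procedure of the study.

import torch
import torch.nn as nn

class UnimodalEncoder(nn.Module):
    # Maps one flattened modality (visual or tactile pattern) to a high-level feature vector.
    def __init__(self, in_dim, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, feat_dim), nn.ReLU())

    def forward(self, x):
        return self.net(x)

class EarlyFusionNet(nn.Module):
    # Early integration: unimodal features are merged before a shared classification head.
    def __init__(self, vis_dim, tac_dim, n_classes, feat_dim=64):
        super().__init__()
        self.vis_enc = UnimodalEncoder(vis_dim, feat_dim)
        self.tac_enc = UnimodalEncoder(tac_dim, feat_dim)
        self.head = nn.Sequential(nn.Linear(2 * feat_dim, 64), nn.ReLU(),
                                  nn.Linear(64, n_classes))

    def forward(self, vis, tac):
        fused = torch.cat([self.vis_enc(vis), self.tac_enc(tac)], dim=-1)
        return self.head(fused)

class LateFusionNet(nn.Module):
    # Late integration: each modality is classified independently, scores are combined afterwards.
    def __init__(self, vis_dim, tac_dim, n_classes, feat_dim=64):
        super().__init__()
        self.vis_clf = nn.Sequential(UnimodalEncoder(vis_dim, feat_dim), nn.Linear(feat_dim, n_classes))
        self.tac_clf = nn.Sequential(UnimodalEncoder(tac_dim, feat_dim), nn.Linear(feat_dim, n_classes))

    def forward(self, vis, tac):
        # Simple decision-level combination: average the independent class scores.
        return 0.5 * (self.vis_clf(vis) + self.tac_clf(tac))

# Toy usage: two-alternative visuo-tactile pattern discrimination on random inputs.
vis = torch.randn(8, 256)   # placeholder for flattened visual patterns
tac = torch.randn(8, 64)    # placeholder for flattened tactile sensor readings
early = EarlyFusionNet(256, 64, n_classes=2)
late = LateFusionNet(256, 64, n_classes=2)
print(early(vis, tac).shape, late(vis, tac).shape)  # both: torch.Size([8, 2])

In this toy setting the early-fusion head can learn crossmodal feature interactions, whereas the late-fusion model can only combine independent unimodal decisions, mirroring the early- versus late-integration comparison described in the abstract.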
@Article{RGKHFWZGH20, author = {Ruppel, Philipp and Görner, Michael and Kerzel, Matthias and Hendrich, Norman and Feldheim, Jan and Wermter, Stefan and Zhang, Jianwei and Gerloff, Christian and Higgen, Focko L.}, title = {Crossmodal Pattern Discrimination in Humans and Robots: A Visuo-Tactile Case Study}, journal = {Frontiers in Robotics and AI}, year = {2020}, month = {Dec}, doi = {10.3389/frobt.2020.540565}, }