Cross-Modal Learning: Adaptivity, Prediction and Interaction

Jianwei Zhang , Stefan Wermter , Fuchun Sun , Changshui Zhang , Andreas K. Engel , Brigitte Röder , Xiaolan Fu , Gui Xue
doi: 10.3389/978-2-88971-885-6 · Dec 2021 · Open Access
The purpose of this Research Topic is to reflect on and discuss links between neuroscience, psychology, computer science and robotics with regard to cross-modal learning, which has emerged in recent years as a new area of interdisciplinary research. The term cross-modal learning refers to the synergistic synthesis of information from multiple sensory modalities, such that the learning that occurs within any individual sensory modality can be enhanced by information from one or more other modalities. Cross-modal learning is a crucial component of adaptive behavior in a continuously changing world, and examples are ubiquitous: learning to grasp and manipulate objects, to walk, to read and write, and to understand language and its referents. In all these examples, visual, auditory, somatosensory or other modalities have to be integrated, and learning must be cross-modal. In fact, a broad range of acquired human skills is cross-modal, and many of the most advanced human capabilities, such as those involved in social cognition, require learning from the richest combinations of cross-modal information.

In contrast, even the very best systems in Artificial Intelligence (AI) and robotics have taken only tiny steps in this direction. Building a system that composes a global perspective from multiple distinct sources, types of data, and sensory modalities is a grand challenge of AI, yet it is specific enough that it can be studied rigorously and in such detail that the prospect of deep insights into these mechanisms is plausible in the near term. Cross-modal learning is a broad, interdisciplinary topic that has not yet coalesced into a single, unified field. Instead, many separate fields each tackle the concerns of cross-modal learning from their own perspective, currently with little overlap.

We anticipate an accelerating trend towards the integration of these areas, and we intend to contribute to that integration. By focusing on cross-modal learning, this Research Topic brings together recent progress in artificial intelligence, robotics, psychology and neuroscience.


@book{zhang2021crossmodal,
	author = {Zhang, Jianwei and Wermter, Stefan and Sun, Fuchun and Zhang, Changshui and Engel, Andreas K. and Röder, Brigitte and Fu, Xiaolan and Xue, Gui},
	title = {Cross-Modal Learning: Adaptivity, Prediction and Interaction},
	year = {2021},
	month = {Dec},
	publisher = {Frontiers Media SA},
	doi = {10.3389/978-2-88971-885-6},
}