Editorial: Cross-Modal Learning: Adaptivity, Prediction and Interaction
Crossmodal learning has emerged in recent years as a new area of interdisciplinary research. The term refers to the synergistic synthesis of information from multiple sensory modalities, such that learning within any individual modality can be enhanced by information from one or more other modalities. Crossmodal learning is a crucial component of adaptive behavior in a continuously changing world, and examples are ubiquitous: learning to grasp and manipulate objects, to walk, to read and write, and to understand language and its referents. In all these cases, visual, auditory, somatosensory, or other modalities have to be integrated, and learning must be crossmodal. Indeed, the broad range of acquired human skills is crossmodal, and many of the most advanced human capabilities, such as those involved in social cognition, require learning from the richest combinations of crossmodal information.

In contrast, even the very best systems in Artificial Intelligence (AI) and robotics have taken only tiny steps in this direction. Building a system that composes a global perspective from multiple distinct sources, types of data, and sensory modalities is a grand challenge of AI, yet it is specific enough to be studied rigorously and in sufficient detail that deep insights into these mechanisms are plausible in the near term. Crossmodal learning is a broad, interdisciplinary topic that has not yet coalesced into a single, unified field; instead, many separate fields each tackle the concerns of crossmodal learning from their own perspective, with currently little overlap. By focusing on crossmodal learning, this Research Topic brings together recent studies demonstrating avenues of progress in artificial intelligence, robotics, psychology, and neuroscience.
Several articles in this Research Topic review recent developments in this emerging field and are thus well suited to give the reader an overview and a compact introduction to several aspects of particular interest.
@Article{ZWSZERFX22,
  author  = {Zhang, Jianwei and Wermter, Stefan and Sun, Fuchun and Zhang, Changshui and Engel, Andreas K. and Röder, Brigitte and Fu, Xiaolan and Xue, Gui},
  title   = {Editorial: Cross-Modal Learning: Adaptivity, Prediction and Interaction},
  journal = {Frontiers in Neurorobotics},
  year    = {2022},
  month   = {Apr},
  doi     = {10.3389/fnbot.2022.889911},
}