Spatial relation learning in complementary scenarios with deep neural networks

Frontiers in Neurorobotics, Jul 2022, Open Access. doi: 10.3389/fnbot.2022.844753
A cognitive agent performing in the real world needs to learn relevant concepts about its environment (e.g., objects, colors, and shapes) and react accordingly. Beyond the concepts themselves, it needs to learn relations between them, in particular spatial relations between objects. In this paper, we propose three approaches that allow a cognitive agent to learn spatial relations. First, using an embodied model, the agent learns to reach toward an object based on simple instructions involving left-right relations. Since the realism and complexity of this embodied setting do not permit large-scale and diverse experiences, as a second approach we devise a simple visual dataset for geometric feature learning and show that recent reasoning models can learn directional relations in different frames of reference. Yet, even together, the embodied and simple simulation approaches do not provide sufficient experiences. To close this gap, we propose as a third approach utilizing knowledge bases for disembodied spatial relation reasoning. Since the three approaches (i.e., embodied learning, learning from simple visual data, and use of knowledge bases) are complementary, we conceptualize a cognitive architecture that combines them in the context of spatial relation learning.

@Article{LYOLWLW22,
  author  = {Lee, Jae Hee and Yao, Yuan and Özdemir, Ozan and Li, Mengdi and Weber, Cornelius and Liu, Zhiyuan and Wermter, Stefan},
  title   = {Spatial relation learning in complementary scenarios with deep neural networks},
  journal = {Frontiers in Neurorobotics},
  year    = {2022},
  month   = {Jul},
  doi     = {10.3389/fnbot.2022.844753},
}