Knowing Earlier what Right Means to You: A Comprehensive VQA Dataset for Grounding Relative Directions via Multi-Task Learning

IJCAI Workshop on Spatio-Temporal Reasoning and Learning, Jul 2022
Abstract: Spatial reasoning poses a particular challenge for intelligent agents and is at the same time a prerequisite for their successful interaction and communication in the physical world. One such reasoning task is to describe the position of a target object with respect to the intrinsic orientation of some reference object via relative directions. In this paper, we introduce GRiD-A-3D, a novel diagnostic visual question-answering (VQA) dataset based on abstract objects. Our dataset allows for a fine-grained analysis of end-to-end VQA models' capabilities to ground relative directions. At the same time, model training requires considerably fewer computational resources than existing datasets, yet yields comparable or even higher performance. Along with the new dataset, we provide a thorough evaluation of two widely known end-to-end VQA architectures trained on GRiD-A-3D. We demonstrate that within a few epochs, the subtasks required to reason over relative directions, such as recognizing and locating objects in a scene and estimating their intrinsic orientations, are learned in the order in which relative directions are intuitively processed.
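To make the task concrete: deciding on a relative direction amounts to expressing the target's position in the reference object's intrinsic frame. The short Python sketch below is purely illustrative, not code from the paper or the GRiD-A-3D dataset, and all names in it are hypothetical; it rotates a target into a reference object's 2D frame and classifies it as in front of, behind, left of, or right of the reference.

import math

def relative_direction(ref_pos, ref_yaw, target_pos):
    """Classify where target_pos lies relative to a reference object at
    ref_pos with intrinsic orientation ref_yaw (radians, 0 = facing +x).
    Hypothetical illustration, not part of GRiD-A-3D.
    """
    # Translate the target into the reference object's local frame ...
    dx = target_pos[0] - ref_pos[0]
    dy = target_pos[1] - ref_pos[1]
    # ... then rotate by -ref_yaw so the reference faces along the +x axis.
    fwd = dx * math.cos(ref_yaw) + dy * math.sin(ref_yaw)    # front (+) / behind (-)
    side = -dx * math.sin(ref_yaw) + dy * math.cos(ref_yaw)  # left (+) / right (-)
    if abs(fwd) >= abs(side):
        return "in front of" if fwd > 0 else "behind"
    return "left of" if side > 0 else "right of"

# A reference object facing +y (yaw = 90 degrees) has a target at (2, 0)
# on its right-hand side:
print(relative_direction((0.0, 0.0), math.pi / 2, (2.0, 0.0)))  # -> right of

An end-to-end VQA model must learn this change of frame implicitly: locate both objects, estimate the reference object's intrinsic orientation, and only then combine the two, which matches the processing order in which the paper reports these subtasks being learned.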


@InProceedings{AKLWW22,
  author    = {Ahrens, Kyra and Kerzel, Matthias and Lee, Jae Hee and Weber, Cornelius and Wermter, Stefan},
  title     = {Knowing Earlier what Right Means to You: A Comprehensive VQA Dataset for Grounding Relative Directions via Multi-Task Learning},
  booktitle = {IJCAI Workshop on Spatio-Temporal Reasoning and Learning},
  year      = {2022},
  month     = {Jul},
}