Bootstrapping Knowledge Graphs From Images and Text

Jiayuan Mao, Yuan Yao, Stefan Heinrich, Tobias Hinz, Cornelius Weber, Stefan Wermter, Zhiyuan Liu, Maosong Sun
Frontiers in Neurorobotics, Volume 13, Number 93, doi: 10.3389/fnbot.2019.00093, Nov 2019, Open Access
The problem of generating structured Knowledge Graphs (KGs) is difficult and open, yet relevant to a range of tasks related to decision making and information augmentation. A promising approach is to frame KG generation as building a relational representation of inputs (e.g., textual paragraphs or natural images), where nodes represent the entities and edges represent the relations. This procedure is naturally a mixture of two phases: extracting primary relations from the input, and completing the KG with reasoning. In this paper, we propose a hybrid KG builder that combines these two phases in a unified framework and generates KGs from scratch. Specifically, we employ a neural relation extractor that resolves primary relations from the input and a differentiable inductive logic programming (ILP) model that iteratively completes the KG. We evaluate our framework in both the textual and visual domains and achieve comparable performance on relation extraction datasets based on Wikidata and the Visual Genome. The framework surpasses neural baselines by a noticeable margin when reasoning out dense KGs and performs particularly well for rare relations.
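To make the two-phase pipeline concrete, below is a minimal, hypothetical Python sketch of the extract-then-reason loop described in the abstract. It is not the paper's implementation: the caller-supplied score_relation function stands in for the neural relation extractor, and a small set of hand-written Horn-style rules (here, a made-up "part_of" transitivity rule) stands in for the differentiable ILP model.

    # Illustrative sketch only: a two-phase KG builder in the spirit of the
    # abstract. The relation scorer and the rule set are hypothetical
    # placeholders, not the paper's neural extractor or ILP model.
    from itertools import product

    def extract_primary_relations(entities, score_relation, threshold=0.5):
        """Phase 1: score every ordered entity pair and keep confident triples."""
        triples = set()
        for head, tail in product(entities, repeat=2):
            if head == tail:
                continue
            relation, confidence = score_relation(head, tail)
            if relation is not None and confidence >= threshold:
                triples.add((head, relation, tail))
        return triples

    def complete_kg(triples, rules, max_iters=5):
        """Phase 2: iteratively apply completion rules until a fixed point."""
        kg = set(triples)
        for _ in range(max_iters):
            new_facts = set()
            for rule in rules:
                new_facts |= rule(kg)
            if new_facts <= kg:   # nothing new inferred: fixed point reached
                break
            kg |= new_facts
        return kg

    def part_of_transitivity(kg):
        """Example rule: if (a part_of b) and (b part_of c), infer (a part_of c)."""
        inferred = set()
        part_of = {(h, t) for h, r, t in kg if r == "part_of"}
        for (a, b), (c, d) in product(part_of, repeat=2):
            if b == c:
                inferred.add((a, "part_of", d))
        return inferred

Given a list of entities and a scorer, calling complete_kg(extract_primary_relations(entities, scorer), [part_of_transitivity]) mirrors the bootstrapping idea: primary relations are extracted first, then the graph is densified by repeated rule application.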


@Article{MYHHWWLS19,
  author  = {Mao, Jiayuan and Yao, Yuan and Heinrich, Stefan and Hinz, Tobias and Weber, Cornelius and Wermter, Stefan and Liu, Zhiyuan and Sun, Maosong},
  title   = {Bootstrapping Knowledge Graphs From Images and Text},
  journal = {Frontiers in Neurorobotics},
  volume  = {13},
  number  = {93},
  year    = {2019},
  month   = {Nov},
  doi     = {10.3389/fnbot.2019.00093},
}