Visual Distant Supervision for Scene Graph Generation
International Conference on Computer Vision (ICCV), pages 15816-15826, Oct 2021
Scene graph generation aims to identify objects and their relations in images, providing structured image representations that can facilitate numerous applications in computer vision. However, scene graph models usually require supervised learning on large quantities of labeled data with intensive human annotation. In this work, we propose visual distant supervision, a novel paradigm of visual relation learning, which can train scene graph models without any human-labeled data. The intuition is that by aligning commonsense knowledge bases and images, we can automatically create large-scale labeled data to provide distant supervision for visual relation learning. To alleviate the noise in distantly labeled data, we further propose a framework that iteratively estimates the probabilistic relation labels and eliminates the noisy ones. Comprehensive experimental results show that our distantly supervised model outperforms strong weakly supervised and semi-supervised baselines. By further incorporating human-labeled data in a semi-supervised fashion, our model outperforms state-of-the-art fully supervised models by a large margin (e.g., 8.3 micro- and 7.8 macro-recall@50 improvements for predicate classification in Visual Genome evaluation). We make the data and code for this paper publicly available at https://github.com/thunlp/VisualDS.
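As a rough illustration of the alignment idea sketched in the abstract, the following Python example assigns candidate (noisy) predicate labels to detected object pairs whose class pair appears in a commonsense knowledge base of (subject, predicate, object) triples. All names, KB entries, and detector outputs below are hypothetical and do not come from the released code at the GitHub link; the iterative denoising framework is not shown.

```python
from collections import defaultdict

# Toy commonsense KB: a few illustrative (subject, predicate, object) triples.
# In practice these would come from a large-scale commonsense knowledge base.
KB_TRIPLES = [
    ("person", "ride", "horse"),
    ("person", "hold", "cup"),
    ("cup", "on", "table"),
]

def build_kb_index(triples):
    """Index KB triples by (subject class, object class) for fast lookup."""
    index = defaultdict(set)
    for subj, pred, obj in triples:
        index[(subj, obj)].add(pred)
    return index

def distantly_label(detections, kb_index):
    """Assign candidate predicate labels to every ordered pair of detected
    objects whose class pair is covered by the KB; unmatched pairs get none."""
    labeled_pairs = []
    for i, subj in enumerate(detections):
        for j, obj in enumerate(detections):
            if i == j:
                continue
            preds = kb_index.get((subj["class"], obj["class"]))
            if preds:
                labeled_pairs.append((subj, obj, sorted(preds)))
    return labeled_pairs

if __name__ == "__main__":
    # Hypothetical detector output for one image: class names and boxes.
    detections = [
        {"class": "person", "box": (10, 20, 120, 300)},
        {"class": "horse", "box": (100, 50, 400, 320)},
    ]
    kb_index = build_kb_index(KB_TRIPLES)
    for subj, obj, preds in distantly_label(detections, kb_index):
        print(subj["class"], preds, obj["class"])
```

The labels produced this way are inherently noisy (a person and a horse in the same image need not exhibit the "ride" relation), which is exactly the noise the paper's iterative label-estimation framework is designed to reduce.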
@InProceedings{YZHLWLWS21,
  author    = {Yao, Yuan and Zhang, Ao and Han, Xu and Li, Mengdi and Weber, Cornelius and Liu, Zhiyuan and Wermter, Stefan and Sun, Maosong},
  title     = {Visual Distant Supervision for Scene Graph Generation},
  booktitle = {International Conference on Computer Vision (ICCV)},
  pages     = {15816--15826},
  year      = {2021},
  month     = {Oct},
  publisher = {IEEE/CVF},
}