Learning objects from RGB-D sensors using point cloud-based neural networks.
Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN),
pages 439-444,
Jan 2015
In this paper we present a scene understanding approach for assistive
robotics based on learning to recognize different objects from RGB-D
devices. Using the depth information, it is possible to compute descriptors
that capture the geometrical relations among the points that constitute an
object, or to extract features from multiple viewpoints. We developed a
framework for testing different neural models that receive this depth
information as input. We also propose a novel approach that uses
three-dimensional RGB-D information as input to Convolutional Neural
Networks. We obtained F1-scores greater than 0.9 for the majority of the
objects tested, showing that the adopted approach is also effective for
classification.
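The abstract does not ship code, and the paper predates today's common deep-learning toolkits, but the core idea of feeding three-dimensional RGB-D data to a CNN can be sketched briefly. Below is a minimal, illustrative PyTorch sketch, not the authors' architecture: it voxelizes a colored point cloud into a 4-channel grid (RGB plus depth-derived occupancy) and classifies it with a small 3D CNN. All names (VoxelCNN, voxelize), channel counts, and grid sizes are assumptions made for the example.

import torch
import torch.nn as nn

def voxelize(points, colors, grid=32):
    # points: (N, 3) coordinates assumed normalized to the unit cube [0, 1)
    # colors: (N, 3) RGB values in [0, 1]
    # returns a (4, grid, grid, grid) tensor: 3 color channels + 1 occupancy channel
    vox = torch.zeros(4, grid, grid, grid)
    idx = (points.clamp(0, 1 - 1e-6) * grid).long()
    vox[:3, idx[:, 0], idx[:, 1], idx[:, 2]] = colors.t()  # write RGB per occupied cell
    vox[3, idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0          # occupancy from depth points
    return vox

class VoxelCNN(nn.Module):
    # A small 3D CNN over the voxelized RGB-D grid (illustrative, not the paper's model).
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(4, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8 * 8, num_classes)  # for a 32^3 input grid

    def forward(self, x):        # x: (batch, 4, 32, 32, 32)
        return self.classifier(self.features(x).flatten(1))

# Usage: classify one synthetic point cloud of 1000 points into 5 hypothetical classes.
points, colors = torch.rand(1000, 3), torch.rand(1000, 3)
logits = VoxelCNN(num_classes=5)(voxelize(points, colors).unsqueeze(0))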
@InProceedings{BBPW15,
author = {Borghetti, Marcelo and Barros, Pablo and Parisi, German I. and Wermter, Stefan},
title = {Learning objects from RGB-D sensors using point cloud-based neural networks},
booktitle = {Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN)},
pages = {439--444},
year = {2015},
month = {Jan},
}