Efficient Facial Feature Learning with Wide Ensemble-based Convolutional Neural Networks
The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20),
pages 1--1,
doi: 10.1609/aaai.v34i04.6037
- Feb 2020
Ensemble methods, traditionally built with independently trained de-correlated models, have proven efficient at reducing the remaining residual generalization error, resulting in robust and accurate models for real-world applications. In the context of deep learning, however, training an ensemble of deep networks is costly and generates high redundancy, which is inefficient. In this paper, we present experiments on Ensembles with Shared Representations (ESRs) based on convolutional networks to demonstrate, quantitatively and qualitatively, their data processing efficiency and scalability to large-scale datasets of facial expressions. We show that redundancy and computational load can be dramatically reduced by varying the branching level of the ESR without loss of diversity and generalization power, which are both important for ensemble performance. Experiments on large-scale datasets suggest that ESRs reduce the remaining residual generalization error on the AffectNet and FER+ datasets, reach human-level performance, and outperform state-of-the-art methods on facial expression recognition in the wild using emotion and affect concepts.
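To make the idea of shared representations concrete, the sketch below outlines one possible ESR-style network in PyTorch: early convolutional layers are computed once and shared, while several branches learn their own higher-level features and classifiers, and their predictions are averaged. Layer sizes, the branching point, and the eight output classes are illustrative assumptions, not the exact architecture from the paper.

```python
# Minimal ESR-style sketch (assumed layer sizes and branching point,
# not the paper's exact architecture).
import torch
import torch.nn as nn


class ESR(nn.Module):
    def __init__(self, num_branches: int = 4, num_classes: int = 8):
        super().__init__()
        # Shared base: early convolutional layers computed once for all branches,
        # removing the redundancy of training independent networks.
        self.shared_base = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Branches: each learns its own higher-level features and classifier,
        # preserving ensemble diversity.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(128, num_classes),
            )
            for _ in range(num_branches)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        shared = self.shared_base(x)                      # computed once
        logits = [branch(shared) for branch in self.branches]
        return torch.stack(logits).mean(dim=0)            # averaged ensemble prediction


# Usage: a batch of 96x96 face crops yields one averaged prediction per image.
model = ESR()
out = model(torch.randn(2, 3, 96, 96))
print(out.shape)  # torch.Size([2, 8])
```

In this sketch, moving the branching point earlier shares less computation but gives the branches more room to diverge, while branching later saves more compute at the cost of diversity, which is the trade-off the paper studies.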
@InProceedings{SMW20,
  author    = {Siqueira, Henrique and Magg, Sven and Wermter, Stefan},
  title     = {Efficient Facial Feature Learning with Wide Ensemble-based Convolutional Neural Networks},
  booktitle = {The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)},
  pages     = {1--1},
  year      = {2020},
  month     = {Feb},
  doi       = {10.1609/aaai.v34i04.6037},
}