Effectiveness of Feature Extraction in Neural Network Architectures for Novelty Detection
Proceedings of the International Conference on Artificial Neural Networks,
pages 976--981, September 1999.
doi: 10.1049/cp:19991239
This paper examines the performance of seven neural
network architectures in classifying and detecting novel
events contained within data collected from turbine sensors.
Several multi-layer perceptrons were built and trained
using backpropagation, conjugate gradient and quasi-Newton
training algorithms. In addition, linear networks, radial
basis function networks, probabilistic networks and Kohonen
self-organising feature maps were built and trained, with
the objective of discovering the most appropriate
architecture. Because of the large input set involved in
practice, feature extraction is examined as a means of
reducing the number of input features; the techniques
considered are stepwise linear regression and a genetic
algorithm. The results of these experiments demonstrate an
improvement in classification performance for multi-layer
perceptrons, Kohonen and probabilistic networks, using both
the genetic algorithm and stepwise linear regression, over
the other architectures considered in this work. Stepwise
linear regression also performed better than the genetic
algorithm for feature extraction. For classification
problems involving a clear two-class structure, we consider
a synthesis of stepwise linear regression with any of the
architectures listed above to offer demonstrable
improvements in performance on important real-world tasks.
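
As an illustration only (not the authors' code or data): a minimal sketch
of the pipeline the abstract describes, using scikit-learn's
SequentialFeatureSelector with a linear model as a stand-in for stepwise
linear regression, followed by a multi-layer perceptron classifier. The
synthetic data, number of retained features and network size are
assumptions for illustration, not values from the paper.

# Hypothetical sketch: stepwise-style feature selection + MLP classification.
# Synthetic data stands in for the turbine sensor set used in the paper.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Two-class data with many inputs, mimicking a large sensor feature set.
X, y = make_classification(n_samples=500, n_features=40, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Forward stepwise selection driven by a linear model (10 retained features
# is an arbitrary choice; the paper's selection criterion may differ).
selector = SequentialFeatureSelector(LinearRegression(),
                                     n_features_to_select=10,
                                     direction="forward")
selector.fit(X_train, y_train)
X_train_sel = selector.transform(X_train)
X_test_sel = selector.transform(X_test)

# Multi-layer perceptron trained on the reduced feature set.
mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0)
mlp.fit(X_train_sel, y_train)
print("test accuracy:", mlp.score(X_test_sel, y_test))
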
@InProceedings{AWM99,
  author    = {Addison, J. F. Dale and Wermter, Stefan and MacIntyre, J.},
  title     = {Effectiveness of Feature Extraction in Neural Network Architectures for Novelty Detection},
  booktitle = {Proceedings of the International Conference on Artificial Neural Networks},
  pages     = {976--981},
  year      = {1999},
  month     = sep,
  doi       = {10.1049/cp:19991239},
}