Neural Networks with Fixed Binary Random Projections Improve Accuracy in Classifying Noisy Data

Zijin Yang, Achim Schilling, Andreas Maier, Patrick Krauss
Friedrich-Alexander Universität Erlangen-Nürnberg, Lehrstuhl für Machine Intelligence

Abstract

The trend of Artificial Neural Networks becoming "bigger" and "deeper" persists. Training these networks with back-propagation is both biologically implausible and time-consuming. Hence, we investigate how far we can go with fixed binary random projections (BRPs), an approach that reduces the number of trainable parameters using localized receptive fields and binary weights. Evaluating this approach on the MNIST dataset, we find that, in contrast to models with fully trained dense weights, models using fixed localized sparse BRPs achieve equally good accuracy while saving 98% of the computations needed to generate the hidden representation of the input. Furthermore, BRPs lead to more robust performance on noisy inputs - up to 56% better than dense models.
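The projection described above - fixed binary weights restricted to localized receptive fields, with only the readout trained - can be sketched as follows. This is a minimal illustration, not the authors' implementation: the input size (28×28, as in MNIST), the number of hidden units, the patch size, and the connection density are assumed values chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_localized_brp(in_side=28, n_hidden=256, patch=5, density=0.5, rng=rng):
    """Build a fixed binary random projection matrix of shape
    (n_hidden, in_side**2).

    Each hidden unit connects only to a random patch x patch
    receptive field of the input image; within that patch, weights
    are binary (0/1), drawn once with the given density and then
    frozen. All parameter choices here are illustrative.
    """
    W = np.zeros((n_hidden, in_side, in_side))
    for h in range(n_hidden):
        r = rng.integers(0, in_side - patch + 1)  # patch row offset
        c = rng.integers(0, in_side - patch + 1)  # patch column offset
        mask = (rng.random((patch, patch)) < density).astype(float)
        W[h, r:r + patch, c:c + patch] = mask
    return W.reshape(n_hidden, -1)

W = make_localized_brp()
x = rng.random(28 * 28)        # a flattened toy "image"
h = np.maximum(W @ x, 0.0)     # fixed hidden representation; no training here
```

With these assumed settings, each hidden unit touches at most 25 of the 784 input pixels, so roughly 98% of the projection's entries are zero - the same order of sparsity behind the compute saving quoted in the abstract. Only a classifier on top of `h` would be trained.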

Poster Session 2, Neural Networks in General

