We present a new method to train the members of a committee of one-hidden-layer neural nets. Instead of training the individual nets on subsets of the training data, we preprocess the training data differently for each member so that the corresponding errors are decorrelated. On the MNIST digit recognition benchmark we obtain a recognition error rate of 0.39%, using a committee of 25 one-hidden-layer neural nets, which is on par with state-of-the-art recognition rates of more complicated systems. © 2011 IEEE.
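The committee step itself can be sketched as averaging the class-probability outputs of the member nets before taking the argmax. The snippet below is a minimal illustration with stand-in random outputs; the per-member preprocessing that decorrelates the errors, and the nets themselves, are assumed and not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
n_members, n_samples, n_classes = 25, 4, 10

# Stand-in outputs for 25 one-hidden-layer nets: softmax over 10 digit
# classes for each sample (real members would each see differently
# preprocessed inputs so that their errors decorrelate).
logits = rng.normal(size=(n_members, n_samples, n_classes))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

committee = probs.mean(axis=0)          # average member probabilities
predictions = committee.argmax(axis=1)  # final class per sample
print(predictions.shape)
```

Averaging probabilities (rather than majority-voting hard labels) lets confident members outweigh uncertain ones.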
Published in: Proceedings of the International Conference on Document Analysis and Recognition, ICDAR, Dec 2 2011.