Tuning Recurrent Neural Networks for Recognizing Handwritten Arabic Words

Esam Qaralleh, Gheith Abandah, Fuad Tarek Jamour

Research output: Contribution to journal › Article › peer-review

Abstract

Artificial neural networks can learn from examples and solve problems that are hard to address with ordinary rule-based programming. They have many design parameters that affect their performance, such as the number and sizes of the hidden layers. Large networks are slow, and small networks are generally less accurate. Tuning the network size is a hard task because the design space is often large and training is often a long process. We use design-of-experiments techniques to tune the recurrent neural network used in an Arabic handwriting recognition system. We show that the best results are achieved with three hidden layers and two subsampling layers. To tune the sizes of these five layers, we use a fractional factorial experiment design to limit the number of experiments to a feasible number. Moreover, we replicate each experiment configuration multiple times to overcome the randomness of the training process. The accuracy and time measurements are analyzed and modeled. The two models are then used to locate network sizes that lie on the Pareto-optimal frontier. The approach described in this paper reduces the label error from 26.2% to 19.8%.
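
To make the two-step selection described in the abstract concrete, the following is a minimal Python sketch, not the authors' code: it generates a 2^(5-2) fractional factorial design over the five layer-size factors and then keeps the configurations that lie on the Pareto-optimal frontier of (label error, training time). The generator choice D = AB, E = AC is one common resolution-III option and is an assumption here, as are all the placeholder measurements.

```python
# Hedged sketch of fractional factorial design + Pareto selection.
# Factor names, generators, and measurements are illustrative only.
from itertools import product

LOW, HIGH = -1, +1  # coded factor levels, as in standard factorial notation

def fractional_factorial_2_5_2():
    """2^(5-2) design: vary three base factors fully and alias the two
    subsampling factors via the generators D = AB and E = AC (an assumed,
    common resolution-III choice). Yields 8 runs instead of the 32 of a
    full 2^5 design."""
    runs = []
    for a, b, c in product((LOW, HIGH), repeat=3):
        runs.append({"hidden1": a, "hidden2": b, "hidden3": c,
                     "subsample1": a * b,   # generator D = AB
                     "subsample2": a * c})  # generator E = AC
    return runs

def dominates(q, p):
    """q dominates p if it is no worse on both axes and strictly better
    on at least one (lower label error and lower time are both better)."""
    return (q["error"] <= p["error"] and q["time"] <= p["time"]
            and (q["error"] < p["error"] or q["time"] < p["time"]))

def pareto_frontier(points):
    """Keep only the (error, time) measurements no other point dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

if __name__ == "__main__":
    for run in fractional_factorial_2_5_2():
        print(run)
    # Fake replicated measurements, just to exercise pareto_frontier().
    measurements = [
        {"config": 0, "error": 26.2, "time": 3.0},
        {"config": 1, "error": 19.8, "time": 7.5},
        {"config": 2, "error": 22.0, "time": 9.0},  # dominated by config 1
    ]
    print("Pareto-optimal:", pareto_frontier(measurements))
```

In the paper's workflow the error and time values for each run would come from fitted models of the replicated training measurements rather than raw observations; the frontier step itself is unchanged.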
Original language: English (US)
Pages (from-to): 533-542
Number of pages: 10
Journal: Journal of Software Engineering and Applications
Volume: 6
Issue number: 10
State: Published - Oct 4, 2013
