Abstract:
Recent developments in the field of artificial neural networks have opened up potential applications in many fields. Artificial neural networks are inspired by studies of the biological nervous system and are composed of many simple nonlinear computational elements, called neurons, connected by links of variable weights. These weights are adjusted by a learning process and finally settle to a set of values that realizes the task at hand. The most popular learning algorithm for feed-forward connectionist networks is the Back-propagation algorithm. This algorithm defines the sum of squared errors measured at the output layer as the error function and updates the weights to minimize this error by the steepest-descent method.
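For reference, a minimal statement of these quantities in common notation (the symbols are illustrative; the abstract itself gives no formulas): for target outputs $t_k$ and network outputs $o_k$,
\[
E = \frac{1}{2}\sum_k (t_k - o_k)^2, \qquad \Delta w_{ij} = -\eta \,\frac{\partial E}{\partial w_{ij}},
\]
where $\eta$ is the fixed learning-rate parameter of steepest descent.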
Major drawbacks of this algorithm are its slow convergence and the possibility of getting stuck in local minima. These drawbacks hinder its widespread application to real-world problems. Several methods have been proposed to improve its convergence, but they require additional parameters to be adjusted; a rapid increase of the learning-rate parameter can cause unstable behavior during learning and demands a more complicated training procedure. In view of these problems, the proposed research defines a new error function expressed as the exponential of the sum of squared errors measured at the output layer. The weight update derived from this modification varies the learning-rate parameter dynamically during training, as opposed to the constant learning-rate parameter used in standard Back-propagation. This adaptation of the learning rate during training is found to significantly improve the convergence speed of the Back-propagation algorithm.
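A minimal sketch of how such an error function can induce a dynamic learning rate, assuming the simplest form $\tilde{E} = e^{E}$ with $E$ the usual sum of squared errors (the exact form and any scaling constant are not specified in the abstract):
\[
\frac{\partial \tilde{E}}{\partial w_{ij}} = e^{E}\,\frac{\partial E}{\partial w_{ij}}
\quad\Longrightarrow\quad
\Delta w_{ij} = -\eta\, e^{E}\,\frac{\partial E}{\partial w_{ij}} = -\eta_{\mathrm{eff}}\,\frac{\partial E}{\partial w_{ij}},
\qquad \eta_{\mathrm{eff}} = \eta\, e^{E}.
\]
Under this reading, the effective step size $\eta_{\mathrm{eff}}$ is large while the output error $E$ is large and decays toward the base rate $\eta$ as $E \to 0$, consistent with the dynamic learning-rate behavior described above.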