
Introduction to Back-propagation

For an introduction to back-propagation, see Generation5.org - http://www.generation5.org/bp.shtml[1]. That site covers the background on Artificial Intelligence, Neural Networks and Back-propagation that is useful before you start.

A back-propagation network is a neural network that can be trained by the back-propagation algorithm, also called the generalized delta rule (Haykin, 1994).
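The generalized delta rule can be stated compactly; in the usual notation (a sketch, with symbol names assumed rather than taken from this text: eta is the learning rate, delta_j the error term of the receiving unit j, and o_i the output of the sending unit i):

\Delta w_{ij} = \eta \, \delta_j \, o_i

Each weight is nudged in proportion to how much the unit it feeds contributed to the error.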

The algorithm:

  1. Create a neural network
    Create a neural network consisting of input units, hidden layer units and output units.
  2. Set random weights
    Set random weights for the connections from input to hidden and hidden to output.
  3. Calculate outputs
    Calculate the outputs of the hidden layer units and the output units. The outputs depend on the input values and the weights to the units.
  4. Calculate the error
    Calculate the error of the output units from the desired output value(s), then propagate it back to obtain the error of the hidden units.
  5. Change weights
    Adjust the weights of the connections in proportion to the calculated errors, so that the network's output moves toward the desired output.
  6. Repeat step 3 to 5
    Repeat steps 3 to 5 a fixed number of times, or until the network's error falls below a chosen threshold.
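The six steps above can be sketched in a few dozen lines. The following is a minimal illustration, not this text's program: the network size (2 inputs, 4 hidden units, 1 output), sigmoid activation, learning rate and epoch count are all assumed choices, and the training data is the XOR problem treated in the next section.

```python
import math
import random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Step 1 + 2: create the network and set random weights.
# Each row holds the incoming weights of one unit, plus a bias weight.
n_in, n_hid, n_out = 2, 4, 1
w_ih = [[random.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hid)]
w_ho = [[random.uniform(-1, 1) for _ in range(n_hid + 1)] for _ in range(n_out)]

def forward(x):
    # Step 3: calculate hidden and output activations (1.0 is the bias input).
    hid = [sigmoid(sum(w * v for w, v in zip(row, x + [1.0]))) for row in w_ih]
    out = [sigmoid(sum(w * v for w, v in zip(row, hid + [1.0]))) for row in w_ho]
    return hid, out

def train_step(x, target, lr=0.5):
    hid, out = forward(x)
    # Step 4: error (delta) of the output units, then of the hidden units.
    d_out = [(t - o) * o * (1 - o) for t, o in zip(target, out)]
    d_hid = [h * (1 - h) * sum(d * w_ho[k][j] for k, d in enumerate(d_out))
             for j, h in enumerate(hid)]
    # Step 5: change the weights by the generalized delta rule.
    for k, d in enumerate(d_out):
        for j, v in enumerate(hid + [1.0]):
            w_ho[k][j] += lr * d * v
    for j, d in enumerate(d_hid):
        for i, v in enumerate(x + [1.0]):
            w_ih[j][i] += lr * d * v

# The XOR training set: output is 1 only when exactly one input is 1.
data = [([0.0, 0.0], [0.0]), ([0.0, 1.0], [1.0]),
        ([1.0, 0.0], [1.0]), ([1.0, 1.0], [0.0])]

def total_error():
    return sum((t[0] - forward(x)[1][0]) ** 2 for x, t in data)

err_before = total_error()
# Step 6: repeat steps 3 to 5 many times over the training set.
for _ in range(10000):
    for x, t in data:
        train_step(x, t)
err_after = total_error()
```

After training, `err_after` should be much smaller than `err_before`, and `forward(x)` on each XOR pattern should produce an output close to the target.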



Copyright © 2001, R.M. Morriën