A back-propagation network is a neural network that can be trained by the back-propagation algorithm, also called the generalized delta rule (Haykin, 1994).

The algorithm:

**1. Create a neural network**

Create a neural network consisting of input units, hidden-layer units and output units.

**2. Set random weights**

Set random weights for the connections from the input layer to the hidden layer and from the hidden layer to the output layer.

**3. Calculate outputs**

Calculate the outputs of the hidden-layer units and the output units. The outputs depend on the input values and on the weights of the connections leading to the units.

**4. Calculate the error**

Calculate the error of the output units by comparing the actual output with the desired output value(s), then propagate this error back to the hidden units.

**5. Change weights**

Change the weights of the connections so that the error decreases; with the new weights, the network's output moves closer to the desired output.

**6. Repeat steps 3 to 5**

Repeat steps 3 to 5 a chosen number of times, or until the error is small enough.
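
For sigmoid units, steps 4 and 5 are usually written as the generalized delta rule. The symbols below are the conventional ones, not taken from the text above: $o_k$ is the output of unit $k$, $t_k$ its desired (target) output, and $\eta$ the learning rate. For an output unit $k$:

$$\delta_k = o_k (1 - o_k)(t_k - o_k)$$

For a hidden unit $j$, the error is the weighted sum of the output errors it contributes to:

$$\delta_j = o_j (1 - o_j) \sum_k \delta_k \, w_{jk}$$

Each weight is then changed in proportion to the error of the unit it feeds and the output of the unit it comes from:

$$\Delta w_{ij} = \eta \, \delta_j \, o_i$$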

- The XOR program - 2 input, 2 hidden
- Extended XOR program - 3 input, 3 hidden
- Object-oriented back-propagation program
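
The first program above (2 input units, 2 hidden units) might be sketched as follows. This is a minimal illustration of steps 1 to 6, not the actual program: the network layout (one output unit, bias weights, learning rate 0.5, 10000 repetitions) is an assumption.

```python
import math
import random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Steps 1 and 2: a 2-input, 2-hidden, 1-output network with random weights.
# Index 2 of the input and hidden vectors holds a constant bias term.
w_ih = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
w_ho = [random.uniform(-1, 1) for _ in range(3)]

def forward(inputs):
    x = inputs + [1.0]  # append bias input
    h = [sigmoid(sum(x[i] * w_ih[i][j] for i in range(3))) for j in range(2)]
    o = sigmoid(sum((h + [1.0])[j] * w_ho[j] for j in range(3)))
    return x, h, o

def predict(inputs):
    return forward(inputs)[2]

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
lr = 0.5  # learning rate (assumed value)

def total_error():
    return sum((t - predict(i)) ** 2 for i, t in data)

err_before = total_error()

# Step 6: repeat steps 3 to 5 a chosen number of times.
for _ in range(10000):
    for inputs, target in data:
        # Step 3: calculate the outputs of the hidden and output units.
        x, h, o = forward(inputs)
        hb = h + [1.0]
        # Step 4: error terms (deltas) for the output and hidden units.
        delta_o = o * (1 - o) * (target - o)
        delta_h = [h[j] * (1 - h[j]) * delta_o * w_ho[j] for j in range(2)]
        # Step 5: change the weights (generalized delta rule).
        for j in range(3):
            w_ho[j] += lr * delta_o * hb[j]
        for i in range(3):
            for j in range(2):
                w_ih[i][j] += lr * delta_h[j] * x[i]

err_after = total_error()
```

After training, `predict` should map the four XOR input pairs closer to their targets than the random network did, although convergence on XOR is not guaranteed for every random initialization.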