Widrow-Hoff Learning Rule
This rule is applicable to the supervised training of neural networks. It is independent of the activation function of the neuron used, since it minimizes the squared error between the desired output value d_i and the neuron's activation value net_i.
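As a minimal sketch of one Widrow-Hoff step, assuming the standard LMS form of the rule, Δw = c(d − net)x with net = wᵀx, and an illustrative learning constant c (the function name and values below are hypothetical):

```python
def widrow_hoff_update(w, x, d, c=0.25):
    """One Widrow-Hoff (LMS) step: w <- w + c*(d - net)*x, with net = w.x.

    The error (d - net) uses the linear activation net directly, which is
    why the rule does not depend on the neuron's activation function.
    """
    net = sum(wi * xi for wi, xi in zip(w, x))  # neuron's activation value
    return [wi + c * (d - net) * xi for wi, xi in zip(w, x)]
```

Starting from zero weights with input x = [1, 1] and desired output d = 1, one step moves the weights toward reducing the squared error (d − net)².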
Correlation Learning Rule
By substituting r = d_i into the general learning rule, we obtain the correlation learning rule: the weight increment is proportional to the product of the desired output and the input.
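A minimal sketch of this substitution, assuming the general rule has the form Δw = c·r·x so that r = d_i gives Δw = c·d_i·x (the function name and values are illustrative):

```python
def correlation_update(w, x, d, c=0.1):
    """One correlation-rule step: w <- w + c*d*x.

    Substituting r = d into the general rule delta_w = c*r*x makes the
    increment the (scaled) correlation between desired output and input.
    """
    return [wi + c * d * xi for wi, xi in zip(w, x)]
```

Note that, unlike Widrow-Hoff, no error term appears: the update depends only on d and x, not on the neuron's current response.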
Winner-take-all Learning Rule
This learning rule differs substantially from any of the rules discussed so far in this section. It can only be demonstrated and explained for an ensemble of neurons, preferably arranged in a layer of p units. This rule is an example of competitive learning, and it is used for unsupervised network training. Typically, winner-take-all learning is used for learning the statistical properties of inputs.
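A minimal sketch for a layer of p units, assuming the common form of the rule in which the winner m is the neuron with the largest activation and only its weights move toward the input, Δw_m = α(x − w_m) (the function name and the learning constant α are illustrative):

```python
def winner_take_all_update(W, x, alpha=0.5):
    """One winner-take-all step for a layer of p neurons.

    W is a list of p weight vectors (one row per neuron). The neuron with
    the largest activation net_i = w_i.x wins; only its weights are pulled
    toward the input, so the rule is competitive and unsupervised.
    """
    nets = [sum(wi * xi for wi, xi in zip(row, x)) for row in W]
    m = max(range(len(W)), key=lambda i: nets[i])  # index of the winner
    W[m] = [wi + alpha * (xi - wi) for wi, xi in zip(W[m], x)]
    return m, W
```

Repeated over many inputs, the winning weight vectors drift toward cluster centers of the input distribution, which is the sense in which the rule learns statistical properties of the inputs.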
Outstar Learning Rule
The outstar learning rule is another rule that is best explained when neurons are arranged in a layer. It is designed to produce a desired response d of the layer of p neurons. The rule is used to learn repetitive and characteristic properties of input/output relationships. Although it is concerned with supervised learning, it also allows the network to extract statistical properties of the input and output signals.
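A minimal sketch, assuming the common form of the rule in which the fan-out weights from one node to the layer of p neurons move toward the desired response vector d, Δw_j = β(d_j − w_j) (the function name and the learning constant β are illustrative):

```python
def outstar_update(w_out, d, beta=0.5):
    """One outstar step for the p fan-out weights of a single source node.

    Each weight w_j moves a fraction beta of the way toward the desired
    response d_j of neuron j, so the weight vector converges toward d.
    """
    return [wj + beta * (dj - wj) for wj, dj in zip(w_out, d)]
```

Because each weight is pulled toward the corresponding component of d, repeating the update over many presentations makes the fan-out weights settle near the (time-averaged) desired response, which is how the rule captures repetitive properties of the input/output relationship.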