Lecture 17
Hebbian Learning Rule
For the Hebbian learning rule, the learning signal is simply equal to the neuron's output (Hebb 1949). The weight increment is therefore ∆wij = c oi xj, where c > 0 is the learning constant, oi is the neuron's output, and xj is the j-th input.
This learning rule requires the weights to be initialized at small random values around wi = 0 prior to learning. The Hebbian learning rule represents purely feedforward, unsupervised learning. The rule implements the interpretation of the classic statement: "When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased." (Hebb 1949.)

The rule states that if the cross-product of output and input, or correlation term oi xj, is positive, the weight wij increases; otherwise the weight decreases. It can be seen that the output is strengthened in turn for each input presented. Therefore, frequent input patterns will have the greatest influence on the neuron's weight vector and will eventually produce the largest output.

Since its inception, the Hebbian rule has evolved in a number of directions. In some cases, the Hebbian rule needs to be modified to counteract the unconstrained growth of weight values, which takes place when excitations and responses consistently agree in sign. One such modification is the Hebbian learning rule with saturation of the weights at a certain preset level.

Note that many other learning rules in this text reflect the Hebbian rule principle. Below, most of the learning rules are illustrated with simple numerical examples. The subscript of the weight vector is not used in the examples since only a single weight vector is adapted there.
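The update described above can be sketched in code. The following is a minimal illustration, not an example from the text: the input vectors, the learning constant c = 1, and the choice of a bipolar sign activation are all assumptions made for demonstration. The optional w_max argument implements the saturation variant mentioned above by clipping the weights at a preset level.

```python
import numpy as np

def hebbian_step(w, x, c=1.0, f=np.sign, w_max=None):
    """One Hebbian update: delta_w = c * f(w.x) * x.

    f is the neuron's activation (here a bipolar sign function).
    If w_max is given, weights saturate (are clipped) at that level.
    """
    o = f(w @ x)                # neuron output o = f(w^T x)
    w = w + c * o * x           # correlation term o * x drives the increment
    if w_max is not None:
        w = np.clip(w, -w_max, w_max)  # saturation variant
    return w

# Small random initial weights around zero (hypothetical values)
rng = np.random.default_rng(0)
w = 0.1 * rng.standard_normal(4)

# Hypothetical input patterns: one presented often, one only once
x_frequent = np.array([1.0, -2.0, 1.5, 0.0])
x_rare = np.array([1.0, -0.5, -2.0, 1.5])

for _ in range(5):
    w = hebbian_step(w, x_frequent)
w = hebbian_step(w, x_rare)
print(w)
```

After repeated presentations, the weight vector is dominated by the frequent pattern, which then yields the largest output magnitude, while without clipping the weights grow without bound whenever output and input consistently agree in sign.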