08.06.2016: Alireza Alemi

Three-threshold learning rule for optimal information storage in recurrent neural networks

 

Hebbian learning is a first choice for storing memories as point attractors in recurrent neural networks, a popular theoretical scenario for modeling memory function. The simplicity and locality of this synaptic update rule come at the cost of a poor storage capacity compared with the maximal storage capacity of recurrent neural networks, known as the Gardner bound. In this talk, by transforming the perceptron learning rule, I present an online, local learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal. The fully connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback; the memory patterns are presented online as strong afferent currents. Synapses corresponding to active inputs are modified as a function of the value of the local field with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. Between these two thresholds, potentiation or depression occurs depending on whether the local field is above or below an intermediate threshold. The simulated model achieves a capacity close to the maximal storage capacity calculated analytically. Additionally, the statistical properties of the synaptic weight matrix at maximal capacity are quantified, and results from applying this rule to another network architecture will be discussed.
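
The following is a minimal sketch of the three-threshold update as described in the abstract, not the speaker's actual implementation. The threshold values, learning rate, afferent-current strength, coding level, and pattern count are illustrative placeholders, and the non-plastic inhibitory feedback is omitted for brevity.

    import numpy as np

    N = 200                  # number of excitatory binary neurons
    f = 0.5                  # coding level: fraction of active units per pattern (assumption)
    lr = 0.01                # learning rate (assumption)
    theta_low, theta_mid, theta_high = -1.0, 0.0, 1.0   # the three thresholds (placeholders)
    ext = 2.0                # strength of the afferent current imposing the pattern (assumption)

    rng = np.random.default_rng(0)
    W = np.zeros((N, N))     # plastic recurrent weights, no self-connections
    patterns = (rng.random((10, N)) < f).astype(float)

    for xi in patterns:              # patterns are presented online, one at a time
        h = W @ xi + ext * xi        # local field: recurrent input plus strong afferent drive
        # no plasticity above the highest or below the lowest threshold
        plastic = (h > theta_low) & (h < theta_high)
        # between them: potentiate if the field exceeds the intermediate threshold, depress otherwise
        sign = np.where(h > theta_mid, 1.0, -1.0)
        # only synapses from active presynaptic inputs are modified
        W += lr * np.outer(plastic * sign, xi)
        np.fill_diagonal(W, 0.0)
        W = np.clip(W, 0.0, None)    # plastic recurrent connections are excitatory, so weights stay non-negative

In this sketch the strong afferent current shifts the fields of the neurons that should be active above the intermediate threshold, so the same field-based rule effectively potentiates synapses onto active neurons and depresses synapses onto inactive ones, with no plasticity once a neuron's field is safely far from threshold.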