Mean field theory neural networks for feature recognition, content addressable memory and optimization

Research output: Contribution to journal › Article in a scientific journal


Various applications of the mean field theory (MFT) technique for obtaining solutions close to optimal minima in feedback networks are reviewed. Using this method in the context of the Boltzmann machine gives rise to a fast deterministic learning algorithm with performance comparable to that of the backpropagation algorithm (BP) in feature recognition applications. Since MFT learning is bidirectional, its use can be extended from purely functional mappings to a content addressable memory. The storage capacity of such a network grows like O((10–20)n_H) with the number of hidden units n_H. The MFT learning algorithm is local and thus has an advantage over BP with respect to VLSI implementations. It is also demonstrated how MFT and BP are related in situations where the number of input units is much larger than the number of output units. In the context of finding good solutions to difficult optimization problems, the MFT technique again turns out to be extremely powerful. The quality of the solutions for large travelling salesman and graph partition problems is on a par with that obtained by optimally tuned simulated annealing methods. The algorithm employed here is based on multistate K-valued (K > 2) neurons rather than binary (K = 2) neurons. This method is also advantageous for more nested decision problems such as scheduling. The MFT equations are isomorphic to resistance-capacitance equations and hence map naturally onto custom-made hardware. Given the diversity of successful application areas, the MFT approach thus constitutes a convenient platform for hardware development.
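The optimization use of MFT with K-valued (Potts) neurons described in the abstract can be sketched as follows. This is a minimal, hypothetical mean field annealing routine for K-way graph partitioning, assuming a standard softmax mean field update with a quadratic balance term; the function name, the parameters (`gamma`, the cooling schedule), and the toy graph are illustrative assumptions, not taken from the paper.

```python
import math
import random

def mft_partition(adj, K=2, gamma=1.0, T0=2.0, T_min=0.01, cooling=0.95, seed=0):
    """K-way graph partitioning by Potts mean field annealing (a sketch,
    not the paper's exact formulation).

    adj[i][j] is a symmetric edge weight; v[i][a] is the mean field
    (soft assignment) of node i to part a.  Each sweep applies
        u[i][a] = sum_j adj[i][j] * v[j][a] - gamma * sum_{j != i} v[j][a]
        v[i]    = softmax(u[i] / T)
    which rewards edges kept inside a part and penalizes unbalanced
    parts; the temperature T is lowered gradually (annealing) until
    the mean fields are nearly discrete.
    """
    rng = random.Random(seed)
    n = len(adj)
    # near-uniform start with a little noise to break symmetry
    v = [[1.0 / K + 0.01 * rng.uniform(-1, 1) for _ in range(K)] for _ in range(n)]
    T = T0
    while T > T_min:
        for i in range(n):                              # asynchronous sweep
            col = [sum(v[j][a] for j in range(n)) for a in range(K)]
            u = [sum(adj[i][j] * v[j][a] for j in range(n))
                 - gamma * (col[a] - v[i][a]) for a in range(K)]
            m = max(u)
            e = [math.exp((ua - m) / T) for ua in u]    # numerically stable softmax
            s = sum(e)
            v[i] = [ea / s for ea in e]
        T *= cooling
    return [vi.index(max(vi)) for vi in v]              # hard assignment per node

# usage: two triangles joined by a single edge; the minimum balanced
# cut separates the triangles (cut size 1)
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
adj = [[0.0] * 6 for _ in range(6)]
for i, j in edges:
    adj[i][j] = adj[j][i] = 1.0
labels = mft_partition(adj)
```

Because the updates are deterministic averages rather than stochastic spin flips, a single annealing run replaces the many Monte Carlo sweeps of simulated annealing, which is the efficiency argument the abstract makes.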


  • Carsten Peterson

Subject classification (UKÄ)

  • Computer Science
  • Other Physics
Pages (from-to): 3-33
Number of pages: 30
Journal: Connection Science
Issue number: 1
Status: Published - 1991
Peer reviewed: Yes