Translating feed-forward nets to SOM-like maps

B.J. van der Zwaag, Lambert Spaanenburg, C. Slump

Research output: Chapter in Book/Report/Conference proceeding › Paper in conference proceeding › peer-review

Abstract

A major disadvantage of feed-forward neural networks is still the difficulty of gaining insight into their internal functionality. This is much less the case for, e.g., nets that are trained unsupervised, such as Kohonen's self-organizing feature maps (SOMs). These offer a direct view into the stored knowledge, as their internal knowledge is stored in the same format as the input data that was used for training or is used for evaluation. This paper discusses a mathematical transformation of a feed-forward network into a SOM-like structure such that its internal knowledge can be visually interpreted. This is particularly applicable to networks trained in the general classification problem domain.
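The abstract does not detail the transformation itself, but the general idea of rendering a feed-forward classifier's knowledge as SOM-like prototype maps can be illustrated with a minimal sketch. The sketch below assumes a single-hidden-layer classifier and approximates each output class by a prototype vector in input space obtained by composing the weight matrices (ignoring nonlinearities); the weights, layer sizes, and image geometry here are hypothetical placeholders, and the paper's actual transformation may differ.

```python
# Hypothetical sketch: deriving SOM-like per-class prototype maps from a
# trained feed-forward classifier by linearly composing its weight matrices.
# Illustrative approximation only; not the transformation from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" network for 8x8 character images and 10 classes
# (random weights stand in for a trained net).
n_in, n_hidden, n_classes = 64, 32, 10
W1 = rng.normal(size=(n_hidden, n_in))       # input  -> hidden weights
W2 = rng.normal(size=(n_classes, n_hidden))  # hidden -> output weights

# Ignoring the hidden-layer nonlinearity, each output unit responds to a
# linear functional of the input; its coefficient vector lives in input
# space and can be viewed like a SOM codebook entry.
prototypes = W2 @ W1                          # shape: (n_classes, n_in)

# Reshape each prototype to the input image geometry for visual inspection,
# analogous to viewing the weight vectors of a self-organizing map.
maps = prototypes.reshape(n_classes, 8, 8)
print(maps.shape)  # (10, 8, 8): one 8x8 "map" per class
```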
Original language: English
Title of host publication: Proceedings ProRISC'03
Pages: 447-452
Publication status: Published - 2003
Event: 14th ProRISC Workshop on Circuits, Systems and Signal Processing, 2003 - Veldhoven, Netherlands
Duration: 2003 Nov 26 - 2003 Nov 27

Conference

Conference: 14th ProRISC Workshop on Circuits, Systems and Signal Processing, 2003
Country/Territory: Netherlands
City: Veldhoven
Period: 2003/11/26 - 2003/11/27

Subject classification (UKÄ)

  • Electrical Engineering, Electronic Engineering, Information Engineering

Free keywords

  • feature maps
  • self-organizing maps
  • Neural networks
  • rule extraction
  • character recognition
