Abstract
A major disadvantage of feedforward neural networks is still the difficulty of gaining insight into their internal functionality. This is much less the case for, e.g., networks that are trained unsupervised, such as Kohonen's self-organizing feature maps (SOMs). These offer a direct view into the stored knowledge, as their internal knowledge is stored in the same format as the input data used for training or evaluation. This paper discusses a mathematical transformation of a feed-forward network into a SOM-like structure such that its internal knowledge can be visually interpreted. This is particularly applicable to networks trained in the general classification problem domain.
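The paper's specific transformation is not reproduced on this page. As a rough illustration of the underlying idea only, the sketch below (an assumption, not the authors' method) treats each hidden unit's incoming weight vector of a hypothetical trained first layer as a SOM-style prototype: once normalized, each row lives in the same space as the input data and can be reshaped and displayed exactly like an input sample.

```python
# Illustrative sketch only: shows the general idea of inspecting a
# feed-forward layer SOM-style by viewing each hidden unit's incoming
# weight vector as a prototype in input space. The matrix W below is a
# stand-in; in a real network it would come from training.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trained first-layer weights: 4 hidden units, 9-dim inputs
# (e.g. flattened 3x3 image patches).
W = rng.normal(size=(4, 9))

def prototypes(weights):
    """Normalize each hidden unit's weight vector to unit length so it
    can be compared directly with input patterns, like SOM codebook
    vectors are."""
    norms = np.linalg.norm(weights, axis=1, keepdims=True)
    return weights / norms

P = prototypes(W)

# Each row of P now has unit length and the input's dimensionality.
print(P.shape)                                       # (4, 9)
print(bool(np.allclose(np.linalg.norm(P, axis=1), 1.0)))  # True
```

For image-like inputs, each row of `P` could then be reshaped to the input's spatial layout (here 3x3) and rendered as a small image, giving the kind of direct visual view into stored knowledge that SOMs provide.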
| Original language | English |
|---|---|
| Title of host publication | Proceedings ProRisc'03 |
| Pages | 447-452 |
| Status | Published - 2003 |
| Event | 14th ProRISC Workshop on Circuits, Systems and Signal Processing, 2003 - Veldhoven, The Netherlands. Duration: 26 Nov 2003 → 27 Nov 2003 |
Conference

| Conference | 14th ProRISC Workshop on Circuits, Systems and Signal Processing, 2003 |
|---|---|
| Country/Territory | The Netherlands |
| City | Veldhoven |
| Period | 2003/11/26 → 2003/11/27 |
Subject classification (UKÄ)
- Electrical Engineering and Electronics