Growing self-organizing map

A growing self-organizing map (GSOM) is a growing variant of the popular self-organizing map (SOM). The GSOM was developed to address the problem of choosing a suitable map size for the SOM. It starts with a minimal number of nodes (usually four) and grows new nodes on the boundary according to a heuristic. Through a parameter called the spread factor (SF), the data analyst can control the growth of the GSOM.
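The relationship between the spread factor and map growth can be illustrated with a short sketch (the function name is illustrative; the formula GT = −D × ln(SF) is the one used in the initialization phase below):

```python
import math

def growth_threshold(dim, sf):
    """Growth threshold GT = -D * ln(SF) for data dimension `dim`
    and spread factor `sf` in (0, 1)."""
    return -dim * math.log(sf)

# A higher spread factor gives a lower threshold, so nodes grow sooner
# and the resulting map spreads out more.
```
A usage example: for 3-dimensional data, an SF of 0.9 gives a much lower threshold than an SF of 0.1, so the SF = 0.9 map grows far more nodes.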

All the starting nodes of the GSOM are boundary nodes, i.e. at the beginning each node has the freedom to grow in its own direction (Fig. 1). New nodes are grown from the boundary nodes. Once a node is selected for growing, new nodes are grown at all of its free neighboring positions. The figure shows the three possible node-growth options for a rectangular GSOM.
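On a rectangular lattice, "boundary node" and "free neighboring position" can be made concrete with a small sketch (the grid-coordinate representation and helper names are assumptions for illustration, not from the original papers):

```python
def free_positions(nodes, pos):
    """Return the empty grid positions adjacent to `pos`.
    `nodes` is the set of (x, y) positions currently occupied."""
    x, y = pos
    return [(x + dx, y + dy)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if (x + dx, y + dy) not in nodes]

def is_boundary(nodes, pos):
    """A node is a boundary node if at least one adjacent slot is empty."""
    return bool(free_positions(nodes, pos))
```
For the initial 2x2 map every node has two free neighboring positions, which is why all four starting nodes are boundary nodes.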

The algorithm

The GSOM process is as follows:
#Initialization phase:
##Initialize the weight vectors of the starting nodes (usually four) with random numbers between 0 and 1.
##Calculate the growth threshold (GT) for the given data set of dimension D according to the spread factor (SF) using the formula GT = −D × ln(SF).
#Growing Phase:
##Present input to the network.
##Determine the weight vector that is closest to the input vector mapped to the current feature map (the winner), using Euclidean distance (as in the SOM). This step can be summarized as: find q′ such that ‖v − w_{q′}‖ ≤ ‖v − w_q‖ ∀ q ∈ ℕ, where v and w are the input and weight vectors respectively, q is the position vector of a node and ℕ is the set of natural numbers.
##The weight vector adaptation is applied only to the neighborhood of the winner and the winner itself. The neighborhood is a set of neurons around the winner, but in the GSOM the starting neighborhood selected for weight adaptation is smaller than in the SOM (localized weight adaptation). The amount of adaptation (learning rate) is also reduced exponentially over the iterations. Even within the neighborhood, weights closer to the winner are adapted more than those further away. The weight adaptation can be described by w_j(k + 1) = w_j(k) if j ∉ N_{k+1}, and w_j(k + 1) = w_j(k) + LR(k) × (x_k − w_j(k)) if j ∈ N_{k+1}, where the learning rate LR(k), k ∈ ℕ, is a sequence of positive parameters converging to zero as k → ∞; w_j(k) and w_j(k + 1) are the weight vectors of node j before and after the adaptation; and N_{k+1} is the neighbourhood of the winning neuron at the (k + 1)th iteration. The decreasing value of LR(k) in the GSOM depends on the number of nodes existing in the map at time k.
##Increase the error value of the winner (the error value is the distance between the input vector and the winner's weight vector).
##When TE_i > GT (where TE_i is the total error of node i and GT is the growth threshold): grow new nodes if i is a boundary node; distribute the accumulated error to the neighbors if i is a non-boundary node.
##Initialize the new node weight vectors to match the neighboring node weights.
##Initialize the learning rate (LR) to its starting value.
##Repeat steps 2–7 until all inputs have been presented and node growth has been reduced to a minimum level.
#Smoothing phase.
##Reduce learning rate and fix a small starting neighborhood.
##Find the winner and adapt the weights of the winner and its neighbors in the same way as in the growing phase.
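The growing phase above can be sketched in Python as follows. This is a minimal illustration, not a reference implementation: the 2×2 starting grid, the one-step neighborhood, the decay factor of 0.9 and the error-spreading factor of 0.5 are assumptions chosen for brevity.

```python
import math
import random

def grow(nodes, errors, win):
    """Grow new nodes at every free neighboring position of a boundary
    node; for a non-boundary node, spread error to the neighbors instead."""
    x, y = win
    free = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if (x + dx, y + dy) not in nodes]
    if not free:
        # Non-boundary node: distribute part of the error to neighbors.
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            errors[(x + dx, y + dy)] += 0.5 * errors[win]
        return
    for pos in free:
        # New weights copy the parent node so the map stays smooth.
        nodes[pos] = list(nodes[win])
        errors[pos] = 0.0

def gsom_train(data, spread_factor=0.5, epochs=10, lr0=0.3):
    D = len(data[0])
    gt = -D * math.log(spread_factor)   # growth threshold GT = -D ln(SF)
    # Initialization: four nodes on a 2x2 grid, random weights in [0, 1].
    nodes = {(x, y): [random.random() for _ in range(D)]
             for x in (0, 1) for y in (0, 1)}
    errors = {pos: 0.0 for pos in nodes}
    lr = lr0
    for _ in range(epochs):
        for v in data:
            # Winner: node whose weight vector is closest (Euclidean).
            win = min(nodes, key=lambda p: sum((a - b) ** 2
                                               for a, b in zip(v, nodes[p])))
            # Localized adaptation: winner and its immediate grid neighbors.
            for pos in list(nodes):
                if abs(pos[0] - win[0]) + abs(pos[1] - win[1]) <= 1:
                    nodes[pos] = [a + lr * (b - a)
                                  for a, b in zip(nodes[pos], v)]
            # Accumulate the winner's quantization error.
            errors[win] += math.sqrt(sum((a - b) ** 2
                                         for a, b in zip(v, nodes[win])))
            if errors[win] > gt:
                grow(nodes, errors, win)
                errors[win] = 0.0
        lr *= 0.9  # learning rate decays over the iterations
    return nodes
```
Running this on a few dozen random vectors with a low spread factor already grows the map well beyond the four starting nodes, which is the behavior the spread factor is meant to control.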


See also

* Self-organizing map
* Artificial intelligence
* Machine learning
* Data mining
* Nonlinear dimensionality reduction

