A Self-Organizing Neural Network (SONN) is an unsupervised artificial neural network model, commonly referred to as a Self-Organizing Feature Map or Kohonen Map. The feature map is created by a two-dimensional discretization of the input space during training, which is based on competitive learning. This behavior resembles organization observed in biological neural systems, which is where the "neural network" name comes from.
In the human cortex, multi-dimensional sensory input spaces (e.g., auditory, motor, tactile, visual, somatosensory, etc.) are represented by two-dimensional maps. This projection of high-dimensional inputs onto lower-dimensional maps is known as topology preservation, and it can be achieved using Self-Organizing Networks.
These Self-Organizing Maps are used to classify and visualize higher-dimensional data in a lower dimension.
A SONN operates in two phases. The training phase constructs the network map through a competitive process over the training samples. The mapping phase classifies new data by assigning it a specific location on the converged map.
The SONN algorithm can be summarized in four steps:
1. Initialization: Initialize the weights of the neurons in the map layer.
2. Competitive process: Select one input sample and identify the best matching unit (BMU) among all neurons in an n x m grid using distance measures.
3. Cooperative process: Find the proximity neurons of the BMU by a neighborhood function.
4. Adaptation process: Shift the weights of the BMU and its neighbors towards the input pattern. The process is complete if the maximum number of training iterations has been reached. Otherwise, increment the iteration count and repeat the process from step two.
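The four steps above can be sketched in NumPy. This is a minimal illustrative implementation, not a library API: the grid size, learning-rate schedule, Gaussian neighborhood function, and toy random data are all assumed choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, dim = 10, 10, 3          # n x m map of neurons, 3-dimensional inputs
n_iters = 1000
data = rng.random((100, dim))  # toy input samples (assumed, for illustration)

# 1. Initialization: random weights for each neuron in the map layer
weights = rng.random((n, m, dim))
init_weights = weights.copy()

# Grid coordinates of each neuron, used by the neighborhood function
grid = np.stack(np.meshgrid(np.arange(n), np.arange(m), indexing="ij"), axis=-1)

for t in range(n_iters):
    # Decaying learning rate and neighborhood radius (assumed schedules)
    lr = 0.5 * np.exp(-t / n_iters)
    sigma = (max(n, m) / 2) * np.exp(-t / n_iters)

    # 2. Competitive process: pick one sample, find the best matching
    #    unit (BMU) by Euclidean distance over all n x m neurons
    x = data[rng.integers(len(data))]
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), (n, m))

    # 3. Cooperative process: Gaussian neighborhood around the BMU
    grid_dist = np.linalg.norm(grid - np.array(bmu), axis=-1)
    h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))

    # 4. Adaptation process: shift the BMU and its neighbors towards
    #    the input pattern, scaled by neighborhood strength
    weights += lr * h[..., None] * (x - weights)
```

Because the update moves each weight a fraction of the way towards the current sample, neighboring neurons end up with similar weight vectors, which is what produces the topology-preserving map.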
The SONN has significant advantages, but they come at a cost. Below are some of the disadvantages of the SONN: