Machine Learning Day 16: Self-Organizing Map Neural Networks

Gaussian mixture model parameter estimation

  1. Initialize the parameters of each sub-model randomly, then repeat steps 2 and 3 until convergence.

  2. (E-step) Using the current parameters, compute the probability that each data point was generated by each sub-model.

  3. (M-step) Use the probabilities computed in step 2 to re-estimate the mean, variance, and weight of each sub-model.

At the start we know neither the parameters of the K normal distributions nor which distribution each point was generated from; both are estimated during the iterative process.

First, hold the current normal distributions fixed and compute, for each data point, the probability that it was generated by each distribution. Then hold these generation probabilities fixed and fit better normal distributions from the data points and their probabilities. Repeating this cycle until the parameters converge yields a reasonable set of normal distributions.
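The EM loop described above can be sketched as follows for a 1-D mixture. This is a minimal illustration, not a production implementation; the function and variable names are my own, and initialization by sampling data points is just one common choice.

```python
import numpy as np

def gaussian_pdf(x, mean, var):
    """Density of a 1-D normal distribution."""
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def em_gmm(x, k=2, n_iter=100, seed=0):
    """Fit a 1-D Gaussian mixture with k components via EM."""
    rng = np.random.default_rng(seed)
    # Step 1: random initialization of means, variances, and weights.
    means = rng.choice(x, size=k, replace=False)
    variances = np.full(k, x.var())
    weights = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # Step 2 (E-step): responsibility of each component for each point.
        dens = np.stack([w * gaussian_pdf(x, m, v)
                         for w, m, v in zip(weights, means, variances)])
        resp = dens / dens.sum(axis=0)            # shape (k, n)
        # Step 3 (M-step): re-estimate mean, variance, and weight.
        nk = resp.sum(axis=1)
        means = (resp * x).sum(axis=1) / nk
        variances = (resp * (x - means[:, None]) ** 2).sum(axis=1) / nk
        weights = nk / len(x)
    return means, variances, weights

# Two well-separated clusters; EM should recover means near 0 and 5.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0, 1, 500), rng.normal(5, 1, 500)])
means, variances, weights = em_gmm(data)
```

Note that holding the distributions fixed corresponds to the E-step, and holding the responsibilities fixed corresponds to the M-step, exactly as in the alternation described above.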

Self-organizing map neural network

The Self-Organizing Map (SOM), also known as the Kohonen network, is an important class of unsupervised learning methods that can be used for clustering, high-dimensional visualization, data compression, feature extraction, and more.

The self-organizing map is essentially a two-layer neural network, with an input layer and an output layer. The input layer receives external input, and the number of neurons in the output layer is usually the number of clusters, each representing a class to be found. Training uses "competitive learning": each input example finds the node in the output layer that best matches it, called the activated node or winning neuron.

The parameters of the activated node are then updated by stochastic gradient descent, and the nodes adjacent to it also update their parameters, to a degree that depends on their distance from the activated node.

This competition can be realized through lateral inhibitory connections between neurons. The output nodes of a self-organizing map have a topological relationship, which is chosen according to the application's requirements.

Suppose the input space is D-dimensional, with input pattern x = (x_1, ..., x_D), and let w_{ji} denote the connection weight between input unit i and neuron j in the computation layer, so that neuron j has weight vector w_j = (w_{j1}, ..., w_{jD}), j = 1, ..., N, where N is the total number of neurons. The self-organizing learning process of the SOM can be summarized in the following sub-processes.

  1. Initialization: all connection weights are initialized with small random values.

  2. Competition: for each input pattern, every neuron computes its discriminant function value, and the neuron with the smallest value is declared the winner. The discriminant function of neuron j is

     d_j(x) = \sum_{i=1}^{D} (x_i - w_{ji})^2

  3. Cooperation: the winning neuron I(x) determines the spatial location of a topological neighborhood of excited neurons. After the activated node I(x) is determined, its neighboring nodes are also updated, to a degree given by

     T_{j,I(x)}(n) = \exp\left( -\frac{S_{j,I(x)}^2}{2\sigma^2(n)} \right)

     where S_{j,I(x)} is the distance between neuron j and the winning neuron I(x) in the competition layer, and \sigma(n) decays over time, for example

     \sigma(n) = \sigma_0 \exp(-n / \tau_\sigma)

     so the farther away a neighboring node is, the smaller its degree of update.

  4. Adaptation: the connection weights of the excited neurons are adjusted so that the winning neuron responds more strongly to subsequent presentations of similar input patterns:

     \Delta w_{ji} = \eta(n) \, T_{j,I(x)}(n) \, (x_i - w_{ji})

     with a time-dependent learning rate

     \eta(n) = \eta_0 \exp(-n / \tau_\eta)

  5. Iteration: return to step 2 until the feature map stabilizes.

After training, each sample's category is given by the neuron it activates.
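The full training loop can be sketched as below. This is a minimal illustration under my own choice of grid size, decay constants, and exponential schedules (other neighborhood and decay functions are common); the function names are hypothetical.

```python
import numpy as np

def train_som(data, grid_w=4, grid_h=4, n_iter=500, sigma0=2.0, eta0=0.5, seed=0):
    """Train a 2-D self-organizing map on `data` of shape (n, D)."""
    rng = np.random.default_rng(seed)
    n, dim = data.shape
    # Grid coordinates of the N = grid_w * grid_h output neurons.
    coords = np.array([(i, j) for i in range(grid_h) for j in range(grid_w)], float)
    # Step 1: small random initial weights.
    weights = rng.normal(scale=0.1, size=(grid_w * grid_h, dim))
    for t in range(n_iter):
        x = data[rng.integers(n)]
        # Step 2 (competition): winner = neuron with smallest squared distance.
        winner = np.argmin(((x - weights) ** 2).sum(axis=1))
        # Step 3 (cooperation): neighborhood radius and learning rate decay.
        sigma = sigma0 * np.exp(-t / (n_iter / 4))
        eta = eta0 * np.exp(-t / n_iter)
        grid_dist2 = ((coords - coords[winner]) ** 2).sum(axis=1)
        h = np.exp(-grid_dist2 / (2 * sigma ** 2))
        # Step 4 (adaptation): move weights toward x, scaled by the neighborhood.
        weights += eta * h[:, None] * (x - weights)
    return weights

def predict(weights, x):
    """Category of x = index of the neuron it activates."""
    return int(np.argmin(((x - weights) ** 2).sum(axis=1)))

# Two well-separated clusters should activate different neurons.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(3, 0.1, (50, 2))])
weights = train_som(data)
```

After training, `predict` implements exactly the final statement above: a sample's category is the index of the neuron it activates.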



Origin blog.51cto.com/15069488/2578580