[Machine Learning] MATLAB RBF network learning

by fox fox deer

  1. Understanding RBF:
    An RBF network is a radial basis function network.
    [figure: basic structure of an RBF network]
    The basic structure: generally three layers in an n-N-m arrangement, where n is the number of features of the predictor X and m is the number of features of the response Y. That is, X is an n-dimensional vector and Y is an m-dimensional vector. If each sample is a single value, the structure is 1-N-1. N is the number of center points; during training, a number of X values can be drawn (e.g. uniformly) from the training set to serve as centers. Weights connect the hidden layer to the output layer.
    [figure: hidden-layer activations]
    Different activation functions can be chosen for the hidden layer; the most common is the Gaussian function: phi(x) = exp(-||x - c||^2 / (2*sigma^2)), where c is a center and sigma its width.
    The basic idea: an RBF network can approximate any curve, surface, or hypersurface, because any curve can be represented as a superposition of several normal (Gaussian) distributions. The number of Gaussians is the number of center points.
    RBF network learning is divided into two stages:
    1. Determine the hidden-layer centers. Using the K-means algorithm, after several iterations, N cluster centers are found among the X values. The details are not recorded here; see the K-means algorithm.
      If the Gaussian function is chosen as the radial basis function:
      phi_i(x) = exp(-||x - c_i||^2 / (2*sigma_i^2))
    2. Adjust the weights. As in an ordinary perceptron network, the weights W can be corrected with the following update rule:
      w(t+1) = w(t) + eta * e(t) * phi(x)
      where t is the iteration count, eta the learning rate, and e(t) = y - w(t)' * phi(x) the output error.
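The two-stage procedure above can be sketched in Python/NumPy (a minimal illustration, not the text's original code; the toy data, the choice of N = 5 centers, sigma = 0.4, and eta = 0.1 are assumptions made for the example):

```python
import numpy as np

# Toy data: 21 samples of a smooth 1-D curve (the 1-N-1 case described above).
X = np.linspace(-1.0, 1.0, 21)
Y = np.sin(2.0 * X)

# --- Stage 1: pick the N hidden-layer centers with K-means ---
N = 5
centers = X[::len(X) // N][:N].astype(float)  # spread-out initial guesses
for _ in range(20):
    # assign each sample to its nearest center
    labels = np.argmin(np.abs(X[:, None] - centers[None, :]), axis=1)
    # move each center to the mean of its cluster
    for j in range(N):
        if np.any(labels == j):
            centers[j] = X[labels == j].mean()

# --- Stage 2: adjust the output weights with the update rule ---
# w(t+1) = w(t) + eta * e(t) * phi(x),  e(t) = y - w(t)' phi(x)
sigma = 0.4  # assumed Gaussian width
eta = 0.1    # learning rate (the eta in the formula above)
w = np.zeros(N)

def phi(x):
    """Gaussian radial basis activations for one input x."""
    return np.exp(-(x - centers) ** 2 / (2.0 * sigma ** 2))

def mse():
    pred = np.array([w @ phi(x) for x in X])
    return float(np.mean((Y - pred) ** 2))

mse_before = mse()
for epoch in range(200):
    for x, y in zip(X, Y):
        e = y - w @ phi(x)     # output error for this sample
        w += eta * e * phi(x)  # gradient-style weight update
mse_after = mse()
print(mse_before, mse_after)
```

The per-sample update is the LMS rule applied to the hidden-layer activations; with a small enough eta it converges toward the least-squares weights.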

2. The following gives an RBF network implemented in MATLAB:

The newrb function creates a radial basis function network for function fitting. There are 21 samples in total, with input P = -1:0.1:1 and output
T = [-0.9602 -0.5770 -0.0729 0.3771 0.6405 0.6600 0.4609 0.1336 -0.2013
-0.4344 -0.5000 -0.3930 -0.1647 0.0988 0.3072 0.3960 0.3449 0.1816
-0.0312 -0.2189 -0.3201]

clear all;
P = -1:0.1:1;
T = [-0.9602 -0.5770 -0.0729 0.3771 0.6405 0.6600 0.4609 0.1336 -0.2013 -0.4344 -0.5000 -0.3930 -0.1647 0.0988 0.3072 0.3960 0.3449 0.1816 -0.0312 -0.2189 -0.3201];
eg = 0.002;  % mean squared error goal
sc = 1;      % spread of the radial basis functions
net = newrb(P, T, eg, sc);
% prints: NEWRB, neurons = 0, MSE = 0.176192

X21 = -1:0.01:1;
A = sim(net, X21);
plot(P, T, '+', X21, A, '-');
title('Training Vectors');
[figure: training vectors '+' and the fitted curve '-']



Origin blog.csdn.net/yao09605/article/details/84931815