BP Neural Network Classification Based on the Crimson Salamander Optimization Algorithm (Code Attached)
Abstract: This article describes how to optimize a BP neural network with the Crimson Salamander Optimization algorithm, using the iris dataset as a simple worked example.
1. Introduction to iris data
This case uses MATLAB's public iris dataset as test data. The iris data has 4 feature dimensions and 3 categories. A sample row looks like this:

| | Feature 1 | Feature 2 | Feature 3 | Category |
|---|---|---|---|---|
| One iris sample | 5.3 | 2.1 | 1.2 | 1 |
The three categories are represented by 1, 2, and 3.
2. Data set organization
The iris dataset contains 150 groups of data in total, divided into 105 training groups and 45 test groups, as shown in the following table:

| Training set (groups) | Test set (groups) | Total data (groups) |
|---|---|---|
| 105 | 45 | 150 |
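The article does not show the splitting code; a minimal Python sketch of a shuffled 105/45 split is given below. The function name `split_iris` and the fixed seed are illustrative choices, not from the original, and placeholder data stands in for the real iris features.

```python
import numpy as np

def split_iris(X, y, n_train=105, seed=0):
    """Shuffle the 150 iris samples and split them into 105 train / 45 test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    return X[idx[:n_train]], y[idx[:n_train]], X[idx[n_train:]], y[idx[n_train:]]

# Placeholder data with the iris shape: 150 samples, 4 features, labels 1..3
X = np.random.rand(150, 4)
y = np.repeat([1, 2, 3], 50)
X_tr, y_tr, X_te, y_te = split_iris(X, y)
print(X_tr.shape, X_te.shape)  # (105, 4) (45, 4)
```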
Category data processing: the original labels 1, 2, and 3 are one-hot encoded as [1, 0, 0], [0, 1, 0], and [0, 0, 1] respectively, which is more convenient for neural network training.
When performing data training, all input feature data are normalized.
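The two preprocessing steps above can be sketched in a few lines of Python. The original does not specify which normalization is used, so min-max scaling to [0, 1] is assumed here; the function names are illustrative.

```python
import numpy as np

def one_hot(labels):
    """Map class labels 1, 2, 3 to [1,0,0], [0,1,0], [0,0,1]."""
    return np.eye(3)[np.asarray(labels) - 1]

def normalize(X):
    """Scale each feature column to [0, 1] (assumed min-max normalization)."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / (hi - lo)

print(one_hot([1, 2, 3]))  # 3x3 identity matrix
```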
3. Optimizing the BP neural network with the Crimson Salamander Optimization algorithm
3.1 BP neural network parameter setting
Intelligent optimization algorithms are typically used to tune the initial weights and thresholds of a BP neural network in order to improve its performance. Since the iris data is low-dimensional, this case uses a simple BP network. The neural network parameters are as follows:
% Create the neural network
inputnum = 4;    % number of input-layer nodes (4 features)
hiddennum = 10;  % number of hidden-layer nodes
outputnum = 3;   % number of output-layer nodes
net = newff( minmax(input) , [hiddennum outputnum] , { 'logsig' 'purelin' } , 'traingdx' ) ;
% Set the training parameters
net.trainParam.show = 50 ;
net.trainParam.epochs = 200 ;
net.trainParam.goal = 0.01 ;
net.trainParam.lr = 0.01 ;
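For readers without MATLAB, the forward pass of this 4-10-3 architecture (logsig hidden layer, purelin output layer, as passed to `newff` above) can be sketched in Python. This is an illustrative re-implementation, not the code the article uses:

```python
import numpy as np

def logsig(x):
    """MATLAB's logsig transfer function: the logistic sigmoid."""
    return 1.0 / (1.0 + np.exp(-x))

def forward(X, W1, b1, W2, b2):
    """4-10-3 network: logsig hidden layer, purelin (identity) output layer."""
    hidden = logsig(X @ W1.T + b1)   # (n, 10)
    return hidden @ W2.T + b2        # (n, 3)

# Shapes matching the 4-10-3 architecture above
W1, b1 = np.zeros((10, 4)), np.zeros(10)
W2, b2 = np.zeros((3, 10)), np.zeros(3)
print(forward(np.ones((5, 4)), W1, b1, W2, b2).shape)  # (5, 3)
```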
3.2 Application of the Crimson Salamander Optimization algorithm
For the principles of the Crimson Salamander Optimization algorithm, please refer to: https://blog.csdn.net/u011835903/article/details/107815254
The parameters of the algorithm are set as follows:
popsize = 10;        % population size
Max_iteration = 15;  % maximum number of iterations
lb = -5;             % lower bound for weights and thresholds
ub = 5;              % upper bound for weights and thresholds
% inputnum * hiddennum + hiddennum * outputnum is the number of weights
% hiddennum + outputnum is the number of thresholds (biases)
dim = inputnum * hiddennum + hiddennum*outputnum + hiddennum + outputnum ; % total dimension
It should be noted that the number of parameters to optimize is calculated as follows.
This network has 2 layers:
The number of weights in the first layer is 4*10 = 40, i.e. inputnum * hiddennum;
The number of thresholds (biases) in the first layer is 10, i.e. hiddennum;
The number of weights in the second layer is 10*3 = 30, i.e. hiddennum * outputnum;
The number of thresholds (biases) in the second layer is 3, i.e. outputnum;
So the total dimension to optimize is inputnum * hiddennum + hiddennum*outputnum + hiddennum + outputnum = 40 + 30 + 10 + 3 = 83.
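The bookkeeping above can be checked with a short Python sketch that also shows how a single 83-dimensional candidate solution would be unpacked into the network's weight matrices and bias vectors. The packing order (W1, W2, b1, b2) is an assumption for illustration; the article does not show its unpacking code:

```python
import numpy as np

inputnum, hiddennum, outputnum = 4, 10, 3
dim = inputnum*hiddennum + hiddennum*outputnum + hiddennum + outputnum  # 83

def unpack(x):
    """Split one flat candidate solution into weights and biases (assumed order)."""
    i = 0
    W1 = x[i:i + inputnum*hiddennum].reshape(hiddennum, inputnum); i += inputnum*hiddennum
    W2 = x[i:i + hiddennum*outputnum].reshape(outputnum, hiddennum); i += hiddennum*outputnum
    b1 = x[i:i + hiddennum]; i += hiddennum
    b2 = x[i:i + outputnum]
    return W1, b1, W2, b2

W1, b1, W2, b2 = unpack(np.zeros(dim))
print(dim, W1.shape, W2.shape)  # 83 (10, 4) (3, 10)
```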
Fitness function value setting:
This article sets the fitness function as follows:
fitness = argmin(TrainDataErrorRate + TestDataErrorRate)
where TrainDataErrorRate and TestDataErrorRate are the misclassification rates on the training set and the test set, respectively. This fitness function expresses that the network we ultimately want is one that performs well on both the training set and the test set.
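A minimal Python sketch of this fitness function is given below; `predict` stands for the BP network's class-prediction function and is a placeholder for illustration:

```python
import numpy as np

def error_rate(predict, X, y):
    """Fraction of misclassified samples."""
    return float(np.mean(predict(X) != y))

def fitness(predict, X_tr, y_tr, X_te, y_te):
    """Sum of train and test error rates; the optimizer minimizes this value."""
    return error_rate(predict, X_tr, y_tr) + error_rate(predict, X_te, y_te)

# Toy check with a predictor that always answers class 1
always_one = lambda X: np.ones(len(X), dtype=int)
X_tr, y_tr = np.zeros((4, 4)), np.array([1, 1, 2, 3])  # 2 of 4 wrong
X_te, y_te = np.zeros((2, 4)), np.array([1, 1])        # 0 of 2 wrong
print(fitness(always_one, X_tr, y_tr, X_te, y_te))  # 0.5
```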
4. Test results
From the convergence curve of the Crimson Salamander Optimization algorithm, we can see that the overall error decreases steadily, indicating that the algorithm is effective as an optimizer: