Implementing a BP Neural Network in Java for Air Quality Analysis and Rating

Copyright notice: this is an original article by the author; reproduction without permission is prohibited. https://blog.csdn.net/qq_24369113/article/details/53422028

This post implements a BP neural network in Java for regression analysis and uses the trained network to rate air quality.

The project code and training data set for this experiment can be downloaded from:

http://download.csdn.net/detail/qq_24369113/9711645

https://github.com/muziyongshixin/Back-Propagation-Neural-Network



Experiment description:

Perform regression analysis on the specified data set: choose an appropriate regression algorithm, implement it in a program, and submit the program together with a results report.

Data set: AirQualityUCI.data. The neural network is trained on this data to obtain a regression model, which is then used to make predictions on new data.

Each record contains 15 attributes:

0 Date (DD/MM/YYYY)

1 Time (HH.MM.SS)

2 True hourly averaged CO concentration in mg/m^3 (reference analyzer)

3 PT08.S1 (tin oxide) hourly averaged sensor response (nominally CO targeted)

4 True hourly averaged overall Non-Methanic Hydrocarbons (NMHC) concentration in microg/m^3 (reference analyzer)

5 True hourly averaged Benzene concentration in microg/m^3 (reference analyzer)

6 PT08.S2 (titania) hourly averaged sensor response (nominally NMHC targeted)

7 True hourly averaged NOx concentration in ppb (reference analyzer)

8 PT08.S3 (tungsten oxide) hourly averaged sensor response (nominally NOx targeted)

9 True hourly averaged NO2 concentration in microg/m^3 (reference analyzer)

10 PT08.S4 (tungsten oxide) hourly averaged sensor response (nominally NO2 targeted)

11 PT08.S5 (indium oxide) hourly averaged sensor response (nominally O3 targeted)

12 T Temperature in °C

13 RH Relative Humidity (%)

14 AH Absolute Humidity




Experiment environment and programming language:

Programming language: Java

Development environment: IntelliJ IDEA

Algorithm used: the BP neural network algorithm

Number of training samples: 9357

Sample record:

1/23/2005,20:00:00,-200,1174,-200,8.7,926,245,674,157,1220,1020,5.4,78.2,0.7074,Verylow

Algorithm analysis:

A BP (Back-Propagation) neural network learns through two alternating processes: forward propagation of information and backward propagation of error. Neurons in the input layer receive external input and pass it on to the middle layer; the middle layer performs the internal transformation of the information and, depending on how much transformation capacity is needed, can be designed with a single hidden layer or multiple hidden layers. The information passed from the last hidden layer to the output layer neurons is processed further and emitted by the output layer as the result, completing one forward pass of learning. When the actual output does not match the expected output, the error back-propagation phase begins: the error is propagated backward from the output layer through the hidden layers to the input layer, and the weights of each layer are corrected by gradient descent on the error. This repeated cycle of forward information propagation and backward error propagation is the process by which the weights of every layer are continually adjusted, i.e. the training process of the network, and it continues until the output error falls to an acceptable level or a preset number of training iterations is reached.

The BP neural network model consists of a node output model, an activation function model, an error calculation model, and a self-learning (weight update) model.

(1) Node output model

Hidden node output: Oj = f(∑ Wij × Xi − qj)   (1)

Output node output: Yk = f(∑ Tjk × Oj − qk)   (2)

f is the nonlinear activation function; q is the neuron threshold.

(2) Activation function model

The activation function (also called the stimulus function) describes the strength with which input from the lower layer stimulates a node in the upper layer. It is usually taken to be the sigmoid function, whose values lie continuously in (0, 1): f(x) = 1 / (1 + e^(−x))   (3)

(3) Error calculation model

The error calculation model measures the discrepancy between the network's expected output and its computed output:

Ep = (1/2) × ∑ (tpi − Opi)²   (4)

tpi is the expected output of node i; Opi is the computed output of node i.

(4) Self-learning model

The learning process of a neural network is the process of setting, and then correcting according to the error, the weight matrix Wij that connects nodes in the lower layer to nodes in the upper layer. BP networks distinguish between supervised learning, which requires expected (target) values, and unsupervised learning, which needs only input patterns. The self-learning model is

ΔWij(n+1) = h × Φi × Oj + a × ΔWij(n)   (5)

h is the learning factor (learning rate); Φi is the computed error of output node i; Oj is the computed output of node j; a is the momentum factor.
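
The update rule in equation (5) is what the adjustWeight method of the BP class shown later implements, with h corresponding to eta and a to momentum. As a minimal, self-contained illustration of how equations (1) to (5) map to code (the array names and values here are illustrative and not part of the project source), one forward pass and one weight update for a single node look roughly like this:

// Minimal sketch of one BP step for a single node; names and values are illustrative only.
public class BpStepSketch {
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }   // equation (3)

    public static void main(String[] args) {
        double[] x = {1.0, 0.5, -0.2};             // inputs Xi (x[0] = 1 acts as the bias/threshold unit)
        double[] w = {0.1, 0.4, -0.3};             // weights Wij feeding one node
        double[] prevDw = new double[w.length];    // previous weight changes, for the momentum term
        double target = 1.0;                       // expected output tpi
        double eta = 0.25, momentum = 0.9;         // h and a in equation (5)

        // Forward pass, equations (1)/(2): o = f(sum of Wij * Xi)
        double sum = 0;
        for (int i = 0; i < x.length; i++) sum += w[i] * x[i];
        double o = sigmoid(sum);

        // Error, equation (4), and the node's delta: phi = o * (1 - o) * (t - o)
        double error = 0.5 * (target - o) * (target - o);
        double phi = o * (1 - o) * (target - o);

        // Weight update, equation (5): dWij(n+1) = h * phi * Oj + a * dWij(n)
        for (int j = 0; j < w.length; j++) {
            double dw = eta * phi * x[j] + momentum * prevDw[j];
            w[j] += dw;
            prevDw[j] = dw;
        }
        System.out.printf("output=%.4f  error=%.4f%n", o, error);
    }
}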



Analysis of experimental results:

As shown in Figure 1.1, after 10,000 training passes over the training set and then testing on that same training set, the network achieves an accuracy of 90.8%. Further tests showed that increasing the number of training passes (to more than 50,000) does not significantly improve the accuracy. Overall, the network achieves fairly high prediction accuracy and has a reasonable capacity for regression analysis.


Figure 1.1 Screenshot of the experimental results



Reflections and improvements:

1.   When the BP neural network was first implemented, the data was not normalized. Because the value ranges of the different attributes differ greatly, the accuracy stayed at only about 20% even after many iterations; after normalization, the prediction accuracy improved dramatically. For neural networks, normalizing the input data is therefore essential (a minimal normalization sketch is given after this list).

2.   Looking at the indices of the misclassified cases, most of the errors occur among test samples 4000~7000, which suggests that the network still does not discriminate well between samples lying near the mean.

3.   Finally, once the network has been trained for a certain number of passes, training it further does not significantly improve its prediction accuracy. Likewise, blindly increasing the number of hidden layers or the number of nodes per layer does not improve accuracy either, and instead slows training down dramatically, so choosing an appropriate number of nodes and hidden layers is very important.
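
Regarding point 1: a standard min-max normalization maps each attribute into [0, 1] by subtracting the per-attribute minimum before dividing by the range. The sketch below shows that common variant; note that the pretreatment method in the project source below simply divides each value by max − min without subtracting the minimum, which equalizes the scales but does not shift the values into [0, 1]. The code here is an illustrative alternative, not the exact project implementation.

import java.util.Vector;

// Illustrative min-max normalization over a set of feature vectors (not the exact project code).
public class NormalizationSketch {
    public static void normalize(Vector<double[]> samples, int numFeatures) {
        double[] min = new double[numFeatures];
        double[] max = new double[numFeatures];
        java.util.Arrays.fill(min, Double.POSITIVE_INFINITY);
        java.util.Arrays.fill(max, Double.NEGATIVE_INFINITY);

        for (double[] s : samples) {                 // first pass: find the per-feature min and max
            for (int j = 0; j < numFeatures; j++) {
                if (s[j] < min[j]) min[j] = s[j];
                if (s[j] > max[j]) max[j] = s[j];
            }
        }
        for (double[] s : samples) {                 // second pass: scale each feature into [0, 1]
            for (int j = 0; j < numFeatures; j++) {
                double range = max[j] - min[j];
                s[j] = range == 0 ? 0 : (s[j] - min[j]) / range;
            }
        }
    }
}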



Implementation:


Function design and analysis

1.	public static boolean loadData(String url) 
// Loads the data file to be used
2.	public static void pretreatment(Vector<String> indata) 
// Data preprocessing: extracts every attribute value from the raw records, normalizes them, and stores the result in Vector<double[]> data
3.	public static String Show_air_quality(double[] result)
// Returns a string describing the air quality based on the result array
4.	public double[] test(double[] inData) 
// Feeds the inData array through the neural network and returns the predicted double[] output
5.	public void train(double[] trainData, double[] target)
// Training function of the BP network: adjusts the network weights according to the input sample and its target output
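
For orientation before reading the full source, the sketch below shows roughly how these functions fit together. It is a condensed, illustrative rewrite of the main method in BpDeepTest.java, not additional project code.

    // Condensed, illustrative call sequence (see BpDeepTest.main below for the full version).
    public static void run() {
        loadData("AirQualityUCI.data");          // 1. read the raw CSV lines into indata
        pretreatment(indata);                    // 2. parse, label-encode and normalize into data

        BP bp = new BP(13, 13, 5);               // 3. 13 input nodes, 13 hidden nodes, 5 quality classes
        for (int epoch = 0; epoch < 10000; epoch++) {
            for (double[] sample : data) {       // 4. one gradient step per sample, repeated for many passes
                double[] features = java.util.Arrays.copyOf(sample, 13);
                double[] target = new double[5];
                if (sample[13] >= 1)
                    target[(int) sample[13] - 1] = 1.0;   // index 13 holds the class label 1..5
                bp.train(features, target);
            }
        }
        // 5. predict: feed a normalized 13-element feature vector through the trained network
        double[] first = java.util.Arrays.copyOf(data.get(0), 13);   // here simply re-using the first sample
        System.out.println(Show_air_quality(bp.test(first)));
    }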


Source code:

BpDeepTest.java

/**
 * Created by 32706 on 2016/11/29.
 */

import java.io.File;
import java.text.DecimalFormat;
import java.util.Arrays;
import java.util.Scanner;
import java.util.Vector;

public class BpDeepTest {

    public static Vector<String> indata = new Vector<>();  // raw records read from the data file
    public static Vector<double[]> data = new Vector<>();  // preprocessed and normalized training set

    static double[] max = new double[]{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0};    // per-attribute maxima
    static double[] min = new double[]{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0};    // per-attribute minima
    static double[] weigth = new double[]{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}; // per-attribute range (max - min), used to scale the values

    public static boolean loadData(String url) {    // loads the data file
        try {
            Scanner in = new Scanner(new File(url));    // open the file
            while (in.hasNextLine()) {
                String str = in.nextLine();    // read one line (one sample) into a temporary variable
                indata.add(str);    // append each sample line to the Vector
            }
            return true;
        } catch (Exception e) {    // return false on any error
            return false;
        }
    }


    public static void pretreatment(Vector<String> indata) {   // preprocessing: extract each attribute value from the raw records, normalize it, and store the result in Vector<double[]> data
        Vector<double[]> temdata = new Vector<>();

        int i = 1;
        String t;
        while (i < indata.size()) {    // iterate over the lines of indata (starting at index 1)
            double[] tem = new double[14];
            t = indata.get(i);
            String[] sourceStrArray = t.split(",", 16);    // split the line into its individual attribute values

            for (int j = 0; j < 13; j++) {
                tem[j] = Double.parseDouble(sourceStrArray[j + 2]);    // convert each attribute value (columns 2..14) and store it in the double[] array
                if (tem[j] > max[j])
                    max[j] = tem[j];
                if (tem[j] < min[j])
                    min[j] = tem[j];
            }
            switch (sourceStrArray[15]) {    // map the textual air-quality label (column 15) to a numeric class 1..5
                case "Very High": {
                    tem[13] = 1;
                    break;
                }
                case "High": {
                    tem[13] = 2;
                    break;
                }
                case "Moderate": {
                    tem[13] = 3;
                    break;
                }
                case "Low": {
                    tem[13] = 4;
                    break;
                }
                case "Very low": {
                    tem[13] = 5;
                    break;
                }
                default:
                    break;

            }
            temdata.add(tem);    // add the parsed sample to temdata
            i++;
        }
        /******* The following block normalizes the data **********/
        for (int r = 0; r < max.length; r++) {
            weigth[r] = max[r] - min[r];
        }

        for (int r = 0; r < temdata.size(); r++) {
            double[] t1 = temdata.get(r);
            for (int j = 0; j < t1.length - 1; j++) {
                t1[j] = t1[j] / weigth[j];    // scale each attribute by its range (max - min)
            }

            data.add(t1);
        }

    }


    public static String Show_air_quality(double[] result) {    // returns the air-quality label for the given output vector
        String rt = "";
        int NO = 0;
        double max = 0;
        for (int i = 0; i < result.length; i++) {    // find the index of the largest output value
            if (result[i] >= max) {
                max = result[i];
                NO = i;
            }

        }
        switch (NO) {
            case 0: {
                rt = "Very high";
                break;
            }
            case 1: {
                rt = "High";
                break;
            }
            case 2: {
                rt = "Moderate";
                break;
            }
            case 3: {
                rt = "Low";
                break;
            }
            case 4: {
                rt = "Very low";
                break;
            }
            default:
                break;
        }
        return rt;
    }

    public static void main(String[] args) {
        long startTime = System.currentTimeMillis();    // record the program start time

        loadData("AirQualityUCI.data");    // load the training data
        pretreatment(indata);    // preprocess the data


        double[][] train_data = new double[data.size()][data.get(0).length - 1];    // build the training feature matrix (first 13 attributes per sample)
        int r = 0;
        while (r < data.size()) {
            double[] tem = data.get(r);
            for (int j = 0; j < tem.length - 1; j++) {
                train_data[r][j] = tem[j];
            }
            r++;
        }

        double[][] target = new double[data.size()][5];    // build the one-hot target vectors for the training samples
        r = 0;
        while (r < data.size()) {
            int t = (int) data.get(r)[13];    // class label 1..5 stored at index 13
            switch (t) {
                case 1: {
                    target[r] = new double[]{1.0, 0.0, 0.0, 0.0, 0.0};
                    break;
                }
                case 2: {
                    target[r] = new double[]{0.0, 1.0, 0.0, 0.0, 0.0};
                    break;
                }
                case 3: {
                    target[r] = new double[]{0.0, 0.0, 1.0, 0.0, 0.0};
                    break;
                }
                case 4: {
                    target[r] = new double[]{0.0, 0.0, 0.0, 1.0, 0.0};
                    break;
                }
                case 5: {
                    target[r] = new double[]{0.0, 0.0, 0.0, 0.0, 1.0};
                    break;
                }
                default:
                    break;
            }
            r++;
        }

        BP bp1 = new BP(13, 13, 5);    // create a network with 13 input nodes, 13 hidden nodes and 5 output nodes

        for (int s = 0; s < 10000; s++) {    // train for 10000 passes over the training set

            for (int i = 0; i < data.size(); i++) {    // training
                bp1.train(train_data[i], target[i]);
            }

            int correct = 0;
            for (int j = 0; j < data.size(); j++) {   // test on the training set
                double[] result = bp1.test(train_data[j]);
                double max = 0;
                int NO = 0;
                for (int i = 0; i < result.length; i++) {
                    if (result[i] >= max) {
                        max = result[i];
                        NO = i;
                    }

                }
                if (target[j][NO] == 1.0) {
                    correct++;
                }
                else if(s==9999)    // after the final (10000th) pass, print which test cases were misclassified
                    System.out.println("After training pass " + (s+1) + ", test case " + j + " was predicted incorrectly --------------");
            }

            double b=(correct * 1.0 / data.size()) * 100;    // compute the accuracy
            DecimalFormat df = new DecimalFormat( "0.00 ");    // set the output precision
            System.out.println("After training pass " + (s+1) + ", accuracy on the training set == " + df.format(b) + "%");
        }


        double[] x = new double[]{-200,883,-200,1.3,530,63,997,46,1102,617,13.7,68.2,1.0611};
        System.out.print("使用测试用例" + Arrays.toString(x) + "   根据神经网络计算预计空气质量为:");
        for(int i=0;i<x.length;i++)
            x[i]=x[i]/weigth[i];//对数据归一化

        double[] result = bp1.test(x);
        System.out.println(Show_air_quality(result));

        System.out.println("程序运行时间为:" + (System.currentTimeMillis() - startTime) * 1.0 / 1000 + " s");
    }
}
 



BP.java 

/*
 * This class defines the neural network and provides methods for training it and making predictions.
 * Created by 32706 on 2016/11/29.
 */
import java.util.Random;

public class BP {
    private final double[] input;                  // input layer activations (index 0 is the bias unit)
    private final double[] hidden;                 // hidden layer activations
    private final double[] output;                 // output layer activations
    private final double[] target;                 // expected output for the current training sample
    private final double[] hidDelta;               // error deltas of the hidden layer
    private final double[] optDelta;               // error deltas of the output layer
    private final double eta;                      // learning rate
    private final double momentum;                 // momentum factor
    private final double[][] iptHidWeights;        // input-to-hidden weight matrix
    private final double[][] hidOptWeights;        // hidden-to-output weight matrix
    private final double[][] iptHidPrevUptWeights; // previous input-to-hidden weight updates (for momentum)
    private final double[][] hidOptPrevUptWeights; // previous hidden-to-output weight updates (for momentum)
    public double optErrSum = 0d;                  // sum of absolute output-layer deltas from the last pass
    public double hidErrSum = 0d;                  // sum of absolute hidden-layer deltas from the last pass
    private final Random random;

    // Full constructor: layer sizes plus learning rate (eta) and momentum factor.
    public BP(int inputSize, int hiddenSize, int outputSize, double eta, double momentum) {
        input = new double[inputSize + 1];
        hidden = new double[hiddenSize + 1];
        output = new double[outputSize + 1];
        target = new double[outputSize + 1];
        hidDelta = new double[hiddenSize + 1];
        optDelta = new double[outputSize + 1];
        iptHidWeights = new double[inputSize + 1][hiddenSize + 1];
        hidOptWeights = new double[hiddenSize + 1][outputSize + 1];
        random = new Random(19881211);
        randomizeWeights(iptHidWeights);
        randomizeWeights(hidOptWeights);
        iptHidPrevUptWeights = new double[inputSize + 1][hiddenSize + 1];
        hidOptPrevUptWeights = new double[hiddenSize + 1][outputSize + 1];
        this.eta = eta;
        this.momentum = momentum;
    }



    // Initializes each weight to a random value in (-1, 1).
    private void randomizeWeights(double[][] matrix) {
        for (int i = 0, len = matrix.length; i != len; i++)
            for (int j = 0, len2 = matrix[i].length; j != len2; j++) {
                double real = random.nextDouble();
                matrix[i][j] = random.nextDouble() > 0.5 ? real : -real;
            }
    }

    // Convenience constructor with the default learning rate 0.25 and momentum 0.9.
    public BP(int inputSize, int hiddenSize, int outputSize) {
        this(inputSize, hiddenSize, outputSize, 0.25, 0.9);
    }

  
    // One training step: forward pass, delta computation, then weight adjustment.
    public void train(double[] trainData, double[] target) {
        loadInput(trainData);
        loadTarget(target);
        forward();
        calculateDelta();
        adjustWeight();
    }

    
    // Prediction: runs a forward pass on inData and returns the output-layer values.
    public double[] test(double[] inData) {
        if (inData.length != input.length - 1) {
            throw new IllegalArgumentException("Size Do Not Match.");
        }
        System.arraycopy(inData, 0, input, 1, inData.length);
        forward();
        return getNetworkOutput();
    }

    
    // Copies the output layer (excluding the bias slot) into a fresh array.
    private double[] getNetworkOutput() {
        int len = output.length;
        double[] temp = new double[len - 1];
        for (int i = 1; i != len; i++)
            temp[i - 1] = output[i];
        return temp;
    }
   
    private void loadTarget(double[] arg) {
        if (arg.length != target.length - 1) {
            throw new IllegalArgumentException("Size Do Not Match.");
        }
        System.arraycopy(arg, 0, target, 1, arg.length);
    }
    
    private void loadInput(double[] inData) {
        if (inData.length != input.length - 1) {
            throw new IllegalArgumentException("Size Do Not Match.");
        }
        System.arraycopy(inData, 0, input, 1, inData.length);
    }
    
    // Propagates activations from layer0 to layer1 through the given weight matrix, applying the sigmoid.
    private void forward(double[] layer0, double[] layer1, double[][] weight) {
        layer0[0] = 1.0;    // index 0 acts as the bias/threshold unit
        for (int j = 1, len = layer1.length; j != len; ++j) {
            double sum = 0;
            for (int i = 0, len2 = layer0.length; i != len2; ++i)
                sum += weight[i][j] * layer0[i];
            layer1[j] = sigmoid(sum);
        }
    }
   
    private void forward() {
        forward(input, hidden, iptHidWeights);
        forward(hidden, output, hidOptWeights);
    }
   
    // Output-layer deltas: delta = o * (1 - o) * (target - o).
    private void outputErr() {
        double errSum = 0;
        for (int idx = 1, len = optDelta.length; idx != len; ++idx) {
            double o = output[idx];
            optDelta[idx] = o * (1d - o) * (target[idx] - o);
            errSum += Math.abs(optDelta[idx]);
        }
        optErrSum = errSum;
    }

    // Hidden-layer deltas: back-propagate the output deltas through the hidden-to-output weights.
    private void hiddenErr() {
        double errSum = 0;
        for (int j = 1, len = hidDelta.length; j != len; ++j) {
            double o = hidden[j];
            double sum = 0;
            for (int k = 1, len2 = optDelta.length; k != len2; ++k)
                sum += hidOptWeights[j][k] * optDelta[k];
            hidDelta[j] = o * (1d - o) * sum;
            errSum += Math.abs(hidDelta[j]);
        }
        hidErrSum = errSum;
    }

    private void calculateDelta() {
        outputErr();
        hiddenErr();
    }
    
    // Weight update with momentum, equation (5): dW = momentum * prevDW + eta * delta * activation.
    private void adjustWeight(double[] delta, double[] layer,
                              double[][] weight, double[][] prevWeight) {

        layer[0] = 1;    // bias unit
        for (int i = 1, len = delta.length; i != len; ++i) {
            for (int j = 0, len2 = layer.length; j != len2; ++j) {
                double newVal = momentum * prevWeight[j][i] + eta * delta[i]
                        * layer[j];
                weight[j][i] += newVal;
                prevWeight[j][i] = newVal;
            }
        }
    }
    private void adjustWeight() {
        adjustWeight(optDelta, hidden, hidOptWeights, hidOptPrevUptWeights);
        adjustWeight(hidDelta, input, iptHidWeights, iptHidPrevUptWeights);
    }

    // Sigmoid activation function, equation (3).
    private double sigmoid(double val) {
        return 1d / (1d + Math.exp(-val));
    }
}  
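
The BP class is independent of the air-quality data and can be exercised on its own. As a quick, illustrative sanity check (not part of the original project), the following program trains it on the XOR function; with these hyperparameters it should typically converge to outputs near 0 and 1.

import java.util.Arrays;

// Illustrative standalone test of the BP class on XOR (not part of the original project).
public class BpXorDemo {
    public static void main(String[] args) {
        double[][] x = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
        double[][] y = {{0}, {1}, {1}, {0}};

        BP net = new BP(2, 4, 1, 0.5, 0.9);   // 2 inputs, 4 hidden nodes, 1 output
        for (int epoch = 0; epoch < 20000; epoch++)
            for (int i = 0; i < x.length; i++)
                net.train(x[i], y[i]);

        for (int i = 0; i < x.length; i++)
            System.out.printf("%s -> %.3f%n", Arrays.toString(x[i]), net.test(x[i])[0]);
    }
}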




