An RNN Example

Copyright notice: This is an original post by the author; do not repost without permission. https://blog.csdn.net/love__live1/article/details/79481192

The complete set of machine-learning algorithm write-ups is available at fenghaootong-github.

Airline Passenger Traffic Forecasting

Dataset

The dataset has two columns, time and passenger count; only the passenger-count column is used here.

Importing modules

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
from keras.models import Sequential
from keras.layers import LSTM, Dense, Activation
import warnings
warnings.filterwarnings('ignore')
Using TensorFlow backend.

Loading the data

#load the data
df = pd.read_csv('../DATA/airData.csv', sep=',')
df = df.set_index('time')
df

         passengers
time
1949-01         112
1949-02         118
1949-03         132
1949-04         129
1949-05         121
...             ...
1960-08         606
1960-09         508
1960-10         461
1960-11         390
1960-12         432

144 rows × 1 columns

Plotting

#plot the passenger series
df['passengers'].plot()
plt.show()

[Figure: monthly airline passenger counts, 1949–1960, showing a rising trend with yearly seasonality]

Data preprocessing

#use only the passenger-count column
df = pd.read_csv('DATA/data.csv', sep=',', usecols=[1])
data_all = np.array(df).astype(float)
#normalize to [0, 1]
scaler = MinMaxScaler()
data_all = scaler.fit_transform(data_all)
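What MinMaxScaler computes is simple enough to write out by hand. A numpy-only sketch (the toy values below are made up for illustration, not rows from the dataset):

```python
import numpy as np

# Toy "passenger counts" (made-up values, not from the dataset)
toy = np.array([104.0, 112.0, 358.0, 622.0])

lo, hi = toy.min(), toy.max()
scaled = (toy - lo) / (hi - lo)       # every value mapped into [0, 1]
print(scaled.min(), scaled.max())     # 0.0 1.0

# the inverse map, used later to bring predictions back to passenger counts
restored = scaled * (hi - lo) + lo
print(np.allclose(restored, toy))     # True
```

The inverse map is exactly what `scaler.inverse_transform` applies in the comparison section at the end.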

Building the time-series windows

#slide a window of length sequence_length + 1 over the series;
#the first sequence_length values are inputs, the last one is the target
sequence_length = 10
data = []
for i in range(len(data_all) - sequence_length - 1):
    data.append(data_all[i: i + sequence_length + 1])
reshaped_data = np.array(data).astype('float64')
reshaped_data
array([[[ 0.01544402],
        [ 0.02702703],
        [ 0.05405405],
        ..., 
        [ 0.06177606],
        [ 0.02895753],
        [ 0.        ]],

       [[ 0.02702703],
        [ 0.05405405],
        [ 0.04826255],
        ..., 
        [ 0.02895753],
        [ 0.        ],
        [ 0.02702703]],

       [[ 0.05405405],
        [ 0.04826255],
        [ 0.03281853],
        ..., 
        [ 0.        ],
        [ 0.02702703],
        [ 0.02123552]],

       ..., 
       [[ 0.4980695 ],
        [ 0.58108108],
        [ 0.6042471 ],
        ..., 
        [ 1.        ],
        [ 0.96911197],
        [ 0.77992278]],

       [[ 0.58108108],
        [ 0.6042471 ],
        [ 0.55405405],
        ..., 
        [ 0.96911197],
        [ 0.77992278],
        [ 0.68918919]],

       [[ 0.6042471 ],
        [ 0.55405405],
        [ 0.60810811],
        ..., 
        [ 0.77992278],
        [ 0.68918919],
        [ 0.55212355]]])
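The windowing loop is easier to follow on a small toy series where the shapes can be checked by eye (a standalone sketch; `series` stands in for `data_all`):

```python
import numpy as np

# Stand-in for data_all: a column vector of 20 values
series = np.arange(20, dtype=float).reshape(-1, 1)
sequence_length = 5

# Same loop as above: each window holds sequence_length inputs plus one target
data = []
for i in range(len(series) - sequence_length - 1):
    data.append(series[i: i + sequence_length + 1])
windows = np.array(data)

print(windows.shape)        # (14, 6, 1): 14 windows of length 6
x, y = windows[:, :-1], windows[:, -1]
print(x.shape, y.shape)     # (14, 5, 1) (14, 1)
```

Note that `range(len(series) - sequence_length - 1)` leaves out the last possible window; the article's code does the same, which is why 144 rows yield 133 windows rather than 134.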

训练集和测试集

split = 0.8
np.random.shuffle(reshaped_data)
x = reshaped_data[:, :-1]
y = reshaped_data[:, -1]
split_boundary = int(reshaped_data.shape[0] * split)
train_x = x[: split_boundary]
test_x = x[split_boundary:]

train_y = y[: split_boundary]
test_y = y[split_boundary:]
train_x = np.reshape(train_x, (train_x.shape[0], train_x.shape[1], 1))
test_x = np.reshape(test_x, (test_x.shape[0], test_x.shape[1], 1))
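A quick arithmetic check of how many windows end up in each split (a standalone sketch; the numbers match the sample counts Keras reports during training):

```python
# Sample counts implied by the code above (pure arithmetic).
n_rows = 144                                  # rows in the dataset
sequence_length = 10
n_windows = n_rows - sequence_length - 1      # loop iterations above -> 133
boundary = int(n_windows * 0.8)               # 80/20 boundary -> 106
# validation_split=0.1 in model.fit later peels off the last 10%
# of the 106 training windows
n_train = int(boundary * (1 - 0.1))           # 95
n_val = boundary - n_train                    # 11
print(n_windows, boundary, n_train, n_val)    # 133 106 95 11
```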

Building the LSTM model

#build the LSTM model (input_dim/output_dim are Keras 1 arguments;
#the modern spelling is shown here)
model = Sequential()
model.add(LSTM(50, input_shape=(sequence_length, 1), return_sequences=True))
print(model.layers)
model.add(LSTM(100, return_sequences=False))
model.add(Dense(1))
model.add(Activation('linear'))

model.compile(loss='mse', optimizer='rmsprop')
[<keras.layers.recurrent.LSTM object at 0x000001D78422DC50>]
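To see what each LSTM layer actually computes per time step, here is the textbook LSTM cell written in plain numpy. This is an illustrative sketch of the gate equations, not Keras's exact implementation (Keras adds bias conventions, initializers, and fused kernels on top):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step (textbook formulation).

    x: input (input_dim,); h, c: previous hidden/cell state (units,)
    W: (4*units, input_dim); U: (4*units, units); b: (4*units,)
    """
    units = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[0*units:1*units])   # input gate
    f = sigmoid(z[1*units:2*units])   # forget gate
    g = np.tanh(z[2*units:3*units])   # candidate cell state
    o = sigmoid(z[3*units:4*units])   # output gate
    c_new = f * c + i * g             # blend old memory with new candidate
    h_new = o * np.tanh(c_new)        # expose a gated view of the memory
    return h_new, c_new

# Run a tiny 1-feature sequence through a 4-unit cell with random weights
rng = np.random.default_rng(0)
units, input_dim = 4, 1
W = rng.normal(size=(4 * units, input_dim))
U = rng.normal(size=(4 * units, units))
b = np.zeros(4 * units)
h, c = np.zeros(units), np.zeros(units)
for x_t in np.array([[0.1], [0.2], [0.3]]):
    h, c = lstm_step(x_t.ravel(), h, c, W, U, b)
print(h.shape)  # (4,) -- one hidden vector per time step
```

With `return_sequences=True` the layer returns `h` at every time step (needed to feed the second LSTM); with `return_sequences=False` only the final `h` is returned.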

Training the model

model.fit(train_x, train_y, batch_size=512, epochs=100, validation_split=0.1)
predict = model.predict(test_x)
predict = np.reshape(predict, (predict.size, ))
Train on 95 samples, validate on 11 samples
Epoch 1/100
95/95 [==============================] - 0s 253us/step - loss: 0.0117 - val_loss: 0.0073
Epoch 2/100
95/95 [==============================] - 0s 248us/step - loss: 0.0121 - val_loss: 0.0093
Epoch 3/100
95/95 [==============================] - 0s 242us/step - loss: 0.0116 - val_loss: 0.0073
...
Epoch 99/100
95/95 [==============================] - 0s 258us/step - loss: 0.0056 - val_loss: 0.0069
Epoch 100/100
95/95 [==============================] - 0s 269us/step - loss: 0.0059 - val_loss: 0.0046

Comparison

predict_y = scaler.inverse_transform([[i] for i in predict])
test = scaler.inverse_transform(test_y)

plt.plot(predict_y, 'g:', label='predict')
plt.plot(test, 'r-', label='true')
plt.legend()
plt.show()

[Figure: predicted vs. true passenger counts on the test set]
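Beyond eyeballing the plot, a single number such as RMSE on the original scale summarizes the fit. A standalone sketch with made-up toy values (in practice `predict_y` and `test` from the code above would be substituted):

```python
import numpy as np

# Toy predictions and ground truth (made-up values, original passenger scale)
predict_y = np.array([420.0, 460.0, 500.0])
true_y = np.array([432.0, 461.0, 508.0])

# Root-mean-square error: typical miss, in passengers
rmse = np.sqrt(np.mean((predict_y - true_y) ** 2))
print(round(rmse, 2))  # 8.35
```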
