Keras Basics I: Sequential Model


Introduction to Keras

Keras is a high-level deep learning API implemented in Python, with support for TensorFlow and Theano as backends. Keras has also become TensorFlow's official high-level API, so the two now integrate more closely. Keras supports concise, rapid prototyping, provides CNNs and RNNs, and switches seamlessly between CPU and GPU. In addition, Keras models can be converted directly to Core ML models for deployment on iOS devices.

If you are familiar with basic deep learning concepts, Keras is easy to pick up for quickly reproducing results: you do not need to implement most layers yourself, so the barrier to entry is low.

Sequential Model

A Sequential model is a linear stack of layers. You can either pass a list of layers to the constructor or append layers one at a time with the add() method:

from keras.models import Sequential
from keras.layers import Dense, Activation

# the add() method
model = Sequential()
model.add(Dense(32, input_shape=(784,)))
model.add(Activation('relu'))
model.add(Dense(10))
model.add(Activation('softmax'))
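The list form mentioned above builds the same network in one call by passing the layers to the Sequential constructor:

```python
from keras.models import Sequential
from keras.layers import Dense, Activation

# equivalent model built from a layer list
model = Sequential([
    Dense(32, input_shape=(784,)),
    Activation('relu'),
    Dense(10),
    Activation('softmax'),
])
```

Both forms produce an identical model; the list form is convenient when the architecture is fixed, while add() is handy when layers are chosen conditionally.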

After defining the model, call compile() to configure it with three arguments: optimizer, loss, and metrics. Both loss and metrics may be custom functions:

# for a multi-class classification problem
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
# For a binary classification problem
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])

# For a mean squared error regression problem
model.compile(optimizer='rmsprop',
              loss='mse')


# for a custom metric
import keras.backend as K

def mean_pred(y_true,y_pred):
    return K.mean(y_pred)


model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy', mean_pred])

Once the model is defined and compiled, train it by passing data to fit() or fit_generator():

model.fit(x_train, y_train, epochs=20, batch_size=128)
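fit_generator() takes a Python generator that yields (inputs, targets) batches, which is useful when the dataset does not fit in memory (in recent Keras/TensorFlow versions, fit() itself accepts generators and fit_generator() is deprecated). A minimal sketch of such a generator, assuming NumPy arrays x_train and y_train:

```python
import numpy as np

def batch_generator(x, y, batch_size=128):
    """Yield shuffled (x, y) batches indefinitely, as fit_generator expects."""
    n = len(x)
    while True:
        idx = np.random.permutation(n)  # reshuffle once per pass over the data
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            yield x[batch], y[batch]

# hypothetical usage:
# model.fit_generator(batch_generator(x_train, y_train),
#                     steps_per_epoch=len(x_train) // 128, epochs=20)
```

Because the generator loops forever, steps_per_epoch must be given so Keras knows where each epoch ends.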

After training, call evaluate() to measure the model's performance on held-out data:

score = model.evaluate(x_test, y_test, batch_size=128)

The following handwritten-digit (MNIST) classification example shows how to build a model in Keras:

from keras.layers import Dense,Dropout
from keras import models
from keras.datasets import mnist
from keras.utils import to_categorical # convert int labels to one-hot vector
# define model
model = models.Sequential()
model.add(Dense(128, activation='relu', input_dim=784))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))
# print model
model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_79 (Dense)             (None, 128)               100480    
_________________________________________________________________
dropout_21 (Dropout)         (None, 128)               0         
_________________________________________________________________
dense_80 (Dense)             (None, 64)                8256      
_________________________________________________________________
dropout_22 (Dropout)         (None, 64)                0         
_________________________________________________________________
dense_81 (Dense)             (None, 10)                650       
=================================================================
Total params: 109,386
Trainable params: 109,386
Non-trainable params: 0
_________________________________________________________________
# load data
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = train_images.astype('float32')/255 # normalize to 0~1
test_images = test_images.astype('float32')/255
train_images = train_images.reshape((60000,-1))
test_images = test_images.reshape((10000,-1))
# convert to one-hot vectors
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
# define training config
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])

# train the model
model.fit(train_images, train_labels, epochs=5, batch_size=64)


# evaluate the model
test_loss, test_accuracy = model.evaluate(test_images, test_labels)

print("test loss:", test_loss)
print("test acc:", test_accuracy)
Epoch 1/5
60000/60000 [==============================] - 7s 113us/step - loss: 0.6265 - acc: 0.8106
Epoch 2/5
60000/60000 [==============================] - 5s 83us/step - loss: 0.3415 - acc: 0.9079
Epoch 3/5
60000/60000 [==============================] - 5s 82us/step - loss: 0.2935 - acc: 0.9228
Epoch 4/5
60000/60000 [==============================] - 5s 82us/step - loss: 0.2749 - acc: 0.9312
Epoch 5/5
60000/60000 [==============================] - 5s 84us/step - loss: 0.2656 - acc: 0.9356
10000/10000 [==============================] - 1s 128us/step
test loss: 0.1488323472943157
test acc: 0.9647
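As a quick sanity check on the label preprocessing above, to_categorical maps each integer class label to a one-hot row:

```python
import numpy as np
from keras.utils import to_categorical

labels = np.array([0, 2, 1])
onehot = to_categorical(labels, num_classes=3)
# each row has a single 1 at the index of the original label
print(onehot)
```

This is why the final Dense(10, activation='softmax') layer pairs with categorical_crossentropy: both the targets and the predictions are length-10 probability-style vectors.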

CNN

from keras.datasets import mnist
from keras.utils import to_categorical  # convert int labels to one-hot vectors
from keras import models, layers

# define model
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))


#print model
# model.summary()

#load data
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = train_images.reshape((60000, 28, 28, 1))
train_images = train_images.astype('float32') / 255  # normalize to 0~1

test_images = test_images.reshape((10000, 28, 28, 1))
test_images = test_images.astype('float32') / 255

# convert to one-hot vectors
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)

# define training config
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])

# train the model
model.fit(train_images, train_labels, epochs=5, batch_size=64)

# evaluate the model
test_loss, test_accuracy = model.evaluate(test_images, test_labels)
print("test loss:", test_loss)
print("test accuracy:", test_accuracy)
Epoch 1/5
60000/60000 [==============================] - 17s 276us/step - loss: 0.1867 - acc: 0.9410
Epoch 2/5
38592/60000 [==================>...........] - ETA: 3s - loss: 0.0514 - acc: 0.9841
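A useful way to check the CNN above is to trace the feature-map sizes by hand: each 3×3 'valid'-padding Conv2D shrinks each spatial side by 2, and each 2×2 MaxPooling2D halves it (rounding down), so Flatten feeds 3·3·64 = 576 values into the Dense(64) layer. The arithmetic as a small sketch:

```python
def conv_out(size, kernel=3):
    # 'valid' padding, stride 1: output side = input side - kernel + 1
    return size - kernel + 1

def pool_out(size, pool=2):
    # 2x2 max pooling halves each side, rounding down
    return size // pool

s = 28
s = pool_out(conv_out(s))  # conv: 26, pool: 13
s = pool_out(conv_out(s))  # conv: 11, pool: 5
s = conv_out(s)            # conv: 3
flat = s * s * 64          # 3 * 3 * 64 = 576 inputs to Dense(64)
```

Running model.summary() (commented out above) would confirm these shapes layer by layer.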


Reprinted from blog.csdn.net/zhonglongshen/article/details/94719571