Keras (2): The Functional Model

Copyright notice: this is an original article by the blogger; reproduction without permission is prohibited. https://blog.csdn.net/jiangpeng59/article/details/77532016

Compared with the Sequential model, the functional model is more flexible (Sequential is in fact a special case of the functional API). This post gives a brief introduction to the functional model; see the official website for the full details.
The official example:

from keras.layers import Input, Dense
from keras.models import Model

# This returns a tensor
inputs = Input(shape=(784,))

# a layer instance is callable on a tensor, and returns a tensor
x = Dense(64, activation='relu')(inputs)
x = Dense(64, activation='relu')(x)
predictions = Dense(10, activation='softmax')(x)

# This creates a model that includes
# the Input layer and three Dense layers
model = Model(inputs=inputs, outputs=predictions)
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(data, labels)  # starts training (data and labels are your training arrays)

The layers.Input layer is a special case: it directly returns a tensor object. Other layers, such as layers.Dense:

Dense(64, activation='relu')

Here, Dense(...) returns a layer instance (which can also be viewed as a function), and that layer is applied to the tensor that immediately follows it in parentheses. The chained `function()()` calls are strongly reminiscent of currying in Scala.
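The two-step pattern (construct a layer object, then call it on a tensor) can be mimicked in plain Python. The sketch below is not Keras internals, just a toy illustration of the `function()()` idiom with a hypothetical ToyDense class:

```python
class ToyDense:
    """Toy stand-in for a Keras layer: the constructor stores the
    configuration, and __call__ applies the layer to an input."""
    def __init__(self, units, activation=None):
        self.units = units
        self.activation = activation

    def __call__(self, inputs):
        # A real Dense layer would compute activation(inputs @ W + b);
        # here we just tag each value to make the call chain visible.
        return [f"{self.activation}({v})" for v in inputs][: self.units]

# ToyDense(...) builds the layer; the second pair of parentheses calls it,
# exactly mirroring Dense(64, activation='relu')(inputs) in Keras.
x = ToyDense(2, activation='relu')([1.0, 2.0, 3.0])
print(x)  # → ['relu(1.0)', 'relu(2.0)']
```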

Q1: How can an arbitrary layer of a model be used as a sub-model, so that feeding in input data yields that layer's output?

Take a handwritten-digit autoencoder as an example:

from keras.layers import Input, Dense
from keras.models import Model
# this is the size of our encoded representations
encoding_dim = 32  # 32 floats -> compression of factor 24.5, assuming the input is 784 floats
# this is our input placeholder
input_img = Input(shape=(784,))
# "encoded" is the encoded representation of the input
encoded = Dense(encoding_dim, activation='relu')(input_img)
# "decoded" is the lossy reconstruction of the input
decoded = Dense(784, activation='sigmoid')(encoded)
# this model maps an input to its reconstruction
autoencoder = Model(input_img, decoded)

The model we end up with performs both encoding and decoding, but suppose we want to inspect what the encoder produces for a given input. How can we do that?

encoder = Model(input_img, encoded)
# x_test is the test input data
encoded_imgs = encoder.predict(x_test)
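Conceptually, the encoder sub-model simply reuses the same layer (and therefore the same weights) that already sits inside the autoencoder. The following plain-Python sketch, with toy functions standing in for the real Dense layers, illustrates that sharing:

```python
# Toy stand-ins for the two Dense layers; the "shared weights" are
# simply the function objects themselves.
def encode(x):          # plays the role of Dense(32, activation='relu')
    return [v * 0.5 for v in x]

def decode(z):          # plays the role of Dense(784, activation='sigmoid')
    return [v * 2.0 for v in z]

def autoencoder(x):     # Model(input_img, decoded): encode then decode
    return decode(encode(x))

def encoder(x):         # Model(input_img, encoded): the SAME encode step
    return encode(x)

x_test = [1.0, 2.0]
print(autoencoder(x_test))  # → [1.0, 2.0]  (full reconstruction)
print(encoder(x_test))      # → [0.5, 1.0]  (intermediate encoded output)
```

Because encoder calls the very same encode function used inside autoencoder, anything learned by the shared step is visible through both models, which is exactly why the Keras encoder sub-model needs no separate training.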

Decoding the encoded data on its own is slightly different:

# create a placeholder for an encoded (32-dimensional) input
encoded_input = Input(shape=(encoding_dim,))
# retrieve the last layer of the autoencoder model
decoder_layer = autoencoder.layers[-1]
# create the decoder model
decoder = Model(encoded_input, decoder_layer(encoded_input))

autoencoder.layers[-1] retrieves a layer of the model by index (a layer behaves like a function), and in the Model constructor it is applied to the new input. Note that the inputs argument of Model must be an Input tensor.
Finally, only the main autoencoder model needs to be compiled; the sub-models decoder and encoder can then be used directly:

autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
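The layer-by-index trick can likewise be mimicked in plain Python: a model's layers sit in a list, `layers[-1]` picks out the last one, and calling it on a fresh input builds a standalone decoder. A toy sketch (plain callables, not real Keras layer objects):

```python
# Toy "layers": plain callables standing in for the two Dense layers.
layers = [
    lambda x: [v * 0.5 for v in x],   # encoder layer (784 -> 32 in the real model)
    lambda z: [v * 2.0 for v in z],   # decoder layer (32 -> 784 in the real model)
]

decoder_layer = layers[-1]            # retrieve the last layer by index
encoded_input = [0.5, 1.0]            # stands in for Input(shape=(encoding_dim,))
decoded = decoder_layer(encoded_input)
print(decoded)                        # → [1.0, 2.0]
```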

References:
https://keras-cn.readthedocs.io/en/latest/models/model
https://blog.keras.io/building-autoencoders-in-keras.html
