Writing Keras layers

Copyright notice: this is an original post by the author. Reposting is welcome; please credit https://blog.csdn.net/m0_37733057/article/details/78337649

1. Lambda: concatenate the positive and negative parts of zero-mean input data. Lambda is used to write layers that need no training.

import keras.backend as K
from keras.layers.core import Lambda
from keras.models import Sequential
import numpy as np

np.random.seed(19931221)

def antirectifier(x):
    # center each sample, then concatenate its positive and negative parts
    x -= K.mean(x, axis=1, keepdims=True)
#    x = K.l2_normalize(x, axis=1)
    pos = K.relu(x)
    neg = K.relu(-x)
    return K.concatenate([pos, neg], axis=1)

def antirectifier_output_shape(input_shape):
    shape = list(input_shape)
    assert len(shape) == 2  # only valid for 2D tensors
    shape[-1] *= 2          # concatenation doubles the last dimension
    return tuple(shape)

input_data = np.random.randint(0, 10, (3, 2))
x1 = input_data - np.mean(input_data, axis=1, keepdims=True)  # the centering step, redone in NumPy for comparison
model = Sequential()
model.add(Lambda(antirectifier, output_shape=antirectifier_output_shape, input_shape=(2,)))
res = model.predict(input_data)

When Lambda is the first layer, input_shape must be specified; as an intermediate layer, it simply takes the previous layer's output as its input (see the sketch after the results below).

Input

input_data=[7 7
            9 0
            8 9]

Intermediate result

x1=[   0    0
     4.5 -4.5
    -0.5  0.5]

Final result

res=[  0    0   0   0
     4.5    0   0 4.5
       0  0.5 0.5   0]
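
For completeness, here is a minimal sketch of Lambda as an intermediate layer, reusing the antirectifier functions above (the Dense layer and its width are illustrative assumptions, not part of the original post):

from keras.layers import Dense

model = Sequential()
model.add(Dense(4, input_shape=(2,)))                      # first layer fixes the input shape
model.add(Lambda(antirectifier,                            # intermediate layer: no input_shape needed,
                 output_shape=antirectifier_output_shape)) # it consumes the Dense output directly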

When defining a multi-input layer with Lambda, it looks like this:

import keras.backend as K
import tensorflow as tf
from keras.layers.core import Lambda

def compare(x):
    pool_conf, conf = x[0], x[1]   # the two inputs arrive as one list and are unpacked here
    threshold = 0.1
    # sign is 1 where pool_conf > threshold, else 0 (see the note below)
    sign = (1 + K.sign(K.sign(pool_conf - tf.constant(threshold)) - 0.5)) / 2
    output = (1 - sign) * tf.maximum(1 - pool_conf, conf) + sign * tf.minimum(1 - pool_conf, conf)
#    output = tf.cond(tf.less_equal(pool_conf, tf.constant(threshold)),
#                     tf.maximum(1 - pool_conf, conf),
#                     tf.minimum(1 - pool_conf, conf))
    return output

def compare_output_shape(input_shape):
    return input_shape[0]

prior = Lambda(compare, output_shape=compare_output_shape)([gray_conf, conf])

Note: the two inputs are passed to the function as a single list and are split apart inside it. tf.cond only works on a scalar (constant) predicate, and TensorFlow's static-graph style is easy to get wrong here, so prefer the K backend functions where possible. K.sign() yields three values {-1, 0, 1}; to get only the two values {0, 1}, subtract 0.5, apply sign again (nesting), then shift and rescale.
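
As a quick numerical check of that nested-sign trick (a standalone sketch; the sample values and the TF1-style Session API are my assumptions, not from the original post), compare it against an elementwise tf.where reference:

import tensorflow as tf
import keras.backend as K

pool_conf = tf.constant([0.05, 0.1, 0.3])   # sample values straddling the threshold
threshold = 0.1

# nested-sign trick: 1.0 where pool_conf > threshold, else 0.0
mask = (1 + K.sign(K.sign(pool_conf - threshold) - 0.5)) / 2

# elementwise reference built with tf.where
ref = tf.where(pool_conf > threshold, tf.ones_like(pool_conf), tf.zeros_like(pool_conf))

with tf.Session() as sess:
    print(sess.run(mask))   # [0. 0. 1.]
    print(sess.run(ref))    # [0. 0. 1.]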

2. Custom non-recurrent layer: train a coefficient so that the output is 0.1 times the input.

from keras.models import Sequential
import numpy as np
from keras.engine.topology import Layer

np.random.seed(19931221)

class MyLayer(Layer):
    def __init__(self, output_dim, **kw):
        self.output_dim = output_dim
        self.SCALER = None
        super(MyLayer, self).__init__(**kw)
    def build(self, input_shape):
        input_dim = input_shape[1]
        # note how shape is written: a 1-tuple, one trainable scale per input dimension
        self.SCALER = self.add_weight(shape=(input_dim,), initializer='uniform', trainable=True)
        super(MyLayer, self).build(input_shape)
    def call(self, x):
        x *= self.SCALER   # elementwise scaling of the input
        return x
    def compute_output_shape(self, input_shape):
        return input_shape

input_data = np.random.randint(0, 10, (3000, 2))
test_data = np.random.randint(0, 10, (1, 2))   # test_data = [4, 1]
labels = 0.1 * input_data
model = Sequential()
model.add(MyLayer(2, input_shape=(2,)))
model.compile(optimizer='rmsprop', loss='mse')
model.fit(input_data, labels, verbose=1)
res = model.predict(test_data)                 # res = [0.40302727, 0.09999914]
model.get_weights()                            # weights = [0.10075682, 0.09999914]

The output dimension does not have to be declared: remove self.output_dim (and the output_dim argument) from __init__, and then MyLayer(input_shape=(2,)) suffices, as sketched below.
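
A minimal sketch of that variant, reusing the imports above (the class name MyScaler is my placeholder, not from the original post):

class MyScaler(Layer):
    def __init__(self, **kw):
        self.SCALER = None
        super(MyScaler, self).__init__(**kw)
    def build(self, input_shape):
        # the weight shape comes entirely from input_shape, so no output_dim is needed
        self.SCALER = self.add_weight(name='scaler', shape=(input_shape[1],),
                                      initializer='uniform', trainable=True)
        super(MyScaler, self).build(input_shape)
    def call(self, x):
        return x * self.SCALER
    def compute_output_shape(self, input_shape):
        return input_shape

model = Sequential()
model.add(MyScaler(input_shape=(2,)))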

self.SCALER may need a name:

self.SCALER = self.add_weight(name='scaler', shape=(input_dim,), initializer='uniform', trainable=True)

3. Constraining parameters

Reference: https://www.colabug.com/5345160.html
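
The post itself stops at the link, but as a minimal sketch of the idea (my assumption of the topic, not the linked article's code): Keras lets you attach a constraint object to a weight through the constraint argument of add_weight, e.g. to keep the scaler above inside [0, 1]:

import keras.backend as K
from keras.constraints import Constraint

class ClipTo01(Constraint):
    # hypothetical constraint: project weights back into [0, 1] after each update
    def __call__(self, w):
        return K.clip(w, 0.0, 1.0)

# inside build() of the custom layer:
# self.SCALER = self.add_weight(name='scaler', shape=(input_dim,),
#                               initializer='uniform', trainable=True,
#                               constraint=ClipTo01())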

