TensorFlow Keras: error when saving a model that uses custom layers, e.g. TypeError: ('Not JSON Serializable:', ...) and similar

Cause of the error

The cause lies in how the custom layer is defined: a plain Python function wrapped in a Lambda layer.
For example, a custom contrastive loss (the function itself is not a good one, so I won't explain it in detail; it is only here as an example):

# Assumes: import tensorflow as tf
#          from tensorflow.keras import layers, backend as K
def contrastive_loss_layer(top_different, deep_different, y_true):
    margin = 1000
    top_distance = tf.norm(top_different, ord=2)
    deep_distance = tf.norm(deep_different, ord=2)
    mul_distance = K.log(top_distance * deep_distance)
    loss = (1 - y_true) * mul_distance + y_true * tf.square(tf.maximum(margin - mul_distance, 0))
    loss = tf.reduce_mean(loss)
    return loss

# The loss is attached by wrapping the function in a Lambda layer:
model.add_loss(layers.Lambda(lambda x: contrastive_loss_layer(*x), name='loss')(
    [left_inputs - right_inputs, left_output - right_output, label_inputs]))
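
With this Lambda-based approach, training itself typically runs fine; the TypeError only appears when Keras tries to serialize the layer configuration to JSON. A minimal sketch of where it surfaces (assuming `model` is the functional model these snippets come from):

# Saving to HDF5 (or calling to_json) converts every layer's config to JSON;
# the Lambda layer's captured function and tensors cannot be serialized,
# which is where the TypeError is raised.
model.save("siamese_model.h5")   # -> TypeError: ('Not JSON Serializable:', ...)
# model.to_json() fails for the same reason.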

Replacing it with a Layer subclass removes the error:

import tensorflow as tf
from tensorflow.keras import layers, backend as K


class ContrastiveLoss(layers.Layer):
    def __init__(self, **kwargs):
        super(ContrastiveLoss, self).__init__(**kwargs)

    def call(self, inputs, **kwargs):
        """
        inputs: Input tensor, or list/tuple of input tensors.
        As the parent class layers.Layer documents, call() expects `inputs` to be a
        single tensor or a list/tuple of tensors. The layer therefore cannot take
        several separate arguments: the tensors must be packed into a list/tuple
        and unpacked inside call(), otherwise an error is raised.
        """
        # Unpack the inputs
        top_different, deep_different, y_true = inputs
        margin = 100
        top_distance = tf.norm(top_different, ord=2)
        deep_distance = tf.norm(deep_different, ord=2)
        mul_distance = K.log(top_distance * deep_distance)
        loss = (1 - y_true) * mul_distance + y_true * tf.square(tf.maximum(margin - mul_distance, 0))
        loss = tf.reduce_mean(loss)
        # Key step: register the loss on the layer so it takes effect, and add it as
        # a metric so it can be tracked live on the Keras progress bar.
        self.add_loss(loss, inputs=True)
        self.add_metric(loss, aggregation="mean", name="C_loss")
        return loss
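
For completeness, here is a minimal, hypothetical sketch of wiring this layer into a siamese-style model and saving/reloading it. The input names, shapes, and the shared encoder are placeholders (the original post does not show the full model); layers.Subtract() is used instead of raw tensor subtraction so that every node in the graph is a serializable Keras layer:

import tensorflow as tf
from tensorflow.keras import layers, Model

# Placeholder inputs; the shapes are assumptions, not the original model's.
left_inputs = layers.Input(shape=(128,), name="left")
right_inputs = layers.Input(shape=(128,), name="right")
label_inputs = layers.Input(shape=(1,), name="label")

# A small shared encoder standing in for whatever network the real model uses.
encoder = tf.keras.Sequential([layers.Dense(64, activation="relu"),
                               layers.Dense(32)])
left_output = encoder(left_inputs)
right_output = encoder(right_inputs)

# The loss layer takes the two difference tensors plus the label as one list.
loss_out = ContrastiveLoss(name="contrastive_loss")(
    [layers.Subtract()([left_inputs, right_inputs]),
     layers.Subtract()([left_output, right_output]),
     label_inputs])

model = Model(inputs=[left_inputs, right_inputs, label_inputs], outputs=loss_out)
model.compile(optimizer="adam")   # no loss argument needed: add_loss already attached it

model.save("siamese_model.h5")    # serializes without the TypeError
restored = tf.keras.models.load_model(
    "siamese_model.h5",
    custom_objects={"ContrastiveLoss": ContrastiveLoss})

When loading, the custom layer must be passed via custom_objects; since ContrastiveLoss takes no constructor arguments, the base class get_config is sufficient.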

Reposted from blog.csdn.net/qq_44930937/article/details/104509254