Deep Learning I - III Shallow Neural Network - Derivatives of Activation Functions

Derivatives of activation functions


Derivatives of sigmoid

(1) $\text{sigmoid} = g(z) = \dfrac{1}{1+e^{-z}}$

(2) $g'(z) = \dfrac{d}{dz}g(z) = \dfrac{d(1+e^{-z})^{-1}}{d(1+e^{-z})} \cdot \dfrac{d(1+e^{-z})}{d(-z)} \cdot \dfrac{d(-z)}{dz} = -(1+e^{-z})^{-2} \cdot e^{-z} \cdot (-1) = \dfrac{1}{1+e^{-z}} \cdot \dfrac{e^{-z}}{1+e^{-z}} = \dfrac{1}{1+e^{-z}} \cdot \dfrac{1+e^{-z}-1}{1+e^{-z}} = \dfrac{1}{1+e^{-z}} \left(1 - \dfrac{1}{1+e^{-z}}\right) = g(z)\bigl(1-g(z)\bigr)$

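The identity in equation (2) is easy to sanity-check numerically. Below is a minimal NumPy sketch (the function names `sigmoid` and `sigmoid_prime` are my own) comparing the analytic derivative $g(z)(1-g(z))$ against a central-difference approximation:

```python
import numpy as np

def sigmoid(z):
    """Sigmoid activation: g(z) = 1 / (1 + e^{-z})."""
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    """Analytic derivative via the identity g'(z) = g(z) * (1 - g(z))."""
    g = sigmoid(z)
    return g * (1.0 - g)

# Compare against a numerical (central-difference) derivative.
z = np.linspace(-5.0, 5.0, 11)
h = 1e-6
numeric = (sigmoid(z + h) - sigmoid(z - h)) / (2.0 * h)
print(np.allclose(sigmoid_prime(z), numeric))  # True
```

Note that $g'(z)$ peaks at $z = 0$ with value $g(0)(1-g(0)) = 0.25$.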

Derivatives of tanh

(3) $\sinh(z) = \dfrac{e^{z}-e^{-z}}{2}$

(4) $\cosh(z) = \dfrac{e^{z}+e^{-z}}{2}$

(5) $\dfrac{d}{dz}\sinh(z) = \dfrac{e^{z}+e^{-z}}{2} = \cosh(z)$

(6) $\dfrac{d}{dz}\cosh(z) = \dfrac{e^{z}-e^{-z}}{2} = \sinh(z)$

(7) $\tanh(z) = \dfrac{\sinh(z)}{\cosh(z)} = g(z) = \dfrac{e^{z}-e^{-z}}{e^{z}+e^{-z}}$

(8) $g'(z) = \dfrac{d}{dz}\bigl(\sinh(z)\cosh(z)^{-1}\bigr) = \dfrac{d\sinh(z)}{dz}\cosh(z)^{-1} + \sinh(z)\bigl(-\cosh(z)^{-2}\bigr)\dfrac{d\cosh(z)}{dz} = 1 - \sinh(z)\dfrac{1}{\cosh(z)^{2}}\sinh(z) = 1 - \tanh(z)^{2}$

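Equation (8) can be checked the same way. A small NumPy sketch (the helper name `tanh_prime` is my own) using the built-in `np.tanh`:

```python
import numpy as np

def tanh_prime(z):
    """Analytic derivative via the identity g'(z) = 1 - tanh(z)^2."""
    return 1.0 - np.tanh(z) ** 2

# Compare against a numerical (central-difference) derivative.
z = np.linspace(-3.0, 3.0, 13)
h = 1e-6
numeric = (np.tanh(z + h) - np.tanh(z - h)) / (2.0 * h)
print(np.allclose(tanh_prime(z), numeric))  # True
```

Like the sigmoid, the derivative is largest at $z = 0$ (where it equals 1) and decays toward 0 for large $|z|$.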

Derivatives of ReLU

(9) $\text{ReLU} = g(z) = \max(0, z)$

(10) $g'(z) = \begin{cases} 0 & \text{if } z < 0 \\ 1 & \text{if } z \ge 0 \end{cases}$

(Strictly speaking, ReLU is not differentiable at $z = 0$; in practice a value of 1 or 0 is simply assigned there, as in equation (10).)

(11) $\text{Leaky ReLU} = g(z) = \max(0.01z, z)$

(12) $g'(z) = \begin{cases} 0.01 & \text{if } z < 0 \\ 1 & \text{if } z \ge 0 \end{cases}$

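The piecewise definitions (9)-(12) translate directly into vectorized NumPy code. This is a minimal sketch (function names are my own) using `np.maximum` for the activations and `np.where` for the piecewise derivatives:

```python
import numpy as np

def relu(z):
    """ReLU activation: g(z) = max(0, z), applied elementwise."""
    return np.maximum(0.0, z)

def relu_prime(z):
    """Piecewise derivative; by convention, 1 at z = 0 (equation (10))."""
    return np.where(z < 0, 0.0, 1.0)

def leaky_relu(z, alpha=0.01):
    """Leaky ReLU: g(z) = max(alpha * z, z)."""
    return np.maximum(alpha * z, z)

def leaky_relu_prime(z, alpha=0.01):
    """Piecewise derivative: alpha for z < 0, 1 for z >= 0 (equation (12))."""
    return np.where(z < 0, alpha, 1.0)

z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu_prime(z).tolist())        # [0.0, 0.0, 1.0, 1.0, 1.0]
print(leaky_relu_prime(z).tolist())  # [0.01, 0.01, 1.0, 1.0, 1.0]
```

The small slope 0.01 for $z < 0$ is what keeps Leaky ReLU's gradient nonzero on negative inputs, unlike plain ReLU.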


Reposted from blog.csdn.net/zfcjhdq/article/details/80705711