DeepLearning.ai - Week 1 - Convolution Model - Step by Step

1 - Import Packages

import numpy as np 
import h5py
import math
import matplotlib.pyplot as plt
%matplotlib inline

2 - Global Parameters Setting

plt.rcParams["figure.figsize"] = (5.0, 4.0) # set default figure size
plt.rcParams["image.interpolation"] = "nearest" # set image interpolation style
plt.rcParams["image.cmap"] = "gray" # set default color map

# Reload modules automatically, so edits take effect without restarting the kernel
%load_ext autoreload
%autoreload 2
# Random seed for reproducibility
np.random.seed(1)

3 - Convolutional Neural Networks

3.1 - Zero-padding

  Given an input tensor X and a padding amount pad, pad the height and width of X with zeros. This can be implemented simply with the pad function from the numpy module.

# GRADED FUNCTION: zero_pad

def zero_pad(X, pad):
    """
    Pad with zeros all images of the dataset X. The padding is applied to the height and width of an image, 
    as illustrated in Figure 1.
    
    Argument:
    X -- python numpy array of shape (m, n_H, n_W, n_C) representing a batch of m images
    pad -- integer, amount of padding around each image on vertical and horizontal dimensions
    
    Returns:
    X_pad -- padded image of shape (m, n_H + 2*pad, n_W + 2*pad, n_C)
    """
    
    ### START CODE HERE ### (≈ 1 line)
    # np.pad: the first argument is the tensor to pad; the second gives the
    # (before, after) pad widths for each dimension; the third is the pad mode;
    # constant_values gives, per dimension and per side, the value to pad with.
    X_pad = np.pad(X, ((0, 0), (pad, pad), (pad, pad), (0, 0)), "constant", constant_values=((0, 0), (0, 0), (0, 0), (0, 0)))
    ### END CODE HERE ###
    
    return X_pad
np.random.seed(1) # random seed
x = np.random.randn(4, 3, 3, 2) # random input tensor
x_pad = zero_pad(x, 2) # zero-pad the input x
print ("x.shape =", x.shape) 
print ("x_pad.shape =", x_pad.shape)
print ("x[1,1] =", x[1,1])
print ("x_pad[1,1] =", x_pad[1,1])

fig, axarr = plt.subplots(1, 2)
axarr[0].set_title('x')
axarr[0].imshow(x[0,:,:,0])
axarr[1].set_title('x_pad')
axarr[1].imshow(x_pad[0,:,:,0])
Result:
x.shape = (4, 3, 3, 2)
x_pad.shape = (4, 7, 7, 2)
x[1,1] = [[ 0.90085595 -0.68372786]
 [-0.12289023 -0.93576943]
 [-0.26788808  0.53035547]]
x_pad[1,1] = [[ 0.  0.]
 [ 0.  0.]
 [ 0.  0.]
 [ 0.  0.]
 [ 0.  0.]
 [ 0.  0.]
 [ 0.  0.]]
<matplotlib.image.AxesImage at 0x242c35b4a58>
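As a quick sanity check (a minimal sketch, not part of the graded function), the padded tensor should contain the original values in its interior and zeros on the border:

```python
import numpy as np

def zero_pad(X, pad):
    # Pad only the height and width dimensions with zeros (same as above).
    return np.pad(X, ((0, 0), (pad, pad), (pad, pad), (0, 0)), "constant")

np.random.seed(1)
x = np.random.randn(4, 3, 3, 2)
x_pad = zero_pad(x, 2)

# The interior of the padded tensor equals the original input...
assert np.allclose(x_pad[:, 2:-2, 2:-2, :], x)
# ...and the first/last pad rows and columns are all zeros.
assert np.all(x_pad[:, :2, :, :] == 0) and np.all(x_pad[:, :, :2, :] == 0)
print("zero_pad interior and border checks passed")
```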

3.2 - Single step of convolution

  Given an input slice a_slice_prev, compute the result of applying a filter W of the same shape together with a bias b. Since numpy supports element-wise tensor multiplication, the result is obtained by multiplying, summing over all entries, and adding the bias.

# GRADED FUNCTION: conv_single_step

def conv_single_step(a_slice_prev, W, b):
    """
    Apply one filter defined by parameters W on a single slice (a_slice_prev) of the output activation 
    of the previous layer.
    
    Arguments:
    a_slice_prev -- slice of input data of shape (f, f, n_C_prev)
    W -- Weight parameters contained in a window - matrix of shape (f, f, n_C_prev)
    b -- Bias parameters contained in a window - matrix of shape (1, 1, 1)
    
    Returns:
    Z -- a scalar value, result of convolving the sliding window (W, b) on a slice x of the input data
    """

    ### START CODE HERE ### (≈ 2 lines of code)
    # Element-wise product between a_slice and W. Do not add the bias yet.
    s = a_slice_prev * W
    # Sum over all entries of the volume s.
    Z = np.sum(s)
    # Add bias b to Z. Cast b to a float() so that Z results in a scalar value.
    Z = Z + float(b)
    ### END CODE HERE ###

    return Z
np.random.seed(1)
# Random a_slice_prev and a filter W of the same shape
a_slice_prev = np.random.randn(4, 4, 3)
W = np.random.randn(4, 4, 3)
b = np.random.randn(1, 1, 1)
Z = conv_single_step(a_slice_prev, W, b)
print("Z =", Z)
Result:
Z = -6.99908945068
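Putting the two pieces together (a minimal sketch, assuming the zero_pad and conv_single_step definitions above): a single output value of a convolution is obtained by padding the input, slicing an f x f window at the desired position, and applying conv_single_step. The shapes and position below are illustrative choices, not part of the assignment.

```python
import numpy as np

def zero_pad(X, pad):
    return np.pad(X, ((0, 0), (pad, pad), (pad, pad), (0, 0)), "constant")

def conv_single_step(a_slice_prev, W, b):
    return np.sum(a_slice_prev * W) + float(b)

np.random.seed(1)
A_prev = np.random.randn(1, 5, 5, 3)   # one 5x5 input with 3 channels
W = np.random.randn(3, 3, 3)           # a single 3x3 filter
b = np.random.randn(1, 1, 1)

f, pad, stride = 3, 1, 1
A_pad = zero_pad(A_prev, pad)

# Output value at position (h, w) = (0, 0): slice the f x f window
# starting at (h*stride, w*stride) in the padded input.
a_slice = A_pad[0, 0:f, 0:f, :]
z00 = conv_single_step(a_slice, W, b)

# It matches the direct formula sum(slice * W) + b.
assert np.isclose(z00, np.sum(a_slice * W) + float(b))
print("z00 =", z00)
```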

Reposted from www.cnblogs.com/CZiFan/p/9476399.html