Understanding feature maps, the convolution kernel concept, the number of convolution kernels, filters, and channels in CNNs

Reference link: https://blog.csdn.net/xys430381_1/article/details/82529397
The author writes well and clears up many basic questions.

Understanding the feature map

A feature map comes from the output produced when the input goes through a convolution. That output is generally a set of two-dimensional images stacked together like sheets, and each of these two-dimensional images is called a feature map.
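
For instance, here is a minimal sketch of that stacking (the 3-channel 32*32 input and the 8 kernels are arbitrary choices for illustration):

import torch

x = torch.randn(1, 3, 32, 32)                           # one 32*32 RGB image
conv = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1)
out = conv(x)
print(out.shape)        # torch.Size([1, 8, 32, 32]): a stack of 8 feature maps
print(out[0, 0].shape)  # torch.Size([32, 32]): each feature map is a 2-D image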

How is a feature map produced?

However many convolution kernels there are, that is how many feature maps there will be. The feature maps output when the input data goes through the convolution operation are, as the name suggests, the features extracted by the convolution kernels.

More feature maps mean that more features are extracted, which may make it possible to recognize the data more accurately.

Understanding the convolution kernel

A convolution kernel is also called a filter.
Each convolution kernel has three dimensions: length, width, and depth. The depth generally does not need to be set by hand, because it is automatically made equal to the depth (number of channels) of the input data.
However many convolution kernels there are, that is how many feature maps the layer outputs.

For example, in the torch.nn.Conv2d() function:

torch.nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2)
# 2-D convolution layer: 1 input channel, 16 output channels (i.e. 16 filters, that is, 16 convolution kernels); kernel size 5*5*1 (the input has 1 channel, so the kernel depth is automatically set to 1); stride 1; zero padding of 2 on each side
# The output of this convolution has the same spatial size as the input, because the padding offsets the shrinking caused by the convolution, but a different depth: 28*28*16 (16 is the depth)
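
As a quick check, the output spatial size is \((W - K + 2P)/S + 1 = (28 - 5 + 2 \times 2)/1 + 1 = 28\). Below is a minimal sketch of this check, assuming a single 28*28 single-channel input image (for example, MNIST):

import torch

conv = torch.nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2)
# The kernel weights have shape (out_channels, in_channels, kH, kW):
# 16 kernels, each of depth 1 because the input has 1 channel.
print(conv.weight.shape)  # torch.Size([16, 1, 5, 5])

x = torch.randn(1, 1, 28, 28)  # a batch containing one 28*28 single-channel image
y = conv(x)
print(y.shape)  # torch.Size([1, 16, 28, 28]): 16 feature maps, each still 28*28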

The number of convolution kernels: many articles say that as the network gets deeper, the length and width of the feature maps become smaller, and the features extracted by the convolution become more representative, so the later convolution layers need more convolution kernels; the number is typically doubled (some networks set it more specifically according to the experiment). The referenced article illustrates this with a diagram: in the middle is a convolution kernel, which is three-dimensional, and its output is one feature map.
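
A minimal sketch of this doubling pattern (the channel counts 16/32/64, the 3*3 kernels, and the pooling layers are illustrative assumptions, not taken from the article):

import torch.nn as nn

# The spatial size halves at each pooling step while the number of kernels doubles.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),   # 28*28*1  -> 28*28*16
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 28*28*16 -> 14*14*16
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # 14*14*16 -> 14*14*32
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 14*14*32 -> 7*7*32
    nn.Conv2d(32, 64, kernel_size=3, padding=1),  # 7*7*32   -> 7*7*64
)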

The CNN learning process: updating the values of the convolution kernels

The values of the convolution kernels start out random. After each forward pass the network produces a class for the image; of course the first results are not very accurate. Then, through the loss function, the CNN updates the values of these convolution kernels and learns again. After many rounds of such learning, the CNN finds the best convolution kernel parameters, so that the extracted features can accurately distinguish the images, and with that the CNN's learning process is complete.
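
A minimal sketch of one such update step (the tiny model, the random data, and the SGD optimizer are illustrative assumptions):

import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(1, 4, kernel_size=3, padding=1),
                      nn.ReLU(), nn.Flatten(), nn.Linear(4 * 28 * 28, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 1, 28, 28)         # a batch of fake 28*28 images
labels = torch.randint(0, 10, (8,))   # fake class labels

before = model[0].weight.detach().clone()  # kernel values before the update
loss = loss_fn(model(x), labels)
loss.backward()                            # gradients w.r.t. the kernel values
optimizer.step()                           # the kernel values are updated here
print((model[0].weight - before).abs().max().item())  # nonzero: kernels changed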
