Network in Network: the MLPConv structure

1. Schematic of the MLPConv structure

In a conventional convolutional neural network, every pair of input and output feature maps is connected by its own convolution kernel;

In MLPConv, the weights connecting different feature maps at the input and output of the micro-network are likewise distinct, while the micro-network's weights are shared across all local receptive fields (spatial positions);

    The first layer is an ordinary convolutional layer: kernel size 2×2, stride 1, input 2×(4×4), output 4×(3×3);

    The second layer is a CCCP layer: kernel size 1×1, stride 1, input 4×(3×3), output 3×(3×3);

    The third layer is a CCCP layer: kernel size 1×1, stride 1, input 3×(3×3), output 2×(3×3);
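The three-layer stack above can be sketched in PyTorch (a minimal illustration using the toy shapes from the list; the variable names are mine):

```python
import torch
import torch.nn as nn

# Toy MLPConv block matching the shapes listed above:
# 2x(4x4) input -> ordinary 2x2 conv -> two 1x1 CCCP layers -> 2x(3x3) output.
mlpconv = nn.Sequential(
    nn.Conv2d(2, 4, kernel_size=2, stride=1),  # ordinary conv: 2 ch -> 4 ch, 4x4 -> 3x3
    nn.ReLU(),
    nn.Conv2d(4, 3, kernel_size=1, stride=1),  # CCCP layer: 4 ch -> 3 ch
    nn.ReLU(),
    nn.Conv2d(3, 2, kernel_size=1, stride=1),  # CCCP layer: 3 ch -> 2 ch
    nn.ReLU(),
)

x = torch.randn(1, 2, 4, 4)   # a batch of one 2x(4x4) input
y = mlpconv(x)
print(tuple(y.shape))          # (1, 2, 3, 3)
```

Only the first layer changes the spatial size (4×4 → 3×3); the two 1×1 CCCP layers only mix channels.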

 

2. What is a 1×1 convolution kernel?

NIN integrates information across feature maps in a cascaded fashion, which lets the network learn complex and useful cross-channel features. Looking closely at the Caffe implementation of NIN, each traditional convolutional layer is followed by two CCCP layers (cascaded cross channel parametric pooling), which are in fact two 1×1 convolutional layers. The cross-channel parametric pooling layer is therefore equivalent to a convolutional layer with a 1×1 kernel.
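To see this equivalence concretely: a 1×1 convolution applies the same channel-mixing linear map at every spatial position. A small sketch (variable names and channel counts are mine):

```python
import torch
import torch.nn as nn

conv1x1 = nn.Conv2d(4, 3, kernel_size=1, bias=False)  # 4 input channels -> 3 output channels
x = torch.randn(1, 4, 5, 5)

y_conv = conv1x1(x)

# The same result as a per-pixel linear map across channels:
w = conv1x1.weight.view(3, 4)                # (out_channels, in_channels)
y_lin = torch.einsum('oi,bihw->bohw', w, x)

print(torch.allclose(y_conv, y_lin, atol=1e-6))  # True
```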

 

3. Functions of the 1×1 convolution

1. It adds a more complex operation within each local receptive field instead of stacking a large number of filters, thereby reducing the parameter count.

2. It introduces stronger nonlinearity: in the Conv+CCCP+CCCP stack, each of the three layers is followed by a ReLU.

3. A 1×1 convolution kernel can also serve for dimensionality reduction, as in GoogLeNet.
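A quick back-of-the-envelope sketch of the dimensionality-reduction effect. The channel counts (192 → 16 → 32) are illustrative, in the spirit of GoogLeNet's Inception bottlenecks, not numbers from this article:

```python
# Parameter count of a 5x5 conv applied directly vs. preceded by a 1x1 bottleneck.
in_ch, mid_ch, out_ch, k = 192, 16, 32, 5

direct = in_ch * out_ch * k * k                        # 5x5 conv straight on 192 channels
bottleneck = in_ch * mid_ch + mid_ch * out_ch * k * k  # 1x1 reduces to 16 channels, then 5x5

print(direct)      # 153600
print(bottleneck)  # 15872  (~10x fewer parameters)
```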

 
