Paper Notes: Face Detection using Deep Learning: An Improved Faster RCNN Approach

Flowchart of the training procedure


First of all, we train the CNN model of Faster RCNN using the WIDER FACE dataset [30]. We
further use the same dataset to test the pre-trained model so as to generate hard negatives. These
hard negatives are fed into the network as the second step of our training procedure. The resulting
model will be further fine-tuned on the FDDB dataset. During the final fine-tuning process, we
apply the multi-scale training process, and adopt a feature concatenation strategy to further boost
the performance of our model. For the whole training process, we follow a similar end-to-end
training strategy as Faster RCNN.

The paper makes improvements over Faster RCNN in three main aspects:

1、Feature Concatenation

Network architecture of the proposed feature concatenation scheme

My understanding: the ROIs are projected onto conv3_3, conv4_3, and conv5_3, each followed by a 1×1 convolutional layer to keep the depths consistent. The ROIs are then fed into the RoI pooling layers to obtain ROI_pool3, ROI_pool4, and ROI_pool5, which are concatenated (the paper does not spell out how the concatenation is performed). Notably, an L2 normalization is applied after RoI pooling on the shallower feature layers.
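The idea above can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the `roi_pool` helper, the toy feature-map shapes, and the choice to L2-normalize every pooled map are my assumptions, and the 1×1 convolution that restores a consistent depth is omitted.

```python
import numpy as np

def roi_pool(feature_map, roi, out_size=7):
    """Max-pool the ROI region of a (C, H, W) feature map into (C, out_size, out_size)."""
    c, _, _ = feature_map.shape
    x1, y1, x2, y2 = roi
    region = feature_map[:, y1:y2, x1:x2]
    pooled = np.zeros((c, out_size, out_size))
    ys = np.linspace(0, region.shape[1], out_size + 1).astype(int)
    xs = np.linspace(0, region.shape[2], out_size + 1).astype(int)
    for i in range(out_size):
        for j in range(out_size):
            # guard against empty cells when the region is small
            cell = region[:, ys[i]:max(ys[i] + 1, ys[i + 1]), xs[j]:max(xs[j] + 1, xs[j + 1])]
            pooled[:, i, j] = cell.max(axis=(1, 2))
    return pooled

def l2_normalize(x, eps=1e-12):
    """L2-normalize across the channel axis at each spatial location."""
    norm = np.sqrt((x ** 2).sum(axis=0, keepdims=True)) + eps
    return x / norm

# Toy feature maps standing in for conv3_3 / conv4_3 / conv5_3
# (channel depths 256/512/512 follow VGG16; spatial sizes are arbitrary here).
rng = np.random.default_rng(0)
conv3 = rng.random((256, 32, 32))
conv4 = rng.random((512, 32, 32))
conv5 = rng.random((512, 32, 32))

roi = (4, 4, 20, 20)  # (x1, y1, x2, y2) already mapped to feature-map coordinates
pools = [l2_normalize(roi_pool(f, roi)) for f in (conv3, conv4, conv5)]
fused = np.concatenate(pools, axis=0)  # channel-wise concat -> (1280, 7, 7)
```

Channel-wise concatenation is one plausible reading of the fusion step; after it, a 1×1 convolution would bring the 1280 channels back to a depth the fully connected layers expect.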

2、Hard Negative Mining

Hard negatives are the regions where the network has failed to make a correct prediction. Thus, the hard negatives are fed into the network again as a reinforcement for improving the trained model.

The hard negatives are fed into the RPN part pre-trained on WIDER FACE, keeping the ratio of positive to negative samples at 1:3.
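A minimal sketch of minibatch sampling under that 1:3 constraint. The function name, batch size, and the "fill the negative quota with hard negatives first" policy are my assumptions; the paper only states that hard negatives are fed back while the 1:3 ratio is kept.

```python
import numpy as np

def sample_minibatch(pos_idx, hard_neg_idx, easy_neg_idx,
                     batch_size=128, neg_ratio=3, seed=0):
    """Build a minibatch keeping positives:negatives at 1:neg_ratio,
    filling the negative quota with hard negatives before easy ones."""
    rng = np.random.default_rng(seed)
    n_pos = min(len(pos_idx), batch_size // (1 + neg_ratio))
    n_neg = n_pos * neg_ratio
    pos = rng.choice(pos_idx, n_pos, replace=False)
    n_hard = min(len(hard_neg_idx), n_neg)
    neg = list(rng.choice(hard_neg_idx, n_hard, replace=False))
    neg += list(rng.choice(easy_neg_idx, n_neg - n_hard, replace=False))
    return pos, np.array(neg)

# Toy index pools: 10 positives, 40 hard negatives, 200 easy negatives.
pos, neg = sample_minibatch(np.arange(10), np.arange(100, 140), np.arange(200, 400))
```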

3、Multi-Scale Training

Each image is trained at three scales: the short side does not exceed 480, 600, or 750, and the long side does not exceed 1250. The experimental results show that multi-scale training makes the model more robust to images of different sizes and improves detection performance.
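The scale selection can be sketched as below. Randomly picking one of the three short-side targets per image and capping the long side at 1250 is a common way to implement this; the paper's exact sampling scheme is not spelled out here, so treat this as an assumption.

```python
import random

def pick_scale(width, height, short_sides=(480, 600, 750), max_long=1250, seed=None):
    """Pick a target short side at random and return the resize factor,
    capping the longer side at max_long."""
    rng = random.Random(seed)
    target = rng.choice(short_sides)
    short, long_ = min(width, height), max(width, height)
    scale = target / short
    if long_ * scale > max_long:
        scale = max_long / long_  # long-side cap wins over the short-side target
    return scale

scale = pick_scale(1024, 768, seed=0)
new_w, new_h = round(1024 * scale), round(768 * scale)
```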

Experiments

1、VGG16 was selected to be our backbone CNN network, which had been pre-trained on ImageNet.

2、The training data is the WIDER FACE training + validation datasets.

3、We gave each ground-truth annotation a difficulty value; ground truths whose difficulty scores sum to more than 2 are ignored.

4、See the paper for the detailed parameter settings.

5、After training on WIDER FACE, detections with confidence scores greater than 0.8 but IoU with the ground-truth below 0.5 are taken as hard negatives and fed back into the network.
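Item 5 amounts to selecting the model's confident mistakes. A small self-contained sketch of that filter (the helper names and the toy boxes are illustrative):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def select_hard_negatives(detections, scores, gt_boxes, score_thr=0.8, iou_thr=0.5):
    """Keep detections the model scored above score_thr that overlap
    no ground-truth box by iou_thr or more -- confident false positives."""
    hard = []
    for det, s in zip(detections, scores):
        if s <= score_thr:
            continue
        if all(iou(det, gt) < iou_thr for gt in gt_boxes):
            hard.append(det)
    return hard

gts = [(0, 0, 10, 10)]
dets = [(0, 0, 10, 10), (50, 50, 60, 60), (1, 1, 11, 11)]
scores = [0.95, 0.9, 0.7]
hard = select_hard_negatives(dets, scores, gts)
# only the confident box with no ground-truth overlap survives as a hard negative
```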


Reposted from blog.csdn.net/kkkkkkkkq/article/details/79246953