Recent losses for face recognition

Recent (2017/2018) papers on face recognition

There have been quite a few papers on improving the loss function for face recognition, for example:

[2017] L2-constrained Softmax Loss for Discriminative Face Verification

[2017 ACM MM] NormFace: L2 Hypersphere Embedding for Face Verification

[2017 CVPR] SphereFace: Deep Hypersphere Embedding for Face Recognition (A-Softmax Loss)

[2017 NIPS] Rethinking Feature Discrimination and Polymerization for Large-scale Recognition (COCO Loss)

[2017 ICCV] Deep Metric Learning with Angular Loss

[2017] Contrastive-center loss for deep neural networks

[2017 CVPR] Range Loss for Deep Face Recognition with Long-tail

The start of 2018 has already brought a few more papers with related improvements:

[2018] Additive Margin Softmax for Face Verification

[2018] Face Recognition via Centralized Coordinate Learning

[2018] ArcFace: Additive Angular Margin Loss for Deep Face Recognition


Face recognition also has a number of other difficult and actively studied subproblems, for example:

1) Video-based face recognition

[2017 CVPR] Neural Aggregation Network for Video Face Recognition;

[2017 PAMI] Trunk-Branch Ensemble Convolutional Neural Networks for Video-based Face Recognition

2) 3D face recognition

[2017] Deep 3D Face Identification

[2017] Learning from Millions of 3D Scans for Large-scale 3D Face Recognition

3) Cross-age face recognition

[2017 PRL] Large Age-Gap face verification by feature injection in deep networks

[2017] Cross-Age LFW: A Database for Studying Cross-Age Face Recognition in Unconstrained Environments

4) Few-shot face recognition

[2017] One-shot Face Recognition by Promoting Underrepresented Classes;

[2017] SSPP-DAN: Deep Domain Adaptation Network for Face Recognition with Single Sample Per Person

5) Face recognition under occlusion

[2017 ICCVW] Disguised Face Identification (DFI) with Facial KeyPoints using Spatial Fusion Convolutional Network

[2017] Enhancing Convolutional Neural Networks for Face Recognition with Occlusion Maps and Batch Triplet Loss

6) Multi-model feature fusion

[2017 PAMI] Face Search at Scale

[2017] Deep Heterogeneous Feature Fusion for Template-Based Face Recognition

It is also worth noting that face recognition evaluation is gradually shifting toward the more practical open-set 1:N protocol ([2017 CVPRW] Toward Open Set Face Recognition); a minimal sketch of what that protocol means follows below.
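As a rough illustration (my own sketch, not code from that paper): in 1:N open-set identification each probe face is matched against a gallery of N enrolled identities, and it is rejected as unknown unless the best match clears a similarity threshold.

```python
import numpy as np

def open_set_identify(probe, gallery, gallery_ids, threshold=0.5):
    """1:N open-set identification on L2-normalized embeddings.

    probe:       (d,)   embedding of the query face
    gallery:     (N, d) embeddings of the N enrolled identities
    gallery_ids: list of N identity labels
    Returns the matched identity, or None when the probe is rejected,
    i.e. judged not to belong to any enrolled identity.
    """
    sims = gallery @ probe                      # cosine similarities
    best = int(np.argmax(sims))
    return gallery_ids[best] if sims[best] >= threshold else None
```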


Figures below are from Zhihu

This story starts in 2016, when two papers (Center Loss and Large-Margin Softmax) first published visualizations of the features learned with the softmax loss; only then did people see what the feature distribution actually looks like (the loss itself is written out after the figure):

<img src="https://pic4.zhimg.com/50/v2-c68a088599b6afdbafac1cf7d80b5f0e_hd.jpg" data-caption="" data-size="normal" data-rawwidth="592" data-rawheight="468" class="origin_image zh-lightbox-thumb" width="592" data-original="https://pic4.zhimg.com/v2-c68a088599b6afdbafac1cf7d80b5f0e_r.jpg">

This kicked off all kinds of research. Some works squeeze the feature clusters into thinner wedges (a margin formulation of this idea is sketched after the figure):

<img src="https://pic3.zhimg.com/50/v2-b95aeda7b26ad880f363a9f18f1cf1e3_hd.jpg" data-caption="" data-size="normal" data-rawwidth="584" data-rawheight="549" class="origin_image zh-lightbox-thumb" width="584" data-original="https://pic3.zhimg.com/v2-b95aeda7b26ad880f363a9f18f1cf1e3_r.jpg">

Some pull the features toward their respective class centers (the center loss, written out after the figure):

<img src="https://pic1.zhimg.com/50/v2-4ec741a5c1b46c377bc1c822078dc0c5_hd.jpg" data-caption="" data-size="normal" data-rawwidth="618" data-rawheight="549" class="origin_image zh-lightbox-thumb" width="618" data-original="https://pic1.zhimg.com/v2-4ec741a5c1b46c377bc1c822078dc0c5_r.jpg">

Some project the features onto a hypersphere (a normalized softmax along these lines is written out after the figure):

<img src="https://pic3.zhimg.com/50/v2-34989e27af83440660a75750acaf9a74_hd.jpg" data-caption="" data-size="normal" data-rawwidth="506" data-rawheight="460" class="origin_image zh-lightbox-thumb" width="506" data-original="https://pic3.zhimg.com/v2-34989e27af83440660a75750acaf9a74_r.jpg">

In short, once the features were visualized, people realized that they follow this radial pattern, and a radial distribution can be modified along exactly two axes: magnitude and angle. So through 2017 researchers tweaked either the magnitude or the angle of the classification function, and a whole wave of papers came out of it.
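As a concrete illustration of the resulting "fix the magnitude, put a margin on the angle" recipe behind the AM-Softmax/ArcFace family, here is a minimal PyTorch-style sketch. The class name and the hyper-parameters (s=30, m=0.35) are illustrative choices of mine, not taken from any particular paper's released code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MarginSoftmaxHead(nn.Module):
    """Minimal additive-margin softmax head (AM-Softmax-style sketch).

    Features and class weights are L2-normalized, so the logit for class j
    is s * cos(theta_j); a margin m is subtracted from the target-class
    cosine before scaling.
    """
    def __init__(self, feat_dim, num_classes, s=30.0, m=0.35):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(num_classes, feat_dim))
        nn.init.xavier_uniform_(self.weight)
        self.s, self.m = s, m

    def forward(self, features, labels):
        # cos(theta) between normalized features and normalized class weights
        cosine = F.linear(F.normalize(features), F.normalize(self.weight))
        # subtract the additive margin only on the ground-truth class
        one_hot = F.one_hot(labels, cosine.size(1)).float()
        logits = self.s * (cosine - self.m * one_hot)
        return F.cross_entropy(logits, labels)

# toy usage: 512-d embeddings, 1000 identities, batch of 8
head = MarginSoftmaxHead(512, 1000)
features = torch.randn(8, 512)
labels = torch.randint(0, 1000, (8,))
loss = head(features, labels)
```

ArcFace differs only in where the margin enters: it uses cos(θ + m) for the target class instead of cos θ − m, i.e. an additive margin on the angle rather than on the cosine.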



Reposted from blog.csdn.net/u011808673/article/details/80345772