Breaking: YOLOv4 Has Just Been Released!



YOLOv4 has just been posted on arXiv!

https://arxiv.org/abs/2004.10934v1

https://github.com/AlexeyAB/darknet

The code on GitHub has been updated as well!

Abstract:

There are a huge number of features which are said to improve Convolutional Neural Network (CNN) accuracy. Practical testing of combinations of such features on large datasets, and theoretical justification of the result, is required. Some features operate on certain models exclusively and for certain problems exclusively, or only for small-scale datasets; while some features, such as batch-normalization and residual-connections, are applicable to the majority of models, tasks, and datasets. We assume that such universal features include Weighted-Residual-Connections (WRC), Cross-Stage-Partial-connections (CSP), Cross mini-Batch Normalization (CmBN), Self-adversarial-training (SAT) and Mish-activation. We use new features: WRC, CSP, CmBN, SAT, Mish activation, Mosaic data augmentation, CmBN, DropBlock regularization, and CIoU loss, and combine some of them to achieve state-of-the-art results: 43.5% AP (65.7% AP50) for the MS COCO dataset at a realtime speed of ~65 FPS on Tesla V100. 
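Two of the features named in the abstract have simple closed-form definitions, so a small illustration may help. Below is a minimal NumPy sketch, not the authors' darknet implementation: the Mish activation, x·tanh(ln(1+e^x)), and the CIoU loss, 1 − IoU + ρ²/c² + αv. The (x1, y1, x2, y2) box format and the absence of degenerate-box guards are assumptions made for brevity.

```python
import numpy as np

def mish(x):
    """Mish activation: x * tanh(softplus(x)) = x * tanh(ln(1 + e^x))."""
    return x * np.tanh(np.log1p(np.exp(x)))

def ciou_loss(box_pred, box_true):
    """CIoU loss for two boxes in (x1, y1, x2, y2) format (assumed here).

    L_CIoU = 1 - IoU + rho^2 / c^2 + alpha * v, where rho is the distance
    between box centers, c is the diagonal of the smallest enclosing box,
    and v penalizes aspect-ratio mismatch.
    """
    # Intersection and union areas.
    ix1, iy1 = max(box_pred[0], box_true[0]), max(box_pred[1], box_true[1])
    ix2, iy2 = min(box_pred[2], box_true[2]), min(box_pred[3], box_true[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    w_p, h_p = box_pred[2] - box_pred[0], box_pred[3] - box_pred[1]
    w_t, h_t = box_true[2] - box_true[0], box_true[3] - box_true[1]
    union = w_p * h_p + w_t * h_t - inter
    iou = inter / union

    # Squared distance between box centers.
    cx_p, cy_p = (box_pred[0] + box_pred[2]) / 2, (box_pred[1] + box_pred[3]) / 2
    cx_t, cy_t = (box_true[0] + box_true[2]) / 2, (box_true[1] + box_true[3]) / 2
    rho2 = (cx_p - cx_t) ** 2 + (cy_p - cy_t) ** 2

    # Squared diagonal of the smallest box enclosing both boxes.
    cw = max(box_pred[2], box_true[2]) - min(box_pred[0], box_true[0])
    ch = max(box_pred[3], box_true[3]) - min(box_pred[1], box_true[1])
    c2 = cw ** 2 + ch ** 2

    # Aspect-ratio consistency term.
    v = (4 / np.pi ** 2) * (np.arctan(w_t / h_t) - np.arctan(w_p / h_p)) ** 2
    alpha = v / (1 - iou + v)
    return 1 - iou + rho2 / c2 + alpha * v

print(mish(np.array([-1.0, 0.0, 1.0])))           # approx [-0.3034, 0.0, 0.8651]
print(ciou_loss([0, 0, 10, 10], [2, 2, 12, 12]))  # approx 0.557; 0 for identical boxes
```

The ρ²/c² center-distance term keeps a gradient flowing even when the two boxes do not overlap at all (where plain IoU loss saturates at 1), which is the motivation for using CIoU as the bounding-box regression loss.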

Rather than offering an interpretation of the paper here, which could be biased, we invite everyone to leave a comment and share their thoughts on it!




Reposted from blog.csdn.net/qq_15698613/article/details/116751346