Dense Facial Landmark Detection (Including Forehead): A Collection of Resources and Projects

WFLW (Wider Facial Landmarks in-the-Wild)

Dataset homepage: Look at Boundary: A Boundary-Aware Face Alignment Algorithm

Images: Google Drive link

Annotations: Google Drive link

Load the WFLW training subset in Python

import deeplake
ds = deeplake.load("hub://activeloop/wflw-train")

Load the WFLW test subset in Python

import deeplake
ds = deeplake.load("hub://activeloop/wflw-test")

WFLW Dataset Structure

WFLW data fields
  • image: tensor containing the images.
  • box: tensor containing the bounding-box arrays.
  • keypoint: tensor containing the 98 facial landmarks.
  • makeup: tensor indicating whether the face is wearing makeup.
  • occlusions: tensor indicating whether the face is occluded.
  • blur: tensor indicating whether the image is blurry.
  • expression: tensor indicating whether the face shows an expression.
  • illumination: tensor indicating whether the image has extreme illumination.
  • pose: tensor indicating whether the face is in a large pose.
WFLW data splits
  • The WFLW training split contains 7,500 images.
  • The WFLW test split contains 2,500 images.
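
As a minimal sketch of how these fields can be read once the dataset is loaded, assuming the tensor names match the field list above and the deeplake .numpy() accessor, you might do something like:

import deeplake

# Minimal sketch: read one training sample and its annotations.
# Assumes the tensor names match the field list above (image, keypoint, box, ...).
ds = deeplake.load("hub://activeloop/wflw-train")

sample_image = ds.image[0].numpy()         # H x W x 3 uint8 image
sample_keypoints = ds.keypoint[0].numpy()  # the 98 facial landmarks
sample_box = ds.box[0].numpy()             # face bounding box

print(sample_image.shape, sample_keypoints.shape, sample_box.shape)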

Inference

C++ APIs

The ONNXRuntime (CPU/GPU), MNN, NCNN and TNN C++ inference of torchlm will be released in lite.ai.toolkit. Here is an example of 1000 facial landmarks detection using FaceLandmark1000. Download the model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/FaceLandmark1000.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_face_landmarks_0.png";
  std::string save_img_path = "../../../logs/test_lite_face_landmarks_1000.jpg";
    
  auto *face_landmarks_1000 = new lite::cv::face::align::FaceLandmark1000(onnx_path);

  lite::types::Landmarks landmarks;
  cv::Mat img_bgr = cv::imread(test_img_path);
  face_landmarks_1000->detect(img_bgr, landmarks);
  lite::utils::draw_landmarks_inplace(img_bgr, landmarks);
  cv::imwrite(save_img_path, img_bgr);
  
  delete face_landmarks_1000;
}

The output is the input image with the detected landmarks drawn on it, saved to save_img_path.

More classes for face alignment (68 points, 98 points, 106 points, 1000 points)

auto *align = new lite::cv::face::align::PFLD(onnx_path);  // 106 landmarks, 1.0Mb only!
auto *align = new lite::cv::face::align::PFLD98(onnx_path);  // 98 landmarks, 4.8Mb only!
auto *align = new lite::cv::face::align::PFLD68(onnx_path);  // 68 landmarks, 2.8Mb only!
auto *align = new lite::cv::face::align::MobileNetV268(onnx_path);  // 68 landmarks, 9.4Mb only!
auto *align = new lite::cv::face::align::MobileNetV2SE68(onnx_path);  // 68 landmarks, 11Mb only!
auto *align = new lite::cv::face::align::FaceLandmark1000(onnx_path);  // 1000 landmarks, 2.0Mb only!
auto *align = new lite::cv::face::align::PIPNet98(onnx_path);  // 98 landmarks, CVPR2021!
auto *align = new lite::cv::face::align::PIPNet68(onnx_path);  // 68 landmarks, CVPR2021!
auto *align = new lite::cv::face::align::PIPNet29(onnx_path);  // 29 landmarks, CVPR2021!
auto *align = new lite::cv::face::align::PIPNet19(onnx_path);  // 19 landmarks, CVPR2021!

For more details on the C++ APIs, please check lite.ai.toolkit.

Python APIs


In torchlm, we provide pipelines for deploying models with PyTorch and ONNXRuntime. A high-level API named runtime.bind can bind face detection and landmark models together; you can then call the runtime.forward API to get the output landmarks and bboxes. Here is an example of PIPNet. Pretrained weights of PIPNet: Download.

Inference on PyTorch Backend

import cv2
import torchlm
from torchlm.tools import faceboxesv2
from torchlm.models import pipnet

image = cv2.imread("your_face_image.jpg")  # BGR image; replace with your own image path

torchlm.runtime.bind(faceboxesv2(device="cpu"))  # set device="cuda" if you want to run with CUDA
# set map_location="cuda" if you want to run with CUDA
torchlm.runtime.bind(
  pipnet(backbone="resnet18", pretrained=True,
         num_nb=10, num_lms=98, net_stride=32, input_size=256,
         meanface_type="wflw", map_location="cpu", checkpoint=None)
)  # will auto download pretrained weights from latest release if pretrained=True
landmarks, bboxes = torchlm.runtime.forward(image)
image = torchlm.utils.draw_bboxes(image, bboxes=bboxes)
image = torchlm.utils.draw_landmarks(image, landmarks=landmarks)

Inference on ONNXRuntime Backend

import cv2
import torchlm
from torchlm.runtime import faceboxesv2_ort, pipnet_ort

image = cv2.imread("your_face_image.jpg")  # BGR image; replace with your own image path

torchlm.runtime.bind(faceboxesv2_ort())
torchlm.runtime.bind(
  pipnet_ort(onnx_path="pipnet_resnet18.onnx", num_nb=10,
             num_lms=98, net_stride=32, input_size=256, meanface_type="wflw")
)
landmarks, bboxes = torchlm.runtime.forward(image)
image = torchlm.utils.draw_bboxes(image, bboxes=bboxes)
image = torchlm.utils.draw_landmarks(image, landmarks=landmarks)
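
The ONNX file passed to pipnet_ort above has to be exported first. torchlm models provide a high-level apply_exporting helper for this; the sketch below follows the usage shown in the torchlm documentation, but treat the exact argument names as an assumption and check them against your installed version.

from torchlm.models import pipnet

# Hedged sketch: export the pretrained PIPNet (WFLW, 98 landmarks) to ONNX so it
# can be bound with pipnet_ort above. Verify apply_exporting's arguments against
# your torchlm version.
model = pipnet(backbone="resnet18", pretrained=True, num_nb=10, num_lms=98,
               net_stride=32, input_size=256, meanface_type="wflw")
model.apply_exporting(
    onnx_path="pipnet_resnet18.onnx",  # path expected by the ONNXRuntime example above
    opset=12, simplify=True, output_names=None
)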

Evaluating

In torchlm, each model has a high-level, user-friendly API named apply_evaluating for evaluation. This method computes the NME, FR and AUC on the evaluation dataset. Here is an example of PIPNet.

from torchlm.models import pipnet
# will auto download pretrained weights from latest release if pretrained=True
model = pipnet(backbone="resnet18", pretrained=True, num_nb=10, num_lms=98, net_stride=32,
               input_size=256, meanface_type="wflw", backbone_pretrained=True)
NME, FR, AUC = model.apply_evaluating(
    annotation_path="../data/WFLW/convertd/test.txt",
    norm_indices=[60, 72],  # the indexes of two eyeballs.
    coordinates_already_normalized=True, 
    eval_normalized_coordinates=False
)
print(f"NME: {NME}, FR: {FR}, AUC: {AUC}")

Then you will get the performance results (NME / FR / AUC):

Built _PIPEvalDataset: eval count is 2500 !
Evaluating PIPNet: 100%|██████████| 2500/2500 [02:53<00:00, 14.45it/s]
NME: 0.04453323229181989, FR: 0.04200000000000004, AUC: 0.5732673333333334
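
For reference, these metrics follow the standard WFLW protocol: NME is the mean per-landmark Euclidean error normalized by the inter-ocular distance (landmarks 60 and 72 above), FR is the fraction of test images whose NME exceeds a threshold (0.10 for WFLW), and AUC is the area under the cumulative error distribution curve up to that threshold. The NumPy sketch below is a rough illustration of these definitions, not torchlm's implementation, so the exact numbers may differ slightly.

import numpy as np

def nme(pred, gt, norm_indices=(60, 72)):
    # pred, gt: (98, 2) arrays of predicted / ground-truth landmarks for one face.
    # Mean per-landmark error, normalized by the inter-ocular distance.
    norm = np.linalg.norm(gt[norm_indices[0]] - gt[norm_indices[1]])
    return np.mean(np.linalg.norm(pred - gt, axis=1)) / norm

def fr_and_auc(nmes, thresh=0.10, step=0.0001):
    nmes = np.asarray(nmes)
    # Failure Rate: fraction of samples whose NME exceeds the threshold.
    fr = float(np.mean(nmes > thresh))
    # AUC: area under the cumulative error distribution up to the threshold,
    # normalized so a perfect model scores 1.0.
    xs = np.arange(0.0, thresh + step, step)
    ced = [np.mean(nmes <= x) for x in xs]
    auc = np.trapz(ced, xs) / thresh
    return fr, auc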


Reprinted from blog.csdn.net/sinat_37574187/article/details/131822371