Liveness detection with blinking, mouth opening, nodding, and head shaking all in one system: a face liveness detection system [with Python source code + PyQt UI + detailed explanation of the principle]

Demonstration of basic functions


Abstract: Liveness detection (活体检测) is a technique for judging whether a captured face is a real face or a spoofed face attack. This article explains the underlying principle in detail and provides a complete Python implementation, together with a PyQt UI that makes the functions easier to demonstrate. The face liveness detection system supports two detection modes, video and camera, and can detect the four common facial actions of blinking (眨眼), mouth opening (张嘴), nodding (点头), and head shaking (摇头), counting how many times each action is performed. Complete Python code and a tutorial are provided for interested readers to reference and study. See the end of the article for how to obtain the complete code and resource files.

Jump to the "Complete related documents and source code" section at the end of the article to obtain them.


Foreword

Face recognition technology is now widely used in daily life and can be found almost everywhere: face unlock on mobile phones, face payment, face recognition at access-control gates, and so on. But it has a weakness: face recognition only checks whether the target face matches the enrolled face data; it cannot tell whether the face belongs to a real, live person. Various spoofing methods have therefore emerged. How to judge whether the detected object is a real live person, rather than a photo, a video, or even a realistic face mask, is an urgent problem to solve. This is where liveness detection (活体检测) steps onto the stage.
The main purpose of liveness detection is to judge whether the captured face is a real face or a forged face attack (for example, a face printed on paper, a digital face image on an electronic screen, or a mask).

Common liveness detection methods fall into three categories:
[1] Cooperative detection
During detection and authentication, the system asks the user to cooperate by performing specified actions, such as blinking, raising the head, or opening the mouth, in order to check whether the target is a real live person.
[2] RGB detection
This type of method defends against attacks that use pictures or video screenshots to fool face recognition, identifying whether the subject is a real live person from subtle features in the image. It comes in online and offline variants.
[3] 3D structured-light detection
During liveness detection, the three-dimensional imaging principle of 3D structured light is used to compare the three-dimensional features of the face and decide whether the detected target is a real live person, preventing spoofing with pictures, video screenshots, and masks.

Among these, cooperative liveness detection is the most common in daily life. This article uses the four actions of mouth opening, blinking, nodding, and head shaking to perform face liveness detection.

Based on the dlib library, the blogger has developed a simple face liveness detection system that works from the changing distances between facial key points. It can run liveness detection on either a video file (视频) or a camera feed (摄像头) and display the recognition results. It recognizes the four common facial actions of blinking (眨眼), mouth opening (张嘴), nodding (点头), and head shaking (摇头); interested readers can try it for themselves.

If you find this useful, likes, follows, and bookmarks are much appreciated! If you have any suggestions or comments, feel free to leave a message in the comment section!

The software interface is shown below:

1. Core software functions and demonstration

The main functions of the software include the following:

1. Detect the four actions 眨眼 (blink), 张嘴 (mouth open), 点头 (nod), and 摇头 (head shake) on faces in a video or camera feed;
2. Count the number of times each of the four actions is performed;
3. Test each of the four actions independently; if detection succeeds, the words "test passed" are displayed;
4. A 显示面部轮廓线 (show facial contour) checkbox chooses whether to display the face outline, which is shown by default.

(1) Video detection demonstration
Click the 打开视频 (open video) button and select the video to be detected. The operation is demonstrated below:
(2) Counting facial actions
The system automatically counts the number of times each of the four actions (眨眼, 张嘴, 点头, 摇头) is performed, with each counter starting from 0.
The demonstration is as follows:
(3) Single-action test function
Click one of the radio buttons 眨眼测试, 张嘴测试, 摇头测试, or 点头测试 to detect each action separately; if the corresponding action is detected, a message saying the test passed is displayed. The operation is demonstrated below:

2. Basic principles of face liveness detection

1. Basic principles

The face liveness detection system works by tracking changes in the distances between facial key points once those key points have been detected.
First, the shape_predictor_68_face_landmarks model of the dlib library is used to detect 68 facial key points.
The distribution of points over each part of the face (1-indexed) is as follows:

Cheek line [1, 17]
Left eyebrow [18, 22]
Right eyebrow [23, 27]
Nose bridge [28, 31]
Nose [32, 36]
Left eye [37, 42]
Right eye [43, 48]
Outer edge of upper lip [49, 55]
Outer edge of lower lip [56, 60]
Inner edge of upper lip [61, 65]
Inner edge of lower lip [66, 68]
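As a small aid, the 1-indexed ranges above can be turned into 0-indexed Python slices over the (68, 2) coordinate array that a dlib prediction is typically converted to. The FACE_REGIONS names and the get_region helper below are my own additions for illustration, not part of the article's code:

```python
import numpy as np

# 0-indexed slices into the (68, 2) landmark array produced from dlib's
# shape_predictor_68_face_landmarks model (the list above is 1-indexed)
FACE_REGIONS = {
    "jaw":         slice(0, 17),   # cheek line 1-17
    "left_brow":   slice(17, 22),  # 18-22
    "right_brow":  slice(22, 27),  # 23-27
    "nose_bridge": slice(27, 31),  # 28-31
    "nose":        slice(31, 36),  # 32-36
    "left_eye":    slice(36, 42),  # 37-42
    "right_eye":   slice(42, 48),  # 43-48
    "outer_lip":   slice(48, 60),  # 49-60
    "inner_lip":   slice(60, 68),  # 61-68
}

def get_region(landmarks, name):
    """Extract one facial region from a (68, 2) landmark array."""
    return landmarks[FACE_REGIONS[name]]

# demo with dummy landmarks: point i has coordinates (i, i)
pts = np.stack([np.arange(68), np.arange(68)], axis=1)
left_eye = get_region(pts, "left_eye")
print(left_eye.shape)  # (6, 2)
```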

2. Blink detection

Basic principle: blinking is detected from changes in the eye aspect ratio, EAR (Eye Aspect Ratio). When the eye is open, the EAR fluctuates around a certain value; when the eye closes, the EAR drops rapidly, theoretically approaching zero. We can therefore treat the eye as closed whenever the EAR falls below a certain threshold. To count blinks, a minimum number of consecutive closed-eye frames per blink must also be set; blinking is fast, and a blink is generally completed within 1 to 3 frames. Both thresholds should be tuned to the actual situation.
**Judgement criterion:** We calculate the aspect ratio of the left and right eyes separately and take the average as the blink indicator. After many tests, 0.3 was chosen as the threshold. When the EAR stays below the threshold for at least two consecutive frames, i.e. the eye goes from open to closed and back to open, one blink is recorded.

Note: The threshold may be affected by factors such as the distance of the camera or the shape of the face, and may need to be fine-tuned according to the actual situation.

The eye aspect ratio (EAR) is calculated with the following code:

import numpy as np

def EAR(eye):
    # np.linalg.norm defaults to the 2-norm, i.e. the Euclidean distance
    A = np.linalg.norm(eye[1] - eye[5])  # vertical distance p2-p6
    B = np.linalg.norm(eye[2] - eye[4])  # vertical distance p3-p5
    C = np.linalg.norm(eye[0] - eye[3])  # horizontal distance p1-p4
    return (A + B) / (2.0 * C)
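As a sanity check on the formula, EAR can be evaluated on hand-made eye coordinates. The six points below are synthetic, not from a detector, and the EAR function is repeated so the snippet runs standalone: a tall "open" eye gives a large ratio, and flattening the vertical points drives it toward zero.

```python
import numpy as np

def EAR(eye):
    # vertical distances (p2-p6, p3-p5) over horizontal distance (p1-p4)
    A = np.linalg.norm(eye[1] - eye[5])
    B = np.linalg.norm(eye[2] - eye[4])
    C = np.linalg.norm(eye[0] - eye[3])
    return (A + B) / (2.0 * C)

# six landmarks of one eye, ordered p1..p6 as in the dlib layout
open_eye = np.array([[0, 0], [1, 2], [2, 2], [3, 0], [2, -2], [1, -2]], dtype=float)
closed_eye = np.array([[0, 0], [1, 0.1], [2, 0.1], [3, 0], [2, -0.1], [1, -0.1]], dtype=float)

print(round(EAR(open_eye), 3))    # 1.333 -> well above the 0.3 threshold
print(round(EAR(closed_eye), 3))  # 0.067 -> below the threshold: eye closed
```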

The core code of blink judgment is as follows:

# Extract the left- and right-eye coordinates, then compute the EAR of both eyes
leftEye = shape[lStart:lEnd]
rightEye = shape[rStart:rEnd]
ear = (EAR(leftEye) + EAR(rightEye)) / 2.0  # average of both eyes
# Check whether the EAR is below the blink threshold
if ear < EAR_THRESH:
    count_eye += 1
else:
    # eye reopened: count one blink if it stayed closed long enough
    if count_eye >= EYE_close:
        total += 1
    count_eye = 0
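The counting logic above can be exercised offline on a synthetic EAR trace. The thresholds mirror the article's choices (0.3, and two consecutive closed frames); the trace values themselves are invented for illustration:

```python
EAR_THRESH = 0.3  # below this the eye is considered closed
EYE_close = 2     # minimum consecutive closed frames for one blink

def count_blinks(ear_trace):
    total, count_eye = 0, 0
    for ear in ear_trace:
        if ear < EAR_THRESH:
            count_eye += 1
        else:
            # eye reopened: if it stayed closed long enough, count one blink
            if count_eye >= EYE_close:
                total += 1
            count_eye = 0
    return total

# open ... blink (3 closed frames) ... single-frame noise dip ... blink (2 frames)
trace = [0.35, 0.34, 0.1, 0.08, 0.12, 0.33, 0.2, 0.36, 0.09, 0.1, 0.35]
print(count_blinks(trace))  # 2 (the lone 0.2 dip is rejected as noise)
```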

3. Mouth opening detection

Basic principle: similar to blink detection, the mouth aspect ratio MAR (Mouth Aspect Ratio) is calculated; when the MAR exceeds the set threshold, the mouth is considered open.
The mouth aspect ratio (MAR) is calculated with the following code:

import numpy as np

def MAR(mouth):
    # np.linalg.norm defaults to the 2-norm, i.e. the Euclidean distance
    A = np.linalg.norm(mouth[2] - mouth[10])  # landmarks 51, 59 (of the 68-point layout)
    B = np.linalg.norm(mouth[4] - mouth[8])   # landmarks 53, 57
    C = np.linalg.norm(mouth[0] - mouth[6])   # landmarks 49, 55
    return (A + B) / (2.0 * C)
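To sanity-check MAR without a detector, the snippet below builds synthetic mouth landmarks. The make_mouth helper and its geometry are invented for illustration; only the six indices the formula actually reads are given meaningful values:

```python
import numpy as np

def MAR(mouth):
    # vertical openings (landmarks 51-59 and 53-57) over mouth width (49-55)
    A = np.linalg.norm(mouth[2] - mouth[10])
    B = np.linalg.norm(mouth[4] - mouth[8])
    C = np.linalg.norm(mouth[0] - mouth[6])
    return (A + B) / (2.0 * C)

def make_mouth(opening):
    """Synthetic 12-point outer-mouth array; unused points stay at the origin."""
    pts = np.zeros((12, 2))
    pts[0] = (0, 0)
    pts[6] = (6, 0)                  # mouth corners: width 6
    pts[2] = (2, opening)
    pts[10] = (2, -opening)          # first vertical pair
    pts[4] = (4, opening)
    pts[8] = (4, -opening)           # second vertical pair
    return pts

print(MAR(make_mouth(0.3)))  # 0.1 -> nearly closed mouth
print(MAR(make_mouth(3.0)))  # 1.0 -> wide-open mouth
```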

The core code for mouth opening judgment is as follows:

Mouth = shape[mStart:mEnd]
mar = MAR(Mouth)
# Check whether the MAR exceeds the mouth-open threshold; if so, increment the open-mouth frame counter
if mar > MAR_THRESH:
    COUNTER_MOUTH += 1
else:
    # mouth closed again: if it was open for at least 2 frames, count one mouth opening
    if COUNTER_MOUTH >= 2:
        TOTAL_MOUTH += 1
    COUNTER_MOUTH = 0

4. Head-shake and nod detection

Similarly, for head shaking and nodding we only need to track changes in the widths of the left and right cheeks, and in the distance from the nose to the chin, to judge whether the movement is a nod or a head shake.
The core code for head-shake judgment is as follows:

# left cheek wider than right: head turned to one side
if face_left1 >= face_right1 + Config.FACE_DIFF and face_left2 >= face_right2 + Config.FACE_DIFF:
    distance_left += 1
# right cheek wider than left: head turned to the other side
if face_right1 >= face_left1 + Config.FACE_DIFF and face_right2 >= face_left2 + Config.FACE_DIFF:
    distance_right += 1
# both orientations have been observed: judge it as one head shake
if distance_left != 0 and distance_right != 0:
    TOTAL_FACE += 1
    distance_right = 0
    distance_left = 0
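The head-shake logic can be simulated on a made-up sequence of per-frame cheek widths. Note the simplifications: the FACE_DIFF value is an assumption, and one width per cheek is used here, whereas the article's code compares two width measurements per side:

```python
FACE_DIFF = 2  # minimum left/right cheek-width gap to call the head "turned" (assumed value)

def count_shakes(widths):
    """widths: per-frame (left_cheek_width, right_cheek_width) pairs."""
    total, seen_left, seen_right = 0, 0, 0
    for left, right in widths:
        if left >= right + FACE_DIFF:   # head turned: left cheek looks wider
            seen_left += 1
        if right >= left + FACE_DIFF:   # head turned the other way
            seen_right += 1
        # both orientations observed -> one shake of the head
        if seen_left and seen_right:
            total += 1
            seen_left = seen_right = 0
    return total

# face turns one way, back through center, then the other way: one shake
frames = [(10, 10), (14, 8), (13, 9), (10, 10), (8, 14), (10, 10)]
print(count_shakes(frames))  # 1
```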

The above covers the basic principles and code of face liveness detection. Building on this, the blogger used Python and PyQt5 to develop a visual face liveness detection system, which makes it much easier to see the detection of the four facial actions (眨眼, 张嘴, 点头, 摇头), as shown in the software demonstration in Part 1. The system performs action detection well on faces in a video or from the camera.

The complete source code, UI code, and other related files of the face liveness detection system have been packaged and uploaded; interested readers can obtain them via the download link.


[How to obtain]

Follow the official account on the business card below: [Axu Algorithm and Machine Learning], and reply [Liveness Detection] to get the download link.

All the program files involved in this article, including the Python source code and UI files, are shown in the figure below; see the end of the article for how to obtain them:

Note: The code was developed with PyCharm + Python 3.8. The main program for the UI is MainProgram.py, and the other test scripts are described in the figure above. To ensure the program runs smoothly, please configure the required environment by following 程序环境配置说明.txt.



Conclusion

The above is the complete content of the face liveness detection system developed by the blogger. Given the blogger's limited ability, omissions are inevitable, and criticism and corrections from readers are welcome. If you have any suggestions or comments on this article, please leave a message in the comment section!

If you found this helpful, thank you for your likes, follows, and bookmarks!


Origin blog.csdn.net/qq_42589613/article/details/131440709