Upgrading Face Anti-Spoofing Detection With 3D Mannequin Head Data

Face recognition technology is now used across many industries. While it brings convenience to people's daily lives, it also faces a variety of spoofing attacks, so its security has drawn growing attention. Distinguishing real faces from fake ones is therefore essential to the security of face recognition systems.

Common face spoofing attacks include printed face photos, faces replayed on screens, and 3D masks. Of these, masks are the hardest to detect, mainly because their appearance is closest to that of a real person.

Liveness Detection Methods

Current liveness detection methods fall into three categories: those based on a 2D RGB camera, those based on an infrared camera, and those based on a 3D depth camera.

Images collected by an RGB camera capture the apparent color, texture, and shadows of a face, so spoof detection can be performed from appearance features or imaging quality. Facial motion is also an important liveness signal: with video, motion information can be captured, and even the face's three-dimensional structure and active physiological signals can be estimated, allowing a better distinction between live faces and presentation attacks.
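As a minimal sketch of the appearance-based idea, the toy feature extractor below summarizes a face crop with per-channel color statistics plus a crude texture cue. The specific features are illustrative assumptions, not a production descriptor; real systems use far richer representations, but the principle is the same: prints and screens distort color statistics and fine texture.

```python
import numpy as np

def appearance_features(rgb):
    """Toy appearance feature vector for a face crop (H, W, 3) in [0, 1].

    Concatenates per-channel mean/std (color cues) with the variance of
    the horizontal gradient (a crude texture/sharpness cue).
    """
    rgb = np.asarray(rgb, dtype=np.float64)
    means = rgb.mean(axis=(0, 1))    # 3 values: average color
    stds = rgb.std(axis=(0, 1))      # 3 values: color spread
    gray = rgb.mean(axis=2)
    grad = np.diff(gray, axis=1)     # horizontal intensity gradient
    return np.concatenate([means, stds, [grad.var()]])  # 7-dim feature
```

A classifier trained on such vectors would separate live captures from recaptures; an unnaturally flat texture cue (low gradient variance) is one typical print artifact.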

Infrared liveness detection is mainly based on the optical flow method. Optical flow uses the temporal change and correlation of pixel intensities across an image sequence to estimate the motion at each pixel position; that is, motion information for every pixel is recovered from the sequence. Difference-of-Gaussian filters and LBP features are then extracted and classified with a support vector machine. Because the optical flow field is sensitive to object motion, eye movement and blinking can be detected within the same framework, so this type of liveness detection can run passively, without the user noticing.
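The LBP descriptor mentioned above can be sketched in a few lines. This is the basic 3x3 variant computed in plain NumPy (real pipelines typically use a library implementation and multi-scale variants); the resulting code histogram is the texture feature that would be fed to the SVM.

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 Local Binary Pattern codes for a 2D grayscale array.

    Each interior pixel is compared with its 8 neighbors; a neighbor that
    is >= the center contributes one bit to an 8-bit code.
    """
    g = np.asarray(gray, dtype=np.float64)
    c = g[1:-1, 1:-1]
    # Neighbor offsets in a fixed clockwise order starting top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy : g.shape[0] - 1 + dy, 1 + dx : g.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.uint8) << bit
    return codes

def lbp_histogram(gray):
    """256-bin normalized histogram of LBP codes: the texture descriptor."""
    hist = np.bincount(lbp_image(gray).ravel(), minlength=256)
    return hist / hist.sum()
```

The histogram, not the raw code image, is what gets classified: it is compact and largely invariant to monotonic lighting changes, which matters under infrared illumination.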

The liveness detection scheme for a 3D depth camera mainly extracts the three-dimensional coordinates of feature points (256 points is a common choice) in live and spoofed face regions and analyzes the geometric relationships between them. Curvature is used to extract convex regions from the depth image, an EGI feature is computed for each region, and the spherical correlation between EGIs is then used for classification.
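The curvature step can be illustrated with a much simpler proxy. The sketch below (an illustrative assumption, not the actual pipeline) flags locally convex points of a depth map using a discrete Laplacian: since depth is distance from the camera, a convex facial feature such as the nose tip is a local depth minimum with a positive Laplacian.

```python
import numpy as np

def convexity_map(depth):
    """Mark locally convex regions of a depth map via a discrete Laplacian.

    Real pipelines compute principal curvatures and per-region EGI
    features; this only illustrates how surface shape falls out of depth.
    """
    d = np.asarray(depth, dtype=np.float64)
    # 4-neighbor Laplacian: sum of neighbors minus 4x the center.
    lap = (np.roll(d, 1, 0) + np.roll(d, -1, 0) +
           np.roll(d, 1, 1) + np.roll(d, -1, 1) - 4 * d)
    lap[0, :] = lap[-1, :] = 0   # ignore wrap-around at the borders
    lap[:, 0] = lap[:, -1] = 0
    return lap > 0
```

Note that a flat print or screen produces an (almost) constant depth map, hence no convex regions at all, which is exactly why depth is such a strong anti-spoofing signal.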

Common Algorithms for Liveness Detection

Algorithms commonly used in liveness detection fall into two categories: those based on 2D data and those based on 3D data.

Because a 2D liveness detection algorithm cannot obtain depth information, it typically combines multiple facial actions to improve accuracy. For example, a bank's remote identity authentication system may ask the user to blink, turn their head, look up, and bow their head to confirm that a real person is present. Such algorithms also look for recapture artifacts such as moiré patterns to further improve accuracy.
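One common building block for the blink check is the eye aspect ratio (EAR) computed from six eye landmarks; the landmark detector itself is assumed and not shown here. The ratio drops sharply when the eye closes, so a blink appears as a brief dip below a threshold in the EAR time series. The threshold below is illustrative, not a tuned value.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """EAR from 6 eye landmarks, shape (6, 2), in the usual ordering.

    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|): vertical eye openings
    over the horizontal eye width.
    """
    p = np.asarray(eye, dtype=np.float64)
    v1 = np.linalg.norm(p[1] - p[5])
    v2 = np.linalg.norm(p[2] - p[4])
    h = np.linalg.norm(p[0] - p[3])
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_series, threshold=0.2):
    """Count dips of an EAR time series below `threshold` (one dip = one blink)."""
    below = [e < threshold for e in ear_series]
    return sum(1 for i in range(1, len(below)) if below[i] and not below[i - 1])
```

A challenge-response system would prompt the user and then verify that the expected number of blinks (or head turns, tracked analogously) actually occurred in the video.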

Liveness detection algorithms based on 3D data generally come in two types. One splits the data into multiple modalities (RGB, infrared, depth map), classifies each separately, and then fuses the results; the other classifies the 3D face point cloud directly. Either way, because 3D data adds a depth dimension, the achievable accuracy is much higher than that of 2D liveness detection algorithms.
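The first variant is often called late fusion. A minimal sketch, assuming each modality branch already outputs a live-face probability in [0, 1], is a weighted average of the per-branch scores; the modality names, weights, and threshold here are illustrative placeholders.

```python
def fuse_liveness_scores(scores, weights=None, threshold=0.5):
    """Late fusion of per-modality liveness scores.

    `scores` maps a modality name (e.g. 'rgb', 'ir', 'depth') to that
    branch's live-face probability. Returns the fused score and the
    live/spoof decision.
    """
    if weights is None:
        weights = {m: 1.0 for m in scores}   # equal weights by default
    total = sum(weights[m] for m in scores)
    fused = sum(scores[m] * weights[m] for m in scores) / total
    return fused, fused >= threshold
```

In practice the weights would be learned or validated on held-out attack data, and the depth branch typically earns the largest weight since flat spoofs fail it most reliably.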

Datatang’s Anti-Spoofing Data Solution

2D Living_Face & Anti-Spoofing Data

The collection scenes include indoor and outdoor settings. The dataset includes both males and females, with ages ranging from juveniles to the elderly; young and middle-aged people make up the majority. The data covers multiple postures, multiple expressions, and multiple anti-spoofing samples.

3D Living_Face & Anti_Spoofing Data-1

The collection scenes include indoor and outdoor settings. The dataset includes both males and females, with ages ranging from juveniles to the elderly; young and middle-aged people make up the majority. Collection devices include the iPhone X and iPhone XR. The data covers various expressions, facial postures, anti-spoofing samples, lighting conditions, and scenes.

3D Living_Face & Anti_Spoofing Data-2

This 3D liveness detection dataset was collected with the front structured-light module of an iPhone, capturing both real 3D faces and fake-face samples. It covers most of the data forms required by liveness detection algorithms: in addition to real face actions, phone and pad recapture adversarial samples, face photo adversarial samples, and mask spoofing, it also includes 3D masks and mannequin head templates. For the 3D masks and mannequin heads we selected materials such as sandstone and resin, which greatly enriches the sample distribution.

The original intention of face recognition technology is to bring convenience to people's lives, not to expose personal privacy. While the law continues to draw "red lines" for face recognition, the industry also needs to establish technical standards and to design and develop mature face recognition solutions.

Datatang strictly abides by the relevant regulations, and the collected data has been authorized by the person being collected.


If you would like to know more about the datasets or how to acquire them, please feel free to contact us: info@datatang.com.




Off-the-shelf AI training data, on-demand data collection & annotation services
