As Engadget reported on Tuesday, March 16, 2021, researchers have created a “two-factor authentication” system for facial recognition that uses facial gestures, such as blinking or lip movement, to unlock devices. BYU professor D.J. Lee says his new authentication algorithm is more secure than current facial biometrics.
Called Concurrent Two-Factor Identity Verification (C2FIV), the system has you record a short video of yourself performing facial movements while reading out a unique phrase. The clip is enrolled on your device, and the system then requires both your face and the matching gesture for verification.
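To make that enroll-then-verify flow concrete, here is a minimal sketch in Python. The function names, the similarity threshold, and the placeholder embedding are illustrative assumptions, not the published C2FIV implementation; a real system would replace the embedding step with a trained network.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.85  # illustrative cutoff, not a published value


def embed_video(frames: np.ndarray) -> np.ndarray:
    """Placeholder embedding: mean-pools pixel features over time.
    A real system would use a trained network that encodes facial
    identity and the gesture sequence jointly in one vector."""
    flat = frames.reshape(frames.shape[0], -1).astype(np.float64)
    vec = flat.mean(axis=0)
    return vec / (np.linalg.norm(vec) + 1e-12)


def enroll(frames: np.ndarray) -> np.ndarray:
    """Compute and store the reference embedding at setup time."""
    return embed_video(frames)


def verify(reference: np.ndarray, frames: np.ndarray) -> bool:
    """Accept only if the new clip's embedding is close to the
    enrolled one; face and gesture are checked together because
    both are encoded in a single embedding."""
    candidate = embed_video(frames)
    return float(reference @ candidate) >= SIMILARITY_THRESHOLD


# Example: enroll on one clip, then verify against it.
rng = np.random.default_rng(0)
enrolled_clip = rng.random((30, 64, 64))  # 30 frames, 64x64 grayscale
reference = enroll(enrolled_clip)
print(verify(reference, enrolled_clip))   # True: same clip matches itself
```

The key design point this illustrates is that the face and the gesture are not checked as two separate factors; a single embedding captures both, so a static photo of the right face with the wrong motion should fail the comparison.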
Lee claims that bad actors can fool biometrics such as fingerprint readers and retina scans with a mask or a photo, or break into your phone simply by holding it up to your face while you sleep. The professor of computer and electrical engineering said in a statement, “The biggest problem we’re trying to solve is making sure that the authentication process is intentional. You see this a lot in movies (think Ethan Hunt in Mission: Impossible), even wearing a mask to replicate someone’s face.”
However, while additional layers of device security are always useful, the fact remains that most modern face unlocking systems are not fooled by masks or photos. Device makers have also learned from past mistakes, such as the Google Pixel 4’s facial recognition flaw, which let the phone be unlocked even when the subject’s eyes were closed, to build more robust tools.
Apple’s Face ID, for example, relies on the company’s TrueDepth camera to map your face with more than 30,000 invisible dots. Apple says this depth information cannot be reproduced by a printed or 2D digital photo, and that Face ID is designed to protect against spoofing by masks or other techniques.
This is not to say that the new system is without merit. Lee has patented the technology and envisions use cases such as online banking, ATMs, safe deposit boxes, and keyless entry for cars.
The C2FIV system relies on an integrated neural network framework to learn facial features and movements simultaneously. In a preliminary study, Lee trained the algorithm on a dataset of 8,000 video clips from 50 subjects performing facial actions such as blinking, dropping their jaw, smiling, or raising their eyebrows.
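As a rough illustration of what an “integrated” network of this kind could look like, the sketch below pairs a per-frame convolutional encoder with a recurrent layer over the frame sequence, so appearance (the face) and dynamics (the gesture) are learned in one model. The layer sizes and overall structure are assumptions made for illustration; the paper’s actual architecture may differ.

```python
import torch
import torch.nn as nn


class FaceGestureNet(nn.Module):
    """Illustrative joint model: a CNN encodes each frame's facial
    appearance, and a GRU summarizes how those features change over
    time, capturing gestures like blinks or eyebrow raises."""

    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (batch*frames, 32, 1, 1)
            nn.Flatten(),             # -> (batch*frames, 32)
        )
        self.temporal = nn.GRU(input_size=32, hidden_size=embed_dim,
                               batch_first=True)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, frames, channels, height, width)
        b, t, c, h, w = clips.shape
        feats = self.frame_encoder(clips.view(b * t, c, h, w))
        feats = feats.view(b, t, -1)       # (batch, frames, 32)
        _, hidden = self.temporal(feats)   # hidden: (1, batch, embed_dim)
        return hidden.squeeze(0)           # one embedding per clip


model = FaceGestureNet()
clip = torch.randn(2, 30, 1, 64, 64)  # 2 clips, 30 frames each
print(model(clip).shape)              # torch.Size([2, 128])
```

Training such a model on the 8,000-clip dataset would then amount to pulling embeddings of the same person performing the same gesture together while pushing all other pairs apart, so that both the wrong face and the wrong gesture move an attempt away from the enrolled reference.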