How Apple Says It Prevented Face ID From Being Racist

By Kate Conger

When Apple debuted its new facial recognition unlock system, Face ID, in September, the company faced questions about how it would sidestep the security and bias problems that have undermined similar facial recognition systems in the past. Senator Al Franken was one of the many people curious about exactly how Apple was going to ensure Face ID’s success, and today, Apple responded to a series of questions sent by Franken’s office the day after the system was announced.

Earlier facial recognition systems from HP and Google failed to recognise people with dark skin. In 2009, an HP webcam failed to track the faces of black users. And in 2015, Google Photos’ facial recognition categorised black people as gorillas. If Face ID made similar mistakes, it could be a sign that Apple didn’t train Face ID to recognise a diverse set of faces.

Although Franken raised several questions about the privacy and security of Face ID, most of those have since been addressed in Apple’s white paper on the topic. However, Franken also asked about algorithmic bias, writing, “What steps did Apple take to ensure its system was trained on a diverse set of faces, in terms of race, gender, and age? How is Apple protecting against racial, gender, or age bias in Face ID?”

Today, Apple’s vice president of public policy for the Americas, Cynthia Hogan, provided some answers:

The accessibility of the product to people of diverse races and ethnicities was very important to us. Face ID uses facial matching neural networks that we developed using over a billion images, including IR and depth images collected in studies conducted with the participants’ informed consent. We worked with participants from around the world to include a representative group of people accounting for gender, age, ethnicity, and other factors. We augmented the studies as needed to provide a high degree of accuracy for a diverse range of users. In addition, a neural network that is trained to spot and resist spoofing defends against attempts to unlock your phone with photos or masks.

Apple’s answer to the bias question sounds promising. The point Hogan makes about consent is key—researchers have previously faced backlash for using photos of transgender people to train neural networks without permission. But we’ll have to wait and see how well Face ID performs once the iPhone X finally lands in the hands of consumers.

“I appreciate Apple’s willingness to engage with my office on these issues, and I’m glad to see the steps that the company has taken to address consumer privacy and security concerns,” Franken said in a statement. “I plan to follow up with Apple to find out more about how it plans to protect the data of customers who decide to use the latest generation of iPhone’s facial recognition technology.”
