On Monday, Sundar Pichai, chief executive of Google parent company Alphabet, backed EU proposals for a temporary ban on facial recognition technology while regulators assess its potential risks. Highlighting concerns around the use of face recognition in public areas to identify individuals, he said:
“Facial recognition is fraught with risks. I think it is important that governments and regulation tackle it sooner rather than later… It is important that governments are involved. It can be immediate but maybe there’s a waiting period before we really think about how it’s being used.”
Here, MEF Global Chair and CEO of facial verification specialist iProov, Andrew Bud CBE, and Dan Grimm, General Manager of RealNetworks’ facial recognition platform SAFR, share their thoughts on the regulatory developments. They explain why facial recognition is valuable to society, and why an open and candid debate about what the rules should be will produce the best outcome for both the industry and the public.
Andrew Bud, CEO & Founder, iProov
There is a growing reaction against the use of face recognition in public places, for reasons of both privacy and reliability. A number of stories in the last year reported that facial recognition technology used by UK police was ‘staggeringly inaccurate’. Yet for the last two years, multinational financial services firms such as ING and Rabobank, and public bodies such as the UK Home Office, have been using automatic face matching on a large scale to replace in-person identity checking with a faster, more reliable process. What is the relationship between these two trends?
To begin with, these are in fact two different technologies. They solve different problems and fail in different ways. Trying to spot a suspect in a crowd is like trying to find a needle in a haystack: any of the world’s seven billion inhabitants could look just like that suspect on CCTV. The purpose of facial recognition here is to help officers narrow down the haystack. How officers deal with the resulting matches is critical, and they must acknowledge that most of them will inevitably be false.
There is also the question of personal privacy. These systems help discover where people are and whether they were in a specific location. What they do not do is establish whether people are happy to be identified in this manner. Police and society at large may benefit, but the individual does not.
On the other hand, when a person’s ID documents are verified against a selfie of that individual, something quite different is happening. This is not recognition. Here, facial verification tools are being used to determine whether a customer is in fact who they claim to be – just as a human would at border control. Modern machine learning is already very effective at this. In fact, it is about 100 times more accurate than the trained border staff assessed in a study conducted in Australia in 2014.
If there is to be trust in the mobile online ecosystem, verification technology will be a critical means of sustaining it. Citizens benefit from a faster, simpler and more reliable service, which also means we can do away with laborious in-person ID checks. From a privacy standpoint, there are no issues.
Essentially, facial verification is incrementally replacing such in-person checks. Its ability to onboard new customers, authenticate returning users and replace passwords means it is increasingly adopted by governments and banks alike. Performing these checks in the cloud prevents device limitations from getting in the way, too. Verifying a person’s face rather than their paper trail is ultimately the most secure way to confirm their identity.
The crucial differences between face recognition, which the European Commission is targeting, and face verification, whose use is spreading rapidly, lie in the user’s informed awareness that they are being identified, the user’s consent to the matching process before it takes place, and the direct benefit the user gains from it. Those conditions are not normally met by public surveillance face recognition, opening it to social and ethical questions. They are always met by face verification, making it a trusted and sustainable way to protect citizens’ identities and digital assets.
Dan Grimm, GM, SAFR, RealNetworks
Like any powerful technology, facial recognition can be misused, compromising the rights and individual privacy of citizens. However, we believe that a blanket ban on facial recognition is extremely short-sighted and would deprive us of many of its benefits. In addition to finding missing persons and wanted criminals, facial recognition is used every day to streamline operations and processes for airports, schools and businesses.
Hence, we need thoughtful approaches to ensure facial recognition is used safely, ethically, and always with consumers’ privacy interests in mind. Regulations should be enacted at the national level to lay down the broader policy framework.
However, all stakeholders — technology companies, privacy groups, advocacy groups, and legislators — have an obligation to contribute and move the process forward.
RealNetworks has been very active in this area, working with technology companies and lawmakers to put in place legislation that regulates the use of facial recognition. Its most recent initiatives included support for the 2019 Washington Privacy Act (Senate Bill 5376), which included regulations relating to the use of facial recognition technology.
For Europe, the protections put in place by GDPR already provide an excellent starting point for facial recognition systems to adhere to. That is why our SAFR facial recognition platform already complies with GDPR standards. It is also important to educate and empower lawmakers about how facial recognition works, and the fact that it doesn’t “see” or find anyone it isn’t looking for. Our mission is to empower organizations to use this valuable technology ethically and responsibly.