Deep learning, a branch of artificial intelligence (AI), has powered AI applications to new heights. Facial recognition technology, for example, has progressed to the point where Delta Airlines will use it to check in passengers over the December peak holiday period, at least at its Atlanta hub. Apple now unlocks its iPhones and allows payments via facial recognition of designated users, according to a recent article in MIT Technology Review.
Facial recognition can be used to check people in at airports and to monitor people in stores.
Convenience and Negative Consequences at Issue
These instances of technology news make clear that facial recognition certainly has the potential to make life more convenient. Who wouldn’t want to be waved through an airport gate without waiting in line, or to unlock a phone without entering a password?
But as a recent report from New York University’s AI Now Institute points out, facial recognition also raises a host of ethical questions about the potential for negative uses and encroachments on privacy and freedom of movement. The AI Now 2018 Report observes that facial recognition is being implemented by leaps and bounds, often without any legal or regulatory framework, and generally without consumer and public awareness of its deployment.
The implications are broad. People with past criminal records could be targeted for surveillance in shops and other public venues. Shoppers could be targeted by salespeople based on their shopping history. The Secret Service is developing a system for facial recognition access to the White House. ATMs could release funds based on facial recognition rather than cards and codes. Students who seem inattentive in the classroom could be identified by facial recognition. (AI systems are becoming adept at recognizing emotions such as boredom, or states such as sleepiness, in addition to connecting identities with faces.) Patients in hospitals and doctors’ offices could be checked in by facial recognition.
As the above list makes apparent, however, the potential of facial recognition is an uneasy admixture of the clearly convenient, the ambiguous (potentially convenient, annoying, or neutral, depending on one’s view), and the potentially harmful. There are concerns in China, for example, that political dissidents are being targeted and followed by facial recognition methods. In the U.S., AI systems have shown some tendency to disproportionately identify minorities as criminal suspects.
Facial recognition builds on artificial intelligence and deep learning’s ability to recognize patterns.
Should the Public Know?
All these uses are being facilitated by the widespread use of surveillance cameras in public. Some observers point out that many governments now have the capability to track the location of criminals, dissidents, or other people they regard as suspect.
These abilities have existed as long as phones and other devices have had location services. But facial recognition makes the targeted “location” the face itself, rather than a device. And as deep learning models learn patterns from more and more examples, they can increasingly identify faces and expressions despite changes in appearance, such as beards or new hairstyles.
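How can a system match the same face despite a new beard or hairstyle? A common approach (one the article does not detail, so this is an illustrative assumption) is for a deep network to convert each face image into a numeric “embedding” vector, so that images of the same person land close together even when appearance changes. Here is a minimal sketch of the matching step, using hypothetical hand-written vectors in place of a real network’s output:

```python
import numpy as np

def cosine_similarity(a, b):
    # Measure how closely two embedding vectors point in the same direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(emb_a, emb_b, threshold=0.8):
    # Declare a match if the embeddings are similar enough.
    # The threshold is a tunable assumption, not a standard value.
    return cosine_similarity(emb_a, emb_b) >= threshold

# Hypothetical embeddings; a real system would compute these
# with a trained deep network from face photos.
enrolled    = np.array([0.9, 0.1, 0.4])    # face on file
with_beard  = np.array([0.85, 0.15, 0.45]) # same person, changed appearance
stranger    = np.array([0.1, 0.9, 0.2])    # different person

print(same_person(enrolled, with_beard))  # similar vectors: a match
print(same_person(enrolled, stranger))    # dissimilar vectors: no match
```

The key point is that the network, not the raw pixels, decides what counts as “the same face” — which is exactly why surface changes like facial hair matter less and less as these models improve.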
The AI Now 2018 Report calls for regulatory and legal frameworks to be examined before facial recognition is deployed more widely. It also recommends that consumers and the public be told when and where facial recognition systems are being used, and that they be given the choice to opt out of the technology.
It’s not clear how an opt-out might be implemented, but awareness of facial recognition’s use would at least be a start, for business leadership and the public at large.