Earlier this week, Facebook (now rebranded as “Meta”) announced the planned shutdown of its facial recognition technology, stating that the company would no longer use the technology to automatically recognize users in photos and videos. As part of this process, Facebook agreed to “delete more than a billion people’s individual facial recognition templates.”

For years, Facebook’s facial recognition technology has automatically identified users when they appear in photos and videos posted on the platform. The technology allowed users to receive tagging recommendations for postings by other users, and provided the underlying framework for Facebook’s automatic alt text system to inform blind or visually impaired users when they or one of their friends appeared in an image.

In its announcement, Facebook attributed the change to a “careful consideration” that weighed the technology’s potential to serve as a “powerful tool” for identity verification and fraud prevention against the “concerns about the place of facial recognition in society” and the “ongoing uncertainty” regarding its regulation. With this decision, Facebook joins other companies that have recently curbed the use of facial recognition, citing concerns about possible misuse and the lack of government regulation.

Facial recognition technology and the processing of biometric information have been significant topics for the legal community in 2021, and we expect that trend to continue in 2022. Below are some examples of law and enforcement activity relating to biometric information within the past year:

  • As previously blogged, in January, the FTC reached a settlement with Everalbum, resolving allegations that the company deceived consumers about its use of facial recognition technology and its retention of users’ photos and videos.
  • In July, New York City’s biometric law took effect. The law requires any commercial establishment in the city to disclose the processing of biometric information at the entrance to its establishment, and prohibits the sale, leasing, trading, or sharing of biometric information for anything of value, or otherwise profiting from the transaction of biometric information. The law includes a private right of action under which plaintiffs can seek $500 for each failure to provide notice, $500 for each negligent violation, and $5,000 for each intentional or reckless violation, as well as reasonable attorney’s fees.
  • In August, Baltimore enacted its facial recognition law, which prohibits both the Baltimore City government and “any person in Baltimore City” from purchasing or obtaining certain facial recognition technology.
  • In August, a state judge rejected Clearview AI’s argument that the Illinois Biometric Information Privacy Act violated the First Amendment by restricting its use of publicly available information. Elsewhere, just this week Clearview was ordered to cease its activities in Australia, with the Office of the Australian Information Commissioner finding that Clearview had breached the Australian Privacy Act.

These examples serve as yet another reminder of the legal risks, both domestic and global, associated with facial recognition technology and the processing of biometric information. Companies that use facial recognition technology or process biometric information should carefully evaluate the legal risks and obligations associated with such technology and processing.