Meta's controversial facial recognition is once again in the works

The Facebook app on a smartphone.
Once again, Meta is toying around with facial recognition. This comes three years after Zuckerberg and company announced they were shutting down Facebook's facial recognition system.

Facebook's facial recognition was discontinued in 2021 amid privacy concerns and regulatory pressure. Now, the company is cautiously reintroducing the technology, this time under the banner of combating "celebrity bait" scams, Reuters reports.

Meta announced that it will run a trial involving around 50,000 public figures. The system will automatically compare these celebrities' profile pictures with images used in suspected scam advertisements. If a match is found and the ad is determined to be a scam, Meta will block it.
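Meta hasn't published any implementation details, but face matching of this kind is typically done by comparing embedding vectors produced by a face-recognition model. The sketch below is purely illustrative: the function names, the 128-dimensional vectors, and the similarity threshold are assumptions for the example, not anything Meta has described.

```python
# Illustrative sketch only -- not Meta's actual system.
# Assumes faces have already been converted to embedding vectors by some
# face-recognition model; here, random vectors stand in for real embeddings.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def looks_like_celebrity_bait(ad_face: np.ndarray,
                              celebrity_faces: dict[str, np.ndarray],
                              threshold: float = 0.8) -> str | None:
    """Return the name of the closest-matching public figure, or None
    if no enrolled profile picture is similar enough to the ad's face."""
    best_name, best_score = None, threshold
    for name, reference in celebrity_faces.items():
        score = cosine_similarity(ad_face, reference)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name


# Toy usage: a "scam ad" face that is a noisy copy of one enrolled embedding.
rng = np.random.default_rng(0)
celebs = {
    "public_figure_a": rng.normal(size=128),
    "public_figure_b": rng.normal(size=128),
}
suspect_ad_face = celebs["public_figure_a"] + rng.normal(scale=0.1, size=128)
print(looks_like_celebrity_bait(suspect_ad_face, celebs))  # -> public_figure_a
```

In a real deployment the interesting parts would be elsewhere: the quality of the embedding model, the matching threshold, and the fraud classification applied to the ad itself before anything gets blocked.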

Monika Bickert, Meta's VP of content policy, explained that this initiative aims to protect high-profile individuals whose images have been frequently exploited for scam ads.

Celebrities included in this trial will be notified and have the option to opt out. The trial, set to roll out globally starting in December, will exclude regions where Meta doesn’t yet have regulatory clearance, such as the European Union, South Korea, the UK, and certain US states like Texas and Illinois.

Why Texas, of all places, you might ask? Well, in 2024, Meta agreed to pay $1.4 billion to Texas to settle a lawsuit accusing the company of illegally using facial recognition technology to collect biometric data from millions of Texans without their consent.

This marks the largest settlement ever reached by a single state, according to the legal team representing Texas. The lawsuit, filed in 2022, accused Facebook of violating Texas' 2009 biometric privacy law by using facial recognition through its now-discontinued "Tag Suggestions" feature, which scanned photos and videos uploaded by users.

The settlement was reached in May, avoiding a scheduled trial, and follows a similar case in Illinois, where Meta paid $650 million in 2020 for violating that state’s strict biometric privacy law.

At the same time, Meta faces lawsuits over its alleged failure to prevent scams using AI-generated images of celebrities to lure users into fraudulent schemes. As part of the new trial, Meta promises to delete any facial data it collects during the scanning process, regardless of whether a scam is detected.

The technology being tested has undergone Meta’s internal privacy reviews, as well as consultations with external regulators and privacy experts. In addition, Meta plans to explore using facial recognition to help regular users regain access to their Facebook or Instagram accounts if they’ve been locked out or hacked.

2021: facial problems


In late 2021, Facebook faced significant challenges on multiple fronts. Whistleblower Frances Haugen had revealed internal documents, sparking widespread criticism of the company’s handling of user safety and mental health, particularly among younger users.

Amid this scrutiny, Mark Zuckerberg’s company underwent a major transformation, rebranding as "Meta" to reflect ambitions for a future centered on the metaverse. However, one of the biggest announcements came from Jerome Pesenti, Meta’s VP of AI, who revealed that the company would shut down its Face Recognition system.

The decision to discontinue facial recognition marked a pivotal shift. This technology, once used by over a third of Facebook’s daily active users, enabled automatic recognition in photos and videos. As a result of the shutdown, Meta deleted the facial recognition templates of over a billion users. Features like AI-suggested tagging, automatic notifications when users appeared in photos, and enhanced descriptions for the visually impaired would be affected.

The move came as Meta weighed the benefits of facial recognition against rising societal concerns around privacy and security. While the technology had positive applications, such as assisting the visually impaired with image descriptions, concerns about privacy and the lack of regulatory clarity were critical factors in the decision. The company indicated that facial recognition would continue to be used in limited scenarios, such as verifying identities for account access and preventing impersonation.

Yup, back in the day when I used Facebook, I used automatic tagging quite often. It wasn't bad! And to those preoccupied with safety and security: I'm sure our faces were thoroughly scanned and archived long, long ago on all kinds of non-government platforms and services.
