Fooling Facial Recognition


This article was initially drafted about a year ago, but I never published it. Revisiting it now has allowed me to follow up and confirm my initial concerns.

We know that nearly every photo we take likely passes through facial recognition software. Apple includes one in its Photos app, and both Google and Facebook have their own versions. Images are associated with people and objects, and can later be retrieved via text-based searches.
If you’ve never tried it, take a picture of a car license plate with an iPhone and then search for the word “plate.”

For corporations (Google, Apple, etc.), associating descriptive attributes with each photo—including the names of people involved and geolocation—provides immense value for behavioral analysis.

For government agencies, as we have seen, being able to link a photo (perhaps from a surveillance camera) with a first and last name is extremely valuable.

It is no surprise, therefore, that facial recognition is a subject of significant interest.

Defeating Facial Recognition

Facial recognition relies on algorithms that learn, given a set of images, how to identify a person across different contexts. The more photos are available, the better the algorithm becomes.
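To make the mechanism concrete: most modern pipelines map each face to an embedding vector and then compare distances between embeddings. The sketch below uses the open-source face_recognition library purely as an illustration of this idea, not any specific vendor's system; the image file names are placeholders.

```python
# Minimal sketch of embedding-based face matching with the open-source
# face_recognition library. Image file names are placeholders.
import face_recognition

# Build a reference encoding from a known photo of a person.
known_image = face_recognition.load_image_file("known_person.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Encode the faces found in a new, unseen photo.
unknown_image = face_recognition.load_image_file("new_photo.jpg")
unknown_encodings = face_recognition.face_encodings(unknown_image)

# A face "matches" when its embedding is close enough to the reference;
# more reference photos generally mean more robust matching.
for encoding in unknown_encodings:
    match = face_recognition.compare_faces([known_encoding], encoding)[0]
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    print(f"match={match}, distance={distance:.3f}")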

About a year ago, Face Shield publicized an application designed to alter photos so that current facial recognition algorithms would fail to detect the people depicted.

But since these algorithms continuously improve, such a strategy is only effective temporarily: the more popular the app becomes, the faster countermeasures are developed to adapt recognition systems.
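Face Shield never published its exact method, but the general family of techniques is adversarial perturbation: adding a small, carefully directed amount of noise that barely changes the image for a human yet pushes a model's prediction off course. Below is a toy FGSM-style sketch of that idea, using a generic torchvision classifier as a stand-in for a face recognition model, purely for illustration.

```python
# Toy FGSM-style perturbation: the general idea behind "cloaking" tools,
# sketched against a generic image classifier rather than an actual face
# recognition model (Face Shield's real method was never published).
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

def perturb(image_path: str, epsilon: float = 0.03) -> torch.Tensor:
    """Return a copy of the image nudged away from the model's prediction."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    x.requires_grad_(True)

    logits = model(x)
    current = logits.argmax(dim=1)          # what the model predicts now
    loss = F.cross_entropy(logits, current)
    loss.backward()

    # Step in the direction that increases the loss for the current label,
    # clamped so pixel values stay valid; visually the change is tiny.
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()
```

The catch, as noted above, is that any fixed perturbation strategy can itself be trained against once it becomes popular.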

The Spread of Personal Photos

We have already discussed the risks of sharing personal data. Many services have a business model based on collecting massive datasets of personal, profiled information and reselling them. Clearview.ai is a prime example.

The safest recommendation remains: only publish content online that you consciously decide to share—always considering the worst-case scenario if it ends up in the wrong hands.

Face Shield

Face Shield provided a cloud-based app that altered photos to prevent facial recognition algorithms from identifying the people depicted. The tool offered three levels of obfuscation.

Original image:

Original image

Subtle effect: nearly identical to the original but capable of deceiving some recognition software (source: Face Shield website).

Face Shield subtle effect

Medium effect: visually different from the original, fooling several recognition systems.

Face Shield medium effect

Intense effect: unrecognizable to both humans and recognition software.

Face Shield intense effect

Key Questions

Before using any service, we should start asking the right questions. Face Shield is a perfect example.

Where is the company based?

It was headquartered in Canada, outside the EU, which makes GDPR protections far harder to enforce in practice.

Is there a privacy policy?

No. While a privacy policy does not guarantee compliance (as discussed here), its absence suggests a lack of consideration for user rights.

What is the business model?

Every company needs a sustainable business model. In this case, the application was free, with no revenue stream. The only plausible model was data collection and the eventual sale of the company.

Is the service useful?

Does it make sense to render people in photos unrecognizable—thereby degrading the images—only to then share those altered photos on social media?

Is the service effective?

In an ongoing arms race, will adding noise to photos plausibly keep them unrecognizable to facial recognition algorithms? Short term, maybe. Long term, certainly not.

Recovering an obfuscated password

If tools such as Depix can recover censored (pixelated) passwords, it is not difficult to imagine similar countermeasures being developed against Face Shield.
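The reason such recovery is possible is that pixelation is just deterministic block averaging: an attacker who can render candidate content the same way can pixelate each guess and keep the one that matches the censored region. The toy example below illustrates that idea; it mirrors the concept behind Depix, not its actual implementation.

```python
# Toy illustration of why pixelation ("mosaic") is searchable rather than
# destructive: the blur is a deterministic block average, so pixelating
# candidate guesses and comparing them to the censored region can reveal
# the original. (Concept only; not Depix's actual code.)
import numpy as np

def pixelate(img: np.ndarray, block: int = 8) -> np.ndarray:
    """Replace each block x block tile with its mean value."""
    out = img.astype(float).copy()
    h, w = out.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            out[y:y + block, x:x + block] = out[y:y + block, x:x + block].mean()
    return out

rng = np.random.default_rng(0)
secret = rng.integers(0, 256, (32, 128))     # stand-in for rendered text
censored = pixelate(secret)                  # what gets published

# The attacker pixelates each guess the same way and keeps the closest match.
guesses = [secret, np.roll(secret, 1, axis=1), rng.integers(0, 256, (32, 128))]
best = min(guesses, key=lambda g: np.abs(pixelate(g) - censored).sum())
print(np.array_equal(best, secret))          # True: the "secret" is recovered
```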

What Happened Next

As mentioned, this article was written in April 2020 but never published. Revisiting it now, I found that Face Shield no longer exists.

Its author announced on Twitter:

PhD Candidate @rllabmcgill/@Mila_Quebec Deep Learning SwissArmy Knife: CV, NLP, Graphs, Gen. Models. http://FaceShield.ai (acquired), Ex-@UberAILabs,@BorealisAI,@UofT

We do not know who acquired Face Shield, but it is safe to assume the buyer obtained its software, algorithms, and potentially personal data. Under European law, such data would belong to the users; under U.S. law, it typically belongs to the company that collected it—largely free to use as it pleases.

Conclusions

Face Shield is a perfect case study to reflect on the lifecycle of many startups, their business models, and the degree of caution we should exercise before entrusting them with our personal data.

The key takeaway: before joining any service, always ask yourself:

  • Who operates it, where are they based, and what is their business model (follow the money)?
  • Does it truly solve a problem, or is it creating one?
  • Is the proposed solution effective?
  • What alternatives exist once the service disappears?
