This article was initially drafted about a year ago, but I never published it. Revisiting it now has allowed me to follow up and confirm my initial concerns.
We know that nearly every photo we take likely passes through facial recognition software. Apple includes one in its Photos app, and both Google and Facebook have their own versions. Images are associated with people and objects, and can later be retrieved via text-based searches.
If you’ve never tried it, take a picture of a car license plate with an iPhone and then search for the word “plate.”
For corporations (Google, Apple, etc.), associating descriptive attributes with each photo—including the names of people involved and geolocation—provides immense value for behavioral analysis.
For government agencies, as we have seen, being able to link a photo (perhaps from a surveillance camera) to a first and last name is extremely valuable.
It is no surprise, therefore, that facial recognition is a subject of significant interest.
Facial recognition relies on algorithms that learn, from a set of example images, to identify a person across different contexts. The more photos of a person are available, the more accurate the recognition becomes.
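Concretely, most modern systems reduce each face to a numeric vector (an "embedding") produced by a trained network such as FaceNet, and identify people by comparing vectors. Below is a minimal sketch of just the matching step; the embedding model is assumed and all names are illustrative, not any real library's API.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(query: np.ndarray, gallery: dict, threshold: float = 0.6):
    """Return the best-matching name in `gallery`, or None if nothing is close.
    `gallery` maps each person's name to the average embedding of their known
    photos; averaging more photos gives a more stable reference vector, which
    is one reason more data means better recognition."""
    best_name, best_sim = None, threshold
    for name, embedding in gallery.items():
        sim = cosine_similarity(query, embedding)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name
```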
About a year ago, Face Shield released an application designed to alter photos so that current facial recognition algorithms would fail to detect the people depicted.
But since these algorithms improve continuously, such a strategy is only temporarily effective: the more popular the app becomes, the stronger the incentive to retrain recognition systems against its perturbations.
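Face Shield never disclosed its method, but the best-known technique in this family is the fast gradient sign method (FGSM): nudge every pixel slightly in the direction that most increases the recognition model's loss. A hedged sketch in PyTorch, where `model` and the identity labels are assumptions, not Face Shield's actual code:

```python
import torch
import torch.nn.functional as F

def adversarial_photo(model, image, identity, epsilon=0.03):
    """One FGSM step: shift each pixel along the sign of the loss gradient
    so the recognizer misidentifies the person. `model` is assumed to map a
    (1, 3, H, W) image batch to identity logits; `identity` is the true
    class index."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), identity)
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()  # ascend the loss surface
    return perturbed.clamp(0.0, 1.0).detach()        # keep pixels in [0, 1]
```

A larger epsilon means more protection but more visible distortion; plausibly that is the dial behind Face Shield's three obfuscation levels described below, though this is an assumption, since the implementation was never published.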
We have already discussed the risks of sharing personal data. Many services have a business model based on collecting massive datasets of personal, profiled information and reselling them. Clearview.ai is a prime example.
The safest recommendation remains: only publish content online that you consciously decide to share—always considering the worst-case scenario if it ends up in the wrong hands.
Face Shield provided a cloud-based app that altered photos to prevent facial recognition algorithms from identifying the people depicted. The tool offered three levels of obfuscation.
Original image:
Subtle effect: nearly identical to the original but capable of deceiving some recognition software (source: Face Shield website).
Medium effect: visually different from the original, fooling several recognition systems.
Intense effect: unrecognizable to both humans and recognition software.
Before using any service, we should start asking the right questions. Face Shield is a perfect example.
Where is the company based?
It was headquartered in Canada, outside the EU. GDPR can in principle apply to services offered to EU residents regardless of where a company is based, but enforcing it against a company with no EU presence is difficult in practice.
Is there a privacy policy?
No. While a privacy policy does not guarantee compliance (as discussed here), its absence suggests a lack of consideration for user rights.
What is the business model?
Every company needs a sustainable business model. In this case, the application was free, with no apparent revenue stream. The only plausible model was data collection and an eventual sale of the company.
Is the service useful?
Does it make sense to render people in photos unrecognizable—thereby degrading the images—only to then share those altered photos on social media?
Is the service effective?
In an ongoing arms race, will adding noise to photos plausibly keep them unrecognizable to facial recognition algorithms? Short term, maybe. Long term, certainly not.
If tools such as Depix can recover censored (pixelated) passwords, it is not difficult to imagine similar countermeasures being developed against Face Shield.
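Such recovery is possible because pixelation is deterministic block averaging: anyone who knows the block size and the source rendering can pixelate candidate inputs the same way and compare. A toy sketch of that search (Depix's real implementation is more sophisticated; the random "glyphs" here are stand-ins for rendered characters):

```python
import numpy as np

def pixelate(img: np.ndarray, block: int) -> np.ndarray:
    """Classic 'censoring': replace each block x block tile with its mean."""
    out = img.astype(float).copy()
    for y in range(0, img.shape[0], block):
        for x in range(0, img.shape[1], block):
            out[y:y + block, x:x + block] = img[y:y + block, x:x + block].mean()
    return out

def recover(censored: np.ndarray, candidates: dict, block: int) -> str:
    """Pixelate every candidate identically and return the closest match."""
    return min(candidates, key=lambda name:
               np.abs(pixelate(candidates[name], block) - censored).sum())

# Toy demo: random 8x8 'glyphs' stand in for rendered characters.
rng = np.random.default_rng(0)
glyphs = {c: rng.integers(0, 256, size=(8, 8)) for c in "abcdef"}
censored = pixelate(glyphs["d"], block=4)
print(recover(censored, glyphs, block=4))  # -> 'd'
```

The same logic generalizes: any deterministic, publicly known obfuscation invites a search over candidate inputs, which is exactly the arms race Face Shield was entering.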
As mentioned, this article was written in April 2020 but never published. Revisiting it now, I found that Face Shield no longer exists.
Its author's Twitter bio now reads:
PhD Candidate @rllabmcgill/@Mila_Quebec Deep Learning SwissArmy Knife: CV, NLP, Graphs, Gen. Models. http://FaceShield.ai (acquired), Ex-@UberAILabs,@BorealisAI,@UofT
We do not know who acquired Face Shield, but it is safe to assume the buyer obtained its software, algorithms, and potentially personal data. Under European law, users would retain rights over such data; under U.S. law, it typically belongs to the company that collected it, which is largely free to use it as it pleases.
Face Shield is a perfect case study to reflect on the lifecycle of many startups, their business models, and the degree of caution we should exercise before entrusting them with our personal data.
The key takeaway: before signing up for any service, always ask yourself the questions above.