An Innocent Picture? How the rise of AI makes it easier to abuse photos online.
2023-04-04 16:15 | Author: blog.nviso.eu

Introduction

The topic of this blog post is not directly related to red teaming (which is my usual go-to), but it is something I personally find important. Last month, I gave an info session at a local elementary school to highlight the risks of publicly sharing children’s pictures at school. As a result, the school decided to restrict access to its photos to a limited group of people instead of keeping them publicly accessible. However, there are many more instances of excessive sharing of information online: photographers’ portfolios, youth/sports clubs, sharenting on social media, etc.

There are many risks stemming from this type of information being openly available, and the potential risks have only increased with the rise of artificial intelligence. Since you are reading this post on the NVISO blog, I’m assuming you are more cyber-aware than the average person out there and therefore perfectly positioned to use the takeaways from this post and spread the word to others. Obligatory Simpsons reference:

Since the children themselves may not have a say in the matter yet and the people who do may not be aware of the possible dangers, it’s up to us to think of the children!

Traditional Risks

When thinking of the risks linked to the presence of children’s pictures online, an obvious threat is the type of person that might drive a van like this:

There are three traditional risks we will be discussing here:

  • Kidnapping
  • Digital Kidnapping
  • Pornographic Collections

Kidnapping

How does a picture of a child pose a risk for physical kidnapping? First of all, a picture could give away a physical location, for example due to the presence of street signs or street names, or recognizable elements such as shops, bars, monuments, schools, etc. If this is a location frequented by the child, a child predator could identify an opportunity for kidnapping there.

In case no identifiable elements are present, certain people might still give away the location by oversharing. Imagine a picture on a publicly accessible Facebook profile with comments such as “birthday party at …”, “visiting grandma & grandpa in …”, “always a fun day when we go to …”. Often-visited locations can be deduced from comments like these.

Finally, a more technical approach is looking at the picture’s metadata (EXIF data), which often reveals information about the camera that was used, the shutter speed, the lens, etc., but can also contain the exact location where the picture was taken. In that case, no additional research is required to figure out where the child has been.
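
To illustrate how little effort this takes, the sketch below (a minimal, illustrative example; the file name is a placeholder) uses the Python Pillow library to dump a photo’s EXIF data, including any embedded GPS coordinates:

```python
# Minimal sketch: read EXIF metadata (including GPS info) from a photo with Pillow.
# "photo.jpg" is a placeholder file name.
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

image = Image.open("photo.jpg")
exif = image._getexif() or {}

for tag_id, value in exif.items():
    tag = TAGS.get(tag_id, tag_id)
    if tag == "GPSInfo":
        # GPS information is stored as a nested dictionary with its own tag names
        gps = {GPSTAGS.get(k, k): v for k, v in value.items()}
        print("GPS metadata:", gps)
    else:
        print(f"{tag}: {value}")
```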

Digital Kidnapping

With digital kidnapping, the victim is affected by a form of identity fraud. Pictures of the child are stolen and reused by people online on their own social media, often pretending to be related to the children. An example could be an adoption fantasy: reposting pictures of the child for likes and comments without the child or their parents knowing about it.

Another, more dangerous form of digital kidnapping consists of a sexual predator reusing the victim’s pictures to target other possible victims. Someone could pretend to be a young child themselves to lure other children into meeting with them online or sharing potentially explicit pictures.

Pornographic Collections

Continuing on the topic of potentially explicit pictures, it is not a secret that the Dark Web is full of pornographic pictures of children. However, pictures that you or I would not consider to be risky or explicit could end up in such collections as well. Holiday pictures of children in swimsuits are happily shared by child predators in an attempt to fulfill their fantasies. They search through social media to identify such pictures, sharing them among each other along with sexual fantasies. With pictures of a certain child, they might search for pictures of lookalike children to add to their fantasy. With only a textual story, they might search for pictures of children that match the story.

However, these risks have existed for a number of years already. What is more dangerous is that the life of a child predator looking for pictures has been made considerably easier by the rise of artificial intelligence.

Next-gen Risks

So what is the problem with public pictures? Not only can they be retrieved by anyone browsing the web, but they can and will also be gathered by automated systems through techniques called spidering and scraping. These activities aren’t particularly nefarious in themselves and are actually part of the regular functioning of the web, used by search engines for example. However, other applications can make use of these same techniques and have already done so to create massive collections of pictures, even those you would not expect to be public, such as medical records.
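
As a simplified illustration of how trivial this is, the sketch below collects every image URL from a single web page using the requests and BeautifulSoup libraries (the URL is a placeholder); a real spider simply repeats this across the millions of pages it discovers by following links:

```python
# Minimal sketch: scrape the image URLs from one page.
# A real spider would follow links and repeat this at scale. The URL is a placeholder.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

page_url = "https://example.com/class-photos"
response = requests.get(page_url, timeout=10)
soup = BeautifulSoup(response.text, "html.parser")

image_urls = [urljoin(page_url, img["src"]) for img in soup.find_all("img") if img.get("src")]
for url in image_urls:
    print(url)
```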

Facial Recognition

One such example is Clearview AI, which is aimed at law enforcement and applies its facial recognition algorithm to a huge collection of facial images to generate investigative leads. However, a similar application has become available to the broader public, allowing anyone to upload a picture and receive an overview of other pictures with matching faces. While it probably has legitimate use cases, PimEyes provides people with less honorable intentions an easy way to add a high-tech touch to the traditional risks mentioned above. If you haven’t heard about PimEyes yet: it allows you to upload a picture of someone’s face, after which the application provides you with a collection of matching pictures. The tool is already quite controversial, as evidenced by the articles below:

As an example, we provided PimEyes with the face of the middle child selected from the stock photo on the left below, which resulted in a set of pictures containing the same child:

Of course, the algorithm identifies the pictures that are part of the same set of stock photos. When trying this out with a private picture of someone, the set of results contained distinct public pictures of the same person. The algorithm was able to identify them in pictures of low quality or with the person wearing a hat or a mouth mask covering a large part of the face. Scary stuff, especially considering what you could do with this output:

  • Imagine a picture of a child without any hints towards the location (e.g. stolen from Facebook or other social media). Upload it to PimEyes and you might be able to link the child’s face to other public pictures from which a location can easily be deduced (such as a school website, for example). You now know locations where the child may frequently be present.
  • Remember in one of the previous paragraphs where we said “With pictures of a certain child, they might search for pictures of lookalike children to add to their fantasy.” Well, this type of technology automates the task.
  • Resources above mention a woman who found sexually explicit content of herself through facial recognition. Imagine your child falling victim to revenge porn in the future and having those pictures exposed. Through PimEyes, such pictures might even show up in the results alongside pictures from when the victim was still a child.
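
PimEyes’ own algorithm is proprietary, but the underlying concept (comparing facial embeddings across a collection of scraped images) can be sketched with the open-source face_recognition library; the file names below are placeholders:

```python
# Minimal sketch of face matching via embeddings (not PimEyes' actual algorithm).
# File names are placeholders.
import face_recognition

# Compute an embedding for the face we are searching for
query_image = face_recognition.load_image_file("query_face.jpg")
query_encoding = face_recognition.face_encodings(query_image)[0]

# Compare it against every face found in a candidate picture from a scraped collection
candidate_image = face_recognition.load_image_file("candidate.jpg")
for encoding in face_recognition.face_encodings(candidate_image):
    distance = face_recognition.face_distance([query_encoding], encoding)[0]
    if distance < 0.6:  # the library's typical matching threshold
        print(f"Likely match (distance {distance:.2f})")
```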

Of course, in addition to these “extreme cases”, it may very well be that in the future prospective employers don’t just google your name, but also search your face before an interview. The results may consist of embarrassing pictures you would rather not have an employer see. There could be a psychological effect as well: maybe you struggled with certain physical conditions in the past (e.g. being overweight) or were affected by other issues that are no longer relevant by the time someone digs up your older pictures. Being confronted with that type of past content can be a painful experience.

Generation of previously non-existent content

We’ve all been playing around and having a lot of fun with ChatGPT, DALL-E, and other AI models. While it is possible to generate a picture from a textual prompt, it is also possible to take an existing image and swap out parts of it based on a textual prompt. What could possibly go wrong? OpenAI does mention that the following protections have been put in place: “… we filtered out violent and sexual images from DALL·E 2’s training dataset. Without this mitigation, the model would learn to produce graphic or explicit images when prompted for them, and might even return such images unintentionally in response to seemingly innocuous prompts … “ Let’s see what we are able to do with some stock photos.
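
For reference, this kind of region replacement (inpainting) is also exposed programmatically; below is a minimal sketch, assuming the openai Python package as it existed at the time of writing, with a placeholder API key, file name, and prompt:

```python
# Minimal sketch: inpainting via the OpenAI image edit endpoint (openai v0.27-style API).
# The transparent (erased) region of "photo.png" acts as the mask; all values are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"

response = openai.Image.create_edit(
    image=open("photo.png", "rb"),  # original picture with an erased (transparent) region
    prompt="complete the missing part of the picture",
    n=3,                            # number of variations to generate
    size="1024x1024",
)

for item in response["data"]:
    print(item["url"])  # URLs of the generated images
```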

Starting off from the same stock photo, I erased the bottom part – very amateurishly, I admit – so that it can be completed again by DALL-E:

Using a fairly innocent prompt (“modify the image to portray the children at the beach in swimming gear”), which could nevertheless yield exactly the type of picture child predators are after, we get the following possible images (note that we have blurred the resulting images):

Alright, the first two images do indeed look like a fun day at the beach, with an inflatable tire, a bucket, and what looks like sand. The third image, on the other hand, did surprise me a bit. This time, the girls have been given shorts, and the middle child even has some cleavage generated (adding to our decision to blur the image). Do note that this is the result of an innocent prompt that specifically mentions children, and with mitigations against the generation of explicit content built in by removing sexual images from the training set. Let’s leave it at this for this photo and try to generate something a bit more suggestive, starting from this stock picture, found using “business woman” as a search term. When asking to “turn this into a pin-up model”, starting from just the neck and head, we receive some spicier results:

So this is what we can create from a completely random picture on the internet without having any photo editing skills. Now imagine this result applied to pictures of children and the risks are obvious.

Taking things a step further, other applications may not have the same limitations applied to their training data and, as a result, are clearly biased towards female nudity. The popular avatar app “Lensa” is known to return nude or semi-nude variations of photos for female users, even when uploading childhood pictures, as evidenced in the following articles:

Taking things another step further, certain apps or services are specifically aimed at the creation of sexually explicit content in the form of deepfakes. Deepfakes are computer-generated images or videos that use machine learning to replace someone’s face or voice with that of someone else. Usually this consists of fake pornographic material targeting celebrities. However, deepfake content of adult women who are personally known to the people creating it is on the rise, in part due to the ease with which such content can be created or commissioned.

However, applying deepfake technology to photo or video content of children is unlikely to remain off-limits for some people, and the report above states that some of the victims of the DeepNude Telegram bot already appear to be under 18.

There is no doubt that artificial intelligence and machine learning are here to stay. With all of their legitimate and highly useful applications, there is inevitably the potential for abuse as well. The only thing we can do as cybersecurity professionals, parents, friends, … is to limit the attack surface as much as possible and to make those close to us aware of the dangers.

Tips on reducing the risks

Some general tips we can take into account to protect ourselves and our children include:

  • Determine for yourself and your children what kind of information you are willing to share online and make this desire clear to others. Respect other people’s wishes in this regard. Some people may not like it when you post a picture of them or their children on your social media, even if it is a group picture.
  • Share pictures privately instead of via social media, e.g. email pictures of the birthday party to a selection of recipients instead of posting them online.
  • If you do want to post pictures on your social media, limit the audience to friends or people you know. By extension, make sure you only accept connection requests from people you know.
  • Avoid metadata: remove location data and other embedded information that could give away where a picture was taken. Microsoft provides some additional guidance on removing metadata here; a minimal scripted approach is sketched below.
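
For those who prefer to script it, the sketch below (a minimal example using Pillow; file names are placeholders) saves a copy of a photo without its metadata:

```python
# Minimal sketch: save a copy of a photo without its EXIF metadata using Pillow.
# File names are placeholders.
from PIL import Image

original = Image.open("photo.jpg")

# Copying only the pixel data into a fresh image leaves EXIF and other metadata behind
clean = Image.new(original.mode, original.size)
clean.putdata(list(original.getdata()))
clean.save("photo_no_metadata.jpg")
```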

Conclusion

Public pictures can easily be scraped into huge collections that are used for different purposes. While traditional risks (such as sharing on the Dark Web) linked to pictures of children are well-known, emerging technologies such as artificial intelligence and machine learning have opened Pandora’s Box for potential abuse. These collections of gathered pictures can be used for facial recognition or generation of new, possibly explicit content. The resulting dangers may not only manifest now, but perhaps years in the future. As such, it is not only about protecting the child they are today, but also the adult they will become.

About the author

Jonas Bauters

Jonas Bauters is a manager within NVISO, mainly providing cyber resiliency services with a focus on target-driven testing.
As the Belgian ARES (Adversarial Risk Emulation & Simulation) solution lead, his responsibilities include both technical and non-technical tasks. While occasionally still performing pass the hash (T1550.002) and pass the ticket (T1550.003), he also greatly enjoys passing the knowledge.


Source: https://blog.nviso.eu/2023/04/04/an-innocent-picture-how-the-rise-of-ai-makes-it-easier-to-abuse-photos-online/