Google flags man as sex abuser after he sends photos of child to doctor
2022-08-23 | Author: www.malwarebytes.com

Mark noticed something was wrong with his son: the toddler’s penis hurt and appeared to be swollen. Since it was a Saturday during the pandemic, an emergency consultation was scheduled by video. So that the doctor could assess the problem ahead of time, the parents were advised to send photos of their toddler’s groin area before the appointment. In one of these pictures, Mark’s hand was visible, helping to better display the swelling.

Luckily for his son, the doctor diagnosed the issue and prescribed antibiotics. But the episode left Mark with a much larger problem: it made him the target of a police investigation, according to a recent article in the New York Times. That investigation ultimately ruled that this was not a case of child sexual abuse.

Two days after Mark sent the photos, he got a notification saying his account had been disabled because of “harmful content” that was “a severe violation of Google’s policies and might be illegal.” Among the listed possible reasons was “child sexual abuse & exploitation.”

Mark realised it must be connected to the photos. 

False positive

In computing, a false positive is something that gets flagged as malicious when it actually isn’t, and Mark’s photos were exactly that. But sadly, a lot of flagged images are not false positives.
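To make the idea concrete, here is a minimal sketch of how fingerprint-based image scanning can misfire. Everything in it is an illustrative stand-in: the `KNOWN_BAD` set and the functions are hypothetical, not Google’s actual pipeline, which reportedly combines hash matching of known material with machine-learning classifiers for new material.

```python
import hashlib

# Hypothetical database of fingerprints of known abuse material.
# Real scanners use perceptual hashes, which match visually similar
# images rather than exact bytes -- that fuzziness is what lets them
# catch re-encoded copies, and also what makes innocent look-alikes
# (false positives) possible. The entry below is a made-up value.
KNOWN_BAD = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def fingerprint(image_bytes: bytes) -> str:
    # A cryptographic hash stands in for a perceptual hash here,
    # purely for illustration.
    return hashlib.sha256(image_bytes).hexdigest()

def is_flagged(image_bytes: bytes) -> bool:
    # Match the image's fingerprint against the database. In a fuzzy,
    # perceptual scheme this exact membership test becomes a similarity
    # test, and "close enough" is where false positives creep in.
    return fingerprint(image_bytes) in KNOWN_BAD
```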

Although estimates vary across studies, research shows that about one in four girls and one in thirteen boys in the United States experience child sexual abuse.

In the second half of 2021, Google alone filed 287,368 reports of child abuse material and disabled the accounts of over 140,000 users as a result. The US National Center for Missing and Exploited Children (NCMEC), the clearinghouse for abuse material, received 29.3 million reports last year, an increase of 35% over 2020. In 2021, the NCMEC’s CyberTipline reported that it had alerted authorities to over 4,260 potential new child victims.

Numbers provided by Facebook and LinkedIn show that over half of the accounts reported to the authorities were dismissed after manual review.

The consequences

We have heard from cybersecurity evangelist Carey Parker how hard it is to de-Google your life. Imagine being forced into that position without fair warning.

When Mark's photos were flagged as abuse, he lost access to all of his Google accounts, including the one tied to his Android phone. Even after the police exonerated him, his access was not restored.

“Not only did he lose emails, contact information for friends and former colleagues, and documentation of his son’s first years of life, his Google Fi account shut down, meaning he had to get a new phone number with another carrier. Without access to his old phone number and email address, he couldn’t get the security codes he needed to sign in to other internet accounts, locking him out of much of his digital life.”

Inevitable

When you look at the numbers, it is clear that automation is needed to review the huge number of reports. Not to mention the mental health issues a human moderator may encounter. But can we trust Artificial Intelligence (AI) to decide in cases where the consequences can be so dire? How can we make a choice between ruining someone’s life just because an algorithm coughed up their name, and missing a case of child abuse?
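To see why that choice is so hard, consider a toy classifier that assigns each image a “likely abuse” score. The scores below are made up and the classifier is hypothetical; the point is only that any threshold you pick trades one kind of error against the other.

```python
# Made-up scores from a hypothetical classifier: higher means "more
# likely abuse material". Note the overlap -- an innocent medical
# photo can score higher than genuine abuse material.
INNOCENT_SCORES = [0.05, 0.12, 0.30, 0.62, 0.71]
ABUSIVE_SCORES = [0.55, 0.80, 0.91, 0.97, 0.99]

def count_errors(threshold: float) -> tuple[int, int]:
    # Everything at or above the threshold gets flagged.
    false_positives = sum(s >= threshold for s in INNOCENT_SCORES)
    false_negatives = sum(s < threshold for s in ABUSIVE_SCORES)
    return false_positives, false_negatives

for t in (0.5, 0.7, 0.9):
    fp, fn = count_errors(t)
    print(f"threshold {t}: {fp} innocent users flagged, {fn} real cases missed")
```

Lower the threshold and you miss fewer real cases but drag more innocent people into investigations; raise it and the reverse happens. Because the score distributions overlap, no threshold makes both numbers zero, which is exactly why human review still matters.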

Whichever way we decide to go, we should not leave this in the hands of machines alone. To extend the comparison with malware false positives: if our software flags a file in error and we find out, we hurry to remove the false detection and help correct any errors that ensued from it. That is what our customers expect from us, and they are right to do so.

Oblivious

Both governments and tech giants are unwilling to share details about the inner workings of these systems, for understandable reasons. But shouldn’t we have at least some insight? At least enough to avoid becoming the next false positive.

Because we don’t know how these scanning algorithms work, we have no way of knowing how to avoid becoming a false positive. Given my profession and my interests in coding and malware, I’ve sometimes wondered how many alarms I have triggered, and whether at some point someone will come knocking on my door to ask what’s up with that.

The final verdict

While the discussion about the algorithms and their consequences is a valid one, maybe we shouldn’t even be having it. What gives any government or tech giant the right to go through our personal files? In an article written in response to the New York Times article, the Electronic Frontier Foundation (EFF) concludes that the real solution lies in “real privacy”.

“The answer to a better internet isn’t racing to come up with the best scanning software. There’s no way to protect human rights while having AI scan people’s messages to locate wrongdoers.”

The problem is real, but giving up our privacy may not be the answer.


Source: https://www.malwarebytes.com/blog/news/2022/08/google-flags-man-as-sex-abuser-after-he-sends-photos-of-child-to-doctor