Instagram flagged explicit messages to minors in 2018. Image-blurring arrived six years later

Meta took six years to blur explicit images on Instagram, even though internal emails show executives were aware in 2018 that minors were receiving them, according to newly unsealed court documents.

In a deposition given last year, Adam Mosseri, now the head of Instagram, discussed a 2018 email thread with Guy Rosen, Meta’s VP and chief information security officer at the time. In the thread, Rosen explained that adults could find and message minors on the platform, and that the messages could contain what he called:

“tier 2 sexual harassment, like dudes sending dick pics to everyone”

up to…

“tier 1 cases where they end up doing horrible damage.”

The tool Meta now uses to address the problem is a client-side classifier that automatically blurs explicit images sent to teens in direct messages. But it wasn’t rolled out until roughly six years after that email exchange, in September 2024.
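For readers wondering what a “client-side classifier” looks like in practice, the sketch below shows the general shape: a model running on the recipient’s device scores an incoming image, and the app blurs it before rendering if the score crosses a threshold. This is an illustrative assumption, not Meta’s code; the function name, threshold, and blur radius are invented for the example, and the real on-device model is unpublished.

```python
# Illustrative sketch only: Meta has not published its classifier, so
# score_fn, the 0.8 threshold, and the blur radius are all assumptions.
from typing import Callable

from PIL import Image, ImageFilter


def blur_if_explicit(
    image: Image.Image,
    score_fn: Callable[[Image.Image], float],  # on-device model returning 0..1
    threshold: float = 0.8,                    # assumed cutoff, not a Meta value
) -> Image.Image:
    """Score the image locally and return a blurred copy if it is flagged.

    Because score_fn runs on the recipient's device, the image itself
    never has to leave the phone to be screened.
    """
    if score_fn(image) >= threshold:
        # A heavy Gaussian blur obscures detail while keeping rough shapes,
        # so the recipient can still choose to tap through and reveal it.
        return image.filter(ImageFilter.GaussianBlur(radius=40))
    return image
```

The design choice worth noting is that the filter changes the default presentation, not the delivery: the image still arrives, but blurred until the teen opts to view it.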

The deposition was filed on February 20, 2026, and unsealed last week in MDL No. 3047 (Case No. 4:22-md-03047-YGR), a multidistrict litigation in Northern California in which hundreds of families allege that platforms including Instagram were designed to maximize screen time at the expense of young users’ well-being. The filing is available through the court’s PACER docket.

The filing also surfaces internal survey data that Instagram had kept confidential. Nearly one in five respondents aged 13 to 15 reported encountering unwanted nudity or sexual imagery on the platform. A further 8.4% of them said they had seen someone harm themselves or threaten to do so on Instagram within the past week.

Instagram’s own Transparency Center didn’t disclose this at the time. Its child-endangerment section stated simply that the company was still working on the numbers. Mosseri also confirmed he had never publicly shared an internal estimate of around 200,000 daily child users experiencing inappropriate interactions, a figure referenced during questioning.

His defence, and Meta’s, rests on the claim that the company was not idle during those six years. Mosseri told the court that other protections were introduced in the interim, including restrictions on adults messaging teens they are not connected to, and systems designed to flag potentially risky accounts.

He pushed back on the idea that parents should have been explicitly warned about unmonitored direct messages, arguing that the risk exists on many messaging platforms. Meta spokesperson Liza Crenshaw pointed to Teen Accounts and parental controls, saying the company has been working on the problem for years.

The nudity filter is not the only safety measure under scrutiny. Court filings in related proceedings allege Meta explored making teen accounts private by default as early as 2019, then dropped the idea over concerns it would damage engagement metrics. That default-private switch did not arrive until September 2024.

Whistleblower Arturo Béjar, a former Meta engineering director, told the US Senate in 2023 that he had raised teen safety concerns directly with Mosseri and other executives. He acknowledged that the company researched these harms extensively, but questioned whether it acted with sufficient urgency.

An independent audit published in September 2025 examined 47 teen safety features Instagram publicly promoted and found that fewer than one in five functioned as described.

Mosseri’s 2023 performance self-review, entered as an exhibit in the case, celebrated revenue at all-time highs and boasted about delivering results despite cutting his team by 13%. Teen well-being did not appear as a criterion in that review. He explained that well-being sat with a centralized Meta team, outside his direct remit.

In a case that turns on whether Instagram’s leadership prioritised growth over safety, that distinction may not land the way he hopes.


We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

About the author

Danny Bradbury has been a journalist specialising in technology since 1989 and a freelance writer since 1994. He covers a broad variety of technology issues for audiences ranging from consumers through to software developers and CIOs. He also ghostwrites articles for many C-suite business executives in the technology sector. He hails from the UK but now lives in Western Canada.

