AI chat app leak exposes 300 million messages tied to 25 million users
2026-02-09 15:17:48 · Author: www.malwarebytes.com

An independent security researcher uncovered a major data breach affecting Chat & Ask AI, one of the most popular AI chat apps on Google Play and Apple App Store, with more than 50 million users.

The researcher claims to have accessed 300 million messages from over 25 million users due to an exposed database. These messages reportedly included, among other things, discussions of illegal activities and requests for suicide assistance.

Behind the scenes, Chat & Ask AI is a “wrapper” app that plugs into various large language models (LLMs) from other companies, including OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini. Users can choose which model they want to interact with.

The exposed data included user files containing their entire chat history, the models used, and other settings. But it also revealed data belonging to users of other apps developed by Codeway—the developer of Chat & Ask AI.

The vulnerability behind this data breach is a well-known and documented Firebase misconfiguration. Firebase is a cloud-based backend-as-a-service (BaaS) platform provided by Google that helps developers build, manage, and scale mobile and web applications.

Security researchers use the term to describe a set of preventable errors in how developers set up Google Firebase services—errors are preventable, yet they leave backend data, databases, and storage buckets accessible to the public without authentication.

One of the most common Firebase misconfigurations is leaving Security Rules set to public. This allows anyone with the project URL to read, modify, or delete data without authentication.
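For the Firebase Realtime Database, that insecure state typically looks like the following rules (a generic illustration of the well-known misconfiguration, not Codeway's actual configuration):

```json
{
  "rules": {
    ".read": true,
    ".write": true
  }
}
```

A safer baseline scopes access to signed-in users, for example `".read": "auth != null"`, and real applications should restrict access further on a per-path, per-user basis.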

This prompted the researcher to create a tool that automatically scans apps on Google Play and Apple App Store for this vulnerability—with astonishing results. Reportedly, the researcher, named Harry, found that 103 out of 200 iOS apps they scanned had this issue, collectively exposing tens of millions of stored files. 
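The probing technique behind such scanners is straightforward: an unprotected Firebase Realtime Database answers unauthenticated REST requests at `https://<project-id>.firebaseio.com/.json`, while a locked-down one returns a 401 "Permission denied" response. Here is a minimal sketch of that idea, assuming only the documented REST endpoint; the project IDs and helper names are illustrative and are not Harry's actual tool:

```python
import urllib.error
import urllib.request


def database_url(project_id: str) -> str:
    """Build the unauthenticated REST endpoint for a Realtime Database."""
    return f"https://{project_id}.firebaseio.com/.json"


def classify(status: int) -> str:
    """Map an HTTP status from the /.json endpoint to an exposure verdict."""
    if status == 200:
        return "exposed"    # rules allow anonymous public reads
    if status in (401, 403):
        return "locked"     # rules require authentication
    if status == 404:
        return "not-found"  # no database at this project ID
    return "unknown"


def probe(project_id: str, timeout: float = 5.0) -> str:
    """Anonymously request the database root and classify the result."""
    # shallow=true asks only for top-level keys, so an exposed
    # multi-gigabyte database isn't downloaded just to test access.
    url = database_url(project_id) + "?shallow=true"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return classify(resp.status)
    except urllib.error.HTTPError as err:
        return classify(err.code)
```

Calling `probe("some-app-backend")` against a hypothetical project ID would return `"exposed"` only if its rules permit unauthenticated reads; a scanner like Harry's presumably extracts the project IDs embedded in app binaries and runs a check of this kind at scale.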

To draw attention to the issue, Harry set up a website where users can see the apps affected by the issue. Codeway’s apps are no longer listed there, as Harry removes entries once developers confirm they have fixed the problem. Codeway reportedly resolved the issue across all of its apps within hours of responsible disclosure.

How to stay safe

Besides checking if any apps you use appear in Harry’s Firehound registry, there are a few ways to better protect your privacy when using AI chatbots.

  • Use private chatbots that don’t use your data to train the model.
  • Don’t rely on chatbots for important life decisions. They have no experience or empathy.
  • Don’t use your real identity when discussing sensitive subjects.
  • Keep shared information impersonal. Don’t use real names and don’t upload personal documents.
  • Don’t share your conversations unless you absolutely have to. In some cases, it makes them searchable.
  • If you’re using an AI that is developed by a social media company (Meta AI, Llama, Grok, Bard, Gemini, and so on), make sure you’re not logged in to that social media platform. Your conversations could be linked to your social media account, which might contain a lot of personal information.

Always remember that AI is developing too fast for security and privacy to be reliably baked into the technology. And even the best AIs still hallucinate.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

About the author

Was a Microsoft MVP in consumer security for 12 years running. Can speak four languages. Smells of rich mahogany and leather-bound books.


Source: https://www.malwarebytes.com/blog/news/2026/02/ai-chat-app-leak-exposes-300-million-messages-tied-to-25-million-users