Prompt Injection Vulnerability in EmailGPT Discovered
2024-6-7 00:11:34 Author: securityboulevard.com(查看原文) 阅读量:8 收藏

Prompt injection vulnerabilities have been discovered in EmailGPT, an API service and Google Chrome extension that helps users write email messages in Gmail using OpenAI’s GPT models.

According to the Synopsys Cybersecurity Research Center (CyRC), the vulnerability allows malicious users to inject harmful prompts and take over the service logic. Attackers can force the AI service to leak hard-coded system prompts or execute unwanted prompts, potentially exposing sensitive information.

In prompt injection attacks, individuals input specific instructions to trick chatbots into revealing sensitive information. They are a growing risk as GenAI adoption grows.

A blog post detailing the vulnerability noted that EmailGPT’s vulnerability can be exploited by anyone with access to the service, and the issue affects the “main” branch of the EmailGPT software.

The vulnerability has a CVSS Base Score of 6.5, indicating a medium severity. CyRC has not received a response from the developers within the 90-day disclosure period. As a fix, CyRC recommends immediately removing the application from networks to prevent exploitation.

Mohammed Alshehri, the Synopsys security researcher who found the vulnerability, explained an attacker submits a specially crafted prompt to the EmailGPT service. “The service processes the malicious prompt, causing it to execute unintended actions,” he said. “These actions might include revealing confidential data or executing unintended operations based on the injected prompt.”

While the CVE itself is not particularly dangerous, and the vulnerable software is not commonly used, Alshehri noted, it highlights a significant real-world issue in AI-based services, especially those using Large Language Models (LLMs). The critical vulnerability empowers attackers to bypass the intended functionality and directly insert malicious prompts, thereby seizing control over the service and potentially jeopardizing sensitive user data.

“This form of attack is not the same as traditional prompt injection attacks, which manipulate the AI model’s behavior through cleverly worded prompts without exploiting a code flaw,” said Eric Schwake, director of cybersecurity strategy at Salt Security

Security researchers within Synopsys’ Software Integrity Group found that controlling an AI’s behavior via prompt injection without accessing the underlying code or systems is surprisingly common.

“This advisory emphasizes the need for prompt handling protections when integrating LLMs,” Alshehri said. “The field of AI security is growing, and this disclosure should serve as a reference to help identify similar vulnerabilities.”

Exploiting this vulnerability could have several serious consequences:

  • A risk of intellectual property leakage, as system prompts might be exposed.
  • Email content could be partially exposed, depending on how the AI service provider implements the LLM’s “memory.”
  • Financial loss is another potential outcome, resulting when a user makes repeated requests to the AI provider’s API, which operates on a pay-per-use basis.

The CVSS indicates that anyone within the same shared physical or logical network could exploit the AI directly without needing any authentication scheme.

“This exposure of the AI’s system prompts and users’ email content would result in a total loss of confidentiality, severely compromising the privacy and security of users’ data,” Alshehri cautioned.

Audit Your Organization’s Applications for Vulnerabilities

Given the situation’s urgency, Schwake said, organizations should prioritize a comprehensive audit of all installed applications, specifically focusing on those using AI services and language models. “This audit should identify any applications similar to EmailGPT that rely on external API services and assess their security measures,” he said.

If any potentially vulnerable applications are discovered, remove them to prevent potential data breaches.

“Furthermore, organizations should enforce stringent policies regarding installing and using third-party applications, particularly those that interact with sensitive information,” Schwake said.

Developers of AI services, such as EmailGPT, must prioritize security at every stage of development, Schwake stressed. This involves thorough input validation to block malicious prompt injections, frequent security audits to detect and fix vulnerabilities, and comprehensive testing to guarantee the service’s ability to withstand different attack methods.

“It’s important to keep up with the latest security research and best practices in AI security,” Schwake said. “Additionally, developers should establish robust authentication and authorization measures to manage access to sensitive API endpoints and data.”

Photo credit: 84 Video on Unsplash


文章来源: https://securityboulevard.com/2024/06/prompt-injection-vulnerability-in-emailgpt-discovered/
如有侵权请联系:admin#unsafe.sh