55% of Generative AI Inputs Include Sensitive Data: Menlo Security
February 14, 2024 | securityboulevard.com

The rapid growth in the number of generative AI tools and platforms, and their expanding adoption by organizations, is reviving long-held concerns about the security and privacy threats posed by the technology. A report released today by Menlo Security gives those concerns fresh weight.

The cybersecurity firm found that, despite repeated warnings from their organizations, people continue to input sensitive information into generative AI tools. Over a 30-day period, 55% of the data loss prevention (DLP) events detected by Menlo analysts involved people entering personally identifiable information (PII), and another 40% involved confidential documents.

A scattering of other such incidents included restricted, medical, and payment card information, according to the report, “The Continued Impact of Generative AI on Security Posture.”
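The report doesn't describe how these DLP events are detected, but a minimal sketch of the kind of pattern-based check such a tool might run on outbound prompts could look like the following. The PII_PATTERNS table, the scan_prompt helper, and the patterns themselves are illustrative assumptions, not Menlo's implementation:

```python
import re

# Illustrative patterns only -- a hypothetical sketch, not Menlo's engine.
# Real DLP detection adds validation, context analysis, and ML classifiers.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the PII categories detected in a prompt bound for a GenAI tool."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    prompt = "Draft a letter for John, SSN 123-45-6789, reachable at john@example.com"
    if hits := scan_prompt(prompt):
        print(f"DLP event: prompt blocked, contains {', '.join(hits)}")
```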

“The data loss implications around generative AI are well documented and while many organizations have implemented corporate policies, we’re still seeing data loss events on generative AI platforms,” the report’s authors wrote. “Most organizations have sent out policies to their employees on responsible use of generative AI, however, this data illustrates how employees knowingly or unknowingly still attempt to input sensitive information into these platforms.”

The numbers highlight the need to supplement such policies with cybersecurity tools, they wrote.

An Ongoing Concern

Security and privacy concerns about generative AI arose soon after OpenAI jumpstarted the embrace of the technology by launching ChatGPT in November 2022. There were worries about the data users were inputting into the platforms, the leaking of corporate, proprietary, or sensitive data used to train the large language models (LLMs) behind ChatGPT and similar products, and the use of generative AI by cybercriminals in their nefarious activities.

In particular, the use of generative AI in phishing attacks is a major worry, according to the researchers. Bad actors can use ChatGPT and similar tools to improve the quality of their phishing emails, smoothing out the misspellings, bad grammar, and awkward phrasing that typically have been warning signs to those targeted by the hackers. Cybersecurity firm SlashNext found that in the 12 months after ChatGPT hit the scene, there was a 1,265% jump in phishing emails sent, including a 967% increase in credential phishing.

That said, generative AI also is increasingly being used by cybersecurity professionals in such areas as threat hunting, incident response, policy management, and augmenting existing security teams.

There also were concerns about data breaches, which came into focus in March 2023 when OpenAI disclosed an incident that exposed the personal and payment information of 1.2% of ChatGPT Plus subscribers. Other incidents include Samsung employees inadvertently leaking company secrets by entering them into ChatGPT, after which the company banned use of the tool in its operations.

Researchers with cybersecurity firm Group-IB in June 2023 said they had found as many as 100,000 compromised ChatGPT user accounts for sale on the dark web.

The Changing Generative AI Landscape

In the year-plus since ChatGPT was released, the number of generative AI platforms has grown significantly, and organizations, looking to customize ChatGPT and other generative AI tools to better address their specific needs, are training their own models, Negin Aminian, senior manager of cybersecurity strategy at Menlo, wrote in a blog post accompanying the report.

“We wanted to find out the impact GenAI is having on enterprise security postures,” Aminian wrote. “We analyzed GenAI interactions from 500 global organizations.”

While there are reports about a decrease in generative AI use generally, there’s been “significant” growth and use within enterprises, Menlo researchers wrote.

“This insight might highlight the differences between business usage versus personal usage,” they wrote. “In a business setting, generative AI could help create new ideas, improve emails, create content, and check for spelling and grammar mistakes.”

Understanding the market’s evolving nature is critical to striking a balance between productivity and security. In the last six months, the flow of investment money into the space has fueled increases in the number of AI platforms and specializations, they wrote. There is also a trend of organizations tailoring models to their specific needs, starting from available platforms that they then fine-tune with private and corporate data.

The exponential growth in the use of generative AI that was seen in the months following ChatGPT’s release has slowed, though growth continues.

A Focus on Security

The researchers found that companies are instituting more security policies: 92% of the organizations examined have done so, while only 8% allow unrestricted use of generative AI platforms.

One complication is that employees input data through multiple channels. Typing is the most common, followed by file uploads and copy-and-paste, and the incidence of data lost via file uploads is increasing.

“Previously, most solutions did not natively allow file uploads, but as new versions of generative AI platforms are released, new features are added, such as the ability to upload a file,” the researchers wrote, adding that “copy & pasting and file uploads could have the largest impact on data loss due to the amount of data that is quickly uploaded or inputted.”
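Neither the report nor Menlo describes a specific control for this, but one hedged illustration of intercepting a file before it reaches a generative AI platform might look like the sketch below. The SENSITIVE_MARKERS list and the inspect_upload helper are hypothetical:

```python
from pathlib import Path

# Hypothetical markers and helper name, for illustration only. Production
# DLP parses document formats (PDF, DOCX) instead of treating files as text.
SENSITIVE_MARKERS = ("CONFIDENTIAL", "INTERNAL ONLY", "DO NOT DISTRIBUTE")

def inspect_upload(path: Path) -> bool:
    """Return True if a file looks safe to upload to a GenAI platform."""
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return False  # fail closed: block anything that can't be read
    return not any(marker in text.upper() for marker in SENSITIVE_MARKERS)
```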

In addition, given that the generative AI space continues to evolve with new platforms and functionality, security teams that apply security policies on a domain-by-domain basis should review their lists frequently to ensure users aren't accessing or exposing sensitive data on more obscure platforms.

“This process can be time consuming and ultimately will not scale,” the Menlo researchers wrote. “Organizations need to adopt security technology that enables policy management on a generative AI group level, providing protection against a broader cross-section of generative AI sites.”
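As a rough illustration of the difference, a per-domain approach hard-codes each site, while a group-level policy keys controls to a category so newly classified GenAI domains inherit them automatically. The sketch below is an assumption about how such a lookup could work; the domain list, the "generative_ai" label, and the policy fields are illustrative, not any vendor's schema:

```python
# Hypothetical sketch: in practice the category mapping would come from a
# continuously updated URL-classification feed, not a hand-kept dict.
DOMAIN_CATEGORIES = {
    "chat.openai.com": "generative_ai",
    "gemini.google.com": "generative_ai",
    "claude.ai": "generative_ai",
}

GROUP_POLICY = {
    "generative_ai": {"allow": True, "block_file_uploads": True, "dlp_scan": True},
}

def controls_for(domain: str) -> dict:
    """Resolve controls by category, so newly classified GenAI sites
    inherit the group policy without a per-domain rule being written."""
    category = DOMAIN_CATEGORIES.get(domain, "uncategorized")
    return GROUP_POLICY.get(category, {"allow": False})
```

The point of the category lookup is that the security team maintains one policy object rather than an ever-growing list of per-domain rules.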

Striking a Balance

All of this is forcing enterprises to balance the benefits of generative AI with the dangers, according to Menlo’s Aminian.

“Despite the difficulties, GenAI will keep growing and become common in almost every business area,” she wrote. “This will put increased pressure on security teams to make sure they have the technology and policies in place to enable the safe use of these GenAI tools. However, security shouldn’t come at the expense of productivity. Organizations need to ensure the safe use of these new tools without limiting their groundbreaking innovations.”

Source: https://securityboulevard.com/2024/02/55-of-generative-ai-inputs-include-sensitive-data-menlo-security/