Until just a few years ago, the idea of using AI in our personal and professional lives seemed like science fiction. Now, various sectors are incorporating GenAI applications as part of their technology stack, promising advances in productivity, creativity, and efficiency.
From natural language models that generate text to deep learning algorithms that create realistic images and videos, GenAI seems poised for far-reaching implications. Beyond the veneer, the science behind GenAI is that it is driven by data—in the form of input prompts and outputs in responses.
This raises data security and governance concerns for organizations. The challenge in front of them: how to harness the power of GenAI while keeping all sensitive data secure. That’s where Data Security Posture Management (DSPM) comes in. DSPM allows organizations to discover and classify data, monitor data usage, and identify data risks, such as spotting sensitive data in locations where it shouldn’t be.
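To make the discover-and-classify idea concrete, here is a minimal, illustrative sketch of pattern-based data classification. The patterns, file paths, and category names are invented for the example; real DSPM products use far richer classifiers (trained models, validators such as Luhn checks, and context analysis), so treat this as a sketch of the concept rather than any vendor's implementation.

```python
import re

# Illustrative patterns only; production classifiers are far more robust.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(text):
    """Return the set of sensitive-data categories found in `text`."""
    return {name for name, rx in PATTERNS.items() if rx.search(text)}

def scan(files):
    """Classify each file; `files` maps path -> contents.

    Returns only files containing sensitive data, i.e. flags
    sensitive data in locations where it may not belong.
    """
    findings = {path: classify(body) for path, body in files.items()}
    return {path: cats for path, cats in findings.items() if cats}

# Hypothetical files on a public share, for demonstration.
demo = {
    "/share/public/readme.txt": "Welcome to the public share.",
    "/share/public/export.csv": "Jane Doe, 123-45-6789, jane@example.com",
}
print(scan(demo))  # flags export.csv with 'ssn' and 'email'
```

A real deployment would feed findings like these into monitoring and policy enforcement rather than simply printing them.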
As GenAI technology continues to advance rapidly, so do the data risks. That is why it’s critical for DSPM solutions to keep up with the speed of GenAI. Here are three GenAI advancements, their associated risks, and how DSPM mitigates them.
- How it works: With federated learning, AI models are trained in a decentralized way across multiple servers. Each server keeps its respective data locally without sharing it with the other servers training the AI.
- The data risk: Although federated learning is meant to mitigate data risks by preventing servers from sharing data, the local server training the AI model may still contain sensitive data that should not leave your organization, such as intellectual property (IP), PII, or PHI.
- How DSPM mitigates risk: With strong access controls to protect data, DSPM ensures that sensitive data is not ingested into AI models. This is done by controlling users’ permission rights to sensitive files and monitoring who accesses GenAI applications.
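The decentralized training described above can be sketched in a few lines. This is a toy federated-averaging round, not any vendor's implementation: each "server" fits a one-parameter model on its own private samples and shares only the resulting weight, never the raw data. The data and learning rate are assumptions made for the example.

```python
def local_train(weight, data, lr=0.1, steps=50):
    """One client: fit y = w * x by gradient descent on local data only."""
    for _ in range(steps):
        grad = sum(2 * (weight * x - y) * x for x, y in data) / len(data)
        weight -= lr * grad
    return weight

def federated_round(global_w, client_datasets):
    """Server averages locally trained weights; raw data never moves."""
    local_ws = [local_train(global_w, d) for d in client_datasets]
    return sum(local_ws) / len(local_ws)

# Three clients, each holding private samples of the rule y = 3x.
clients = [[(x, 3 * x) for x in range(1, 5)] for _ in range(3)]
w = 0.0
for _ in range(5):
    w = federated_round(w, clients)
print(round(w, 2))  # converges toward 3.0
```

The privacy caveat in the bullet above still applies: even though only weights travel, the local datasets themselves may hold sensitive data that should be classified and controlled before training begins.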
- How it works: Multimodal AI is typically used in text-to-image GenAI apps such as OpenAI’s DALL-E, where a user prompts the AI engine to generate images. It works by processing and learning from various data sources, such as images, text, and audio, to generate the response.
- The data risk: Data privacy can be a risk with multimodal AI, as it processes various types of data. Organizations need to be aware of what type of data is being used to train the model and prompt the AI engine, particularly data covered by privacy regulations such as GDPR, CCPA, and others.
- How DSPM mitigates risk: Users with the best intentions can inadvertently enter regulated data, such as PII, ITAR-controlled information, or cardholder data covered by PCI DSS, into GenAI apps. DSPM can identify this type of data, confirm whether it has been shared across different geographies, potentially violating data sovereignty laws, and show how many GenAI apps contain regulated data. Our GenAI Security solution can do this today with ChatGPT Enterprise. And we’ll continue to focus on adding more GenAI platforms in the future.
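The cross-geography check above can be illustrated with a small sketch. The log format, app names, and region labels are hypothetical, invented for the example; the point is only to show how flagging a data category seen in multiple geographies, and counting apps that hold regulated data, might work in principle.

```python
# Hypothetical share log: (genai_app, region, data_category).
shares = [
    ("chatgpt-enterprise", "eu-west", "pii"),
    ("chatgpt-enterprise", "us-east", "pii"),
    ("image-gen-app", "us-east", "pci"),
]

def sovereignty_flags(events):
    """Flag data categories seen in more than one geography."""
    regions_by_cat = {}
    for _app, region, category in events:
        regions_by_cat.setdefault(category, set()).add(region)
    return {cat: sorted(r) for cat, r in regions_by_cat.items() if len(r) > 1}

def apps_with_regulated_data(events):
    """Count how many distinct GenAI apps contain regulated data."""
    return len({app for app, _region, cat in events if cat})

print(sovereignty_flags(shares))       # PII appears in two regions
print(apps_with_regulated_data(shares))  # two apps hold regulated data
```

A real DSPM pipeline would derive such events from discovery scans and API integrations rather than a static list, and would map flags to the specific regulations involved.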
- How it works: Large-scale pretraining and fine-tuning are current methods used to train GenAI. They are both continually evolving and becoming more efficient as the AI engine ingests and learns from various types of data.
- The data risk: When a user asks a GenAI app a question, the response is derived, in one way or another, from data that was previously used for a prompt. As an example, a user at a financial institution might upload a file with credit card information, including customer names and other confidential data, asking the GenAI app to create a spreadsheet. This uploaded data trains and improves the AI model. If another user, say a student, asks GenAI for a spreadsheet example with credit card information, the response could be the actual data uploaded by the financial institution user.
- How DSPM mitigates risk: Today's DSPM solutions are engineered to provide greater visibility into the types of data being used for prompts. Our GenAI Security solution integrates with OpenAI’s ChatGPT Enterprise API. With this API-level access, our DSPM can provide detailed reports on GenAI usage: where users access these tools, which users violate privacy regulations, which users overshare sensitive data, and more, as it pertains to ChatGPT Enterprise usage. Our DSPM solution also provides visibility into the files uploaded into ChatGPT Enterprise. This gives organizations the ability to review the activity and design and implement stronger data security policies.
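The kind of usage report described above can be sketched as a simple aggregation over an upload audit log. The log schema, user names, file names, and the oversharing threshold here are assumptions made for illustration; a real integration would pull this data from the GenAI platform's API rather than a hardcoded list.

```python
from collections import Counter

# Hypothetical upload audit log: (user, file, sensitive_labels).
uploads = [
    ("alice", "q3-report.xlsx", {"pci"}),
    ("alice", "cards.csv", {"pci", "pii"}),
    ("bob", "notes.txt", set()),
]

def usage_report(log, threshold=2):
    """Summarize GenAI usage: uploads per user, who uploaded
    sensitive data at all, and who did so repeatedly."""
    per_user = Counter(user for user, _file, _labels in log)
    violators = sorted({u for u, _f, labels in log if labels})
    sensitive_counts = Counter(u for u, _f, labels in log if labels)
    oversharers = sorted(
        u for u, n in sensitive_counts.items() if n >= threshold
    )
    return {
        "uploads": dict(per_user),
        "violators": violators,
        "oversharers": oversharers,
    }

print(usage_report(uploads))
```

Reviewing a report like this is what lets security teams turn raw visibility into concrete policy, for example tightening permissions for repeat oversharers.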
Like all innovations, GenAI will continue to evolve, taking on new functionalities. Leveraging modern DSPM solutions will enable organizations to harness these functions safely, gaining competitive advantages while mitigating data and compliance risks.
With Forcepoint DSPM, organizations can accelerate their AI adoption with confidence, knowing that their sensitive data is secure. Unlike traditional DSPM solutions that cannot see what data is being used in any GenAI application, only Forcepoint has the capability to view and delete data stored in ChatGPT Enterprise chats, including chat histories.
Step up to a DSPM solution that operates at the speed of GenAI. Learn how Forcepoint can safeguard your AI transformation with GenAI Security.
Carlos Carvajal, Senior Product Marketing Manager at Forcepoint for SD-WAN and Advanced Threat Protection solutions, brings 15 years of expertise delivering enterprise solutions, including cloud security, AIOps, and industrial printing. He has held senior positions at IBM and Canon and holds an MBA...
Read more articles by Carlos Carvajal
Forcepoint is the leading user and data protection cybersecurity company, entrusted to safeguard organizations while driving digital transformation and growth. Our solutions adapt in real-time to how people interact with data, providing secure access while enabling employees to create value.
Source: https://www.forcepoint.com/blog/insights/forcepoint-dspm-operating-speed-genai