Preventing PII Leakage through Text Generation AI Systems
2023-12-8 09:35:24 Author: securityboulevard.com(查看原文) 阅读量:6 收藏

Do an online search for ways to bypass text generation AI security filters, and you will find page after page of real examples and recommendations on how one can trick them into giving you information that was supposed to be blocked. This remains true despite continuous efforts to improve these filters to the point where these efforts seem practically sisyphean.

The reality is that too many possible ways exist to create prompts for security filters to catch all possible bypasses. Today the bypass may be a prompt that asks for a story, tomorrow it may be a haiku. But, what makes the security bypasses of text generation AI systems particularly problematic is that anyone can do it. Prior to generative AI, hacking a computing system required at least a decent knowledge of basic computing concepts. With generative AI, anyone able to write and some free time can take a shot, and when they succeed, the myriad of social media outlets provide venues for a victory lap as they share with others one more way to trick the system.

In the context of generative AI systems that have access to sensitive or regulated data, this just means that security filters will do little to prevent data leakage, and that they provide virtually no value in meeting compliance requirements.

The only guaranteed way to avoid disclosure of PII and other similarly regulated data in GenAI systems is to make sure that PII was never ingested into the system at all. Or, put it another way, such data should never be made available to the system without being de-identified first. This way, the GenAI can never disclose PII simply because it cannot reveal what it never knew. And, if only the specific PII values in the data are de-identified, the impact on the utility of the GenAI system is minimal, if even noticeable.

While de-identifying PII at a field-level seems daunting at first, many enterprises are already doing this to meet compliance requirements today. Those who have not yet done so likely assumed that it would require onerous code changes. Fortunately, this is where Baffle can help. Baffle provides a proxy based solution for implementing field-level encryption, tokenization, and masking that de-identifies PII values without any application code change. This makes it incredibly easy to ensure that data privacy compliance requirements are met whether the data is used downstream for analytics or GenAI systems.

With Baffle, there is a clear path forward for enterprises to use GenAI in a compliant way, even if the dataset includes PIIs and other regulated data. If you have security concerns about your own upcoming GenAI projects, our sales team can help you identify the best way to address them.

The post Preventing PII Leakage through Text Generation AI Systems appeared first on Baffle.

*** This is a Security Bloggers Network syndicated blog from Baffle authored by Min-Hank Ho, VP Product Management. Read the original post at: https://baffle.io/blog/preventing-pii-leakage-through-text-generation-ai-systems/


文章来源: https://securityboulevard.com/2023/12/preventing-pii-leakage-through-text-generation-ai-systems/
如有侵权请联系:admin#unsafe.sh