Sysdig Extends CNAPP Reach to AI Workloads
April 30, 2024 | Source: securityboulevard.com

Sysdig today extended its cloud-native application protection platform (CNAPP) to enable cybersecurity teams to identify suspicious activity involving artificial intelligence (AI) workloads.

Maya Levine, a product manager for Sysdig, said the goal is to identify and rank the risk levels of workloads that are publicly exposed to the internet and might contain exploitable vulnerabilities.
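
To make that prioritization concrete, the toy sketch below ranks workloads by combining internet exposure with the most severe known vulnerability. It is purely illustrative and not Sysdig's actual scoring logic; the Workload class, the weighting, and the sample data are all assumptions.

```python
from dataclasses import dataclass, field

# Toy prioritization sketch, not Sysdig's algorithm: rank workloads by
# combining internet exposure with the worst known vulnerability.
@dataclass
class Workload:
    name: str
    internet_exposed: bool
    cvss_scores: list[float] = field(default_factory=list)  # severities of known CVEs

def risk_score(w: Workload) -> float:
    worst = max(w.cvss_scores, default=0.0)        # most severe known vulnerability
    exposure = 2.0 if w.internet_exposed else 1.0  # assumption: exposure doubles the weight
    return worst * exposure

fleet = [
    Workload("inference-api", internet_exposed=True, cvss_scores=[9.8, 5.4]),
    Workload("batch-trainer", internet_exposed=False, cvss_scores=[9.8]),
]
for w in sorted(fleet, key=risk_score, reverse=True):
    print(f"{w.name}: {risk_score(w):.1f}")  # the exposed workload ranks first
```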

The Sysdig CNAPP is based on the open source Falco threat detection software that is being advanced under the auspices of the Cloud Native Computing Foundation (CNCF). Now that AI models are used more often in production environments, cybercriminals are probing for weaknesses. A recent Sysdig report finds that since December 2023 the deployment of OpenAI packages has nearly tripled. Of the GenAI packages currently deployed, OpenAI makes up 28%, followed by Hugging Face’s Transformers at 19%, Natural Language Toolkit (NLTK) at 18% and TensorFlow at 11%, the report notes.
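
Falco detections are written as YAML rules evaluated over system-call events. The rule below is a hypothetical sketch for illustration, not one shipped by Sysdig or Falco: it flags an interactive shell spawned inside a container whose image name suggests a GenAI framework. The image-name matches are assumptions for the example; spawned_process and container are macros from Falco's default ruleset.

```yaml
# Hypothetical rule for illustration only, not a shipped Sysdig/Falco rule.
- rule: Shell Spawned in GenAI Workload Container
  desc: >
    Detect an interactive shell starting inside a container that appears
    to be running a generative AI framework.
  condition: >
    spawned_process and container
    and proc.name in (bash, sh, zsh)
    and (container.image.repository contains "tensorflow"
         or container.image.repository contains "transformers"
         or container.image.repository contains "openai")
  output: >
    Shell spawned in GenAI workload container (user=%user.name
    command=%proc.cmdline image=%container.image.repository)
  priority: WARNING
  tags: [ai, container, shell]
```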

The overall goal is to enable cybersecurity teams to work more closely with data science teams, which typically don't have much cybersecurity expertise. As a result, data science teams are even more inclined to make mistakes than application developers, who have increasingly been adopting DevSecOps best practices to ensure applications are secure before they are deployed.

A Need to Secure AI Models

In fact, the National Security Agency (NSA), in collaboration with multiple security and law enforcement agencies from around the world, recently shared an advisory that, in addition to identifying a need to secure AI models, describes the guardrails that should be in place to ensure services such as ChatGPT are used safely.

A sample screen from Sysdig.

Organizations embracing AI must ensure the data they share via prompts with a generative AI platform is not being monitored by cybercriminals. Much of that information could easily find its way into a phishing attack, one that would be even more credible if it appeared to be related to something someone is actually working on. Organizations also need to make sure that sensitive data, such as credit card numbers, is not shared with generative AI platforms that might inadvertently use it to train the next iteration of an AI model.
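
As a minimal illustration of that kind of prompt hygiene, the sketch below strips card-number-like strings from a prompt before it leaves the organization. The pattern, function name, and placeholder are assumptions for the example, not part of any vendor SDK; a production deployment would rely on a proper data loss prevention engine.

```python
import re

# Illustrative only: a naive filter that redacts card-number-like strings
# from a prompt before it is sent to a generative AI service.
CARD_PATTERN = re.compile(r"\b\d(?:[ -]?\d){12,15}\b")  # 13-16 digits, optional separators

def redact_prompt(prompt: str) -> str:
    """Replace anything that looks like a payment card number with a placeholder."""
    return CARD_PATTERN.sub("[REDACTED-PAN]", prompt)

print(redact_prompt("Customer 4111 1111 1111 1111 reported a billing issue."))
# -> Customer [REDACTED-PAN] reported a billing issue.
```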

Cybersecurity Challenges

For organizations building or customizing AI models, the cybersecurity challenges are even more demanding. Much like any other application, the components used to build an AI model might have vulnerabilities that could be exploited after the model is deployed in a production environment. In addition, cybercriminals might use stolen credentials to try to poison a model by exposing it to false data that deliberately causes it to hallucinate, or they might adjust the weights that determine what output is generated.
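
A common defense against weight tampering is to pin model artifacts to a digest recorded at training time and refuse to load anything that does not match. The sketch below is a generic integrity check, not a Sysdig feature; the digest constant and function names are assumptions for the example.

```python
import hashlib
from pathlib import Path

# Assumption for the example: the SHA-256 digest recorded when the model was trained.
EXPECTED_SHA256 = "replace-with-digest-recorded-at-training-time"

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the weight file in chunks so large artifacts never load fully into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: Path) -> None:
    """Refuse to load weights whose digest does not match the recorded value."""
    if sha256_of(path) != EXPECTED_SHA256:
        raise RuntimeError(f"{path}: digest mismatch; weights may have been tampered with")
```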

Finally, an AI model is arguably among the most valuable intellectual property an organization is likely to have. Rather than sabotaging the AI, cybercriminals might simply opt to steal it altogether.

Each organization needs to determine how best to secure AI tools and platforms. But the number of cybersecurity professionals with AI expertise is limited. In the short term at least, cybercriminals might have the upper hand until existing cybersecurity tools and platforms are updated.

Original article: https://securityboulevard.com/2024/04/sysdig-extends-cnapp-reach-to-ai-workloads/