Why Legit Security Immediately Joined the New Coalition for Secure Artificial Intelligence (CoSAI)
August 7, 2024 | securityboulevard.com

Get details on CoSAI and why Legit chose to be a part of this forum.

Surging adoption of generative artificial intelligence (GenAI) and large language models (LLMs) is both revolutionizing software development and exposing organizations to risk at an unprecedented pace. This is why, as CTO and Co-Founder of Legit Security, I am proud to announce that we are the first application security posture management (ASPM) vendor to join the newly formed Coalition for Secure AI (CoSAI) — an independent industry forum founded by Google that’s dedicated to advancing comprehensive security measures for AI.

We’re at the dawn of an AI revolution that will profoundly transform the business landscape, especially the way we build and deliver software. According to Gartner®, Inc., by 2025, 80% of the product development lifecycle will make use of GenAI code generation, with developers acting as validators and orchestrators of back-end and front-end components and integrations. Nearly all of our enterprise customers, including many of the world’s largest development shops, have embraced AI and are leveraging it in various capacities throughout the software development lifecycle (SDLC) to build applications faster, more efficiently, and at unprecedented scale.

But as significant as AI’s positive effects are, so too are its risks.

The Rising Tide of AI Threats and Cyber Risk

The reality is that the vast majority of organizations aren’t prepared to secure or mitigate the risks of rapid AI adoption. In a recent survey of technology leaders, IBM found that only 24% of new AI projects actually contained a security component. And the implications of this could be dire unless organizations urgently prioritize security controls and best practices.

We’re already seeing just how vulnerable organizations are when AI models aren’t properly monitored, secured, or managed. Developers are mistakenly leveraging malicious AI models available on open-source registries (e.g., Hugging Face) in their own software projects. Moreover, many LLMs and AI models contain bugs and vulnerabilities that can enable AI supply chain attacks, like the AI Jacking vulnerability Legit discovered earlier this year. Every day brings more reports of AI security vulnerabilities, from prompt injection to inadvertent data disclosure to poor implementations and misconfigurations of LLMs in applications.
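
To make the malicious-model risk concrete, here is a minimal, hypothetical sketch (not from the original post) of why deserializing an unvetted model artifact is dangerous: classic pickle-based checkpoint formats execute code at load time, which is exactly what a malicious model on a public registry can abuse. The MaliciousPayload class is purely illustrative.

```python
# A minimal, hypothetical sketch of why loading an unvetted model artifact
# is dangerous. Classic pickle-based checkpoint formats (e.g., PyTorch
# .pt/.pth files) can execute arbitrary code at load time; the payload
# class below is a stand-in for what a malicious "model" could contain.

import pickle


class MaliciousPayload:
    # __reduce__ tells pickle how to "reconstruct" this object. An attacker
    # can make it call any function; print() is used here for safety, but a
    # real payload would call something like os.system.
    def __reduce__(self):
        return (print, ("!!! attacker-controlled code ran at model load time",))


# The attacker serializes the payload and uploads it disguised as weights...
tainted_artifact = pickle.dumps(MaliciousPayload())

# ...and it executes the moment a developer deserializes the "model"
# (torch.load on an untrusted checkpoint behaves the same way).
pickle.loads(tainted_artifact)
```

This is one reason vetting model sources matters, and why the ecosystem has been shifting toward non-executable weight formats such as safetensors.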

AI security risks go well beyond open-source AI. Leading providers of commercial and proprietary AI products have experienced their fair share of security setbacks. For instance, OpenAI disclosed a vulnerability last year in ChatGPT’s information-collection capabilities that attackers could exploit to obtain customers’ secret keys and root passwords.

Why Legit Security Joined CoSAI  

At Legit Security, we’re on a mission to secure the world’s software. By joining CoSAI, we’re aligning ourselves with a trailblazing coalition of industry-leading organizations that share our commitment to robust AI security. CoSAI’s focus on the following three key areas of AI security resonates deeply with our own strategic objectives:

  1. Software supply chain security for AI systems: As AI models become integral to software development, understanding their provenance and managing third-party model risks are critical. CoSAI’s efforts to extend SLSA and its provenance framework to AI models will provide the necessary guidance to evaluate the security of AI software throughout its lifecycle (a sketch of what such a provenance statement might look like follows this list).
  2. Preparing defenders for a changing cybersecurity landscape: The complexity of AI security concerns requires a specialized framework to help security practitioners navigate these challenges. CoSAI’s development of a defender’s framework will equip organizations with the tools and strategies needed to mitigate AI-related risks effectively.
  3. AI security governance: Establishing a robust governance structure around AI security is essential. CoSAI’s work in developing a taxonomy of risks, a checklist, and a scorecard will empower organizations to assess, manage, and monitor the security of their AI products comprehensively.
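
To ground the first focus area, below is a minimal sketch of what a SLSA-style provenance statement for a model artifact could look like, assuming the in-toto Statement layout and the SLSA v1 predicate. The model name, dataset, builder ID, and weight bytes are all hypothetical.

```python
# A minimal sketch of a SLSA-style provenance statement for an AI model
# artifact, using the in-toto Statement layout and the SLSA v1 predicate.
# All names below (model, dataset, builder) are hypothetical.

import hashlib
import json

# Stand-in bytes for a trained model artifact; in practice this would be
# the on-disk weights file (e.g., a .safetensors file).
fake_weights = b"example model weight bytes"

provenance = {
    "_type": "https://in-toto.io/Statement/v1",
    "subject": [
        {
            "name": "example-org/example-model",  # hypothetical artifact name
            "digest": {"sha256": hashlib.sha256(fake_weights).hexdigest()},
        }
    ],
    "predicateType": "https://slsa.dev/provenance/v1",
    "predicate": {
        "buildDefinition": {
            # Hypothetical training pipeline that produced the weights.
            "buildType": "https://example.com/train-pipeline/v1",
            "externalParameters": {"dataset": "example-dataset@v3"},
        },
        "runDetails": {
            # Hypothetical builder identity (e.g., a hardened CI trainer).
            "builder": {"id": "https://example.com/builders/trainer@v1"},
        },
    },
}

print(json.dumps(provenance, indent=2))
```

A consumer could then recompute the sha256 digest of the weights it downloads and refuse to load the model unless it matches the attested subject.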

Moving Forward 

As AI continues to evolve, it’s imperative that the industry’s risk mitigation strategies advance together with it. Legit is committed to contributing to CoSAI’s mission and collaborating with industry peers to ensure the secure implementation, training, and use of AI. This way, all organizations, including our customers, are equipped with the latest guidance and tooling to safeguard their environments today and far into the future. 

This is also why Legit is prioritizing the development of AI security capabilities throughout our ASPM platform, including our announcement earlier today introducing the Legit AI Security Command Center.

Together, we can build a safer, more secure future for AI in software development. We invite you to join us in this pivotal endeavor.

Additional Resources 

  1. Check out our latest release announcing the Legit AI Security Command Center.
  2. Learn more about CoSAI by visiting coalitionforsecureai.org.
  3. Read Legit’s new research on The State of GitHub Actions Security.
  4. See the Legit AI Security Command Center in action; schedule a demo today.

*** This is a Security Bloggers Network syndicated blog from Legit Security Blog authored by Liav Caspi. Read the original post at: https://www.legitsecurity.com/blog/why-legit-joined-coalition-for-secure-ai-cosai
