Protect AI announced today it has acquired Laiyer AI, a provider of open source software used to protect large language models (LLMs) from security threats, misuse and prompt injection attacks, and to manage risk and compliance requirements.
Laiyer AI’s LLM Guard software specifically addresses the detection, redaction and sanitization of inputs to and outputs from LLMs. LLM Guard also executes scans with much lower inference latency, which makes it possible to rely on commodity CPUs rather than more expensive graphics processing units (GPUs).
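For readers unfamiliar with how that scanning works in practice, the sketch below illustrates the input/output scanning pattern LLM Guard uses, based on the project's published examples; the specific scanner classes (PromptInjection, Toxicity, Sensitive) and function signatures are drawn from its documentation and may differ across versions.

```python
# Sketch of an LLM Guard-style scanning pipeline (illustrative; verify
# scanner names and signatures against the version you install).
from llm_guard import scan_prompt, scan_output
from llm_guard.input_scanners import PromptInjection, Toxicity
from llm_guard.output_scanners import Sensitive

input_scanners = [PromptInjection(), Toxicity()]  # screen user input before it reaches the model
output_scanners = [Sensitive()]                   # redact sensitive data in the model's response

prompt = "Summarize this customer email ..."
sanitized_prompt, input_valid, input_scores = scan_prompt(input_scanners, prompt)

# ... call the LLM with sanitized_prompt, then scan whatever it returns ...
response = "Here is the summary ..."
sanitized_response, output_valid, output_scores = scan_output(
    output_scanners, sanitized_prompt, response
)
```

Because the scanners run as lightweight models and rule-based checks, the pipeline can sit inline in front of an LLM without requiring GPU capacity of its own.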
That software will be made available as an extension to the Protect AI platform that organizations can opt to have commercially supported by Protect AI. The acquisition of Laiyer AI comes on the heels of the launch of a Protect AI gateway designed to make it simpler for organizations to enforce cybersecurity policies for AI models.
Protect AI president Daryan Dehghanpisheh said that approach will allow contributions to the open source project to continue while providing enterprise IT organizations with the extensions and the level of service and support they require to adopt any type of open source software. LLM Guard is already the default security scanner for open source projects such as LangChain, a tool many organizations rely on to manage the workflows associated with building applications on top of AI models.
The acquisition also accelerates the pace at which Protect AI plans to address issues such as compliance as AI regulations become increasingly stringent, noted Dehghanpisheh.
The European Union has been leading the way in terms of defining AI regulations. However, individual states within the U.S. have passed similar regulations, and it’s only a matter of time before the U.S. Congress passes legislation that limits how AI can be applied. Much like organizations rely on governance, risk and compliance (GRC) software to address other compliance requirements, the same capabilities will be required for AI initiatives, said Dehghanpisheh.
In fact, the rise of AI is likely to further accelerate an ongoing convergence of cybersecurity and compliance management that is already well underway in many organizations, he added.
In the meantime, the primary threats to AI models are malicious prompt injections; training data deliberately poisoned to, for example, produce erroneous outputs; and vulnerabilities in the software used to build AI models that cybercriminals might later exploit to activate malware. Most of these attacks can be traced back to stolen credentials that, in addition to compromising a model, also make it possible for cybercriminals to steal a copy of an AI model that represents a significant investment in intellectual property.
It’s still early days when it comes to defining best practices for machine learning security operations (MLSecOps), but it’s clear demand for AI cybersecurity expertise is rising. That presents cybersecurity professionals with new opportunities to apply their skill sets in an area where demand far exceeds the available supply. The challenge, of course, is that with each passing minute, cybercriminals are also becoming more adept at finding ways to compromise those AI models.