Nudge Security today extended the scope of its namesake security and governance platform to monitor sensitive data shared with artificial intelligence (AI) services via uploads and integrations, and to identify the individuals sharing that data by department or by the specific tools used.
In addition, Nudge Security is now making it possible to create playbooks to automatically enforce governance policies whenever there is an incident.
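The announcement does not detail how such playbooks are defined, but the general pattern is straightforward: take an incident describing who shared what with which AI service, nudge the user about policy, and restrict access only when the data is sensitive and the tool is unapproved. The following is a minimal sketch of that pattern, assuming a generic incident feed; the Incident fields, APPROVED_AI_TOOLS allow-list, and helper functions are hypothetical and are not Nudge Security's actual API.

```python
# Hypothetical governance playbook sketch (not Nudge Security's API).
from dataclasses import dataclass

APPROVED_AI_TOOLS = {"ChatGPT Enterprise", "GitHub Copilot"}  # example allow-list

@dataclass
class Incident:
    user_email: str
    department: str
    tool: str              # AI service the data was shared with
    data_labels: set       # e.g. {"PII", "source_code"}

def notify_user(email: str, message: str) -> None:
    # Placeholder for an email/Slack "nudge" reminding the user of policy.
    print(f"NUDGE -> {email}: {message}")

def revoke_access(email: str, tool: str) -> None:
    # Placeholder for revoking the OAuth grant or SSO session for the tool.
    print(f"REVOKED access to {tool} for {email}")

def run_playbook(incident: Incident) -> None:
    """Nudge first; restrict only when sensitive data meets an unapproved tool."""
    if incident.tool in APPROVED_AI_TOOLS:
        return  # sanctioned tool, nothing to enforce
    notify_user(
        incident.user_email,
        f"{incident.tool} is not an approved AI tool; see the acceptable-use policy.",
    )
    if {"PII", "source_code"} & incident.data_labels:
        revoke_access(incident.user_email, incident.tool)

run_playbook(Incident("dev@example.com", "Engineering", "UnknownAI", {"source_code"}))
```

In practice the escalation thresholds would map to an organization's own policy; the nudge-before-restrict ordering mirrors the approach Blasco describes below.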
Nudge Security CTO Jaime Blasco said these capabilities build on a security posture platform originally developed to detect usage of software-as-a-service (SaaS) applications and later extended to AI services. The ability to provide guardrails for employees as they interact with AI tools now also reaches the browser level, in addition to the client software used to access SaaS applications.
That capability is more crucial than ever at a time when sensitive data is regularly being shared with unsanctioned shadow AI services, he added.
For now, organizations are opting to remind employees of their acceptable use policies, but they also have the option to restrict access, noted Blasco. In general, the more regulated the industry, the more aggressively governance policies are likely to be enforced, he added.
It’s not clear to what degree cybersecurity teams are proactively enforcing such policies at a time when many employees are being actively encouraged to use AI tools. However, there have already been a number of cybersecurity incidents involving AI platforms, which makes it all but inevitable there will be more. The challenge is that sensitive data involving intellectual property or personally identifiable information (PII) can inadvertently surface in the output generated by an AI service that has been retrained on that data.
Nudge Security does provide condensed summaries of data training policies that highlight how each vendor uses, retains and handles data, but mistakes will always be made as more AI tools and platforms are employed. In fact, Nudge Security research has found that an average of 39 unique AI tools are already in use per organization.
In total, more than 1,500 unique AI tools have been discovered across a variety of organizations, with more than 50% of SaaS applications having a large language model (LLM) provider in their data subprocessor list, according to Nudge Security.
According to Nudge Security, an average of 70 OAuth grants have been made per employee, many of which enable data sharing.
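Many of those grants matter because the scopes they carry let a third-party app pull corporate data on the user's behalf. As a rough illustration, a minimal sketch of triaging an exported list of grants by scope follows; the scope strings are real Google API scopes, but the export format, example apps and review threshold are assumptions, not Nudge Security's implementation.

```python
# Hypothetical triage of OAuth grants by data-sharing scope.
DATA_SHARING_SCOPES = {
    "https://www.googleapis.com/auth/drive",           # full Drive access
    "https://www.googleapis.com/auth/drive.readonly",  # read all Drive files
    "https://www.googleapis.com/auth/gmail.readonly",  # read all mail
}

# Example export of grants; real data would come from the identity provider.
grants = [
    {"user": "alice@example.com", "app": "SummarizeAI",
     "scopes": ["https://www.googleapis.com/auth/drive.readonly"]},
    {"user": "bob@example.com", "app": "CalendarBot",
     "scopes": ["https://www.googleapis.com/auth/calendar.events.readonly"]},
]

# Flag grants whose scopes allow an external app to read corporate data.
flagged = [g for g in grants if DATA_SHARING_SCOPES & set(g["scopes"])]
for g in flagged:
    print(f"Review: {g['app']} was granted data-sharing scope(s) by {g['user']}")
```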
Those issues will only become more pronounced as organizations adopt more AI agents, an issue that Nudge Security plans to address in a future release.
Fresh off raising an additional $22.5 million in funding, Nudge Security claims annual recurring revenue has increased by a factor of three in the last 12 months. Of course, when it comes to securing SaaS applications and AI services, there is no shortage of competitors. The only thing that remains to be resolved now is how to ensure security at a time when every employee seems bound and determined to share as much data as possible with little to no regard for the implications.