Unmanaged software-as-a-service (SaaS) applications and AI tools are posing a growing security risk within organizations as vulnerabilities increase, according to a report from Grip Security.
The anonymized data, pulled from 29 million SaaS user accounts, 1.7 million identities, and 23,987 distinct SaaS applications, indicated that 90% of SaaS applications and 91% of AI tools remain unmanaged, creating significant gaps in security governance.
As SaaS adoption has surged, the number of applications used by enterprises has increased by 40% over the past two years.
Medium-sized companies led this growth, with a 47% rise in SaaS use, while small and large companies saw increases of 35% and 37%, respectively.
The report also revealed that the number of SaaS accounts per employee has risen by 85%, with employees using an average of 13 tools by early 2024, up from 7 in 2022.
Despite this increase, 73% of provisioned SaaS licenses remain unused, leading to overspending on software and highlighting a lack of effective license management.
Lior Yaari, Grip Security’s co-founder and CEO, cautioned that this lack of oversight in both SaaS and AI tools creates security blind spots, leaving organizations vulnerable to cyberattacks.
“SaaS has allowed every employee to procure their own IT using just their email address,” he said. “This decentralized IT model creates visibility gaps and compliance risks since it is not subject to the standard security assessments and controls that govern other IT systems.”
He explained that whenever a user creates an account in an unsanctioned app, it expands the company’s attack surface and creates a possible entry point for bad actors.
Misconfigurations, such as insufficient access controls or a lack of strong authentication requirements, further increase the likelihood of a breach.
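To make that risk concrete, the sketch below shows one way a security team might cross-check discovered SaaS accounts against a sanctioned-app list and flag weak authentication. It is illustrative only: the app names, the CSV layout and the mfa_enabled field are assumptions made for this example, not part of Grip Security’s report or product.

```python
import csv
from io import StringIO

# Hypothetical export of SaaS accounts discovered via corporate email
# scanning. The column layout is an assumption made for this example.
DISCOVERED = """app,owner,mfa_enabled
Salesforce,alice@example.com,true
NotesAppX,bob@example.com,false
DesignToolY,carol@example.com,false
"""

# Apps the security team has vetted and sanctioned (assumed list).
SANCTIONED = {"Salesforce"}

for row in csv.DictReader(StringIO(DISCOVERED)):
    unsanctioned = row["app"] not in SANCTIONED
    weak_auth = row["mfa_enabled"].lower() != "true"
    if unsanctioned or weak_auth:
        # Each hit is a potential entry point: an account outside the
        # standard security assessments, or one without strong auth.
        print(f"REVIEW: {row['app']} ({row['owner']}) "
              f"unsanctioned={unsanctioned} weak_auth={weak_auth}")
```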
“In addition to the visibility challenge, the sheer volume of user-procured SaaS also presents a tremendous challenge,” Yaari said.
He noted that even a mid-sized company with fewer than 100 employees would face the challenge of identifying and assessing the risk of hundreds of SaaS apps, and most companies lack the human capacity to do this effectively.
From Yaari’s perspective, one of the best ways to reduce licensing costs is not to pre-provision SaaS licenses for all employees.
“Many companies have a standard set of apps that are provisioned for every employee,” he said. “Our analysis found that 73% of employees don’t use some or all of their provisioned apps.”
Provisioned but unused accounts are especially vulnerable, since no one is likely to notice if a bad actor gains access to them.
“Provisioning licenses on a more just-in-time basis would reduce licensing costs and reduce security risks for the company,” he said.
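In practice, moving toward just-in-time provisioning starts with knowing which licenses sit idle. The following minimal sketch flags reclamation candidates; the license records, the fixed reference date and the 90-day threshold are assumptions for illustration, not figures from the report.

```python
from datetime import date, timedelta

# Hypothetical license records: (app, user, last_login); None = never used.
LICENSES = [
    ("CRM", "alice@example.com", date(2024, 5, 2)),
    ("CRM", "bob@example.com", None),
    ("Wiki", "carol@example.com", date(2023, 11, 20)),
]

IDLE_AFTER = timedelta(days=90)   # assumed reclamation threshold
TODAY = date(2024, 6, 1)          # fixed reference date for the example

for app, user, last_login in LICENSES:
    if last_login is None or TODAY - last_login > IDLE_AFTER:
        # Reclamation candidates: idle licenses cost money and, as noted
        # above, an unused account is an unwatched account.
        print(f"RECLAIM: {app} license for {user} (last login: {last_login})")
```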
AI Risks Include Data Ownership, Compliance
The report also revealed AI adoption is outpacing security governance by a 4:1 margin, with 80% of AI apps not managed through Security Assertion Markup Language (SAML) protocols.
Yaari added that AI poses many risks to companies, particularly around data security, ownership, compliance and output accuracy.
A major concern is the insecure handling of data in AI applications: many employees upload sensitive information to tools that lack the protection of security protocols like SAML.
“When apps are only secured by a username and password, the data becomes more vulnerable to breaches,” Yaari said.
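One way to quantify that exposure is to bucket each AI tool in an inventory by how sign-in is handled. The sketch below assumes a hypothetical inventory; the tool names and auth labels are invented, and the 80% result is contrived to mirror the report’s figure rather than derived from its data.

```python
from collections import Counter

# Hypothetical inventory mapping each AI tool to its sign-in method.
# Tool names and auth labels are invented for illustration.
AI_TOOLS = {
    "ChatAssistA": "password",      # username/password only
    "CodeHelperB": "saml",          # federated through the corporate IdP
    "SummarizerC": "password",
    "TranslatorD": "oauth_social",  # consumer "Sign in with ..." login
    "ImageGenE": "password",
}

counts = Counter(AI_TOOLS.values())
managed = counts.get("saml", 0)
total = len(AI_TOOLS)
print(f"{managed}/{total} AI tools federated via SAML "
      f"({100 * (total - managed) / total:.0f}% outside SSO governance)")

for tool, auth in AI_TOOLS.items():
    if auth != "saml":
        print(f"AT RISK: {tool} uses {auth}; data is guarded by credentials alone")
```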
This issue is exacerbated by the fact that many AI tools offer free versions, where users unknowingly grant third parties licenses to their data, potentially putting intellectual property at risk.
Compliance is another key challenge, especially as AI apps are increasingly used to process regulated data.
Many employees use AI without notifying IT or compliance teams, leaving companies exposed to potential violations of privacy laws.
“Compliance violations in the EU alone can result in fines of up to 35 million euros or 7% of annual revenue,” Yaari said, highlighting the financial stakes involved.
Furthermore, the potential for AI to produce inaccurate or biased outputs poses a significant risk, especially in sensitive industries like financial services.
“AI-generated decisions could lead to poor outcomes if the output is inaccurate or discriminatory,” Yaari said.