The security industry does a great job of sensationalizing attacks. That habit grew out of a push for awareness, with edge cases treated as realized threats. This framing drives plenty of discussion around AI-driven cybersecurity threats, but it makes the actual risks hard to gauge, and certain attack vectors reveal a more nuanced reality. Injection attacks, for example, have received considerable attention, often leading to an overestimation of their immediate impact.
The speed and sophistication with which attackers are leveraging genAI to enhance social engineering might be surprising to some, but we have seen this kind of speed before. What is new is the impact: using AI for voice impersonation and data manipulation leads to critical consequences. A notable incident involved Marco Rubio, where a deepfake of his voice was used to create fake messages and join calls over Signal. It perfectly illustrates how AI can execute social engineering attacks with an unsettling degree of realism, making it difficult to distinguish genuine communications from malicious ones. This capability extends beyond voice, impacting text and other communication mediums, raising serious concerns about information integrity and identity verification.
While social engineering itself isn’t new, the inherent complexity of AI systems makes attributing attacks to specific threat actors a formidable challenge. Add the rapid evolution of AI-driven tactics, and security teams are constantly playing catch-up, struggling to identify whether an attack originates from a known group like Scattered Spider or a new, yet-to-be-named entity employing similar techniques.
Beyond attack vectors, organizations face a significant challenge in simply managing the proliferation of AI within their own environments, particularly in defining and categorizing AI systems.
Many existing security platforms and frameworks are ill-equipped to accurately classify what constitutes an AI application, especially when AI capabilities are subtly embedded within broader software solutions. For instance, if the Associated Press discloses its use of AI for content generation or aggregation, is it now considered an “AI system” requiring specific security governance? This ambiguity complicates efforts to apply appropriate security controls.
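To make the classification question less abstract, here is a minimal sketch of how an application inventory might flag governance tiers. The field names, thresholds, and tiers are assumptions for illustration, not an established framework.

```python
from dataclasses import dataclass


@dataclass
class AppAIProfile:
    """Minimal inventory record for deciding whether an application
    falls under AI-specific governance (illustrative fields only)."""
    name: str
    generates_content: bool          # produces text/images/audio via a model
    embeds_third_party_model: bool   # vendor ships an LLM or similar inside the product
    touches_regulated_data: bool     # customer, financial, or health data in scope


def governance_tier(app: AppAIProfile) -> str:
    """Map a profile to a rough governance tier; the thresholds are assumptions."""
    uses_ai = app.generates_content or app.embeds_third_party_model
    if uses_ai and app.touches_regulated_data:
        return "full AI governance review"
    if uses_ai:
        return "lightweight AI registration"
    return "standard application review"


# Example: a newsroom tool that uses AI for aggregation but handles no regulated data
print(governance_tier(AppAIProfile("news-aggregator", True, True, False)))
```

Even a crude rubric like this forces a team to state which attributes actually trigger AI-specific review, which is where most of the ambiguity lives today.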
This challenge is further exacerbated by the rise of “shadow AI.” Similar to the shadow IT issues seen in the early days of cloud adoption, employees are increasingly using unmanaged AI tools and services, often without the knowledge or approval of IT or security departments. This creates a massive blind spot, as these unmanaged AI applications can expose sensitive data and create significant vulnerabilities. Without proper governance and security, shadow AI will inevitably lead to security incidents. The problem now extends to AI agents, an expanding frontier of undocumented digital workers. The convenience and accessibility of AI tools often outweigh perceived security risks for users, creating a continuous cycle of adoption and potential exposure.
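One practical starting point for shrinking that blind spot is mining egress logs for traffic to known AI services. The sketch below assumes a CSV proxy log with “user” and “host” columns and a hand-maintained domain list; both are placeholders rather than a vetted catalog.

```python
import csv
from collections import defaultdict

# Illustrative list only; a real program would maintain a curated,
# regularly updated catalog of AI service domains.
AI_SERVICE_DOMAINS = {
    "api.openai.com",
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
}


def find_shadow_ai(proxy_log_path: str) -> dict[str, set[str]]:
    """Scan an egress proxy log (assumed CSV with 'user' and 'host' columns)
    and report which users reached known AI services."""
    hits: dict[str, set[str]] = defaultdict(set)
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["host"].lower()
            if host in AI_SERVICE_DOMAINS:
                hits[row["user"]].add(host)
    return hits


if __name__ == "__main__":
    # "proxy_egress.csv" is a hypothetical export from an egress proxy.
    for user, hosts in find_shadow_ai("proxy_egress.csv").items():
        print(f"{user}: {', '.join(sorted(hosts))}")
```

Detection alone doesn’t close the governance gap, but it turns an invisible problem into an inventory a security team can act on.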
To effectively counter these evolving threats, organizations must adapt their security programs. A key area for transformation lies in rethinking approval workflows for AI usage. Traditional security models, often characterized by binary yes/no or allow/block decisions, are too rigid for the dynamic nature of AI. Instead, security teams need to embrace more flexible approaches, potentially including opt-in/opt-out models for certain AI functionalities, especially when customer or regulated data is involved. One strategy I’m implementing with my team is setting clear parameters for what users can and can’t use. The goal is to shift security from being a bottleneck that impedes innovation to becoming an enabler of secure AI adoption, with clear lines between in-bounds and out-of-bounds activities.
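As a sketch of what those parameters could look like in practice, the snippet below encodes an opt-in/opt-out policy as a small lookup keyed by tool category and data classification. The categories, classifications, and default behavior are hypothetical and would vary by organization.

```python
from enum import Enum


class Decision(Enum):
    ALLOW = "allow"
    OPT_IN = "allowed with explicit opt-in and review"
    BLOCK = "block"


# Hypothetical policy table: (tool category, data classification) -> decision.
POLICY = {
    ("code_assistant", "public"):    Decision.ALLOW,
    ("code_assistant", "internal"):  Decision.OPT_IN,
    ("code_assistant", "regulated"): Decision.BLOCK,
    ("chatbot",        "public"):    Decision.ALLOW,
    ("chatbot",        "internal"):  Decision.OPT_IN,
    ("chatbot",        "regulated"): Decision.BLOCK,
}


def evaluate(tool_category: str, data_classification: str) -> Decision:
    """Return the policy decision, defaulting to opt-in review for
    combinations the table does not yet cover."""
    return POLICY.get((tool_category, data_classification), Decision.OPT_IN)


print(evaluate("chatbot", "regulated").value)   # block
print(evaluate("image_gen", "internal").value)  # falls back to opt-in review
```

Defaulting unknown combinations to an opt-in review rather than a hard block keeps security out of the bottleneck role while still forcing a conversation before regulated data is involved.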
This also means empowering business units to take on a greater share of security responsibilities. By training and equipping AI Ambassadors within different departments, organizations can decentralize some of the initial security review processes. These ambassadors would be responsible for understanding and adhering to AI governance policies, ensuring that security considerations are integrated from the outset whenever a new tool is brought into the organization. This approach fosters a culture of shared responsibility, enabling faster deployment of AI solutions while maintaining a robust security posture.
The rapid evolution of AI presents a complex array of cybersecurity challenges that demand a strategic and adaptive response. By understanding the true nature of AI-driven threats, addressing the complexities of managing AI within the enterprise, and fostering a culture of shared security responsibility, organizations can not only mitigate risks but also harness the full potential of AI securely. The journey to a truly AI-secure enterprise is ongoing; it requires continuous learning, adaptation, and collaboration across all levels of the organization.