Organizations are increasingly implementing generative AI (GenAI) solutions to boost productivity and introduce new operational efficiencies. Unfortunately, so are cybercriminals, and they’re doing so with alarming effectiveness.
Bad actors can use GenAI to identify and exploit security vulnerabilities, increase the frequency and sophistication of their attacks, and improve their success rates. Barracuda’s research team recorded a 151% increase in malicious emails from October 2022, shortly before ChatGPT’s public launch, to December 2023.
However, AI technologies such as GenAI can also help cybersecurity professionals fortify their organizations against sophisticated, personalized email attacks, foster a robust cybersecurity culture and enhance threat response.
GenAI began gaining widespread attention and adoption after ChatGPT’s release, but the cybersecurity industry had already been leveraging AI for years.
In the 1980s, many organizations used rule-based expert systems, which evolved in the late 1990s and early 2000s to include machine learning and behavioral analysis for more accurate threat detection. By the 2010s, security vendors were integrating AI into their next-generation antivirus solutions to enable real-time threat identification, while threat intelligence and automated response systems became crucial components of security strategies. The bad actors adapted and began using AI to help them evade those AI-powered defenses.
As the developers of GenAI tools build guardrails to prevent users from creating malicious content, attackers are devising simple workarounds, using clever prompt engineering to coax a large language model into producing the desired output. Some are even using solutions purpose-built for malicious applications.
For example, WormGPT, a new private chatbot service, uses AI to write malicious software without the prohibitions that tools like ChatGPT and Google Gemini enforce. Without any guardrails, cybercriminals can simply ask WormGPT to craft a business email compromise (BEC) attack, and WormGPT creates it without the spelling and grammar mistakes that typified those attacks in the past and made them easier to detect. Attackers can also incorporate regional cultural references, industry terms and local brands to further improve their chances of success.
In addition to creating malicious emails, attackers can use GenAI to build fake login pages that closely resemble legitimate websites. AI can also help rapidly scale credential stuffing attacks by quickly testing large sets of username and password combinations obtained from data breaches.
AI tools can also enable cybercriminals to create adaptive malware capable of autonomously modifying its behavior or code in response to an organization’s specific security measures to evade detection. Additionally, AI-powered botnets could carry out potentially damaging distributed denial-of-service (DDoS) attacks.
While attackers leverage AI to increase the sophistication and scale of their cyber threats, creating attacks that are harder for traditional systems and processes to detect, cybersecurity professionals can harness the same technology to fortify their organizations’ defenses and mitigate risks.
Security teams can use AI to identify known phishing patterns and signatures and look for anomalies in email behavior and characteristics. Using AI’s natural language processing capability to analyze the content of incoming messages for sentiment, context, tone and potentially malicious intent allows for more accurate and faster detection of personalized phishing attacks, including those created with the help of generative AI techniques.
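To make this concrete, here is a minimal sketch of the text-classification half of that approach, using TF-IDF features and logistic regression from scikit-learn. The training emails and labels are hypothetical stand-ins for a real labeled corpus, and a production system would combine this score with header, sender and behavioral signals:

```python
# Minimal phishing text classifier, assuming a small labeled corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training samples: 1 = phishing, 0 = legitimate.
emails = [
    "Urgent: your account is suspended, verify your password now",
    "Wire the invoice payment today, the CEO needs it done immediately",
    "Attached is the agenda for Thursday's project sync",
    "Reminder: quarterly report drafts are due next Friday",
]
labels = [1, 1, 0, 0]

# Word and bigram TF-IDF features feeding a logistic regression
# give a fast, interpretable baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

incoming = "Your mailbox password expires today, click here to verify"
print(model.predict_proba([incoming])[0][1])  # estimated probability of phishing
```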
AI, particularly machine learning algorithms, can analyze vast amounts of data to establish baseline behavior and detect anomalies that may indicate security threats, such as unusual network traffic, atypical user behavior or unexpected system activities. AI can then alert cybersecurity personnel to possible malicious activity and, in some cases, take immediate steps to thwart it.
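As a simplified illustration of that baseline-and-anomaly pattern, the sketch below fits an isolation forest to hypothetical hourly per-host traffic summaries (bytes sent, connection count, distinct destination ports) and flags a new observation that falls far outside the learned baseline:

```python
# Baseline-and-anomaly detection with an isolation forest; the traffic
# data here is synthetic and stands in for real network telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical baseline: one week (168 hours) of normal traffic
# summaries per hour: [bytes sent (KB), connections, distinct ports].
baseline = rng.normal(loc=[5_000, 40, 8], scale=[800, 6, 2], size=(168, 3))

detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(baseline)

# A new observation with spikes in outbound bytes and destination
# ports, as might be seen during exfiltration or scanning.
observation = np.array([[60_000, 55, 120]])
if detector.predict(observation)[0] == -1:  # -1 marks an outlier
    print("anomaly detected: alert the security team")
```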
Deploying AI to monitor user and system behavior to identify unusual or suspicious activities helps detect threats sooner than human analysts are typically able to. AI is also highly effective at detecting insider threats, identifying unusual account access patterns and recognizing deviations from standard communication behavior to halt attacks at an early stage.
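The same approach works for account access. The sketch below, a deliberately simple stand-in for a real user behavior analytics model, builds a per-user baseline of typical login hours from hypothetical log data and flags logins that deviate sharply from it:

```python
# Flag unusual login times against each user's own historical baseline.
# The login history is hypothetical; real systems would also consider
# location, device and resource-access patterns.
from statistics import mean, stdev

history = {
    "alice": [8, 9, 9, 10, 8, 9, 10, 9],  # hours of day for past logins
    "bob": [13, 14, 14, 15, 13, 14],
}

def is_unusual(user: str, hour: int, threshold: float = 3.0) -> bool:
    """Flag logins more than `threshold` standard deviations from the user's mean."""
    hours = history[user]
    spread = stdev(hours) or 1.0  # guard against a perfectly flat baseline
    return abs(hour - mean(hours)) / spread > threshold

print(is_unusual("alice", 3))  # True: a 3 a.m. login is far outside her pattern
print(is_unusual("alice", 9))  # False: squarely within her normal hours
```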
AI excels at recognizing complex patterns that human analysts might miss. It identifies patterns associated with specific types of attacks, recognizes evolving attack techniques and predicts future threats based on historical data.
Additionally, machine learning algorithms analyze historical data to predict future threats by anticipating emerging attack vectors, identifying likely targets and proactively implementing security measures. Speed is the key differentiator here. AI-driven systems operate faster and more efficiently, responding to security threats in real time while reducing human error and the burden on already overworked IT teams.
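As a rough sketch of what that real-time response loop can look like, the snippet below maps a model-produced risk score to tiered automated actions; quarantine_host and notify_analyst are hypothetical placeholders for an organization's actual orchestration hooks, not a real API:

```python
# Tiered automated response driven by a model's risk score.
def quarantine_host(host: str) -> None:
    # Placeholder: in practice this would call the network or EDR API.
    print(f"[action] isolating {host} from the network")

def notify_analyst(host: str, score: float) -> None:
    # Placeholder: in practice this would open a ticket or page on-call.
    print(f"[alert] {host} scored {score:.2f}; queued for analyst review")

def respond(host: str, risk_score: float) -> None:
    """Act on a risk score immediately, without waiting for a human."""
    if risk_score >= 0.9:    # high confidence: contain first, review after
        quarantine_host(host)
    elif risk_score >= 0.6:  # moderate confidence: escalate to a human
        notify_analyst(host, risk_score)
    # below 0.6: log and keep monitoring

respond("srv-db-02", 0.93)
```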
Cybercriminals can leverage GenAI technology to escalate the sophistication and impact of their attacks. The reported rise in malicious activity, mainly through phishing and credential theft, underscores the urgent need for robust cybersecurity measures.
While cyber adversaries adapt their strategies to evade detection, AI can also be a potent ally for cybersecurity professionals. Through proactive AI-driven strategies, security teams can mitigate risks, detect anomalies swiftly and preemptively defend against emerging AI-powered threats. By harnessing GenAI, organizations can fortify their defenses against personalized attacks, cultivate a resilient cybersecurity culture and enhance threat response capabilities.