The Importance of Behavioral Analytics in AI-Enabled Cyber Attacks
2026-3-20 10:00:00 Author: thehackernews.com

Artificial Intelligence / Data Protection

Artificial Intelligence (AI) is changing how individuals and organizations operate, and cybercriminals are no exception. Attackers now use AI to generate personalized phishing emails, deepfakes and malware that evade traditional detection by impersonating normal user activity and bypassing legacy security models. As a result, rule-based models alone are often insufficient to defend identities against AI-enabled threats. Behavioral analytics must evolve beyond monitoring suspicious activity patterns over time into dynamic, identity-based risk modeling capable of identifying inconsistencies in real time.

Common risks introduced by AI-enabled attacks

AI-enabled cyber attacks introduce security risks very different from those posed by traditional cyber threats. By relying on automation and mimicking legitimate behavior, AI allows cybercriminals to scale their attacks while reducing the obvious signals that would otherwise expose them.

AI-powered phishing and social engineering

Unlike traditional phishing attacks that use generic messaging, AI enables personalized phishing messages at scale using public data, impersonating the writing styles of executives or creating context-aware messages referencing real events. These AI-powered attacks can reduce obvious red flags, slip past some filtering approaches and rely on psychological manipulation instead of malware delivery, significantly increasing the risk of credential theft and financial fraud.

Automated credential abuse and account takeovers

AI-enhanced credential abuse can optimize login attempts while avoiding triggering lockout thresholds, mimicking human-like timing between authentication attempts and targeting privileged accounts based on context. Since these attacks use compromised credentials, they often appear valid and blend into normal login activity, making identity security a crucial component of modern security strategies.
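As a rough illustration of how defenders can surface this kind of "low-and-slow" credential abuse, the sketch below flags source IPs that fail logins against many distinct accounts within a time window, even when no single account accumulates enough failures to trip a lockout. The event format, window and threshold are hypothetical assumptions for illustration, not drawn from any particular product.

```python
from collections import defaultdict
from datetime import timedelta

def detect_spray(events, window=timedelta(hours=1), min_accounts=5):
    """Flag source IPs whose failed logins span many distinct accounts
    within a time window -- the shape of a password-spray attack that
    stays under per-account lockout thresholds."""
    by_ip = defaultdict(list)
    for ts, ip, account, success in events:
        if not success:
            by_ip[ip].append((ts, account))

    flagged = set()
    for ip, fails in by_ip.items():
        fails.sort()  # order failures chronologically
        for ts, _ in fails:
            # Count distinct accounts this IP failed against in the window
            accounts = {a for t, a in fails if ts <= t <= ts + window}
            if len(accounts) >= min_accounts:
                flagged.add(ip)
                break
    return flagged
```

A detector like this keys on the aggregate pattern across identities rather than any per-account counter, which is exactly the signal a lockout rule misses.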

AI-assisted malware

Before AI, cybercriminals had to manually modify code signatures and spend considerable time creating new malware variants. AI accelerates variation, scripting and adaptation: with modern adaptive malware, attackers can automatically modify code to avoid detection, change behavior based on the environment and generate new exploit variants with little to no manual effort. Because traditional signature-based detection struggles against continuously evolving code, organizations must rely on behavioral patterns rather than static indicators.

How traditional behavioral monitoring can fail against AI-based attacks

Traditional monitoring was designed to detect cyber threats driven by malware, known security vulnerabilities and visible behavioral anomalies. Here are some of the ways traditional behavioral monitoring falls short against AI-enabled attacks:

  • Signature-based detection can’t identify modern threats: Signature-based tools rely on known signs of compromise. AI-assisted malware constantly rewrites its own code and automatically generates new variants, making static code signatures obsolete.
  • Rule-based systems rely on predefined thresholds: Many behavioral monitoring systems depend on rules, such as login frequency or geographic location. AI-assisted cybercriminals adjust their behavior to remain within set limits, conducting malicious activity over a longer period of time and mimicking human behavior to avoid detection.
  • Perimeter-based models fail when compromised credentials are involved: Traditional perimeter-based security models assume trust once a user or device is authenticated. When cybercriminals authenticate with legitimate credentials, these outdated models treat them as valid users, allowing them to carry out malicious actions.
  • AI-based attacks are designed to appear normal: AI-based cyber threats intentionally blend in by operating within assigned permissions, following anticipated workflows and executing their activities gradually. Each isolated action may seem legitimate; the malicious pattern emerges only when activity is viewed in tandem with behavioral context over time.
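The gap between a fixed-threshold rule and an identity-specific baseline can be sketched in a few lines. The threshold, history values and z-score cutoff below are illustrative assumptions: an attacker pacing logins just under a static limit evades the rule but still deviates sharply from that user's own historical baseline.

```python
import statistics

def static_rule(logins_last_hour, threshold=10):
    # Legacy rule: alert only when raw volume exceeds a fixed threshold.
    return logins_last_hour > threshold

def baseline_score(history, observed):
    """Score how far an observation deviates from this identity's own
    historical baseline, in standard deviations (a simple z-score)."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid division by zero
    return (observed - mean) / stdev
```

If a user normally logs in about twice an hour, eight logins sails under a threshold of ten but sits many standard deviations from the baseline, so a per-identity model flags what the global rule ignores.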

Why behavioral analytics must shift for AI-based attacks

The shift to modern behavioral analytics requires an evolution from simple threat detection into dynamic, context-aware risk modeling capable of identifying subtle privilege misuse.

Identity-based attacks require context

To appear normal, AI-driven cybercriminals often use credentials compromised through phishing or credential abuse, work from known devices or networks and spread malicious activity over time to avoid detection. Modern behavioral analytics must evaluate whether even the slightest deviation is consistent with a user's typical patterns. Advanced behavioral models establish baselines, assess real-time activity and combine identity, device and session context.
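A minimal sketch of combining those signals follows, assuming a hypothetical per-user profile of known devices, usual geography and active hours. The point weights are arbitrary and would need calibration against real telemetry:

```python
def risk_score(event, profile):
    """Combine identity, device and session context into one composite
    risk score on a 0-100 scale. Weights are illustrative only."""
    score = 0
    if event["device_id"] not in profile["known_devices"]:
        score += 40  # unrecognized device
    if event["geo"] != profile["usual_geo"]:
        score += 30  # unusual location for this identity
    start, end = profile["active_hours"]
    if not (start <= event["hour"] < end):
        score += 30  # activity outside this identity's normal hours
    return score
```

An event scoring above some cutoff (say 50) could trigger step-up authentication or session review rather than an outright block, which keeps false positives from locking out legitimate users.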

Monitoring must extend across the entire stack

Once cybercriminals gain access to systems through compromised, weak or reused credentials, they focus on gradually expanding their access. Behavioral visibility needs to cover the full security stack, including privileged access, cloud infrastructure, endpoints, applications and administrative accounts. For behavioral analytics to be more effective against AI-based cyber attacks, organizations must enforce zero-trust security and assume that no user or device should have implicit trust or automatic authentication based on network location.

Malicious insiders may use AI tools

AI tools not only empower external cybercriminals but also make it easier for malicious insiders to act within an organization’s network. Malicious insiders can use AI to automate credential harvesting, identify sensitive information or generate believable phishing content. Since insiders often operate with legitimate permissions, detecting privilege misuse requires identifying behavioral anomalies like access beyond defined responsibilities, activity outside normal business hours and repeated activity within critical systems. Eliminating standing access by enforcing Just-in-Time (JIT) access, session monitoring and session recording helps organizations limit exposure and reduce the impact of compromised accounts and insider misuse.
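Conceptually, eliminating standing access amounts to attaching an expiry to every privilege grant. The sketch below is a toy model of a Just-in-Time grant for illustration, not Keeper's or any vendor's implementation:

```python
from datetime import datetime, timedelta

class JITGrant:
    """Time-boxed privilege grant: access expires automatically,
    so no identity retains standing access to the resource."""

    def __init__(self, user, resource, ttl_minutes=30):
        self.user = user
        self.resource = resource
        self.expires_at = datetime.utcnow() + timedelta(minutes=ttl_minutes)

    def is_valid(self, now=None):
        # Grant is honored only until its expiry; no renewal is implicit.
        return (now or datetime.utcnow()) < self.expires_at
```

Because the grant self-expires, a credential stolen mid-session loses value quickly, and every renewed grant is a fresh, auditable decision point.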

Secure identities against autonomous AI-based cyber attacks

At a time when AI agents can create convincing social engineering campaigns, test credentials at scale and reduce the hands-on effort required to run attacks, AI-enabled cyber attacks are becoming increasingly automated. Protecting both human and Non-Human Identities (NHIs) now requires more than authentication; organizations must implement continuous, context-aware behavioral analysis and granular access controls. Modern Privileged Access Management (PAM) solutions like Keeper consolidate behavioral analytics, real-time session monitoring and JIT access to secure identities across hybrid and multi-cloud environments.

Note: This article was thoughtfully written and contributed for our audience by Ashley D’Andrea, Content Writer at Keeper Security.



Source: https://thehackernews.com/2026/03/the-importance-of-behavioral-analytics.html