Developing Security Protocols for Agentic AI Applications
January 22, 2025 | Source: securityboulevard.com

Agentic artificial intelligence can be a powerful, practical tool for almost any department — as long as its security weaknesses do not let cybercriminals in. Security professionals must develop protocols and procedures to safeguard these autonomous AI systems against hackers who mean to cause harm. But which strategies will work best? 

Is Agentic AI Different From Standard AI? 

Unlike a large language model (LLM), agentic AI does not rely on human-generated prompts or clear instructions. Instead, it acts autonomously on behalf of end users or other systems. It can interact with interconnected or external technologies, connecting to search engines, code executors or other AI systems.
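
To make that concrete, here is a minimal, hypothetical sketch of an agentic loop. Every name is a stand-in rather than any particular framework's API; a real agent would call an LLM where `plan_next_step` appears.

```python
# A minimal, hypothetical agentic loop; all names are illustrative,
# not any particular framework's API.

def web_search(query: str) -> str:
    return f"search results for '{query}'"  # placeholder for a real search API

def run_code(snippet: str) -> str:
    return f"output of '{snippet}'"  # placeholder for a sandboxed executor

TOOLS = {"web_search": web_search, "run_code": run_code}

def plan_next_step(objective: str, history: list) -> tuple:
    """Toy planner: search once, then declare the objective satisfied.
    A real agent would call an LLM here."""
    if not history:
        return "web_search", objective
    return None, None

def run_agent(objective: str) -> list:
    """Pick a tool, execute it, feed the result back; no human in the loop."""
    history = []
    while True:
        tool_name, tool_input = plan_next_step(objective, history)
        if tool_name is None:
            return history
        result = TOOLS[tool_name](tool_input)  # autonomous tool invocation
        history.append((tool_name, tool_input, result))

print(run_agent("find recent CVEs affecting our stack"))
```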

Organizations use agentic AI because it pairs high accuracy with automation, allowing it to be deployed successfully across a wide range of use cases. Moreover, since it is designed to complete specific objectives, it integrates seamlessly into broader workflows.


Agentic AI does share some similarities with other subsets of AI. For instance, an LLM needs a large dataset to produce accurate output. Autonomous algorithms are the same — they need a massive amount of data to know which decisions to make or actions to take. Additionally, like machine learning models, they learn from their environments, adapting over time. 

In short, the core concepts of AI remain. The main difference separating agentic AI from other subsets is agency. An agentic system can operate autonomously, tweaking its workflow as necessary to compensate for variables or dynamic needs. It does not require human intervention; instead, it perceives and adapts to changes itself.

Security Risks Related to Autonomous AI 

In truth, an intelligent, interconnected system with minimal human oversight is a security risk waiting to happen, and 96% of business executives agree, believing a security breach becomes more likely within three years of adopting AI. Although AI as a field has existed for decades, agentic AI only emerged within the last few years.

Compromising even a single component in the broader AI system can enable lateral movement, allowing cybercriminals to corrupt the algorithm, exfiltrate sensitive company data or gather information to inform future cyberattacks. Attackers' tactics can also be more subtle. In 2024, research scientists revealed they could have poisoned 0.01% of the samples in two of the most extensive open training datasets for just $60. While this figure may seem inconsequential, the researchers pointed out that a poisoning rate as low as 0.001% is enough to alter a model's behavior.

If the external or internal data sources feeding the agentic AI are poisoned, its behavior will be affected. Inaccurate or biased data can increase hallucination frequency, amplify biases or impair decision-making. The model will start to behave in unexpected ways, but without human oversight, its actions may not raise suspicion for some time.
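
One practical, if partial, defense is pinning known-good hashes for each data source and verifying them before the agent ingests anything. The sketch below assumes SHA-256 digests recorded at ingestion time; the file name and digest are illustrative. Note that hashing only catches tampering with static sources, not poisoning at collection time.

```python
# Verify data sources against pinned SHA-256 digests before the agent
# consumes them. File names and digests here are illustrative only.

import hashlib
from pathlib import Path

KNOWN_GOOD = {
    # Illustrative entry (this happens to be the digest of empty input).
    "vendor_feed.jsonl": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large datasets do not exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_sources(data_dir: Path) -> list[str]:
    """Return the names of sources whose contents no longer match the pinned hash."""
    return [
        name for name, expected in KNOWN_GOOD.items()
        if sha256_of(data_dir / name) != expected
    ]
```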

Manipulation by external threats is a significant concern. Business competitors, financially motivated hackers or nation-state cybercrime groups are likely culprits. At best, their interference would diminish the algorithm’s accuracy — at worst, it could make the model go rogue. In either scenario, financial losses and reputational damage are probable. 

A cyberattack could have catastrophic consequences even if an agentic algorithm handles just a fraction of one department's workload. If its typical failure rate is 2%, human employees can easily compensate. However, if that figure suddenly jumped to 100%, the abrupt, unexpected downtime would lead to expensive, large-scale failures.

The Importance of Governance Frameworks 

A governance framework that aligns with national and global standards is essential for mitigating the cybersecurity risks related to agentic AI. These policies and procedures guide implementation and utilization, ensuring both remain ethical, transparent and secure. They must be especially robust to counteract autonomy-associated threats. 

The level of agency given to an agentic system is not the only factor security professionals must consider. Will it be used to complete basic tasks or achieve complex objectives? Should it be deployed in a simple environment or an elaborate one? How this type of technology is implemented determines its potential security risks. 

Layering Security Protocols to Mitigate Risks 

A multifaceted approach to cybersecurity is essential for securing a system as sophisticated and complex as agentic AI. Layering security protocols will help professionals mitigate risks. 

  1. Testing and Validation 

Continuous testing and validation are essential with technology as complex as agentic AI. Security professionals should first establish a baseline. Then, they must conduct tests periodically — at least every few months — to ensure their system remains uncompromised. 
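
As a rough illustration, periodic validation can be as simple as replaying a fixed probe set and comparing the results to a recorded baseline. Everything below is a placeholder sketch: `run_agent` stands in for the deployed system, and the exact string match would be too brittle for generative output in practice, where a similarity or rubric score is more appropriate.

```python
# Record a behavioral baseline at deployment, then re-run the same probes
# periodically to detect drift or compromise. Probes are illustrative.

import json
from pathlib import Path

PROBES = [
    "summarize ticket 123",
    "classify this log line: failed login from 10.0.0.5",
]

def run_agent(prompt: str) -> str:
    """Stand-in for the deployed agentic system."""
    return f"response to '{prompt}'"

def record_baseline(path: Path) -> None:
    """Capture known-good outputs once, at deployment time."""
    path.write_text(json.dumps({p: run_agent(p) for p in PROBES}, indent=2))

def validate(path: Path) -> list[str]:
    """Return probes whose current output deviates from the recorded baseline.

    Exact comparison is too strict for generative output; swap in a
    similarity score for real systems.
    """
    baseline = json.loads(path.read_text())
    return [p for p in PROBES if run_agent(p) != baseline.get(p)]
```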

  2. Advanced Encryption

One of the few ways to secure the information processed, analyzed and transmitted by an intelligent algorithm is to leverage advanced encryption. The three post-quantum encryption standards — Federal Information Processing Standard 203, 204 and 205 — released by the National Institute of Standards and Technology are ideal for future-proofing. 
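
For example, FIPS 203 (ML-KEM) can be exercised today through the open-source liboqs-python bindings from the Open Quantum Safe project. The sketch below assumes that library; confirm the algorithm identifier against the version you install ("ML-KEM-768" in recent releases, "Kyber768" in older ones).

```python
# Key encapsulation with ML-KEM (FIPS 203), assuming the open-source
# liboqs-python bindings (pip install liboqs-python).
import oqs

ALG = "ML-KEM-768"  # mid-strength parameter set defined by FIPS 203

with oqs.KeyEncapsulation(ALG) as receiver:
    public_key = receiver.generate_keypair()

    # Sender: derive a shared secret plus a ciphertext for the receiver.
    with oqs.KeyEncapsulation(ALG) as sender:
        ciphertext, secret_sender = sender.encap_secret(public_key)

    # Receiver: recover the same shared secret from the ciphertext.
    secret_receiver = receiver.decap_secret(ciphertext)

assert secret_sender == secret_receiver
# The shared secret can now key a symmetric cipher (e.g., AES-256-GCM)
# protecting the data the agent processes and transmits.
```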

  3. End-to-End Monitoring

Continuous monitoring and logging are critical. This way, teams can identify indicators of compromise early, enabling them to take proactive action before unauthorized behavior causes permanent damage.
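
A minimal version of this is structured audit logging around every tool call, with unexpected calls flagged immediately. The event schema and tool names below are illustrative; in practice, the events would ship to a SIEM.

```python
# Log every agent action as a structured event and flag unexpected tools.
# Schema and tool names are illustrative.

import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("agent-audit")

EXPECTED_TOOLS = {"web_search", "run_code"}

def audit(tool_name: str, tool_input: str, result_summary: str) -> None:
    """Record one agent action and surface calls outside the expected set."""
    event = {
        "ts": time.time(),
        "tool": tool_name,
        "input": tool_input,
        "result": result_summary,
        "anomalous": tool_name not in EXPECTED_TOOLS,
    }
    log.info(json.dumps(event))  # in production, ship these events to a SIEM
    if event["anomalous"]:
        log.warning("possible indicator of compromise: unexpected tool %r", tool_name)

audit("web_search", "recent CVEs for our stack", "3 results")
audit("send_email", "mail@example.com", "sent")  # flagged as anomalous
```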

  4. AI System Safeguards

Autonomy is inherently risky, even when an algorithm has only low-level agency. Decision-makers should design safeguards that prevent their systems from taking certain actions, such as sending emails or disabling security features. After all, even if a system was not originally designed to do these things, an attacker could change its objectives.
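
One way to enforce this, sketched below under assumed names, is a deny-list checked outside the model itself, so a manipulated objective cannot bypass it.

```python
# Hypothetical names throughout; the point is that the deny-list lives
# outside the model, where a manipulated prompt or objective cannot reach it.

DENIED_ACTIONS = {"send_email", "disable_security_feature"}

class ActionDenied(Exception):
    """Raised when the agent requests a tool blocked by policy."""

def guarded_call(tool_name: str, tool_fn, *args, **kwargs):
    """Gate every tool invocation through a policy check before execution."""
    if tool_name in DENIED_ACTIONS:
        raise ActionDenied(f"{tool_name} is blocked by policy")
    return tool_fn(*args, **kwargs)
```

Routing every tool invocation through a wrapper like this keeps the restriction in force no matter what objective the model has been steered toward.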

Keeping Agentic AI Applications Secure 

Agentic AI can be an incredibly powerful asset — like another member of the team. However, it can quickly become a liability due to poorly designed frameworks or lax security protocols. Organizations’ desire for progress should not overshadow the need for caution. Ethics and security must be considered throughout implementation and utilization. 
