Navigating Application Security in the AI Era
March 14, 2024 | Source: securityboulevard.com

When generative AI began exhibiting its programming capabilities, developers naturally turned to it to help them write code efficiently. But with massive amounts of AI-generated code entering code bases for the first time, security leaders are now confronting the potential impact of AI on overall security posture.

Whether it is AI being used to insert malicious code into open source projects or the rise of AI-adjacent attacks, AI and application security (AppSec) will only become more deeply intertwined in the coming years.

Here are five critical ways AI and AppSec will converge in the coming year.

AI Copilots

As developers increasingly rely on generative AI to streamline tasks, they’ll inevitably start to generate more and more code. This speed and volume might be a blessing for product managers and customers, but from a security perspective, more code always means more vulnerabilities.

For many companies, vulnerability management has already reached a breaking point – backlogs are skyrocketing as thousands of new common vulnerabilities and exposures (CVEs) get reported monthly. Risk alert tools that generate large numbers of non-exploitable findings are an unsustainable solution, considering security teams are already stretched thin. Now, more than ever, organizations will have to streamline and prioritize reactions to security threats, focusing only on the vulnerabilities that represent a genuine, impending risk.
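As a rough sketch of what that prioritization can look like, the Python example below keeps only findings that are both reachable in the application and reasonably likely to be exploited, then ranks what remains. The field names, the EPSS-style score, and the threshold are illustrative assumptions rather than the output of any particular scanner.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    severity: float   # CVSS base score
    epss: float       # estimated probability of exploitation in the wild (0-1)
    reachable: bool   # is the vulnerable code actually invoked by the app?

def prioritize(findings: list[Finding], epss_floor: float = 0.1) -> list[Finding]:
    """Keep only findings that represent a genuine, impending risk,
    then order them by severity and exploitation likelihood."""
    actionable = [f for f in findings if f.reachable and f.epss >= epss_floor]
    return sorted(actionable, key=lambda f: (f.severity, f.epss), reverse=True)

if __name__ == "__main__":
    backlog = [
        Finding("CVE-2024-0001", 9.8, 0.02, reachable=False),  # noisy, not reachable
        Finding("CVE-2024-0002", 7.5, 0.45, reachable=True),   # genuine risk
    ]
    for f in prioritize(backlog):
        print(f.cve_id, f.severity, f.epss)
```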

Compliance Complications

AI-generated code and organization-specific AI models have quickly become important parts of corporate IP. This raises the question: Can compliance protocols keep up?

AI-generated code is typically assembled from multiple pieces of code found in publicly available repositories. Issues arise, however, when those pieces are drawn from open source libraries whose license types are incompatible with an organization’s intended use.
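One lightweight guardrail is checking every dependency the generated code pulls in against an approved license list. The sketch below is a minimal illustration, assuming license identifiers come from an SBOM or software composition analysis tool; the allowlist and the dependency data are hypothetical.

```python
# Minimal license allowlist check; the dependency names and license IDs are
# illustrative placeholders, not output from any specific tool.
ALLOWED_LICENSES = {"MIT", "Apache-2.0", "BSD-3-Clause"}

def flag_incompatible(dependencies: dict[str, str]) -> list[str]:
    """Return dependencies whose declared license is not on the allowlist."""
    return [
        f"{name} ({license_id})"
        for name, license_id in dependencies.items()
        if license_id not in ALLOWED_LICENSES
    ]

if __name__ == "__main__":
    # Example input, as an SBOM or SCA tool might report it
    deps = {"some-json-lib": "GPL-3.0-only", "requests": "Apache-2.0"}
    for entry in flag_incompatible(deps):
        print("License review needed:", entry)
```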

Without regulation or oversight, this type of “non-compliant” code based on un-vetted data can jeopardize intellectual property and sensitive information. Malicious reconnaissance tools could automatically extract the corporate information shared with any given AI model, or developers may share code with AI assistants without realizing they’ve unintentionally revealed sensitive information.

In the coming years, compliance leaders must establish a range of rules around how developers are allowed to use AI coding assistants, depending on the level and type of risk the application will be exposed to when deployed.

Automating VEX

Vulnerability exploitability exchange (VEX) is a process that works in conjunction with a software bill of materials (SBOM) to tell security teams which of the vulnerabilities in their software are actually exploitable.

Until now, these artifacts have typically been generated manually by expensive consultants, an approach that becomes unsustainable as data proliferates and more and more CVEs are disclosed. For this crucial process to keep pace with today’s cyberthreats – especially as AI drives a rapid rise in vulnerability counts, both through new vulnerabilities in AI infrastructure and through AI-assisted discovery of vulnerabilities – security leaders must start to automate VEX creation, allowing for real-time, dynamic assessments of exploitability.
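As a minimal sketch of what automated VEX generation could look like, the example below assembles a JSON document loosely modeled on the OpenVEX format from per-CVE exploitability assessments. The assess_exploitability helper is a hypothetical stand-in for an organization’s own reachability or runtime analysis, and the field values should be checked against the VEX specification actually in use.

```python
import json
from datetime import datetime, timezone

def assess_exploitability(cve_id, product):
    """Hypothetical hook into an organization's reachability or runtime analysis.
    Returns an OpenVEX-style (status, justification) pair."""
    # In practice this would query SCA results, reachability data, or runtime telemetry.
    return "not_affected", "vulnerable_code_not_in_execute_path"

def build_vex(cve_ids, product):
    """Assemble a minimal VEX-style document for one product."""
    statements = []
    for cve in cve_ids:
        status, justification = assess_exploitability(cve, product)
        statement = {
            "vulnerability": {"name": cve},
            "products": [{"@id": product}],
            "status": status,
        }
        if justification:
            statement["justification"] = justification
        statements.append(statement)
    return {
        # Context string follows published OpenVEX examples; verify against the spec version in use.
        "@context": "https://openvex.dev/ns/v0.2.0",
        "author": "appsec-team@example.com",  # hypothetical issuing team
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "statements": statements,
    }

if __name__ == "__main__":
    print(json.dumps(build_vex(["CVE-2024-0001"], "pkg:pypi/example-app@1.0.0"), indent=2))
```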

The Rise of AI-Adjacent Attacks

AI can be used to deliberately create malicious, difficult-to-detect code and insert it into open source projects. AI-driven attacks are often vastly different from what human hackers would create – and from what most security protocols are designed to protect against – allowing them to evade detection. As such, software companies and their security leaders must prepare to reimagine the ways they approach AppSec.

On the other side of the coin are cases where AI itself will be the target, not the means of attack. Companies’ proprietary models and training data offer an enticing prize for high-level hackers. In some scenarios, attackers might even covertly alter the code within an AI model, causing it to generate intentionally incorrect outputs and actions. It’s easy to imagine the contexts where such malicious alterations could have disastrous consequences – such as tampering with a traffic light system or a thermal sensor for a power plant.

Fortunately, new solutions are already emerging in response to these new threats and will continue to evolve alongside the models they are built to protect.
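One basic building block of such protections is verifying the integrity of model artifacts before they are loaded, so covert alterations to the files are caught early. The sketch below hashes a model file and compares it against a known-good digest; the file path and expected hash are placeholders for values an organization would pin in a signed manifest or model registry.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large model weights don't need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: Path, expected_sha256: str) -> None:
    """Raise if the on-disk model artifact does not match the pinned digest."""
    actual = sha256_of(path)
    if actual != expected_sha256:
        raise RuntimeError(
            f"Model artifact {path} failed integrity check: "
            f"expected {expected_sha256}, got {actual}"
        )

if __name__ == "__main__":
    # Placeholder path and digest; in practice the expected hash would come
    # from a signed release manifest or a model registry entry.
    verify_model(
        Path("models/example-model.onnx"),
        "0000000000000000000000000000000000000000000000000000000000000000",
    )
```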

Real-Time Threat Detection

AI can be a real game changer for attack detection. When combined with emerging tools that enable deeper visibility into applications, AI will be able to automatically detect and identify abnormal behaviors when they occur and block attacks while they are in progress.
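As a simplified illustration of the idea, the sketch below keeps a rolling baseline of a runtime metric – say, requests per minute observed by an application sensor – and flags values that deviate sharply from it. The metric, window size, and threshold are illustrative choices, not any specific product’s behavior.

```python
from collections import deque
from statistics import mean, stdev

class RollingAnomalyDetector:
    """Flag observations that deviate sharply from a rolling baseline."""

    def __init__(self, window: int = 60, z_threshold: float = 3.0):
        self.history: deque[float] = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, value: float) -> bool:
        """Return True if the new value looks anomalous relative to recent history."""
        anomalous = False
        if len(self.history) >= 10:  # wait for a minimal baseline before judging
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                anomalous = True
        self.history.append(value)
        return anomalous

if __name__ == "__main__":
    detector = RollingAnomalyDetector()
    traffic = [100, 102, 98, 101, 99, 103, 97, 100, 101, 99, 100, 950]
    for minute, requests in enumerate(traffic):
        if detector.observe(requests):
            print(f"minute {minute}: abnormal volume ({requests} requests)")
```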

Not only will real-time anomaly and threat detection limit damage from breaches, but it will also make it easier to catch the hackers responsible.

It’s Only the Beginning

As the AppSec community navigates a rapidly shifting digital world, AI will only grow more relevant to AppSec, both in the challenges it presents and the opportunities it affords, requiring a proactive and adaptive approach from security professionals.

It’s up to the AppSec and broader cybersecurity industry to collaborate on robust solutions that harness the immense promise of AI without compromising the integrity of applications and the data they use.


Source: https://securityboulevard.com/2024/03/navigating-application-security-in-the-ai-era/