When AI Writes the Code, What Changes for Security?
2026-03-31 | Source: securityboulevard.com

Last week at RSAC 2026, we hosted four security leaders for a breakfast panel to talk about what’s actually happening inside their organizations.

The conversation went further than the expected “AI is moving fast, security needs to keep up” discussion we’re all familiar with. We were fortunate to have panelists – Temi Adebambo, CISO at Microsoft Gaming; James Robinson, CISO at Netskope; Jim Routh, former CISO at MassMutual and CVS Aetna, among others; and Sean Poris, Head of Global AppSec at a large consultancy – who went beyond the surface level to dismantle the assumptions and raise some new perspectives.

Thanks to all four for a genuinely useful hour.


The control point has already moved

The working model for AppSec has always been the developer. Train them, guide them and build controls around their workflows. That model, which assumed a reasonably defined group of people writing code, is gone.

In many organizations, anyone can build a working app in two minutes using natural language. Not a prototype, but a “product” that can move surprisingly close to production. New hires on one security team were shipping agents by the end of their second week.

“The thought that you’re even going to get them educated around security concepts to the point where they’re applying it,” one panelist said. “I think it’s probably the furthest thing from what’s going to happen.”

You can’t scale a training-based security model to an organization where every employee is effectively a developer. If that’s the case, the only viable path is fully integrating security into the system (tools, assistants, workflows). In other words, security must be part of these processes rather than layered on as a separate step.

Software is becoming disposable and security is not prepared

As one panelist shared, in a single week a team spun up 500 agents. These agents did their respective jobs and then were gone, never to be seen again.

Most of our controls, particularly in AppSec (e.g., scan, review, approval gates), assume the software will endure for some significant period of time. You have control over the code, scan it regularly and fix issues that are uncovered. Rinse and repeat, weekly, monthly, quarterly, or whatever policy dictates.

But with disposable code, how do you govern and secure software that is gone by the time you’ve turned toward it?

Most teams don’t have a real answer yet, but one place to start: establish security guardrails at the agent-class level that define what each agent can and cannot do. Likewise, agent identity and permissions should be set structurally at the point of creation, rather than bolted on after the fact.
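The guardrails-at-creation idea can be sketched in code. The following Python is a minimal illustration under stated assumptions, not any real framework: every class name, field, and tool name here is hypothetical. The point is structural: the policy belongs to the agent *class*, is immutable, and is attached at spawn time, so there is nothing to retrofit later.

```python
from dataclasses import dataclass

# Hypothetical policy model -- all names below are illustrative assumptions,
# not the API of any specific agent framework.

@dataclass(frozen=True)
class AgentClassPolicy:
    """Guardrails attached to a class of agents, not to individual instances."""
    name: str
    allowed_tools: frozenset     # tools this class of agent may invoke
    allowed_scopes: frozenset    # data domains this class may touch
    max_lifetime_s: int          # disposable agents expire by design

@dataclass
class Agent:
    agent_id: str
    policy: AgentClassPolicy

    def can_use(self, tool: str) -> bool:
        # Enforcement reads the immutable class policy; no per-instance grants.
        return tool in self.policy.allowed_tools

# The policy is decided once, when the class is defined.
REPORTING = AgentClassPolicy(
    name="reporting",
    allowed_tools=frozenset({"read_db", "render_chart"}),
    allowed_scopes=frozenset({"sales"}),
    max_lifetime_s=3600,
)

def spawn(agent_id: str, policy: AgentClassPolicy) -> Agent:
    # Identity and permissions are fixed here, at the point of creation;
    # nothing is granted after the fact.
    return Agent(agent_id=agent_id, policy=policy)

bot = spawn("rpt-001", REPORTING)
print(bot.can_use("read_db"))     # True
print(bot.can_use("send_email"))  # False
```

Because the policy object is frozen and keyed to the class rather than the instance, 500 short-lived agents spawned in a week all inherit the same reviewable guardrails, even if no human ever looks at any individual agent.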


Security can’t operate as a gate

One panelist used an analogy to help frame security in the context of AI.

Imagine an Amazon warehouse, fully automated. Every station, every conveyor, every handoff sees machines interacting with machines at production speed. Now imagine inserting a manual checkpoint. Inserting a human in the process would not only slow things down but also run the risk of personal injury.

The implication is straightforward: if security has to operate inside the AI development system at production speed, it can’t be a gate. For the first time, he argued, we have a real chance to ship security at the same pace as everything else. That has never been true before.

The IR problem nobody has solved

One of the more uncomfortable moments came when one panelist described bringing a new question to the incident response team.

If an agent goes rogue, taking an action it shouldn’t, accessing something it shouldn’t – how would they know if it was involved and how could the team reconstruct the scenario?

When he first raised it, the IR team heard a different question: not “how do we investigate a rogue agent?” but “how do we use AI as part of our investigations?”

“What I’m asking is: if we have an agent that goes wrong, do we have the telemetry? Do we have the traceability? And if the agent is gone, what then?”

Most organizations do not know.
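One hedged sketch of what that telemetry could look like: an append-only audit stream, keyed by agent identity, that survives the agent’s deletion so IR can reconstruct a timeline after the fact. This Python is illustrative only; the function and field names are assumptions, and in practice the log would feed a SIEM or immutable store rather than an in-memory list.

```python
import json
import time
import uuid

# Stand-in for an append-only audit store (e.g. a SIEM pipeline).
# All names here are illustrative assumptions, not a real product API.
AUDIT_LOG = []

def record_action(agent_id: str, action: str, target: str, allowed: bool) -> dict:
    """Emit one immutable audit event per agent action."""
    event = {
        "event_id": str(uuid.uuid4()),
        "ts": time.time(),
        "agent_id": agent_id,
        "action": action,
        "target": target,
        "allowed": allowed,
    }
    # Serialized on write: the record no longer depends on the agent existing.
    AUDIT_LOG.append(json.dumps(event))
    return event

def reconstruct(agent_id: str) -> list:
    """Rebuild an agent's timeline -- even after the agent itself is gone."""
    events = (json.loads(e) for e in AUDIT_LOG)
    return sorted(
        (e for e in events if e["agent_id"] == agent_id),
        key=lambda e: e["ts"],
    )

record_action("agent-42", "read", "customer_db", allowed=True)
record_action("agent-42", "write", "payroll_db", allowed=False)
timeline = reconstruct("agent-42")
print(len(timeline))  # 2
```

The design choice that matters is the decoupling: because every event is written out-of-band at action time, the answer to “the agent is gone, what then?” is that the evidence never lived inside the agent in the first place.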

When the human role shifts

The closing predictions produced the panel’s only real disagreement – and it was a good one.

One panelist made the historical case for optimism: nearly every technology revolution has arrived with predictions of job losses, and those predictions have nearly always failed to materialize. From the loom to the telephone to the automobile, each innovation generated more jobs than it eliminated. AI, he argued, will likely prove similar, driving business growth on a scale we can’t yet anticipate.

Another panelist disagreed, at least partially. His distinction wasn’t about whether jobs disappear overall, but about who is at risk. For those who entered software development because the field is lucrative, but without a passion for building, AI may turn out to be a replacement. And those who don’t adapt and learn to leverage AI may find themselves in a perilous position.

Finally, two panelists aligned on one theme: value is shifting from execution and toward strategic direction. Writing code is getting cheaper and faster. Knowing what to build, how to frame problems, how to give systems the context they need to produce good outcomes is where the real value will lie.

“Roles will involve execution becoming trivial,” one speaker said. “So it’s not about execution anymore. It’s going to be more about people who can think, have ideas, apply structured logical reasoning.”

The teams navigating this well have one thing in common: they aren’t necessarily the ones with the most advanced policies. They’re the ones that made security easy enough that people want to use it – and instrumented enough that they can see what’s actually happening, not just what they assumed would happen.

*** This is a Security Bloggers Network syndicated blog from Legit Security Blog authored by Dave Howell. Read the original post at: https://www.legitsecurity.com/blog/when-ai-writes-the-code-what-changes-for-security


Source: https://securityboulevard.com/2026/03/when-ai-writes-the-code-what-changes-for-security/