Oct 20, 2025
David Brauchler says AI red teaming has proven that eliminating prompt injection is a lost cause. And many developers inadvertently introduce serious threat vectors into their applications – risks they must later eliminate before they become ingrained across application stacks.
NCC Group’s AI security team has surveyed dozens of AI applications, exploited their most common risks, and discovered a set of practical architectural patterns and input validation strategies that completely mitigate natural language injection attacks. David's talk aimed at helping security pros and developers understand how to design/test complex agentic systems and how to model trust flows in agentic environments. He also provided information about what architectural decisions can mitigate prompt injection and other model manipulation risks, even when AI systems are exposed to untrusted sources of data.
More about David's Black Hat talk:
Additional blogs by David about AI security:
An op-ed on CSO Online made us think - should we consider the CIA triad 'dead' and replace it? We discuss the value and longevity of security frameworks, as well as the author's proposed replacement.
Finally, in the enterprise security news,
All that and more, on this episode of Enterprise Security Weekly.
Visit https://www.securityweekly.com/esw for all the latest episodes!
Show Notes: https://securityweekly.com/esw-429