AI may have bolted out of the gate before any guardrails could be put in place, but countries are spitting out guidance at a head-spinning rate. The latest? The Principles for the Secure Integration of Artificial Intelligence in Operational Technology guide from CISA, in conjunction with Australia, Canada, Germany, the Netherlands, New Zealand, and the UK, spells out four key principles that critical infrastructure operators should follow to mitigate potentially dangerous security vulnerabilities.
“Despite the many benefits, integrating AI into operational technology (OT) environments that manage essential public services also introduces significant risks—such as OT process models drifting over time or safety-process bypasses—that owners and operators must carefully manage to ensure the availability and reliability of critical infrastructure,” the guide notes.
Denis Calderone, CRO and COO at Suzu Labs, says the guidance “arrives at a critical inflection point” since organizations are rushing AI deployments “into operational environments with various rationales but often without the security rigor these systems demand.” He expects the guidance to change the regulatory landscape.
The guide is built around the Purdue Model, a framework its authors describe as “widely accepted” for mapping the “hierarchical relationships between OT and IT devices and networks,” and it rests on four core principles.
Damon Small, who sits on the board of directors at Xcape, Inc., says the “potential dangers outlined in the guide are very real” since AI systems “can fail or be manipulated in ways that have physical repercussions.”
He explains that the guide marks the continuing evolution of the Purdue Model, which industrial operators adopted in the 1990s, and points to the concrete failure modes AI introduces into those environments. “For instance, data drift can gradually undermine control decisions, corrupted sensor data can force models into unsafe states, and adversarial attacks or tampering with the model supply chain can create unforeseen vulnerabilities that bypass traditional safety measures,” says Small. “The crucial difference from IT is the high cost of error; even subtle AI malfunctions can lead to outages, equipment damage, or public safety issues, thus requiring a much higher standard of assurance.”
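Small’s data-drift warning is concrete enough to sketch. The snippet below is a minimal illustration, not anything prescribed by the guide: the sensor, baseline statistics, and threshold are all assumed, but it shows how a window of live readings can be checked against a commissioning-time baseline before model-driven decisions are trusted.

```python
"""Minimal, hypothetical sketch of input-drift detection for an OT model;
the sensor, baseline, and threshold are assumed for illustration."""
from statistics import mean

# Baseline captured at commissioning time (assumed values).
BASELINE_MEAN = 72.0  # e.g., pump inlet temperature, deg C
BASELINE_STD = 1.5

def drift_score(window: list[float]) -> float:
    """Standardized shift of the recent window mean versus the baseline."""
    return abs(mean(window) - BASELINE_MEAN) / BASELINE_STD

def drifting(window: list[float], threshold: float = 3.0) -> bool:
    """Flag drift once the window mean sits more than `threshold` baseline
    standard deviations away; a sustained shift can quietly skew model
    inputs long before any hard alarm limit trips."""
    return drift_score(window) > threshold

# Readings that have crept upward over time trip the check.
recent = [76.5, 76.9, 77.2, 77.6, 78.0]
if drifting(recent):
    print("Input drift detected: hold model-driven actions for review.")
```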
The guidance’s strong focus on behavioral analytics, anomaly detection, and the establishment of safe operating bounds is “encouraging” to Marcus Fowler, CEO of Darktrace Federal, because those measures can identify AI drift, model changes, or emerging security risks before they impact operations.
“This shift from static thresholds to behavior-based oversight is essential for defending cyber-physical systems where even small deviations can carry significant risk,” he says.
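What safe operating bounds around an AI output can look like in practice is simple to sketch. The example below is hypothetical, with an assumed safety envelope and rate limit rather than values from the guidance; the point is that the engineered bounds, not the model, get the final word on a setpoint.

```python
"""Hypothetical sketch of engineered safe operating bounds wrapped around
an AI-recommended setpoint; envelope and rate limit are assumed values."""

SETPOINT_MIN, SETPOINT_MAX = 40.0, 85.0  # engineered safety envelope (deg C)
MAX_STEP = 2.0                           # max change allowed per control cycle

def bounded_setpoint(model_value: float, current: float) -> float:
    """Clamp the AI recommendation to the envelope and rate-limit the move,
    so an anomalous model output cannot push the process outside its safe
    operating bounds in a single step."""
    clamped = min(max(model_value, SETPOINT_MIN), SETPOINT_MAX)
    step = max(-MAX_STEP, min(MAX_STEP, clamped - current))
    return current + step

# The model suggests an aggressive jump; the wrapper allows only 2 degrees.
print(bounded_setpoint(model_value=95.0, current=70.0))  # -> 72.0
```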
James Maude, Field CTO at BeyondTrust, notes that “securing remote access remains one of the top priorities for many organizations, especially in high-risk OT and ICS environments, which need to be kept well away from the public internet.” Organizations, he says, “need to think about how to securely manage privileged access into their critical environments.”
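One common pattern for managing that privileged access is deny-by-default, just-in-time grants in place of standing credentials. The sketch below is purely illustrative, with assumed user names and grant windows; it does not depict BeyondTrust’s tooling or anything specified in the guide.

```python
"""Hypothetical sketch of deny-by-default, just-in-time privileged
access for an OT jump host; names and policy values are assumptions."""
from datetime import datetime, timedelta, timezone

_grants: dict[str, datetime] = {}  # user -> grant expiry (UTC)

def grant_access(user: str, minutes: int = 30) -> None:
    """Issue a short-lived grant instead of a standing credential."""
    _grants[user] = datetime.now(timezone.utc) + timedelta(minutes=minutes)

def is_authorized(user: str) -> bool:
    """Deny by default; allow only inside an unexpired grant window."""
    expiry = _grants.get(user)
    return expiry is not None and datetime.now(timezone.utc) < expiry

grant_access("ot-engineer-01")           # approved, time-boxed session
print(is_authorized("ot-engineer-01"))   # True while the window is open
print(is_authorized("vendor-remote"))    # False: no standing access
```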
Whatever its regulatory impact, Calderone says, “the harder challenge will be adoption.”
OT environments are notorious, he says, “for ‘if it ain’t broke, don’t fix it’ cultures, and frankly, they’re not typically built for agility either.”
Change management and AI adoption move at different speeds. “Change management in these environments moves deliberately, often for good reason. Meanwhile, bespoke AI solutions are being stood up at breakneck speed by vendors and internal teams racing to capture efficiency gains,” he says. “That mismatch is a recipe for trouble. Organizations that treat this guidance as a checkbox exercise will miss the point entirely.”
Small sums up the stakes: “These joint documents, issued by CISA and its partners, don’t create new laws, but they significantly influence the regulatory landscape for critical infrastructure. Utilities and industrial operators use them to inform their architectural decisions, vendor requirements, and audit processes, and regulators often adopt similar language. Even abstract recommendations become concrete through procurement practices (demanding AI transparency, inventories, and fail-safes) and through insurers and boards assessing operators’ adherence to CISA/NIST-style guidance. In essence, secure AI in OT isn’t merely a desirable innovation; it’s the critical factor determining whether infrastructure becomes smarter and more resilient or, conversely, more vulnerable.”