A unified response from leading cybersecurity authorities offers a structured framework for integrating artificial intelligence (AI) into critical operational technology (OT) systems. The new document shifts the conversation from hypothetical risks toward concrete actions, instructing organizations on how to handle AI’s double-edged potential for efficiency and vulnerability. The guidance provides clarity at a moment when both public and private sector stakeholders are rapidly pursuing advanced automation and decision support. By urging careful oversight and transparent procurement processes, the publication aims to influence technology adoption without undermining safety. Proactive conversations with vendors and intentional validation procedures are recommended first steps for organizations looking to align with these new expectations.
Compared with past publications, cybersecurity advisories on AI in OT have typically been fragmented, centering on country- or industry-specific guidelines without broad international consensus. Many earlier statements emphasized theoretical threats or the benefits of AI rather than offering actionable security instructions for deployment in industrial environments. Unlike those earlier cautions, this coordinated guidance draws an explicit line between AI safety and AI security and clearly restricts the use of generative models in safety-critical roles, an approach not seen in earlier documentation. Its calls for vendor transparency and human oversight build on prior warnings, solidifying practices that many had discussed but few had broadly implemented. The publication fills the previous absence of a shared operational baseline, integrating lessons from isolated incidents and pilot deployments, and marks a practical step forward for stakeholder engagement in global critical infrastructure.
How Does the Guidance Distinguish Safety from Security?
The issued guidance stresses that while both safety and security are essential, they are fundamentally different in AI-integrated OT environments. Protection against unauthorized access and data breaches does not automatically guarantee the avoidance of physical accidents or operational hazards. AI’s unpredictable behavior, especially in non-deterministic models such as large language models, can introduce scenarios in which automated recommendations conflict with real-world safety requirements. The agencies cautioned,
“AI such as LLMs almost certainly should not be used to make safety decisions for OT environments.”
The guidance reinforces the need for skilled operators who can consistently monitor and challenge AI recommendations based on actual, on-the-ground observations.
What Role Should Human Operators Play with AI in OT?
The recommendations advocate for keeping human oversight central to any AI deployment in critical systems. AI should serve as an advisory tool, not as a direct controller, to allow personnel the space to validate digital outputs before acting on them. This human-in-the-loop approach helps reduce skill degradation among operators and ensures that safety checks are not delegated entirely to algorithms. The agencies also highlighted,
“Ultimately, humans are responsible for functional safety.”
By maintaining regular procedures for reviewing machine-learning performance and correlating AI outputs with sensor data, organizations can prevent model drift from compromising asset reliability.
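The two practices above, keeping AI advisory with a human sign-off and correlating model outputs against sensor data, can be illustrated with a minimal Python sketch. Everything here is hypothetical (the `Recommendation` type, `require_operator_approval`, `drift_alarm`, and the tolerance value are illustrative, not from the guidance); it shows the shape of the pattern, not a production implementation.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class Recommendation:
    setpoint: float        # value the AI model proposes for a controller
    approved: bool = False

def require_operator_approval(rec: Recommendation,
                              operator_review: Callable[[Recommendation], bool]) -> bool:
    """Human-in-the-loop gate: AI output stays advisory, and nothing
    reaches an actuator until an operator explicitly signs off."""
    rec.approved = operator_review(rec)
    return rec.approved

def drift_alarm(ai_outputs: Sequence[float],
                sensor_readings: Sequence[float],
                tolerance: float) -> bool:
    """Flag possible model drift when AI predictions diverge, on average,
    from on-the-ground sensor measurements by more than a set tolerance."""
    residuals = [abs(a - s) for a, s in zip(ai_outputs, sensor_readings)]
    return sum(residuals) / len(residuals) > tolerance
```

In practice the `operator_review` callback would be a console prompt or HMI workflow, and the drift check would run on a schedule against historian data; the point is that both checks sit between the model and the plant.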
How Are Vendors Expected to Respond to the New Guidelines?
OT vendors are now being urged to provide clear details about how AI is embedded into their software solutions. The guidance encourages critical infrastructure owners and operators to request transparency reports, software bills of materials (SBOMs), and explicit disclosures about training data usage. These transparency measures are designed to help organizations weigh security and privacy implications before deploying new features. Agencies recommend that vendors document where AI models are hosted and clarify how customer data is handled during model training or operation. By raising these baseline expectations, the guidance sets a standard for trust between technology providers and end users across industrial sectors.
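One concrete way an operator can act on a vendor-supplied SBOM is to scan it for declared machine-learning components. The sketch below assumes a CycloneDX-format SBOM, whose 1.5 specification added a `machine-learning-model` component type; the function name and sample document are illustrative, not taken from the guidance.

```python
import json

def find_ai_components(sbom_json: str) -> list[str]:
    """Return names of components declared as machine-learning models
    in a CycloneDX JSON SBOM (component type "machine-learning-model",
    available since CycloneDX 1.5)."""
    sbom = json.loads(sbom_json)
    return [
        component.get("name", "<unnamed>")
        for component in sbom.get("components", [])
        if component.get("type") == "machine-learning-model"
    ]

# Hypothetical SBOM a vendor might supply for an OT software package.
sample_sbom = json.dumps({
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {"type": "library", "name": "protocol-stack"},
        {"type": "machine-learning-model", "name": "anomaly-detector"},
    ],
})

print(find_ai_components(sample_sbom))  # ['anomaly-detector']
```

A check like this lets procurement teams verify that a vendor's transparency disclosures match what the SBOM actually declares before a new feature is deployed.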
This joint guidance establishes a unified global approach to the integration of artificial intelligence in critical infrastructure, marking a significant shift from the fragmented, theoretical discussions seen in the past. Organizations now have actionable recommendations, such as favoring push-based data flows over persistent connections and requiring transparency from vendors, to reduce risk exposure. By separating the concepts of safety and security and advocating for the enduring role of human operators, the guidance offers practical, realistic steps for balancing technological advancement with operational assurance. Readers considering the adoption of AI in OT should prioritize continuous validation, employee training, and active dialogue with technology providers to protect both assets and people. Monitoring regulatory trends and industry case studies will further help in adapting practices as AI is increasingly woven into complex, high-stakes environments.
