NCSC Secure Connectivity Principle 4: Use Standardised and Secure Protocols


InfoSec Made Easy

OT Security Leadership | NCSC Guidance Series

Why the protocols your OT systems use to communicate are as important as the networks they run on


Most cybersecurity professionals spend a great deal of time thinking about network architecture — firewalls, segmentation, DMZs, access controls. These are important, and the preceding principles in this series address them directly. But there is a layer of security that sits beneath network controls and is often overlooked in OT environments: the protocols that devices use to communicate with each other. Even a well-segmented, carefully controlled OT network can be undermined by protocols that allow unauthenticated commands, lack encryption, or permit malicious data injection that bypasses perimeter controls.

Principle 4 of the NCSC's Secure Connectivity Principles for Operational Technology addresses this layer explicitly. It requires organizations to evaluate the security properties of the protocols used within and between their OT network environments — and to take a disciplined approach to transitioning away from insecure protocols toward versions and alternatives that provide the cryptographic protections needed for modern connected OT systems.

For security leaders, this principle requires a shift in thinking. Protocol security is not just a technical configuration detail — it is a risk dimension that needs to be assessed as part of every connectivity decision, included in risk documentation, and actively managed over time. The industrial control system protocols that were designed for isolated, air-gapped environments carry specific security limitations that become liabilities when those environments are connected to modern networks. Understanding those limitations — and having a plan to address them — is a core part of responsible OT security governance.

The CIA Triad in OT Protocol Security: An Asymmetric Challenge

One of the most important framing points in the NCSC guidance on protocol security is the observation that OT environments have historically prioritized availability over the confidentiality and integrity of communications. This is not irrational — in an industrial environment, a control system that becomes unavailable because a security mechanism has failed closed is far more dangerous than one that maintains availability at the cost of some information security properties. Safety and process continuity have always been the primary design goals of OT communication protocols.

The result is a generation of industrial protocols — Modbus, DNP3, older versions of CIP, OPC DA, and many others — that were designed without authentication, without encryption, and sometimes without even basic input validation. In the isolated, purpose-built networks for which they were designed, these limitations were acceptable because the physical controls around the network provided the security that the protocols lacked. Connect those same protocols to modern networks, and those physical controls no longer provide adequate protection.

The NCSC guidance is careful to acknowledge that the prioritization of availability is legitimate, but insists that all components of the CIA triad — confidentiality, integrity, and availability — must be considered. The key insight is that the appropriate balance between these properties varies by connection type. In field networks where PLCs are communicating with sensors and actuators, authentication and integrity protection are the most critical properties — an attacker who can inject commands into a field network can alter process behavior with potentially serious physical consequences. In north-south traffic at network boundary points, encryption becomes critical to prevent adversaries from gathering operational intelligence about the system's state and behavior.

💡 Pro Tip: When assessing protocol security, do not treat all OT protocols equally. Map each protocol in your environment to the specific CIA properties it currently provides and the specific risks its limitations create. This creates a prioritized list of protocol risk — the highest risk items being those where the lack of authentication or integrity checking is in a location where an attacker could use it to directly affect physical process behavior. Those are the protocols to address first, because they represent the highest potential consequence.

Protocol Validation: Enforcing Known-Good Traffic

Before getting to the question of which protocols to use, the NCSC guidance addresses how to validate the protocols and data that are currently in use. The concept of protocol validation — enforcing that traffic conforms to expected patterns and data formats — is a security control that can be applied regardless of whether you have yet been able to upgrade the protocols themselves.

The guidance describes a known-good model approach: rather than trying to detect all possible malicious traffic, define what legitimate traffic looks like and block everything that does not conform. This is particularly effective in OT environments because OT communication patterns are typically highly predictable and repetitive. A sensor that sends temperature readings every five seconds should always be sending packets of a specific format, from a specific source, to a specific destination, within expected value ranges. Any deviation from that pattern is an anomaly worth investigating.

Schema-based validation applies this principle systematically. At key trust boundaries — the OT/IT boundary, in front of SCADA control software, at protocol gateways — all traffic is validated against defined schemas that specify the expected structure and value ranges of both the protocol headers and data payloads. Traffic that does not conform to the schema is blocked before it reaches the target system. This provides a meaningful control even when the underlying protocol lacks cryptographic protections — because even if an attacker can craft a technically valid Modbus packet, schema validation can detect if the values in that packet are outside expected operational ranges.

The guidance makes a practical point about nested or encoded content that is worth noting: validation needs to go deep enough to catch malicious data that is embedded within apparently valid outer structures. A basic schema check on the outer format of a packet may not catch malicious content embedded in a base64-encoded field. Robust validation decodes and validates nested content, not just the outer envelope.
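To make the nested-content point concrete, here is a minimal sketch of deep validation in Python. The field names, the base64-encoded-JSON payload shape, and the value limits are illustrative assumptions, not anything specified in the NCSC guidance:

```python
import base64
import json

def validate_nested_payload(outer: dict) -> bool:
    """Validate a payload whose 'data' field carries base64-encoded JSON.

    A check on the outer envelope alone would miss malicious content
    hidden inside the encoded field, so the nested content is decoded
    and validated too. (Field names and limits are illustrative.)
    """
    # Outer envelope: exactly the keys we expect, nothing extra.
    if set(outer) != {"device_id", "data"}:
        return False
    try:
        # Strict base64 decoding, then parse the nested structure.
        decoded = base64.b64decode(outer["data"], validate=True)
        inner = json.loads(decoded)
    except ValueError:
        return False
    # Validate the decoded structure and value ranges, not just the envelope.
    return (
        isinstance(inner, dict)
        and set(inner) == {"temp_c"}
        and isinstance(inner["temp_c"], (int, float))
        and -40.0 <= inner["temp_c"] <= 150.0
    )
```

The key design point is that the decode step happens inside the validator: a payload is rejected both when the outer structure is wrong and when the inner, decoded content is out of range.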

📋 Example: Schema Validation in OT

A chemical processing facility uses Modbus to communicate temperature and pressure readings from field sensors to a SCADA system. The facility implements schema-based validation at the protocol gateway that sits between the field network and the SCADA server. The validation rules specify: expected source addresses, expected function codes (read-only; write function codes from field devices are blocked), expected register ranges, and acceptable value ranges for each register based on normal operational parameters. When a security test injects a Modbus packet with a write command attempting to change a setpoint, it is blocked at the gateway before reaching the SCADA system — even though the packet is technically a valid Modbus frame. The schema validation has enforced the operational boundary that the protocol itself does not enforce.
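The gateway rules in this example can be sketched as a simple known-good check. The source addresses, function codes, register ranges, and value limits below are illustrative placeholders, not the facility's real operational parameters:

```python
# A minimal known-good validator for Modbus-style requests, in the
# spirit of the gateway described above. All limits are illustrative.

ALLOWED_SOURCES = {"10.1.20.11", "10.1.20.12"}   # expected field devices
READ_ONLY_FUNCTIONS = {3, 4}                     # read holding/input registers
REGISTER_RANGE = range(0, 100)                   # expected register addresses
VALUE_RANGE = (0, 500)                           # plausible sensor readings

def allow_request(src: str, function_code: int, register: int, value: int) -> bool:
    """Return True only for traffic matching the known-good schema.

    Write function codes (5, 6, 15, 16, ...) are simply absent from the
    allow-set, so a technically valid Modbus write frame is still blocked.
    """
    return (
        src in ALLOWED_SOURCES
        and function_code in READ_ONLY_FUNCTIONS
        and register in REGISTER_RANGE
        and VALUE_RANGE[0] <= value <= VALUE_RANGE[1]
    )
```

Everything not explicitly allowed is denied — the inverse of a blocklist, and the reason the write-command injection in the example fails even though the frame itself is well-formed.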

Evaluating and Upgrading Industrial Protocols

The NCSC guidance provides specific direction on how to evaluate industrial protocols within an OT environment. The starting point is a clear preference for the latest secure versions of established industrial protocols. The guidance lists specific examples of upgrade paths: from DNP3 to DNP3-SAv5 (which adds authentication), from CIP to CIP Security, from Modbus to Modbus Security, from OPC DA to OPC UA. These upgrade paths exist because the protocol developers have recognized the same limitations the NCSC guidance identifies — and have produced secure variants that address them while maintaining interoperability with existing systems.

The evaluation criteria for industrial protocols in the NCSC guidance focus on four properties:

  • Cryptographic protections for authenticity and integrity — the ability to verify that a message came from a legitimate source and has not been altered in transit.
  • Crypto agility — the ability to switch and update cryptographic algorithms as standards evolve. Today's secure algorithm may not be adequate in five years, and in OT environments where systems have decade-long lifecycles, locking into a protocol whose cryptography cannot be updated is a long-term risk.
  • Open standards and interoperability, to facilitate vendor-agnostic solutions.
  • Where secure protocol versions are not available or not feasible, a documented business case for using insecure protocols, with compensating controls documented as part of the risk management framework.

The concept of crypto agility deserves particular attention given the emerging post-quantum cryptography transition. OT systems deployed today may still be operational when quantum computing renders current cryptographic algorithms inadequate. Protocols that support crypto agility — the ability to migrate to new cryptographic algorithms without replacing the entire protocol implementation — provide a degree of future-proofing that extends the useful security life of the investment. The NCSC guidance explicitly calls out post-quantum cryptography migration as a consideration when evaluating protocol crypto agility.

The IT/OT Boundary: Special Attention Required

The protocols used at the IT/OT boundary deserve specific attention because this is where the two security domains with different design philosophies and different risk profiles meet. Industrial control protocols — Modbus, OPC DA, EtherNet/IP, and similar — were designed for OT environments and should remain restricted to isolated OT network segments. They should not be the mechanism by which data crosses the OT/IT boundary.

The NCSC guidance describes the recommended pattern for IT/OT data exchange: OT data is replicated to an OT historian, which then replicates to a DMZ historian via a unidirectional, secure transfer mechanism. IT systems query the DMZ historian through a secure HTTP-based API with strong authentication — they never directly access OT systems, and there is no inbound connectivity from IT to OT. Industrial control protocols stay in the OT network. Data exchange at the boundary uses secure, standardized protocols designed for interoperability: OPC UA over TLS, MQTT over TLS, HTTPS.
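A minimal sketch of the IT-side half of this pattern, using Python's standard library: the hostname, API path, and bearer-token scheme are illustrative assumptions, but the TLS posture — certificate verification on, a modern protocol floor — is the point of using HTTPS at the boundary:

```python
import ssl
import http.client

def dmz_historian_context() -> ssl.SSLContext:
    """TLS settings for querying the DMZ historian API (illustrative).

    Certificate verification stays enabled and TLS 1.2 is the floor, so
    the boundary protocol carries the encryption and authentication that
    the industrial protocols inside the OT segment do not provide.
    """
    ctx = ssl.create_default_context()           # verifies server certs by default
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

def query_tag(host: str, token: str, tag: str) -> bytes:
    """Read-only GET against a hypothetical /api/v1/tags endpoint.

    Note what is absent: no industrial protocol, no write operation,
    and no connection to anything inside the OT network.
    """
    conn = http.client.HTTPSConnection(host, context=dmz_historian_context())
    try:
        conn.request("GET", f"/api/v1/tags/{tag}",
                     headers={"Authorization": f"Bearer {token}"})
        return conn.getresponse().read()
    finally:
        conn.close()
```

The IT system authenticates to the DMZ historian, reads, and disconnects — it never speaks Modbus, OPC DA, or any other industrial protocol, and it has nothing to connect to on the OT side.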

This pattern is important for several reasons. It enforces the unidirectional data flow principle at the protocol level, not just the network level. It ensures that the protocol at the boundary has the authentication and encryption properties appropriate for traffic that crosses trust zones. And it prevents IT security events from having direct protocol-level impact on OT systems — even if an IT system is compromised, it cannot issue commands to OT assets because it has no protocol-level connection to them.

💡 Pro Tip: If your organization currently has IT systems querying OT historians directly over industrial protocols, prioritize the migration to a DMZ-based historian model. This is one of the highest-impact architectural improvements you can make, because it enforces protocol-level separation at the IT/OT boundary — closing an attack path that network controls alone may not fully address. Document the migration as a risk-reduction initiative with quantified benefit, and use that framing to secure the engineering resources and project budget needed to execute it.

Building a Protocol Migration Roadmap

Few OT organizations will be in a position to immediately replace all insecure protocols with secure variants. The engineering effort required to upgrade protocol configurations across a complex OT environment is substantial, the testing required to validate that upgrades do not affect operational behavior is time-consuming, and the maintenance windows available in high-availability OT environments may be limited. What is realistic — and what the NCSC guidance calls for — is a documented migration roadmap that identifies insecure protocol use, prioritizes migration based on risk, and establishes a timeline for completing the transition.

The roadmap should address each instance of insecure protocol use: where it is, what risk it creates, what the migration path is, and when the migration will be completed. Instances where migration is not feasible in the short term should have compensating controls documented in the risk management framework — the controls that manage the risk in the interim — and a documented reason for the delay that has been reviewed and accepted by the named senior risk owner.
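As a sketch of what each roadmap entry might capture, the structure below records location, risk, migration path, target date, and interim compensating controls. The entries, risk scale, and dates are hypothetical examples, not guidance-mandated fields:

```python
from dataclasses import dataclass, field

@dataclass
class MigrationItem:
    """One roadmap entry: where the insecure protocol is, what risk it
    creates, the migration path, and when migration completes.
    (All values below are illustrative.)"""
    location: str
    protocol: str
    target: str
    risk: int                   # e.g. 1 (low) .. 5 (attacker can affect process)
    target_quarter: str
    compensating_controls: list[str] = field(default_factory=list)

roadmap = [
    MigrationItem("Boiler field network", "Modbus", "Modbus Security", 5,
                  "2025-Q3", ["schema validation at gateway"]),
    MigrationItem("Historian link", "OPC DA", "OPC UA over TLS", 3, "2026-Q1"),
    MigrationItem("Substation RTUs", "DNP3", "DNP3-SAv5", 4, "2025-Q4"),
]

# Work the highest-consequence instances first.
ordered = sorted(roadmap, key=lambda m: m.risk, reverse=True)
```

Sorting by risk gives the execution order the guidance implies: the field-network instance, where injected commands could affect physical process behavior, migrates before the lower-consequence historian link.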

The NCSC guidance offers a practical tip for building the migration roadmap: use vendor-published resources, including direct support from engineers who understand your current deployment, public materials on migration paths, and the technical documentation for products already owned. Most major industrial protocol vendors have published migration guidance from legacy to secure variants — using these resources can significantly accelerate the planning work needed to build a credible roadmap.

📋 Key Ideas: What Strong Principle 4 Implementation Looks Like

  • All industrial protocols in the OT environment are inventoried and assessed for security properties — authentication, integrity, encryption, and crypto agility.
  • A documented migration roadmap identifies each instance of insecure protocol use and the planned path and timeline to secure variants.
  • Schema-based validation is implemented at key trust boundaries to enforce known-good traffic patterns, even where protocol-level security cannot yet be implemented.
  • Industrial control protocols are restricted to isolated OT network segments; the IT/OT boundary uses secure interoperability protocols (OPC UA over TLS, MQTT over TLS, HTTPS).
  • Insecure protocol use that cannot be immediately remediated is documented with compensating controls and formal risk acceptance by a senior risk owner.
  • Protocol choices consider crypto agility, ensuring the organization can migrate to updated cryptographic algorithms — including post-quantum algorithms — as needed.

Regulatory and Compliance Implications

Protocol security is increasingly on the radar of regulators overseeing critical infrastructure. Frameworks like IEC 62443 — the international standard for industrial cybersecurity — include specific requirements for communication security that address protocol-level controls. Regulatory examinations of OT security are beginning to ask not just whether network segmentation is in place, but whether the protocols traversing that network provide the authentication and integrity protections appropriate for their security context.

Organizations that have a documented protocol inventory, a risk-based assessment of each protocol's security properties, and a migration roadmap for insecure protocols are significantly better positioned in regulatory examinations than those that cannot answer basic questions about what protocols their OT systems use and what security properties those protocols provide. The governance work that Principle 4 requires — inventory, assessment, roadmap, compensating controls — is also the evidence package that demonstrates mature protocol security governance to regulators and auditors.


💭 Final Thought

Protocols are the language of OT communication — and like any language, they carry assumptions about trust that may not hold in the environments where they are now being used. Industrial protocols were designed for a world of isolated networks and physical security perimeters. That world has changed. The protocols have not all kept pace. Principle 4 is the discipline of honestly assessing where the language your OT systems speak is no longer adequate for the security context they now operate in — and taking deliberate, documented steps to close that gap. It will not happen overnight. But the organizations that have a roadmap and are making progress against it are in a fundamentally different security posture than those that have not acknowledged the problem. Start the inventory, build the roadmap, and begin the migration with the highest-risk instances first. The investment in protocol security pays dividends across every other principle in this series.

This article is part of the InfoSec Made Easy series on the NCSC Secure Connectivity Principles for Operational Technology. Read the full series at www.infosecmadeeasy.com.
