NCSC Secure Connectivity Principle 6: Limit the Impact of Compromise


InfoSec Made Easy

OT Security Leadership | NCSC Guidance Series

Assume breach, contain the blast radius — designing OT environments to survive compromise


There is a maturity progression in how organizations think about security. The first stage is prevention — the belief that with enough controls, attacks can be stopped at the perimeter. The second stage is detection — the recognition that prevention is never perfect, and that the ability to detect when something has gone wrong is as important as trying to prevent it. The third stage, which the most mature security programs operate at, is resilience — designing systems so that when a compromise occurs, its impact is contained, its blast radius is limited, and recovery is achievable without catastrophic operational disruption.

Principle 6 of the NCSC's Secure Connectivity Principles for Operational Technology operates squarely in that third stage. It does not assume that the boundary controls in Principle 5 will never fail. It does not assume that all connectivity will always be secure. It designs the OT environment so that when — not if — a compromise occurs, the attacker's ability to move laterally, cause damage, and disrupt operations is meaningfully constrained by the architecture of the environment itself.

For CISOs and security leaders, this is one of the most important mindset shifts in OT security. The question is not just "how do we prevent attackers from getting in?" but "if an attacker gets in, how much damage can they actually cause, and how do we make that answer as small as possible?" The controls in this principle — segmentation, micro-segmentation, separation of duties, DMZ architecture, and traffic filtering — are the answers to that second question.

The Two Primary Risks: Contamination and Lateral Movement

The NCSC guidance identifies two distinct risks that the controls in Principle 6 are designed to address. Understanding both is important because they represent different attack vectors that require somewhat different defensive responses.

Contamination refers to the introduction of malicious code, compromised data, or insecure configurations into a trusted OT environment. The canonical example is a maintenance laptop that has been compromised and connects to the OT production network — the malware on the laptop propagates across OT systems, undermining their integrity. Contamination can also occur through compromised software updates, infected USB drives, or vendor-supplied components that contain hidden vulnerabilities. The defining characteristic of contamination is that the malicious content enters the environment through what appears to be a legitimate channel — which is precisely what makes it dangerous.

Lateral movement is the process by which an attacker who has gained initial access expands their reach within the environment. After compromising a foothold — perhaps a workstation in the corporate network, or a vendor remote access account — the attacker uses that foothold to map the internal network, compromise additional systems, escalate privileges, and ultimately reach the critical OT assets they are targeting. In a flat OT network, lateral movement can be devastatingly effective: once inside the perimeter, an attacker may find no meaningful internal barriers between their initial foothold and the most critical control systems.

The NCSC guidance makes an important point about the broader scope of lateral movement risk: it is not only an external threat. Insider threats and third-party contractors who already have a foothold on the network can use the same lateral movement techniques as external attackers. Identifying and mitigating "living off the land" techniques — where attackers use legitimate tools and processes to move laterally — is critical because these techniques are harder to detect than novel malware or exploit code.

💡 Pro Tip: When assessing lateral movement risk in your OT environment, map the "blast radius" of each access point. For every account or connection that has access to your OT environment, ask: if this account or connection were compromised, what is the maximum set of OT systems the attacker could reach? This exercise often reveals that a relatively small number of high-privilege accounts or poorly segmented access paths represent most of the total lateral movement risk. Reducing that blast radius — through privilege reduction, segmentation, and access scope restriction — is one of the highest-value security investments in any OT environment.
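The blast-radius exercise is, at heart, a reachability question over an access graph. The sketch below shows one way to compute it; the device names and the access map are hypothetical illustrations, not a reference to any real environment.

```python
from collections import deque

def blast_radius(access_graph, entry_point):
    """Return every asset reachable from an initial foothold by
    following permitted communication paths (breadth-first search)."""
    reachable = set()
    queue = deque([entry_point])
    while queue:
        node = queue.popleft()
        for target in access_graph.get(node, []):
            if target not in reachable:
                reachable.add(target)
                queue.append(target)
    return reachable

# Hypothetical access map: which assets each foothold can reach directly
access_graph = {
    "vendor-vpn": ["eng-ws-1"],
    "eng-ws-1": ["plc-1", "plc-2", "historian"],
    "historian": [],
    "plc-1": [],
    "plc-2": [],
}
```

Running `blast_radius(access_graph, "vendor-vpn")` returns every asset an attacker could reach from a compromised vendor VPN account; repeating the calculation after removing or narrowing an access path gives a concrete before-and-after measure of a proposed segmentation change.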

Segmentation: The Foundation of Impact Limitation

Network segmentation — dividing the OT network into smaller, functionally isolated zones — is the most fundamental control for limiting the impact of compromise. By restricting communication between zones and enforcing policies about what traffic is permitted to cross zone boundaries, organizations can contain threats within the zone where they originate. An attacker who compromises a device in one zone faces additional barriers before they can reach devices in another zone — barriers that do not exist in a flat network where all devices can freely communicate.

Effective segmentation in OT requires thinking about zones not just in terms of network topology but in terms of functional relationships and trust levels. The IEC 62443 Zones and Conduits model, referenced in the NCSC guidance, provides a useful framework: assets with similar security requirements and functional relationships are grouped into zones, and the communication paths between zones (conduits) are explicitly defined and controlled. This approach makes the segmentation architecture meaningful from a security perspective, rather than simply creating network divisions that do not reflect actual trust relationships.
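The Zones and Conduits idea can be expressed as a small policy model: assets belong to zones, and cross-zone traffic is permitted only where a conduit has been explicitly declared. The zone names, asset names, and conduit list below are hypothetical examples in the spirit of IEC 62443, not a recommended architecture.

```python
# Hypothetical zones-and-conduits model: assets grouped into zones,
# with cross-zone communication allowed only via declared conduits.
ZONES = {
    "enterprise": {"erp-server", "bi-dashboard"},
    "dmz": {"historian-mirror"},
    "control": {"scada-1", "plc-a", "plc-b"},
}

# Directed conduits: (source zone, destination zone)
CONDUITS = {("control", "dmz"), ("dmz", "enterprise")}

def zone_of(asset):
    for zone, members in ZONES.items():
        if asset in members:
            return zone
    raise ValueError(f"unknown asset: {asset}")

def flow_allowed(src, dst):
    """Intra-zone traffic is permitted; cross-zone traffic only
    if a conduit has been explicitly declared for that direction."""
    src_zone, dst_zone = zone_of(src), zone_of(dst)
    if src_zone == dst_zone:
        return True
    return (src_zone, dst_zone) in CONDUITS
```

Note that the conduits are directed: control-zone data can flow out to the DMZ and on to the enterprise, but nothing in this model permits the enterprise zone to initiate traffic into the control zone.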

Gateway devices — OT gateway devices, serial gateways, and network switches — deserve specific attention in segmentation design. The NCSC guidance notes that these devices aggregate multiple assets, making them high-value targets for machine-in-the-middle attacks. A compromised gateway can give an attacker a persistent position in the network that is difficult to detect and provides access to all the assets the gateway connects. These devices should be subject to regular updates, robust configuration management, and timely end-of-life replacement.

Micro-Segmentation: Granular Control Where It Matters Most

Micro-segmentation takes the segmentation concept to a finer level of granularity. Rather than dividing the OT network into broad zones — IT versus OT, or control versus monitoring — micro-segmentation applies controls at the level of individual devices, services, or protocols. The result is an environment where communication paths are tightly defined: a sensor is permitted to communicate with its associated controller, but not with other devices in the same zone; a monitoring system can receive data from control systems but cannot send commands back to them.

The security benefit of micro-segmentation is a direct reduction in lateral movement potential. Even within a zone, an attacker who has compromised one device cannot freely communicate with other devices in the same zone — because each device communicates only with the specific systems it has been explicitly authorized to reach. This does not eliminate the risk of lateral movement, but it makes it significantly harder and slower — which gives monitoring and detection systems more time to identify and respond to the threat before it reaches critical systems.

Micro-segmentation is particularly valuable in environments with mixed trust levels and legacy systems. A zone that contains a mix of current-generation devices with modern security capabilities and legacy devices that cannot be patched or upgraded can use micro-segmentation to isolate the legacy devices from the rest of the zone — protecting the modern systems from contamination while still allowing the necessary operational communication paths to remain active.

📋 Example: Micro-Segmentation in a Manufacturing OT Environment

A manufacturing facility implements micro-segmentation across its production OT network. Each production line's PLCs are isolated in a dedicated micro-segment that permits only: inbound commands from the SCADA system serving that line, outbound telemetry to the OT historian, and inbound firmware updates from the designated update server. Lateral communication between PLCs on different production lines is blocked. The engineering workstations that program PLCs are in a separate micro-segment with access only to the specific PLCs they manage, not to the broader OT network. When an attacker compromises an engineering workstation through a phishing attack on an engineer's credentials, they can reach the three PLCs that workstation manages — but the micro-segmentation prevents them from moving to the rest of the production network. The compromise is significant but contained. Without micro-segmentation, the same initial access would have provided a path to the entire production network.
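The manufacturing example above reduces to an explicit allow-list with a default-deny posture. The sketch below models it that way; the device names and rule structure are illustrative, not a product configuration.

```python
# Illustrative micro-segmentation allow-list for the manufacturing
# example: each permitted flow is declared explicitly, with purpose.
MICROSEG_RULES = {
    # (source, destination): permitted purpose
    ("scada-line1", "plc-line1"): "commands",
    ("plc-line1", "historian"): "telemetry",
    ("update-server", "plc-line1"): "firmware",
    ("eng-ws-1", "plc-line1"): "programming",
}

def permitted(src, dst):
    """Default-deny: any flow not explicitly listed is blocked."""
    return (src, dst) in MICROSEG_RULES
```

The design choice that matters here is the default: lateral PLC-to-PLC traffic is blocked not because a rule forbids it, but because no rule permits it. New communication paths must be deliberately added, which forces each one through review.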

Separation of Duties: Functional Isolation as Security Control

Separation of duties in OT environments means that no single system, role, or individual has complete control over all aspects of a critical function. This principle is familiar from financial controls — no single person should be able to both authorize and execute a financial transaction — and it applies with equal force in OT security. By dividing responsibilities and access across systems and users, organizations reduce the risk of accidental or malicious actions and limit the impact of any single compromise.

A practically important application of separation of duties in OT is the functional separation between systems involved in control, monitoring, and business operations. The NCSC guidance is explicit: monitoring, analytics, and business systems should not have direct control capabilities over OT assets. These systems should be designed to observe and analyze, not to command or alter operations. A business intelligence system that can read OT data for reporting purposes should not have a protocol-level connection that would allow it to issue commands to OT systems — even in an emergency, even if it would be operationally convenient.

The Browse Down principle is a related concept that the guidance introduces: the administration device used to manage a system should be trusted at least as much as, and preferably more than, the system being managed. This means that privileged access workstations used to manage critical OT systems should have stronger security properties — hardened configurations, up-to-date software, strong authentication — than the systems they manage. If the management device is less trusted than the managed system, a compromised management device becomes an attack path into the managed system with the full privileges of an administrator.
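The Browse Down rule can be stated as a simple comparison: a management session is acceptable only when the administering device's trust level is at least as high as the managed system's. The tier numbers and device names below are hypothetical; real trust classifications would come from your own device assurance model.

```python
# Hypothetical trust tiers: higher means more trusted. A hardened
# privileged access workstation (PAW) outranks the systems it manages;
# a general-purpose corporate laptop does not.
TRUST_TIER = {
    "paw-hardened": 3,
    "corp-laptop": 1,
    "scada-server": 2,
    "plc-a": 2,
}

def browse_down_ok(admin_device, managed_system):
    """Permit administration only from an equally or more trusted device."""
    return TRUST_TIER[admin_device] >= TRUST_TIER[managed_system]
```

A corporate laptop failing this check is exactly the scenario the principle guards against: administering a SCADA server from a less-trusted device hands any malware on that device a path into the server with administrator privileges.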

Traffic Control: From Static Rules to Dynamic Inspection

Controlling traffic between OT zones is the operational implementation of segmentation policy. The NCSC guidance describes a hierarchy of traffic control mechanisms, from the simplest to the most sophisticated, noting that the minimum acceptable level is static network filtering between zones.

Host-based controls operate at the individual device level, enforcing rules based on source and destination addresses, ports, protocols, and traffic directionality. These should be the last line of defense for a device — not the primary one — but they provide an important additional layer when other controls fail. The default posture for host-based controls should be deny-all, with only explicitly authorized traffic permitted.
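A deny-all host-based posture can be sketched as an ordered list of allow rules with an implicit drop for anything unmatched. The addresses, ports, and field names below are illustrative assumptions, not a reference to any particular host firewall product.

```python
# Minimal host-based filter sketch: explicit allow rules with an
# implicit deny-all default. Addresses and ports are hypothetical.
ALLOW_RULES = [
    {"direction": "in",  "src": "10.1.2.10", "port": 502,  "proto": "tcp"},  # Modbus from SCADA
    {"direction": "out", "dst": "10.1.2.20", "port": 4712, "proto": "tcp"},  # telemetry to historian
]

def host_filter(packet):
    """Allow a packet only if some rule matches every field it
    specifies; otherwise drop (default deny)."""
    for rule in ALLOW_RULES:
        if all(packet.get(k) == v for k, v in rule.items()):
            return True
    return False
```

As the text notes, this is the last line of defense: a packet from an unexpected source is dropped at the host even if every upstream network control has already failed to stop it.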

Static network controls — route filtering and access control lists on routers and switches — enforce zone boundaries at the network layer. They are straightforward to implement, well-understood, and effective for enforcing basic perimeter rules. In environments where performance and predictability are critical, the simplicity of static controls is a genuine advantage. Every OT environment should have at least this level of traffic control between network zones.

Dynamic network controls add context-awareness to traffic enforcement. Stateful filtering understands whether a packet is part of a legitimate, established session — allowing return traffic for an authorized request without requiring an explicit rule for the response, and blocking packets that do not correspond to any known legitimate session. Deep packet inspection (DPI) goes further, analyzing the full payload of network packets and making enforcement decisions based on protocol-specific command content — distinguishing between read and write commands, validating data values against expected ranges, and blocking traffic that violates operational norms.
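The read-versus-write distinction DPI can enforce is concrete in protocols like Modbus, where function codes separate reads (1 to 4) from writes (5, 6, 15, 16). The sketch below applies a hypothetical zone policy on top of those standard code groupings: monitoring zones may read, only the control zone may write, and unknown codes are dropped.

```python
# DPI-style policy sketch for Modbus function codes. The code
# groupings follow the Modbus specification; the zone policy
# itself is a hypothetical example.
READ_CODES  = {1, 2, 3, 4}    # read coils / discrete inputs / registers
WRITE_CODES = {5, 6, 15, 16}  # write coils / registers

def dpi_allow(src_zone, function_code):
    if function_code in READ_CODES:
        return True
    if function_code in WRITE_CODES:
        return src_zone == "control"  # only the control zone may write
    return False                      # unknown codes blocked by default
```

This is the kind of enforcement static ACLs cannot provide: both a read and a write arrive on the same TCP port, so only payload inspection can tell a telemetry query from a command that alters the process.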

Intrusion detection and prevention systems (IDS/IPS) complement segmentation controls by identifying threats that may bypass or exploit the enforcement controls. IDS provides passive monitoring — alerting operators to suspicious patterns, protocol violations, and behavioral anomalies without blocking traffic. IPS adds active blocking capability, responding to identified threats in real time. Both systems are most effective when deployed at critical network boundaries, where they can monitor traffic crossing between zones and detect compromise attempts before they reach their targets.

💡 Pro Tip: When implementing DPI in OT environments, carefully consider where encrypted traffic creates inspection gaps. DPI that relies on decrypting TLS-protected sessions requires the inspecting device to be fully trusted, because it handles plaintext traffic after decryption. Any compromise of the DPI device exposes the decrypted content of all protected sessions. In OT environments, where the security of operational data may be critical, this trade-off needs to be explicitly assessed and accepted before DPI with TLS inspection is deployed. In many cases, the inspection point should be positioned where unencrypted OT protocol traffic can be inspected directly, rather than requiring decryption of encrypted management traffic.

📋 Key Ideas: What Strong Principle 6 Implementation Looks Like

  • The OT network is divided into functional zones with explicitly defined and controlled communication paths between zones.
  • Micro-segmentation restricts lateral communication within zones, limiting attacker movement even after initial compromise.
  • Monitoring, analytics, and business systems have no direct control capabilities over OT assets — functional separation is enforced architecturally.
  • Administration of OT systems is conducted only from privileged access workstations with security properties at least equivalent to the systems being managed.
  • Static network controls are implemented as a minimum between all OT zones; dynamic controls and DPI are implemented at high-risk boundaries.
  • IDS/IPS is deployed at critical zone boundaries to provide detection and response capability against threats that bypass preventive controls.
  • Gateway and aggregation devices are subject to regular updates and configuration review, given their high value as lateral movement pivot points.

The Business Case for Impact Limitation

For security leaders who need to make the case for investment in segmentation and micro-segmentation, the business argument is grounded in the same risk quantification principles that apply to any OT security investment. The question is not whether a compromise will ever occur; in a sufficiently sophisticated threat environment, it is when. The question that drives the investment case is how much damage that compromise will cause.

A flat OT network with no internal segmentation means that initial access to any device provides a potential path to every device. A well-segmented network with micro-segmentation means that initial access to any device provides access only to that device and its immediate communication partners. The difference in potential impact between those two scenarios — measured in operational disruption, recovery cost, safety risk, and regulatory consequence — is the business case for the investment in segmentation architecture.

Organizations that have implemented segmentation and later experienced a compromise have consistently reported that the segmentation materially limited the scope of the incident. This real-world evidence is valuable in executive conversations about security investment: not abstract risk calculations, but documented cases where architectural controls made the difference between a contained incident and an enterprise-wide operational crisis.


💭 Final Thought

Limiting the impact of compromise is an acknowledgment of reality: in a sufficiently determined and capable threat environment, no preventive control is perfect. The organizations that recover from OT security incidents without catastrophic operational disruption are not the ones that never got compromised — they are the ones that designed their environments so that compromise, when it occurred, was containable. Every zone boundary, every micro-segment, every access restriction, every traffic control rule is a barrier that an attacker must overcome before reaching their real objective. The more barriers there are, the more time defenders have to detect and respond, and the less damage an attacker can cause before that response occurs. Build those barriers deliberately, maintain them rigorously, and test them regularly. They are what stands between a security incident and an operational crisis.

This article is part of the InfoSec Made Easy series on the NCSC Secure Connectivity Principles for Operational Technology. Read the full series at www.infosecmadeeasy.com.
