InfoSec Made Easy
OT Security Leadership | NCSC Guidance Series
Reducing attack surface in OT environments — why how you connect matters as much as whether you connect
In cybersecurity, the concept of attack surface is well understood: the more accessible your systems are to potential adversaries, the more opportunity exists for exploitation. In IT environments, attack surface management has become a mature discipline, with tools, processes, and dedicated teams focused on identifying and reducing unnecessary exposure. In OT environments, the same concept applies — but the stakes, the constraints, and the practical approaches are significantly different.
Principle 2 of the NCSC's Secure Connectivity Principles for Operational Technology is focused on exposure management: proactively identifying, assessing, and mitigating the risks associated with how accessible your OT assets are to external or adjacent networks. The principle is built around a straightforward insight — the more assets you expose at the network edge, the broader your attack surface becomes. And in OT environments, where the consequences of compromise can include physical harm and disruption to essential services, a broad attack surface is not an acceptable risk to carry indefinitely.
For security leaders, this principle is both a framework for thinking about OT exposure and a practical guide to the specific controls that reduce it. It covers everything from architectural principles — like initiating all connections outbound from within the OT environment — to tactical measures like just-in-time access and external attack surface monitoring. Together, these controls form a coherent approach to minimizing the window of opportunity that attackers have to exploit OT connectivity.
Understanding Exposure in OT Contexts
The NCSC guidance defines exposure in terms of where an asset sits within the system architecture and how accessible it is to external or adjacent networks, taking into account the depth of defence controls around it. This is a more nuanced definition than simply "is this system on the internet?" — it recognizes that exposure exists on a spectrum and that the appropriate response depends on the specific exposure context.
An OT asset that is directly accessible from the public internet occupies one end of that spectrum. An asset that is accessible only from within an isolated OT zone, through multiple layers of authenticated controls, sits at the other end. Most real-world OT environments have assets distributed across this spectrum — some well-protected, some surprisingly exposed, and often a significant number that fall somewhere in between, protected by controls that may have been appropriate when they were implemented but have not kept pace with evolving threats.
One of the practical insights in the NCSC guidance is the reminder that network edges in OT can be hard to identify. An unsecured radio link inside an OT network may not appear on network diagrams but still forms part of the network edge. A serial connection to an external monitoring system may have been installed years ago and never formally documented. A vendor-maintained device on a remote section of the network may have been connected in a way that bypasses normal perimeter controls. Exposure management in OT starts with a complete and accurate inventory of all data flows and connectivity — which is why the NCSC's companion guidance on maintaining a definitive view of OT architecture is a prerequisite, not an optional extra.
💡 Pro Tip: Before you can limit your exposure, you have to know what is exposed. Run an external attack surface management (EASM) scan against your public IP address space and compare the results against your documented asset inventory. The gaps between what you expected to find and what the scan discovers are your undocumented exposure — and in every mature OT environment, there are surprises. Schedule this as a regular activity, not a one-time exercise, because your exposure profile changes every time a new connection is added or a vendor makes a configuration change.
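The inventory-versus-scan comparison in the Pro Tip above can be sketched as a simple set difference. This is an illustrative outline only: the IP addresses are documentation placeholders, and a real comparison would key on more than bare addresses (ports, services, certificates).

```python
# Minimal sketch: compare an EASM scan's findings against the documented
# asset inventory. Addresses are illustrative placeholders (TEST-NET-3).

def exposure_gaps(documented: set[str], scanned: set[str]) -> dict[str, set[str]]:
    """Return undocumented exposure and inventory entries the scan never saw."""
    return {
        # Reachable from the internet but absent from the inventory:
        # the undocumented exposure the Pro Tip describes.
        "undocumented": scanned - documented,
        # In the inventory but not seen by the scan: possibly
        # decommissioned, renumbered, or outside the scanning scope.
        "unseen": documented - scanned,
    }

inventory = {"203.0.113.10", "203.0.113.11", "203.0.113.20"}
scan_results = {"203.0.113.10", "203.0.113.11", "203.0.113.99"}

gaps = exposure_gaps(inventory, scan_results)
print(gaps["undocumented"])  # {'203.0.113.99'} -> investigate immediately
print(gaps["unseen"])        # {'203.0.113.20'} -> confirm decommissioned
```

Both halves of the output matter: the "undocumented" set is your unmanaged exposure, while the "unseen" set often reveals stale inventory records or scanning blind spots.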
The Outbound Connection Principle
One of the most important architectural concepts in Principle 2 is the requirement that all connections to the OT environment should be initiated as outbound connections from within it. This principle deserves careful attention because it is both powerful in its effect and frequently misunderstood in its implementation.
The reason this matters is straightforward: inbound connections require open ports at the network perimeter. Open ports are a mechanism through which external parties can initiate contact with your OT assets. Every open inbound port represents an opportunity for an attacker to attempt to establish a connection — either by exploiting a vulnerability in the service listening on that port or by using stolen credentials to authenticate as a legitimate user. Eliminating inbound ports eliminates that entire class of attack surface.
But many OT operational requirements seem to require inbound connectivity — vendor support, remote monitoring, software update delivery, business system integration. The NCSC guidance addresses this through the concept of brokered connections. Rather than opening inbound ports to OT systems, external parties connect to an intermediary system — a broker — located in a separate, security-controlled network segment such as a DMZ. The broker then relays the connection to the OT asset, ensuring that the OT system is never directly exposed to external networks and that all access is mediated, monitored, and controlled.
This architecture achieves the operational requirement — external parties can still reach OT assets for legitimate purposes — without creating inbound exposure on the OT network boundary. The security properties of the brokered connection are critical: it must use modern authentication methods, be actively maintained and updated, and provide full logging of all sessions that pass through it.
📋 Example: Brokered Connection in Practice
A manufacturing facility needs to allow an equipment vendor to perform remote diagnostics on a programmable logic controller. Rather than opening a VPN tunnel that terminates directly in the OT network, the facility deploys a secure remote access gateway in a DMZ. When the vendor needs access, they authenticate to the gateway using phishing-resistant MFA. The gateway validates the session, logs all activity, and establishes a relay connection to the specific PLC — while blocking access to all other OT assets. The OT network has no open inbound ports. The vendor gets the access they need. The security team has a complete record of every command executed during the session. This is the brokered connection model working as intended.
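The "no open inbound ports" property in the example above is one you can audit mechanically. The sketch below checks a simplified, hypothetical firewall rule export for inbound allow rules at the OT boundary; real exports vary by vendor, so treat the rule shape as an assumption, not a product's actual format.

```python
# Sketch: audit a simplified, hypothetical firewall policy for the
# outbound-only property at the OT boundary.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    direction: str  # "inbound" or "outbound", relative to the OT zone
    action: str     # "allow" or "deny"
    port: int
    comment: str = ""

def inbound_violations(rules: list[Rule]) -> list[Rule]:
    """Rules that let a connection be initiated from outside the OT zone."""
    return [r for r in rules if r.direction == "inbound" and r.action == "allow"]

policy = [
    Rule("outbound", "allow", 443, "relay to broker in DMZ"),
    Rule("inbound", "deny", 0, "default deny"),
    Rule("inbound", "allow", 3389, "legacy vendor RDP -- should not exist"),
]

for rule in inbound_violations(policy):
    print(f"VIOLATION: inbound allow on port {rule.port} ({rule.comment})")
```

Running a check like this against every boundary rule set, on every change, turns the outbound-only principle from an architectural intention into a continuously verified property.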
Reducing Time of Exposure: Just-in-Time Access
A second powerful concept in Principle 2 is the idea of reducing the duration of exposure, not just its breadth. The guidance makes the point that not all connections need to be continuously active. Where possible, connectivity should be enabled only when it is actually required and disabled when it is not. This just-in-time access model significantly reduces the window of opportunity for attackers to exploit any given connection.
In practice, this means moving away from standing connectivity where persistent connections remain open and instead moving to on-demand access that is explicitly provisioned for a specific task and automatically revoked when that task is complete. For vendor remote access, this might mean a workflow where the vendor requests access through a ticketing system, access is approved and provisioned for a defined time window, and the connection is automatically closed at the end of that window regardless of whether the work is complete.
The operational benefit of this approach is that it directly limits the exposure window. A standing VPN tunnel that is available 24 hours a day provides an attacker with 24 hours of opportunity to exploit it — including the hours when no legitimate user is actively using it. A just-in-time access model that enables connectivity for two hours per week provides an attacker with roughly 1% of that opportunity: 2 of the 168 hours in the week. The attack surface has not been eliminated, but it has been dramatically reduced through disciplined access management.
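The expiry mechanics behind a just-in-time grant can be sketched in a few lines. The class and field names below are illustrative, not any specific PAM product's API; the key behavior is that access exists only inside the approved window and lapses automatically.

```python
# Sketch of a just-in-time access grant: connectivity is provisioned for a
# fixed window and treated as closed afterwards, regardless of task state.

from datetime import datetime, timedelta, timezone

class AccessGrant:
    def __init__(self, requester: str, target: str, start: datetime, hours: float):
        self.requester = requester
        self.target = target
        self.start = start
        self.end = start + timedelta(hours=hours)

    def is_active(self, now: datetime) -> bool:
        # Access exists only inside the approved window.
        return self.start <= now < self.end

# Two hours of vendor access, approved for one specific (hypothetical) PLC.
start = datetime(2025, 3, 3, 9, 0, tzinfo=timezone.utc)
grant = AccessGrant("vendor-support", "plc-07", start, hours=2)

print(grant.is_active(start + timedelta(hours=1)))  # True: inside window
print(grant.is_active(start + timedelta(hours=3)))  # False: auto-expired

# The exposure-window arithmetic from the text: 2 hours/week vs 24x7.
weekly_fraction = 2 / (24 * 7)
print(f"{weekly_fraction:.1%} of the standing-tunnel exposure")  # 1.2%
```

In a production workflow the enforcement point would sit in the broker or firewall, not in application code, but the contract is the same: the window closes on schedule, and reopening it requires a fresh, reviewed request.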
For security leaders building maturity in this area, just-in-time access is also a governance mechanism. When access requires explicit provisioning for each use, there is a natural opportunity to review whether the access is still necessary, whether the scope of access is appropriate, and whether the requesting party is still a trusted entity. This turns access management into a continuous assurance activity rather than a one-time provisioning task.
Managing Obsolete Devices: Connectivity Without Direct Exposure
Obsolete devices present a specific and common challenge in OT exposure management. As addressed in Principle 1, many OT environments contain equipment that is past end-of-life, running unpatched software, and lacking modern security capabilities. The NCSC guidance is explicit: obsolete devices are unsuitable for direct external connectivity beyond the OT network boundary. The risk of exploitation is too high, and the device's own security capabilities are too limited to provide meaningful protection.
But operational reality often means these devices cannot be replaced immediately. They may be embedded in processes that require significant engineering effort to migrate, or procurement and commissioning timelines may extend remediation over months or years. In the interim, these devices may still need to communicate with other systems or receive vendor support. The guidance provides a set of compensating controls for this transition period.
Network segmentation isolates the obsolete device from the wider OT network, limiting the blast radius if the device is compromised. Trusted boundary controls — hardened protocol gateways and jump hosts — mediate all communication to and from the obsolete device, ensuring that any traffic is translated, inspected, and monitored before it reaches or leaves the device. Access restrictions ensure that the obsolete device can only communicate with the specific systems it needs to, not with the broader network. And comprehensive logging ensures that all interactions with the obsolete device are recorded and monitored for anomalous behavior.
These are compensating controls, not permanent solutions. The NCSC guidance is clear that organizations should view them as temporary measures while establishing a timeline for asset replacement. The practical implication for security leaders is that every obsolete device with network connectivity should have both a compensating control set and a documented remediation timeline. Without the timeline, compensating controls become permanent workarounds — and the risk they are managing never actually gets resolved.
💡 Pro Tip: When you deploy compensating controls for an obsolete OT device, document both the control set and the intended replacement date in your risk register. Review this list quarterly. If replacement dates slip — and they will — document why and get explicit sign-off from the named senior risk owner to extend the compensating control period. This creates accountability for the delay and ensures that leadership is making an active, documented choice to accept the extended risk rather than simply allowing it to persist by default.
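The quarterly review in the Pro Tip above lends itself to automation. This sketch flags risk-register entries whose replacement dates have slipped without a documented extension; the field names and devices are illustrative assumptions about how such a register might be structured.

```python
# Sketch of the quarterly risk-register review: flag obsolete devices whose
# replacement dates have passed without sign-off for an extension.
# Register shape and device names are illustrative.

from datetime import date

register = [
    {"device": "hmi-legacy-01", "replace_by": date(2024, 12, 31), "signoff": None},
    {"device": "rtu-04", "replace_by": date(2026, 6, 30), "signoff": None},
]

def overdue_entries(register: list[dict], today: date) -> list[dict]:
    """Entries past their replacement date with no documented extension."""
    return [e for e in register
            if e["replace_by"] < today and e["signoff"] is None]

for entry in overdue_entries(register, today=date(2025, 4, 1)):
    print(f"{entry['device']}: replacement overdue -- "
          "escalate to senior risk owner for a documented extension")
```

The useful property is that an overdue entry cannot silently persist: it either gains a recorded sign-off or it keeps appearing in the review output until someone acts.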
External Attack Surface Management: Seeing What Attackers See
External attack surface management tools take a perspective that is extremely valuable for OT security: they see your environment the way an attacker sees it. These tools index internet-connected assets and protocols, identifying exposed web servers, remote access portals, and — critically — industrial control system components that are accessible from the public internet. If your systems are visible to these scanning services, the NCSC guidance notes, they are highly likely to be found and targeted by malicious actors.
The value of EASM for OT security is in the gap analysis it enables. Your internal asset inventory tells you what you intended to expose. An EASM scan tells you what is actually exposed. The difference — assets you did not know were accessible, ports that should have been closed, services exposed to wider networks than intended — represents your unmanaged exposure. Every item in that gap is a potential attack entry point that exists because of a misconfiguration, an unauthorized change, or a connection that was set up and forgotten.
Implementing EASM for OT requires some specific considerations that the NCSC guidance addresses directly. Organizations need to maintain a current list of all public IP addresses, including dynamic addresses used by cloud services. The scanning scope must cover the full public IPv4 and IPv6 ranges belonging to the organization, not just the addresses currently in active use. And where third parties provide connectivity to your OT environment, those endpoints may also warrant inclusion in your exposure management program.
Any device found through an EASM scan should be treated as at risk until investigated. The investigation should establish whether the exposure was intentional, whether the device has appropriate controls for its exposure level, whether it has been updated and hardened, and whether the exposure creates additional risk for connected OT systems. Where investigation reveals inadequate controls for the exposure context, remediation actions should be taken immediately — not scheduled for the next maintenance window.
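The triage rule above — at risk until investigated, remediate immediately where controls are inadequate — can be expressed as a small state function. The question keys below paraphrase the four investigation points from the paragraph; the record shape is an assumption for illustration.

```python
# Sketch: every EASM-discovered device stays "at risk" until all four
# investigation questions are answered; a failing answer triggers
# immediate remediation. Question keys and record shape are illustrative.

INVESTIGATION_QUESTIONS = (
    "exposure_intentional",
    "controls_appropriate_for_exposure",
    "updated_and_hardened",
    "no_added_risk_to_connected_ot",
)

def triage_status(record: dict) -> str:
    answers = record.get("investigation", {})
    if any(q not in answers for q in INVESTIGATION_QUESTIONS):
        return "at risk (investigation incomplete)"
    if not all(answers[q] for q in INVESTIGATION_QUESTIONS):
        return "remediate immediately"
    return "accepted exposure"

print(triage_status({"host": "203.0.113.99", "investigation": {}}))
print(triage_status({"host": "203.0.113.10",
                     "investigation": {q: True for q in INVESTIGATION_QUESTIONS}}))
```

Encoding the rule this way makes the default explicit: a discovered device can never sit in an "unknown" state, because the absence of an investigation record is itself the at-risk state.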
Wireless Networks: The Invisible Boundary Problem
Wireless connectivity in OT environments deserves specific attention because it changes the nature of physical security as a control. In a wired OT network, physical access to the facility is a meaningful layer of security — an attacker who cannot reach a cable cannot connect to the network. In a wireless environment, that physical boundary no longer constrains network access. Wireless signals extend beyond site boundaries, and an attacker within signal range can attempt to intercept communications or inject traffic without ever entering the building.
This does not mean wireless connectivity should be avoided in OT environments — in many cases it provides significant operational benefits and may be essential for certain monitoring or control applications. But it does mean that wireless networks require security controls that account for their inherently boundary-crossing nature. Signal strength should be tuned to minimize range beyond operational necessity. Authentication should be robust, using modern protocols rather than shared keys. Encryption should be current and properly configured. And monitoring should specifically look for unauthorized devices attempting to connect to or interact with wireless OT systems.
The NCSC guidance emphasizes that understanding the factors affecting signal security is critical to building proportionate controls. This is a principle that applies to all wireless technologies used in OT — from Wi-Fi to Bluetooth to proprietary industrial wireless protocols — each of which has different security characteristics, range profiles, and vulnerability surfaces.
📋 Key Ideas: What Strong Principle 2 Implementation Looks Like
- All OT connections are initiated outbound from within the OT environment; inbound ports at the OT boundary are eliminated.
- Third-party and vendor access uses brokered connections through a hardened DMZ gateway, never direct connections to OT systems.
- Just-in-time access is implemented for all non-continuous connectivity requirements, limiting the exposure window.
- Admin interfaces are accessible only through privileged access workstations; physical access-only administration is enforced for the highest-risk systems.
- Obsolete devices are isolated with documented compensating controls and tracked replacement timelines.
- EASM scanning is run regularly, with results compared against the documented asset inventory and gaps investigated promptly.
- Wireless networks in OT environments are configured with security-first settings and monitored for unauthorized access attempts.
The Compliance Dimension: Why Regulators Care About Exposure
For organizations in regulated sectors, exposure management is not just a security best practice — it is increasingly a regulatory expectation. Regulators examining critical infrastructure organizations want to see evidence that exposed OT assets are identified, inventoried, and actively managed. An organization that cannot demonstrate it knows what OT assets are externally accessible — or that has externally accessible OT systems with no documented controls — presents a significant finding in any serious regulatory examination.
The NCSC guidance provides a useful framework for documenting the exposure management program in a way that satisfies regulatory scrutiny. The combination of a current asset inventory, documented EASM processes, recorded investigation of discovered assets, and tracked remediation of inadequate controls gives regulators the evidence they need to assess the maturity of the exposure management approach. Organizations that build these processes proactively will find regulatory examinations easier and will be in a much stronger position to defend their security posture when questions arise.
💠 Final Thought
Every OT connection that is not strictly necessary is an attack surface that does not need to exist. Every connection that is necessary but continuously active is an attack surface that could be smaller with just-in-time access. Every inbound port is an attack surface that could be eliminated with outbound-only architecture. Principle 2 is ultimately about the discipline of minimizing opportunity — not by refusing connectivity, but by shaping it so that attackers have less to work with. That discipline requires continuous attention: exposure changes every time a new connection is added, a vendor makes a configuration change, or a device is deployed without full security review. The organizations that manage exposure well are the ones that treat it as an ongoing operational function, not a one-time audit task.
This article is part of the InfoSec Made Easy series on the NCSC Secure Connectivity Principles for Operational Technology. Read the full series at www.infosecmadeeasy.com.
