InfoSec Made Easy
OT Security Leadership | NCSC Guidance Series
Why OT connectivity complexity is a security problem — and how structured architecture solves it
Walk through the network diagrams of most mature OT environments and you will find the same story told in topology: an accumulation of connectivity decisions made over years and decades, each individually justified at the time, collectively creating a tangle of access paths, vendor tunnels, remote monitoring links, and business system integrations that no single person fully understands. Each connection was added to solve a specific operational problem. No one was tasked with managing the cumulative result.
This is the problem that Principle 3 of the NCSC's Secure Connectivity Principles for Operational Technology directly addresses. The connectivity models of OT systems are inherently complex, involving multiple stakeholders, evolving business requirements, and layers of integration that build up over time. Left unmanaged, this complexity becomes a security liability in its own right — not because any individual connection is insecure, but because the aggregate effect is an environment that is difficult to monitor, hard to control, and expensive to defend.
Centralizing and standardizing network connections is the principle's response to that challenge. By consolidating access points, enforcing uniform security controls, and establishing repeatable patterns for how connectivity is implemented, organizations can reduce complexity, improve visibility, and make the security management of OT connectivity operationally sustainable. For security leaders, this principle is about moving from reactive management of an inherited patchwork to proactive architecture that can be understood, controlled, and improved over time.
The Complexity Problem in OT Connectivity
It is worth being specific about why complexity is a security problem, not just an operational inconvenience. The NCSC guidance frames the challenge clearly: as OT connectivity models grow more complex, the attack surface expands. Each additional connection — especially if implemented in an ad hoc or bespoke manner — introduces potential vulnerabilities. The cumulative effect is an environment where monitoring and control become progressively harder, and where the probability that an anomalous connection goes undetected increases with every layer added.
This complexity manifests in several specific ways. Monitoring coverage degrades as the number of distinct access paths grows, because each path requires specific monitoring configuration that may not be consistently implemented. Security policy enforcement becomes inconsistent, because bespoke connections may not go through the controls applied to standardized paths. Change management becomes harder, because changes to one connection may have undocumented impacts on others. And incident response becomes slower, because understanding the scope of a compromise requires reconstructing a connectivity model that was never fully documented.
The business pressure that drives this complexity is real. New vendors need access. Business systems need operational data. Regulatory requirements demand new reporting integrations. Remote monitoring expands to cover more assets. Each of these requirements creates a new connectivity decision — and without a centralized architecture to channel these decisions into, each one tends to be solved individually, in whatever way is fastest or cheapest at the time. The cumulative result is the patchwork that most experienced OT security professionals recognize immediately.
💡 Pro Tip: When you inherit a complex OT connectivity environment, do not try to immediately redesign everything. Start by documenting what exists — every access path, every vendor connection, every integration — and then categorize each one: is it still active? Still necessary? Is it following the organization's current security standards? That categorization exercise typically reveals a significant number of connections that can be simply decommissioned, which reduces complexity without requiring any architectural redesign. Clean up before you build up.
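The categorization exercise in the tip above lends itself to a simple triage structure. The sketch below is illustrative only: the field names and the three-way triage buckets (decommission, remediate, keep) are assumptions about how an inventory might be organized, not part of the NCSC guidance.

```python
from dataclasses import dataclass

# Illustrative inventory record; field names are assumptions.
@dataclass
class Connection:
    name: str
    owner: str            # who is accountable for this path
    active: bool          # is traffic still observed on it?
    necessary: bool       # does a current business need justify it?
    meets_standard: bool  # does it follow current security standards?

def triage(connections):
    """Sort an inventory into decommission / remediate / keep buckets."""
    result = {"decommission": [], "remediate": [], "keep": []}
    for c in connections:
        if not c.active or not c.necessary:
            result["decommission"].append(c.name)  # clean up before you build up
        elif not c.meets_standard:
            result["remediate"].append(c.name)     # bring up to the standard
        else:
            result["keep"].append(c.name)
    return result

inventory = [
    Connection("vendor-vpn-legacy", "OT ops", active=False, necessary=False, meets_standard=False),
    Connection("historian-replication", "Data team", active=True, necessary=True, meets_standard=True),
    Connection("contractor-rdp", "Maintenance", active=True, necessary=True, meets_standard=False),
]
print(triage(inventory))
```

Even a spreadsheet can hold this data; the point is that the triage logic is explicit and repeatable, so the same questions are asked of every connection.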
What Centralized Architecture Actually Means
When the NCSC guidance calls for centralized connectivity, it is describing a specific architectural pattern: rather than allowing each new connectivity requirement to create its own access path into the OT network, all external connectivity flows through a small number of hardened, monitored, and consistently managed gateways. These gateways — typically located in a DMZ between the OT network and external networks — become the single point of enforcement for security controls, authentication, and logging.
The practical implication is that instead of deploying separate VPN endpoints within the OT network for each third party, all vendor remote access is centralized through a single secure remote access gateway in the DMZ. This gateway enforces consistent access controls, captures complete session logs for all vendors, and applies the same authentication requirements regardless of which vendor is connecting. The OT network has one well-understood entry point for vendor traffic, rather than many partially understood points scattered across the environment.
This centralization achieves several security properties simultaneously. Monitoring is simpler because there is one place to instrument, not many. Security controls are consistent because all traffic passes through the same enforcement point. Change management is cleaner because changes to vendor access go through one process. And incident response is faster because the scope of any compromise through the vendor access channel is bounded by the centralized gateway architecture, rather than potentially spanning multiple independent connections.
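One concrete payoff of centralization is that "is this flow going through an approved gateway?" becomes a trivial check. A minimal sketch, in which the gateway names and flow records are hypothetical:

```python
# Hypothetical list of approved, centrally managed DMZ gateways.
APPROVED_GATEWAYS = {"dmz-ra-gw-01", "dmz-data-gw-01"}

# Hypothetical flow records, e.g. derived from firewall or NetFlow data.
flows = [
    {"src": "vendor-a", "ingress": "dmz-ra-gw-01", "dst": "plc-07"},
    {"src": "vendor-b", "ingress": "site-fw-legacy", "dst": "hmi-02"},
]

# Any flow that bypasses the approved gateways is a policy violation.
violations = [f for f in flows if f["ingress"] not in APPROVED_GATEWAYS]
for v in violations:
    print(f"non-centralized path: {v['src']} -> {v['dst']} via {v['ingress']}")
```

In a decentralized environment this check is impossible to write, because there is no short list of approved entry points to check against.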
For organizations with complex legacy connectivity, implementing full centralization may be a multi-year journey. The guidance explicitly acknowledges that legacy products or brownfield deployments may not align with new connectivity patterns, and calls for a clear process to manage interim risks and migrate these systems within a sensible timeframe. This is not a reason to delay — it is a reason to start planning the migration now, with clear milestones and accountability for progress.
The Three Properties of Well-Managed OT Connectivity
The NCSC guidance organizes the goals of Principle 3 around three properties that well-managed OT connectivity should exhibit: flexible, repeatable, and categorized. These three properties work together to create an architecture that can be consistently applied, efficiently maintained, and appropriately tailored to the specific requirements of each connectivity type.
Flexible: OT connectivity architecture must be able to adapt as threats evolve, regulatory requirements change, and technology advances. This means selecting products and solutions that offer ongoing support for new security controls — not locking into approaches that will require complete replacement when the threat model shifts. Flexibility also extends to contractual arrangements with third parties: vendor agreements should allow security requirements to be updated as the organization's security posture matures, rather than locking in yesterday's security baseline for the life of the contract.
The NCSC guidance emphasizes that flexibility requires routine review of threat advisories and an understanding of how adversaries exploit connectivity. This is threat-informed security governance — continuously calibrating the connectivity architecture against the actual threat landscape rather than assuming that an architecture that was adequate last year is adequate today. For security leaders, this means building threat intelligence consumption into the regular review cycle for OT connectivity, not treating it as an optional input.
Repeatable: Connectivity should be robust and reusable, with implementations that minimize bespoke solutions for each use case. New connectivity should avoid duplicating existing routes into the network, reducing the overhead of deployment, maintenance, and security review. When a new vendor requires remote access, the answer should be "use the existing secure remote access gateway with vendor-specific access controls" — not "deploy a new VPN endpoint for this vendor."
Repeatability also means standardizing the security requirements for each type of connectivity so that the security review of a new instance of a known connectivity type is efficient and consistent. Rather than conducting a full security assessment of every vendor remote access request from scratch, organizations with repeatable connectivity patterns can assess each new request against the established standard — and spend their security review effort on the genuinely novel cases that require fresh thinking.
Categorized: Not all OT connectivity carries the same risk or requires the same controls. The guidance identifies the distinction between human-to-human, human-to-machine, and machine-to-machine interactions as a useful categorization that supports targeted, proportionate security controls. A machine-to-machine data replication connection from an OT historian to a DMZ instance may require strong protocol security but carries different risks from a human-to-machine vendor remote access session, which requires strong authentication and session monitoring.
📋 Example: Connectivity Categorization in Practice
An energy utility maps its OT connectivity into four categories:

1. Machine-to-machine operational data flows — historian replication, SCADA telemetry forwarding — controlled through unidirectional transfer mechanisms with data validation.
2. Human-to-machine vendor access — all routed through a centralized secure remote access gateway in the DMZ, requiring phishing-resistant MFA and generating full session logs.
3. Human-to-machine internal engineering access — allowed only from privileged access workstations within the OT network, with separate access tiers for read-only monitoring and control-capable connections.
4. Machine-to-machine software update paths — managed through a patching system in the DMZ, with cryptographic validation of all updates before delivery to OT systems.

Each category has a defined control set that is applied consistently across all connections of that type. New connectivity requirements are first categorized, then evaluated against the standard for that category — making security review faster and more consistent.
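A catalog like the utility's can be encoded as a simple lookup from category to required control set, so that a new connectivity request is either matched to a known standard or routed to a full bespoke review. The category keys and control names below are illustrative assumptions, not taken from the guidance:

```python
# Hypothetical control sets per connectivity category; names are illustrative.
CONTROL_SETS = {
    "m2m-data":        ["unidirectional-transfer", "data-validation"],
    "h2m-vendor":      ["central-ra-gateway", "phishing-resistant-mfa", "session-logging"],
    "h2m-engineering": ["paw-only", "tiered-access"],
    "m2m-updates":     ["dmz-patch-server", "cryptographic-validation"],
}

def required_controls(category: str) -> list:
    """Return the standard control set for a known category."""
    try:
        return CONTROL_SETS[category]
    except KeyError:
        # Genuinely novel connectivity falls out of the fast path
        # and gets a full bespoke security review instead.
        raise ValueError(f"uncategorized request ({category!r}): needs bespoke review")

print(required_controls("h2m-vendor"))
```

The design choice here is the exception path: the lookup deliberately fails loudly for anything uncategorized, which is exactly where security review effort should be spent.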
Managing Vendor Connectivity at Scale
One of the most practically challenging aspects of centralizing OT connectivity is managing a large and evolving population of vendor relationships. Most complex OT environments involve dozens of vendors — equipment manufacturers, system integrators, maintenance contractors, software providers — each of which may require some form of remote access for support, maintenance, or troubleshooting. Managing these relationships individually, with ad hoc connectivity solutions for each vendor, is precisely the pattern that Principle 3 is designed to replace.
A centralized vendor access architecture routes all vendor connectivity through a common gateway with consistent controls. But centralization alone is not sufficient — the governance around vendor access must also be centralized. This means a single process for provisioning vendor access, with documented justification, defined scope, and time-limited authorization. It means consistent security requirements that vendors must meet before access is granted — and contractual mechanisms to enforce those requirements over the life of the vendor relationship.
The NCSC guidance specifically calls for flexibility to be embedded in vendor contractual agreements, to accommodate evolving security requirements. This is a practical point that is frequently overlooked in OT environments: contracts for OT system maintenance are often long-lived, and security requirements agreed at contract signing may be inadequate several years into the contract period. Building in the right to update security requirements — and the obligation for vendors to meet updated requirements — protects the organization's ability to improve its security posture without waiting for contract renewal.
💡 Pro Tip: Review your vendor access list quarterly. For each vendor, ask: Is this access still active? Is it still necessary? Is the vendor still meeting your current security requirements? Is the scope of access still appropriate for their current role? Most organizations that do this for the first time find a significant number of vendor access credentials that are no longer needed — former contractors whose access was never revoked, vendors whose engagement has ended but whose access persists, integrations built for a project that completed years ago. Revoking unnecessary access costs nothing and eliminates real attack surface.
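The quarterly review above is straightforward to support with tooling. A minimal sketch, assuming access records carry a last-used date and an engagement flag (both hypothetical fields; the 90-day staleness threshold is also an assumption a team would tune):

```python
from datetime import date, timedelta

today = date(2025, 6, 30)          # fixed date so the example is reproducible
STALE_AFTER = timedelta(days=90)   # assumed staleness threshold

# Hypothetical vendor access records.
vendor_access = [
    {"vendor": "turbine-oem",    "last_used": date(2025, 6, 1),  "engagement_active": True},
    {"vendor": "old-integrator", "last_used": date(2023, 2, 14), "engagement_active": False},
    {"vendor": "scada-support",  "last_used": date(2025, 1, 5),  "engagement_active": True},
]

# Flag for revocation: engagement ended, or credential unused past the threshold.
revoke = [a["vendor"] for a in vendor_access
          if not a["engagement_active"] or today - a["last_used"] > STALE_AFTER]
print("revoke:", revoke)
```

The mechanical filter only surfaces candidates; the necessity and scope questions in the tip above still require human judgment per vendor.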
Threat-Informed Connectivity Review
The NCSC guidance emphasizes that controls must be regularly assessed and refined to keep pace with emerging threats, and that this assessment should be threat-informed — based on a routine review of threat advisories and an understanding of how adversaries are actually exploiting OT connectivity. This is a more demanding standard than periodic compliance reviews, which tend to evaluate whether controls exist rather than whether they are effective against current threat methods.
In practice, threat-informed connectivity review means maintaining awareness of how adversaries are targeting OT environments — the specific techniques, the protocols being exploited, the access paths being abused — and evaluating your connectivity architecture against those specific threats. If threat intelligence indicates that adversaries are actively exploiting a specific type of remote access solution used in your environment, that is a signal to accelerate the review and potential replacement of that solution — not to wait for the annual architecture review.
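One way to operationalize that trigger is to cross-reference incoming advisories against an inventory of deployed remote-access products and flag any actively exploited match for accelerated review. The advisory fields and product names below are invented for illustration:

```python
# Hypothetical advisory feed entries; field names are assumptions.
advisories = [
    {"id": "ADV-2025-001", "product": "AcmeRemote", "exploited_in_wild": True},
    {"id": "ADV-2025-002", "product": "OtherVPN",   "exploited_in_wild": False},
]

# Hypothetical inventory of remote-access products deployed in the environment.
deployed = {"AcmeRemote", "SecureGate"}

# Active exploitation of a deployed product triggers accelerated review,
# rather than waiting for the annual architecture review cycle.
accelerate = [a["id"] for a in advisories
              if a["product"] in deployed and a["exploited_in_wild"]]
print("accelerate review for:", accelerate)
```

The value is not in the code but in the precondition it exposes: this check only works if the organization actually maintains an accurate inventory of its connectivity solutions.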
For regulatory compliance professionals, this threat-informed approach is increasingly reflected in regulatory expectations. Frameworks and regulations governing critical infrastructure increasingly expect organizations to demonstrate that their security measures are calibrated against current threats, not just compliant with a static baseline established during a previous assessment cycle. The connectivity governance framework that Principle 3 describes — with regular, threat-informed review and documented update processes — provides the evidence needed to demonstrate this more dynamic form of security governance.
📋 Key Ideas: What Strong Principle 3 Implementation Looks Like
- All external OT connectivity routes through a small number of hardened, centralized gateways in a DMZ — not through ad hoc, individually managed access paths.
- Vendor remote access is consolidated through a single secure remote access platform with consistent authentication, logging, and access control requirements.
- Connectivity types are categorized, with defined security control sets for each category applied consistently across all connections of that type.
- Vendor contracts include provisions for updating security requirements, ensuring that evolving security standards can be enforced without waiting for contract renewal.
- Legacy connectivity is inventoried, with an active migration plan to bring it into alignment with the standard architecture within a documented timeframe.
- Connectivity architecture is reviewed on a threat-informed basis, with threat intelligence informing when accelerated review or changes are warranted.
The CISO's Governance Role
For security leaders, Principle 3 creates a specific governance responsibility: maintaining the architectural standard that all new connectivity must align with, and enforcing the process that ensures deviations are explicitly assessed and approved rather than silently accumulated. This is not a one-time exercise — it is an ongoing governance function that requires regular attention, clear ownership, and the organizational authority to require business and operational teams to use the standard architecture rather than implementing bespoke solutions.
That organizational authority is not always easy to establish. OT environments are often managed by operations teams who prioritize availability and operational continuity, and who may resist security requirements that add steps to their operational processes. Building the case for centralized connectivity architecture requires the same translation skills described throughout this series: connecting the security benefit to business outcomes that operations leaders care about, including the cost of incident response, the regulatory exposure of undocumented access paths, and the operational resilience that comes from well-understood and consistently managed connectivity.
Aspiring CISOs who master this governance challenge — who can both design the centralized architecture and build the organizational alignment needed to make it stick — are developing one of the most valuable capabilities in modern OT security leadership.
💠Final Thought
Complexity in OT connectivity is not inevitable — it is the cumulative result of individual decisions made without a coherent architectural framework. Principle 3 is the framework that prevents that accumulation from continuing. It does not require that you immediately rebuild everything that has been built; it requires that you establish the standard against which all future decisions will be made, and that you work systematically to bring the legacy environment into alignment. The organizations that do this well are the ones that can honestly say, at any point in time, that they know every connection into their OT environment, why it exists, who is responsible for it, and whether it meets their current security standards. That kind of knowledge is the foundation of real OT security — and it only exists if someone has made it their job to build and maintain it.
This article is part of the InfoSec Made Easy series on the NCSC Secure Connectivity Principles for Operational Technology. Read the full series at www.infosecmadeeasy.com.
