


Blog Series: Zero Trust — From Concept to Board Room

Post 3 of 4
A Practical Guide for InfoSec Professionals, Aspiring CISOs, and New Security Leaders


In the first two posts of this series we covered what zero trust is, why it matters, how to win executive and board support for it, and how each of the five pillars works. If you have read those posts, you now understand the architecture. What you may not yet know is where you stand within it.

That is what this post is about. Understanding zero trust conceptually is one thing. Knowing your actual current posture — what you have already built, where the gaps are, and how far you realistically are from where you need to be — is something else entirely. That knowledge is what drives everything that comes next: your roadmap, your investment priorities, your sequencing, and your executive narrative. You cannot build a credible program without it.

CISA’s Zero Trust Maturity Model gives us the framework to do that assessment with precision and consistency. In this post I am going to walk you through the four maturity stages, explain what each stage looks like across the five pillars in concrete terms, and give you the practitioner’s perspective on how to actually use this model to evaluate where your organization stands today — and where you need to take it.

 ABOUT THIS SERIES

  • Post 1: Zero Trust 101 — What It Is, Why It Matters, and How to Win Executive Support
  • Post 2: The Five Pillars — Identity, Devices, Networks, Applications, and Data
  • Post 3: The Maturity Model — Assessing Where You Are and Planning Where You Need to Go (you’re here)
  • Post 4: Building and Executing Your Zero Trust Roadmap

Why Maturity Assessment Matters Before Planning

One of the most common mistakes I see in zero trust programs is jumping straight from awareness to roadmap without doing a rigorous current-state assessment first. The organization attends a conference, reads a framework document, and then a team of leaders sits down and starts listing everything they want to build. The result is almost always a roadmap that is aspirational rather than grounded — one that dramatically underestimates the gaps, misjudges the sequencing, and sets expectations with leadership that cannot be met.

The CISA Zero Trust Maturity Model exists precisely to prevent that failure mode. It provides a structured, pillar-by-pillar framework for evaluating your current posture honestly and translating that assessment into a defensible, prioritized view of where to go next. Used well, it does three things that most organizations desperately need: it creates a shared language across your security team and your leadership, it surfaces gaps that are often invisible until someone looks for them explicitly, and it gives your roadmap a baseline to measure progress against over time.

The model is not a compliance checklist. CISA is explicit on this point. It is a guide — a structured way to think about maturity progression that helps you make better investment decisions. The goal is not to score a perfect “Optimal” rating across every pillar by some arbitrary deadline. The goal is to understand where you are, understand where you need to be given your organization’s risk profile and mission requirements, and make deliberate, sequenced progress in that direction.

 Pro Tip Do not attempt to score your organization’s maturity alone or in a single working session. The most accurate assessments involve multiple stakeholders: your identity and access management team, your network engineers, your application owners, your data governance leads, and your SOC. Each group holds a piece of the picture. When those perspectives come together, you get a far more honest view of where the program actually stands — and you build shared ownership of the gaps at the same time.

The Four Maturity Stages

CISA’s model defines four distinct stages of zero trust maturity: Traditional, Initial, Advanced, and Optimal. These are not arbitrary labels. Each stage has specific, operational characteristics that describe how access decisions are made, how much automation is present, how broadly policies are enforced, and how well the pillars are integrated with one another. Understanding what each stage actually looks like in practice is essential before you can accurately assess where your organization falls.

Traditional

Traditional is where most organizations start, and — if we are being honest — where many enterprise environments still sit across significant portions of their architecture. At the Traditional stage, access decisions are based on static rules and manually configured policies. Identity is authenticated with passwords or basic MFA, but validation is largely one-time at login with no continuous verification during the session. Devices are inventoried through manually maintained records and limited software monitoring, with manual patch management and little to no automated compliance enforcement. Network architecture is flat or minimally segmented, with implicit trust between internal systems. Applications are accessed based on network location rather than identity and device context. Data categorization is inconsistent or incomplete, and data access controls are broad.

The defining characteristic of the Traditional stage is manual operation across the board. Account and device lifecycles are managed by hand. Security policies address one pillar at a time without meaningful cross-pillar coordination. Response to threats requires human intervention at every step. Logs are collected inconsistently and analyzed after the fact, with limited correlation across sources. If this description sounds familiar, it should — it accurately describes the majority of enterprise security environments built before the industry fully embraced cloud and remote-work architectures.

Initial

The Initial stage represents an organization that has begun the zero trust journey in a meaningful way but has not yet achieved systematic, enterprise-wide implementation. At this stage, automation is starting to appear in specific areas — non-privileged identity orchestration may be partially automated, some devices are under automated compliance monitoring, and network policy enforcement is being consolidated. MFA is in use and beginning to incorporate multiple validation attributes beyond just credentials. Access decisions for some systems start to incorporate device characteristics and session context.

What distinguishes Initial from Traditional is the presence of intentional zero trust investment. The organization is no longer purely reactive. It has policies that reflect zero trust principles, some of which are being enforced through technical mechanisms rather than just documentation. Visibility is improving — logs are being aggregated from more sources and some automated analysis is running. But integration between pillars is still limited, and privileged access management often remains largely manual. The organization is making progress but the program is not yet coherent across the enterprise.

Advanced

At the Advanced stage, the zero trust architecture is functioning at enterprise scale with meaningful cross-pillar coordination. Phishing-resistant MFA is broadly deployed and identity risk is being assessed using automated analysis and dynamic rules. Device compliance is verified at access time using automated tooling, with vulnerability and configuration management integrated across most environments. Network micro-segmentation is in place and access decisions incorporate both identity and device posture. Application access is available over public networks with strong authentication and continuous session validation. Data is inventoried and classified, and access controls are enforced based on sensitivity classification.

The cross-cutting capabilities are where Advanced organizations begin to look fundamentally different from their Initial-stage counterparts. Visibility and Analytics is moving from basic log aggregation to automated analysis with behavior-based anomaly detection. Automation and Orchestration is handling lifecycle management, access decisions, and initial threat response for most routine scenarios. Governance policies are enforced with automation and updated periodically rather than only when audits demand it. The program has achieved real integration — pillars are sharing signals and informing each other’s decisions in meaningful ways.

 Key Tip For most organizations, reaching the Advanced stage across all five pillars is the realistic and appropriate near-term target. Optimal is aspirational — it requires fully automated, just-in-time everything with continuous cross-pillar interoperability that very few enterprises have achieved. When you are building your roadmap and setting expectations with leadership, use Advanced as your three-to-five year target state. That conversation is honest, achievable, and still represents a dramatic reduction in risk compared to where most organizations start.

Optimal

The Optimal stage represents the most mature expression of zero trust architecture — fully automated, continuously adaptive, and comprehensively integrated across all pillars and all environments. At Optimal, identity is continuously validated throughout every session using phishing-resistant MFA and real-time behavioral analytics. Access is granted just-in-time and just-enough, tailored to the specific action being performed against the specific resource being accessed, with automated orchestration handling every aspect of the identity lifecycle. Devices are monitored in near-real time with status information continuously correlated against identity and access decisions. Supply chain risk management is automated and integrated into operations, which are designed to tolerate supply chain failures without cascading impact.

At Optimal, the architecture is largely self-managing. Dynamic policies update automatically based on observed behaviors and environmental changes. Threat responses execute through automated playbooks without waiting for human review of routine events. Cross-pillar interoperability is continuous and comprehensive, producing a level of situational awareness that earlier stages can only approximate. Data is protected end-to-end with enterprise-wide encryption, dynamic access controls, and predictive analytics that can identify suspected exfiltration before it completes.

To be clear: Optimal is a destination, not a near-term target for most organizations. CISA acknowledges that progress toward Optimal is incremental and that each stage requires greater levels of complexity and effort. What matters is that every investment you make moves your program in that direction — and that you can demonstrate that movement to leadership in clear, measurable terms.

How to Assess Your Current Maturity

Knowing what the stages look like is the prerequisite. Accurately placing your organization within them is the hard part. Here is how I approach maturity assessment in practice.

Score Each Pillar Independently

Start by evaluating each of the five pillars separately. Within each pillar, work through the specific functions CISA defines — Authentication, Identity Stores, Risk Assessments, and Access Management in the Identity pillar, for example — and honestly assess which maturity stage description most accurately reflects your current state for each function. Resist the temptation to assign your organization a single overall maturity score. The reality in most enterprises is that maturity is uneven: you may be at Advanced in Identity and barely at Initial in Data. That uneven picture is the actual picture, and it is the one your roadmap needs to reflect.

Document your evidence. For each function, identify what controls are actually in place, how they are being enforced, and what gaps exist between your current state and the next maturity stage. The evidence-based approach is important both for accuracy and for defensibility when you present your assessment to leadership or auditors.
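To make the function-by-function scoring concrete, here is a minimal Python sketch of an evidence-based assessment record. The pillar names, function names, and evidence strings are hypothetical illustrations, not a CISA-mandated data structure — adapt the shape to whatever your team already uses.

```python
# Stages are ordered so scores can be compared numerically.
STAGES = ["Traditional", "Initial", "Advanced", "Optimal"]

def stage_index(stage: str) -> int:
    return STAGES.index(stage)

# Each function gets its own score plus the evidence behind it --
# never a single averaged number per pillar.
assessment = {
    "Identity": {
        "Authentication": {
            "stage": "Advanced",
            "evidence": "Phishing-resistant MFA enforced for most apps",
        },
        "Risk Assessments": {
            "stage": "Traditional",
            "evidence": "No automated identity risk scoring in place",
        },
    },
}

def lagging_functions(assessment: dict, target: str = "Advanced") -> list:
    """List every function scored below the target stage."""
    gaps = []
    for pillar, functions in assessment.items():
        for func, record in functions.items():
            if stage_index(record["stage"]) < stage_index(target):
                gaps.append((pillar, func, record["stage"]))
    return gaps

print(lagging_functions(assessment))
# -> [('Identity', 'Risk Assessments', 'Traditional')]
```

Keeping the evidence string next to the score is the point: when a stakeholder challenges a rating, you answer with the control that is (or is not) enforced, not with an opinion.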

Evaluate the Cross-Cutting Capabilities

After you have assessed each pillar, evaluate the three cross-cutting capabilities separately. Visibility and Analytics, Automation and Orchestration, and Governance each have their own maturity progression within the CISA model, and they frequently lag behind individual pillar controls. An organization can deploy MFA, segment its network, and classify its data — and still have underdeveloped cross-cutting capabilities because it never invested in the telemetry infrastructure, automation platform, and governance mechanisms that tie those controls together. Assess these honestly. If your cross-cutting capabilities are at Initial while your individual pillar controls are at Advanced, your effective zero trust maturity is closer to Initial, because the pillars are not coordinating the way they need to.

 Pro Tip When presenting your maturity assessment results to leadership, use a heat map format: five pillars across the top, the three cross-cutting capabilities alongside, and a color-coded maturity score for each. Red for Traditional, yellow for Initial, orange for Advanced, green for Optimal. The visual impact is immediate and the conversation it generates — why are we red here, what would it take to get to yellow — is exactly the conversation you need to have to secure investment. Numbers on a slide create questions. A heat map creates urgency and shared understanding.
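Before the heat map ever reaches a slide, it can be prototyped as a plain-text grid. The sketch below uses hypothetical scores and follows the red/yellow/orange/green mapping described above; a real deck would render the same data in a charting or spreadsheet tool.

```python
# Hypothetical stage-to-color mapping and scores for illustration only.
COLOR = {"Traditional": "red", "Initial": "yellow",
         "Advanced": "orange", "Optimal": "green"}

scores = {
    "Identity": "Advanced",
    "Devices": "Initial",
    "Networks": "Initial",
    "Applications": "Traditional",
    "Data": "Traditional",
    "Visibility & Analytics": "Initial",
    "Automation & Orchestration": "Traditional",
    "Governance": "Initial",
}

def heat_map(scores: dict) -> str:
    """Render one row per pillar/capability: name, stage, color tag."""
    width = max(len(name) for name in scores)
    rows = [f"{name:<{width}}  {stage:<12} [{COLOR[stage]}]"
            for name, stage in scores.items()]
    return "\n".join(rows)

print(heat_map(scores))
```

Even in this crude form, the uneven picture jumps out — which is exactly the reaction you want from the polished version in the boardroom.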

Identify Your Most Critical Gaps

Once you have a complete, pillar-by-pillar and function-by-function assessment, your next task is gap analysis. Not every gap carries equal weight. The gaps that matter most are the ones that represent the greatest risk exposure to the organization given its specific threat profile, industry, and regulatory environment. A financial services organization may have a very different risk-weighted gap priority than a manufacturing company or a healthcare provider. Your gap analysis needs to be anchored in your actual risk landscape, not just in the generic framework.

I find it useful to evaluate gaps on two dimensions: risk impact and implementation feasibility. High-impact gaps that are relatively feasible to close should be your first priorities. High-impact gaps that are complex and expensive to close need to be in your roadmap but must be sequenced realistically. Low-impact gaps, regardless of feasibility, belong at the bottom of the list. This two-dimensional prioritization is what separates a security program that makes progress from one that perpetually defers its most important work.
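The two-dimensional prioritization above can be sketched in a few lines. The gap names, 1-to-5 scores, and cutoff thresholds below are hypothetical assumptions to tune against your own risk landscape, not a prescribed scoring system.

```python
# Hypothetical gaps scored 1-5 on risk impact and implementation feasibility.
gaps = [
    {"gap": "No phishing-resistant MFA for admins", "impact": 5, "feasibility": 4},
    {"gap": "Flat network in legacy data center",   "impact": 5, "feasibility": 2},
    {"gap": "Stale data classification labels",     "impact": 2, "feasibility": 5},
]

def prioritize(gaps, impact_cutoff=4, feasibility_cutoff=3):
    """Bucket gaps: high-impact and feasible first, high-impact but hard
    second, low-impact last; sort within each bucket by impact."""
    def bucket(g):
        if g["impact"] >= impact_cutoff and g["feasibility"] >= feasibility_cutoff:
            return 0  # close these first
        if g["impact"] >= impact_cutoff:
            return 1  # on the roadmap, sequenced realistically
        return 2      # bottom of the list
    return sorted(gaps, key=lambda g: (bucket(g), -g["impact"]))

for g in prioritize(gaps):
    print(g["gap"])
```

The value is not the code — it is forcing every gap through the same two questions, so the priority list reflects a rule rather than whoever argued loudest in the room.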

Define Your Target State

The assessment and gap analysis answer the question of where you are. Defining a target state answers the question of where you need to go — and these are not always the same as where the framework ideally wants you to be. Your target state should reflect your organization’s actual risk profile, your regulatory obligations, your business model, and the operational constraints you are working within. An organization with a highly distributed remote workforce and heavy cloud dependency may need to prioritize the Identity and Data pillars aggressively. An organization with significant operational technology environments may need to address network segmentation as an early priority regardless of where it falls in a generic framework sequence.

Define target maturity stages for each pillar over a realistic timeframe — typically a three-year horizon. Be specific: you are targeting Advanced in Identity within eighteen months, Advanced in Devices within twenty-four months, Initial-to-Advanced in Data within thirty-six months. These targets become the backbone of your roadmap in the next post. They also become the metrics against which you demonstrate progress to leadership over time.
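Those per-pillar targets are straightforward to track programmatically. This is a minimal sketch assuming a hypothetical current-state assessment alongside the example milestones above; the numbers are illustrative, not prescriptive.

```python
STAGES = ["Traditional", "Initial", "Advanced", "Optimal"]

# Hypothetical targets mirroring the example milestones in the text.
targets = {
    "Identity": {"stage": "Advanced", "months": 18},
    "Devices":  {"stage": "Advanced", "months": 24},
    "Data":     {"stage": "Advanced", "months": 36},
}
# Hypothetical current-state scores from the assessment.
current = {"Identity": "Advanced", "Devices": "Initial", "Data": "Traditional"}

def stages_remaining(pillar: str) -> int:
    """Maturity stages separating current state from the target."""
    return STAGES.index(targets[pillar]["stage"]) - STAGES.index(current[pillar])

for pillar in targets:
    print(f"{pillar}: {stages_remaining(pillar)} stage(s) to go, "
          f"{targets[pillar]['months']}-month target")
```

Re-running a check like this at each quarterly review is one simple way to keep the assessment a living document rather than a filed-away deliverable.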

 Key Tip Your target state needs to be risk-informed, not just framework-informed. Before you finalize the targets for each pillar, ask yourself: if we achieved this maturity level in this pillar and no further, what specific threats would we still be exposed to? What regulatory requirements would remain unmet? What business scenarios would still represent unacceptable risk? Those questions will tell you whether your targets are ambitious enough — and sometimes they will tell you that the framework’s generic guidance does not fully capture what your specific environment requires.

Common Assessment Pitfalls and How to Avoid Them

After working through maturity assessments with a number of teams, I have seen the same mistakes come up repeatedly. Here are the ones that cause the most damage — and how to avoid them.

Assessing documentation rather than reality. The most common failure mode is assessing what policies say rather than what controls actually do. Your zero trust assessment needs to evaluate implemented, enforced controls — not policy documents and procedure narratives. If MFA is policy but not yet enforced on legacy applications, your Authentication function score in the Identity pillar should reflect the enforcement gap, not the policy intent. This matters because roadmaps built on documentation-based assessments systematically underestimate the scope of work required and overestimate the maturity of the starting point.

Averaging maturity across functions within a pillar. If your Identity pillar is at Advanced for Authentication but at Traditional for Risk Assessments, your effective Identity maturity is closer to Traditional-to-Initial for access decisions that depend on risk assessment. Do not average these scores and call yourself “somewhere in the middle.” Document the uneven picture and prioritize the lagging functions, because they represent the actual weak points attackers will target.

Treating the assessment as a one-time event. Your zero trust maturity is not static. Threat landscapes evolve, architectures change, new technologies introduce new gaps, and the organization’s risk profile shifts with every major business change. Your maturity assessment should be a living document, updated at least annually and reviewed quarterly against your roadmap milestones. Organizations that treat the assessment as a project deliverable rather than an ongoing management tool consistently find themselves making roadmap decisions based on outdated baselines.

Doing the assessment in isolation from the business. Zero trust maturity is not purely a security question. It is a question about how the business operates, how users work, what data the organization depends on, and what risks are acceptable in support of business objectives. An assessment done entirely within the security team without business stakeholder input will miss context that is critical to accurate scoring and to prioritization. Involve application owners, data stewards, business unit leaders, and IT operations in the process. Their perspective will improve your assessment and their participation will build the cross-functional ownership that program execution requires.

Using the Assessment to Tell a Story to Leadership

The maturity assessment is not just a technical exercise. It is one of the most powerful tools you have for communicating with executive leadership and the board. Done well, it transforms the abstract concept of zero trust into a concrete picture of where your organization is exposed, how far it needs to travel, and what the investment required to get there looks like.

When I take a maturity assessment to leadership, I focus on three things. First, the current state heat map — the visual representation of where each pillar and cross-cutting capability sits today. Second, the risk translation — for each significant gap, a plain-English description of what an attacker could do today that this gap enables, and what a specific recent incident in the industry demonstrates about that risk. Third, the target state — what the organization will look like at the end of the roadmap period, expressed not in maturity scores but in risk outcomes: what threats we will be able to resist that we cannot resist today, what compliance requirements we will be able to meet, what operational benefits the architecture will deliver.

The maturity model gives you the analytical rigor. The leadership narrative gives that rigor business impact. Both are necessary. Neither is sufficient without the other.


 Final Thought

There is a version of zero trust maturity assessment that organizations use as a trophy — a document that gets produced, presented to leadership, filed away, and never seriously used again. That version produces exactly zero security improvement. The version that actually matters is the one where the assessment is honest enough to be uncomfortable, specific enough to be actionable, and alive enough to drive decisions week after week as the program executes. The security leaders who use this model well do not just know their scores — they know exactly which function in which pillar represents their organization’s single most dangerous gap right now, and they are doing something about it. That specificity is what separates programs that move from programs that drift. Know where you are. Know what it costs you to stay there. Then use the roadmap to change it.

Up Next in This Series
Post 4: Building and Executing Your Zero Trust Roadmap →
