
IAM Metrics in Practice: Real Numbers, Real Scenarios, Real Conversations


A companion post to: IAM Metrics That Actually Matter: Proving Risk Reduction and Value to Every Level of the Organization

The previous post laid out the framework: which IAM metrics matter, why they matter, and how to use them to tell a risk reduction and value story that resonates at every level of the organization. But frameworks without numbers are just theory. Security leaders need to see what these metrics actually look like when you run them against a real environment — the before states, the after states, the calculations, and the language you use to present them.

This post walks through each major metric category with concrete examples drawn from the kinds of environments I have seen across more than two decades in this field. The numbers are composites — realistic representations of what organizations at different maturity levels actually look like — not a single case study. But they are close enough to reality that you should be able to map them directly to your own environment and your own conversations.

Use this as a working reference. Take the calculation approaches and adapt the numbers to your actual data. Then take the narrative language and adapt it to your audience. That combination — credible data and clear narrative — is what turns an IAM metrics program from a reporting exercise into a management tool.

 SCENARIO CONTEXT

The examples in this post follow a composite organization: a mid-size financial services firm with approximately 3,200 employees, a hybrid cloud environment, and a regulatory footprint that includes SOX and state-level data privacy requirements. They are twelve months into a structured IAM program buildout, moving from a largely manual, ticket-driven access management approach toward an integrated IGA and PAM environment. Their experience is representative of what organizations in the Initial-to-Advanced maturity range typically encounter.

Orphaned and Inactive Account Reduction: The Numbers

At the start of the program, this organization ran their first comprehensive identity discovery. What they found was not unusual, but it was alarming: 3,847 accounts in Active Directory that had no corresponding active employee in the HR system. Of those, 612 had been accessed within the past 90 days — meaning someone was actively using credentials that, by definition, should not have existed. Another 1,200 accounts had last login timestamps older than 180 days. The remaining 2,035 had never logged in at all, artifacts of provisioning processes that created accounts before systems were actually stood up, or test accounts from projects that were never cleaned up.
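The discovery behind these numbers is essentially a join between the directory and the HR system of record, followed by classification on last-login age. Here is a minimal Python sketch of that logic; the account data, field names, and dates are illustrative stand-ins, not tied to any specific AD or HRIS export format:

```python
from datetime import datetime, timedelta

# Illustrative directory export: account name plus last-login timestamp.
ad_accounts = [
    {"sam": "jdoe",   "last_login": datetime(2024, 5, 1)},
    {"sam": "ghost1", "last_login": datetime(2024, 5, 20)},  # no HR record, recently used
    {"sam": "stale1", "last_login": datetime(2023, 1, 10)},  # no HR record, long inactive
    {"sam": "never1", "last_login": None},                   # never logged in
]
hr_active = {"jdoe"}  # identifiers with an active HR record
now = datetime(2024, 6, 1)

# Orphans: directory accounts with no corresponding active HR record.
orphaned = [a for a in ad_accounts if a["sam"] not in hr_active]

# Classify orphans the way the discovery campaign did.
active_orphans = [a for a in orphaned
                  if a["last_login"] and now - a["last_login"] <= timedelta(days=90)]
stale_orphans = [a for a in orphaned
                 if a["last_login"] and now - a["last_login"] > timedelta(days=180)]
never_used = [a for a in orphaned if a["last_login"] is None]

print(len(orphaned), len(active_orphans), len(stale_orphans), len(never_used))
# → 3 1 1 1
```

In a real environment the inputs would come from an AD export and an HRIS feed, and the match key would need to handle contractors and service accounts that legitimately live outside HR, as the tip below discusses.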

Twelve months into the program, after integrating the IGA platform with the HR system of record and running four quarterly cleanup campaigns, the numbers looked like this:

| Metric | Month 0 (Baseline) | Month 12 | Change |
| --- | --- | --- | --- |
| Total orphaned accounts | 3,847 | 43 | ↓ 98.9% |
| Accounts active in past 90 days with no HR record | 612 | 0 | ↓ 100% |
| Inactive accounts (>180 days no login) | 1,200 | 214 | ↓ 82.2% |
| Avg. days to disable separated employee account | 14.3 days | 3.2 hours | ↓ 97.8% |

Here is how you present this to leadership: “At the start of this program, we had 612 accounts in our environment that were actively being used by credentials with no corresponding employee in HR. We had no visibility into who was using them or why. Each of those represented an unmonitored access pathway into our systems. Today, that number is zero. We have eliminated 99 percent of our orphaned account inventory and automated the separation process so that a terminated employee’s access is revoked within hours of their HR record being updated — not the 14 days it used to take. That is 14 days of unnecessary exposure, eliminated.”

 Pro Tip When you discover that active accounts exist with no HR record, investigate before you disable. In some environments, service accounts and shared credentials are provisioned to contractor identities that live outside the HRIS. Document what you find, create a remediation plan, and bring leadership a complete picture. “We found 612 active accounts with no HR record and here is what we learned about each category” is far more credible than “we found 612 problems and deleted them.”

Over-Privileged Account Rate: What the Numbers Look Like

Before implementing role-based access controls and automated provisioning, this organization had no structured way to define what access a given role should have. Access was granted by ticket. When someone needed something, they asked for it. When they no longer needed it, they almost never gave it back. The result was predictable: an environment where access accumulated over time, with no systematic review.

The initial access review campaign, run through the new IGA platform, produced a striking result. Out of 3,200 active user accounts reviewed, 2,144 — 67 percent — held at least one entitlement that was outside their defined role profile or that their manager could not justify during the certification process. Of those, 410 included elevated permissions to financial systems, customer data repositories, or administrative consoles.
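At its core, flagging out-of-role entitlements is a set difference between what an account holds and what its role template allows. A minimal sketch, with hypothetical role names and entitlements chosen purely for illustration:

```python
# Role templates: the entitlements each role should carry.
role_templates = {
    "ap_clerk": {"erp_ap_entry", "email"},
    "analyst":  {"bi_viewer", "email"},
}

# Actual accounts: assigned role plus the entitlements held today.
accounts = {
    "u1": {"role": "ap_clerk", "entitlements": {"erp_ap_entry", "email", "erp_admin"}},
    "u2": {"role": "analyst",  "entitlements": {"bi_viewer", "email"}},
    "u3": {"role": "analyst",  "entitlements": {"bi_viewer", "email", "cust_db_write"}},
}

# Out-of-role entitlements: whatever the account holds beyond its template.
over_privileged = {
    user: acct["entitlements"] - role_templates[acct["role"]]
    for user, acct in accounts.items()
    if acct["entitlements"] - role_templates[acct["role"]]
}

rate = len(over_privileged) / len(accounts)
print(over_privileged, f"{rate:.0%}")
# → {'u1': {'erp_admin'}, 'u3': {'cust_db_write'}} 67%
```

A production IGA platform does this same comparison at scale, then routes each excess entitlement to a manager for certification rather than removing it automatically.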

After twelve months of role engineering, automated provisioning aligned to role templates, and two additional certification cycles:

| Metric | Month 0 | Month 12 | Change |
| --- | --- | --- | --- |
| Accounts with out-of-role entitlements | 67% (2,144) | 19% (608) | ↓ 72% |
| Accounts with elevated access to sensitive systems beyond role | 410 | 61 | ↓ 85% |
| Total entitlements removed during the period | n/a | 18,400+ | Access surface reduced |

The executive narrative here is about blast radius. When an attacker compromises a credential, the damage they can do is directly proportional to how much access that credential carries. Before this program, a compromised account from an average employee in the finance department might have carried access to six or seven systems beyond what their current role required — artifacts of previous project assignments and role changes. After role engineering and access cleanup, that same account carries access to exactly what the role requires, and nothing more. You have not just reduced the number of over-privileged accounts. You have fundamentally reduced what an attacker can do with any given compromised identity.

MFA Coverage: The Calculation That Matters Most

At program start, this organization had MFA deployed but not enforced uniformly. General user MFA enrollment was approximately 48 percent — meaning just under half of users had enrolled but were not necessarily required to use it for every access path. Privileged access — administrative consoles, cloud management portals, privileged workstations — had no MFA requirement at all. VPN authentication used password-only credentials.

This is a surprisingly common starting point. Organizations deploy MFA as an option and celebrate enrollment numbers. But enrollment is not enforcement, and partial coverage leaves critical gaps that attackers actively seek out. Here is what the coverage picture looked like at baseline versus month twelve:

| Access Category | Baseline MFA Enforcement | Month 12 MFA Enforcement |
| --- | --- | --- |
| General user accounts (Microsoft 365, internal apps) | 48% enrolled, ~30% enforced | 99.1% enforced |
| Privileged / administrator accounts | 0% (password only) | 100% enforced (phishing-resistant) |
| VPN / remote access | 0% (password only) | 100% enforced |
| Cloud management consoles (AWS, Azure) | 22% (spot enforcement) | 100% enforced |
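Enforcement coverage, as distinct from enrollment, is just the ratio of identities that cannot authenticate without MFA to total identities in each category. A sketch of that calculation, with category names and counts that are illustrative assumptions (the general-user figures are chosen to reproduce the 99.1% above):

```python
# Per-category counts: total identities vs. those for which MFA is
# actually enforced on every access path, not merely enrolled.
categories = {
    "general_users":  {"total": 3200, "enforced": 3171},
    "privileged":     {"total": 85,   "enforced": 85},
    "vpn":            {"total": 2900, "enforced": 2900},
    "cloud_consoles": {"total": 140,  "enforced": 140},
}

coverage = {name: c["enforced"] / c["total"] for name, c in categories.items()}
for name, cov in coverage.items():
    print(f"{name}: {cov:.1%} enforced")
# → general_users: 99.1% enforced
#   privileged: 100.0% enforced
#   vpn: 100.0% enforced
#   cloud_consoles: 100.0% enforced
```

The denominator matters: counting enrolled users against total headcount, rather than enforced users against every access path, is how organizations end up celebrating 48% "coverage" that an attacker can simply route around.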

The board narrative on MFA is one of the easiest to make compelling because the data is so unambiguous. Microsoft’s internal telemetry, published in their annual Digital Defense Report, consistently shows that MFA prevents more than 99 percent of automated credential-stuffing and password spray attacks. At baseline, this organization had 70 percent of its general user population and 100 percent of its privileged administrators exposed to those attacks with nothing but a password standing between an attacker and their environment. Today, that gap is closed. That is not an abstract security improvement — that is a concrete, documented reduction in one of the most common attack patterns in the financial services sector.

 Key Tip Distinguish between MFA methods in your reporting. SMS-based MFA is significantly weaker than authenticator app-based MFA, which is significantly weaker than phishing-resistant MFA (FIDO2/passkeys, certificate-based). When you report MFA coverage to the board, note where phishing-resistant methods are enforced — particularly for privileged access. Attackers have already shifted to SIM-swapping and real-time phishing proxy attacks that defeat SMS MFA. Your metrics should reflect where your controls are actually strong, not just where something is technically in place.

Help Desk Cost Reduction: The ROI Calculation

This is the metric that most reliably gets the CFO’s attention, and it is also one of the easiest to calculate with precision if you have access to your help desk ticketing data. Here is how the numbers worked out for this organization.

Before deploying self-service password reset (SSPR) and an automated access request portal, the help desk was handling an average of 4,840 password-related and access-related tickets per month. Through a data pull from ServiceNow covering the six months prior to program start, they calculated an average handle time of roughly 23 minutes per ticket for password resets and 28 minutes for access request tickets. Using a blended fully-loaded cost rate for help desk staff of $58 per hour, the per-ticket cost averaged out to approximately $22 for password resets and $27 for access requests.

| Ticket Category | Monthly Volume (Before) | Monthly Volume (After) | Cost / Ticket | Monthly Savings |
| --- | --- | --- | --- | --- |
| Password resets | 3,200 | 640 | $22 | $56,320 |
| Access request tickets | 1,640 | 390 | $27 | $33,750 |
| Total | 4,840 tickets/mo | 1,030 tickets/mo | | $90,070/mo ($1,080,840/yr) |

That is more than $1 million in annualized help desk cost reduction — from a single program workstream. The SSPR deployment cost approximately $85,000 in licensing and implementation. The automated access request portal cost approximately $140,000. Total investment: $225,000. First-year return: $1,080,840. Payback period: approximately 2.5 months.
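The arithmetic above is simple enough to script directly from your own ticketing data. A sketch using the composite figures from this section:

```python
# Monthly ticket volumes before and after automation, with the
# validated per-ticket cost for each category.
tickets = {
    "password_reset": {"before": 3200, "after": 640, "cost": 22},
    "access_request": {"before": 1640, "after": 390, "cost": 27},
}

# Savings = eliminated tickets x cost per ticket, summed across categories.
monthly_savings = sum(
    (t["before"] - t["after"]) * t["cost"] for t in tickets.values()
)
annual_savings = monthly_savings * 12
investment = 85_000 + 140_000  # SSPR licensing/implementation + request portal

payback_months = investment / monthly_savings
print(monthly_savings, annual_savings, round(payback_months, 1))
# → 90070 1080840 2.5
```

Swap in your own volumes and validated cost rates; the structure of the calculation is what matters, not these specific numbers.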

When you present this to a CFO or a board finance committee, you are not talking about security anymore. You are talking about operational efficiency with a documented, validated ROI that would clear any reasonable capital investment threshold. The security value — the risk reduction that comes from faster, more accurate provisioning and a cleaner access environment — is a bonus on top of a business case that already stands on its own financial merits.

 Pro Tip Have your IT operations or help desk manager validate the ticket volume numbers and your finance team validate the cost-per-ticket calculation before you present this to the CFO. When you walk into that meeting and say “these numbers were reviewed by IT operations and finance,” the credibility of the entire presentation increases dramatically. Self-reported ROI from the team that deployed the solution is a claim. Cross-validated ROI from operations and finance is evidence.

Access Provisioning Time: The Productivity Story

New hire provisioning time is a metric that lives at the intersection of security, IT operations, and HR — which makes it uniquely powerful for building cross-functional allies. Before the IAM program, this organization had a largely manual provisioning process: the hiring manager submitted a request form, IT helpdesk created the account, then opened tickets with each application team for system access. The process involved four to seven different teams, had no automated workflow, and averaged 7.3 days from an employee’s start date to having full access to the systems their role required.

That 7.3-day gap has a real cost. A new employee who cannot access the systems they need to do their job is either idle, pulling colleagues away from their own work to shadow them, or accessing systems through borrowed credentials — which is its own security problem. For an organization hiring approximately 55 new employees per month at a blended average salary of $72,000 per year, 7.3 days of reduced productivity per hire represents meaningful economic waste.

| Provisioning Metric | Before | After (Month 12) |
| --- | --- | --- |
| Average time to full provisioning (new hire) | 7.3 days | 4.5 hours |
| Average time to update access on role change (Mover) | 5.1 days | 6.2 hours |
| Estimated monthly productivity recovery (55 hires/mo) | n/a | ~401 hours recovered |
| Provisioning tickets opened per new hire (avg) | 6.8 tickets | 0 tickets (automated) |
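The dollar cost of a provisioning delay can be estimated from hire volume, delay length, and an assumed fraction of productivity lost while a new hire lacks access. This is a rough model, not a precise measure, and the productivity-loss fraction in particular is an assumption you should agree on with HR or finance before presenting it:

```python
def provisioning_delay_cost(hires_per_month, delay_days, avg_salary,
                            productivity_loss=0.5, workdays_per_year=250):
    """Estimated monthly salary cost of new hires waiting for access.

    productivity_loss is the assumed fraction of a new hire's output
    lost while they lack system access (a modeling assumption).
    """
    daily_salary = avg_salary / workdays_per_year
    return hires_per_month * delay_days * daily_salary * productivity_loss

# Composite figures from this section: 55 hires/mo, $72k blended salary,
# delay falling from 7.3 days to 4.5 hours (~0.56 of an 8-hour day).
before = provisioning_delay_cost(55, 7.3, 72_000)
after = provisioning_delay_cost(55, 4.5 / 8, 72_000)
print(round(before), round(after))
# → 57816 4455
```

Even under conservative assumptions, the gap between the before and after figures is the productivity recovery you can put in front of HR and finance.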

The security dimension of provisioning speed is equally important to communicate. Every day a new employee waits for proper access is a day they may find informal workarounds — shared credentials, personal cloud storage, screen-sharing with colleagues to access restricted systems. Those workarounds are security events waiting to happen. Faster, more accurate automated provisioning eliminates the incentive for workarounds while simultaneously ensuring access is aligned to the role from day one.

Audit Findings: The Compliance Narrative

For organizations with meaningful regulatory exposure, the audit findings trend is one of the most direct connections between IAM program investment and documented risk reduction. Here is what the trajectory looked like for this organization across their SOX IT General Controls review and their internal audit cycle:

| Audit Finding Category | Year Prior to Program | Year 1 of Program | Change |
| --- | --- | --- | --- |
| High / Critical severity IAM findings | 4 | 0 | ↓ 100% |
| Medium severity IAM findings | 7 | 2 | ↓ 71% |
| Low severity IAM findings | 5 | 3 | ↓ 40% |
| Access review completion rate (SOX) | 61% | 97% | ↑ 59% |
| Avg. remediation time for access findings (days) | 47 days | 8 days | ↓ 83% |

Zero high-severity audit findings related to IAM is a significant accomplishment in a regulated financial services environment. Audit findings at that severity level do not just consume remediation resources — they trigger regulatory attention, require management representation letters, and in some cases can affect the overall audit opinion. The shift from four high-severity findings to zero is not just a metric. It is a material change in the organization’s regulatory risk posture, and the General Counsel will recognize it immediately when you present it in those terms.

 Key Tip When presenting audit finding trends, include the remediation cost for prior-year findings alongside the current-year picture. If each high-severity finding required 40 hours of internal resources to document and remediate, plus external audit time, and you eliminated four of them, that is a quantifiable avoidance cost that belongs in your value story. Audit remediation is expensive. Preventing findings is valuable. Make sure leadership knows both.
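The avoidance cost described in the tip above is straightforward to quantify. A sketch, where the 40 internal hours per finding comes from the tip and the hourly rates and external audit hours are illustrative assumptions you should validate with your audit team:

```python
def avoided_remediation_cost(findings_prevented, internal_hours_each,
                             internal_rate, external_hours_each, external_rate):
    """Cost avoided by preventing audit findings rather than remediating them."""
    internal = findings_prevented * internal_hours_each * internal_rate
    external = findings_prevented * external_hours_each * external_rate
    return internal + external

# Four high-severity findings eliminated; 40 internal hours each, plus an
# assumed 10 hours of external audit time per finding at a higher rate.
cost = avoided_remediation_cost(4, 40, 95, 10, 300)
print(cost)
# → 27200
```

The specific rates matter less than showing leadership that prevention has a dollar value, not just a compliance one.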

Putting It Together: A Sample Board Reporting Snapshot

Board reporting should be concise, visual where possible, and narrative-forward. Here is what a one-page IAM program update might look like for this organization at month twelve, translated into the language of governance rather than operations:

 IAM Program — Board Quarterly Update (Month 12)

Identity Attack Surface

Orphaned accounts reduced from 3,847 to 43 (99% reduction). Over-privileged accounts reduced from 67% to 19% of the user population. 18,400+ excess entitlements removed from active accounts. Direction: Improving.

Authentication Strength

MFA enforced across 99.1% of general user accounts and 100% of privileged and administrative access. Phishing-resistant MFA deployed for all administrator accounts. VPN and cloud console access now require MFA at every session. Direction: Target achieved for privileged access; general user enforcement sustained.

Regulatory and Compliance Posture

Zero high-severity IAM audit findings in the most recent SOX cycle (down from 4). Total IAM findings reduced from 16 to 5. Access review completion rate: 97% (up from 61%). Remediation time for identified findings: 8 days average (down from 47). Direction: Improving materially.

Program Return on Investment

Documented operational cost reduction of $1.08M annually from help desk automation. New hire access provisioning time reduced from 7.3 days to 4.5 hours, recovering an estimated 401 staff-hours of productivity per month. Total program investment in year one: $225K for automation tooling. Payback period: 2.5 months.

Identity-Related Incident Activity

Two credential-compromise incidents in the quarter. Both involved contractor accounts; both were detected within 4 hours through SIEM alerting. Neither resulted in data exposure. No privilege escalation activity detected. Direction: Stable; detection capability improving.

Notice what is not in that board snapshot: ticket counts, platform configurations, connector integrations, certification campaign mechanics. Those belong in the operational report to the security leadership team. What is in the board snapshot is risk direction, compliance posture, and business value — in plain language with clear directional indicators. That is the discipline of board-level IAM reporting. It takes practice, but once your leadership team gets used to seeing this format, they will start asking for it — and that is when you know your metrics program has truly landed.


 Final Thought

The numbers in this post are not magic. They are the result of organizations doing the hard work of establishing baselines, running disciplined programs, and measuring what matters consistently over time. What I want you to take away is not any specific number, but the habit of thinking in before-and-after terms from the very first day of any IAM initiative. Capture your baseline before you deploy anything. Document what you found, what you fixed, and what it cost. Build the narrative as you go, not retroactively after someone asks for it. The CISO who can walk into a budget conversation twelve months from now and say “here is exactly what we delivered and here is the data to prove it” has already won the next budget cycle. The CISO who has to reconstruct the value story after the fact is always playing defense. Start measuring on day one. The numbers will take care of themselves.
