NIST CSF 2.0 – Identify Function Deep Dive: Improvement (ID.IM)


Most cybersecurity programs don’t fail because they lack controls.

They fail because they don't learn.

Incidents happen. Audits surface gaps. Assessments reveal weaknesses.
Yet many organizations treat these moments as interruptions instead of inputs.

That is exactly why Improvement (ID.IM) exists in the NIST Cybersecurity Framework (CSF) 2.0 Identify function. ID.IM ensures the organization systematically learns from experience and uses that learning to strengthen governance, risk management, and strategic execution.

In CSF 2.0, improvement is no longer implied—it is explicit, measurable, and expected.

This post covers:

  • What ID.IM is in NIST CSF 2.0

  • How mature organizations operationalize continuous improvement

  • Metrics that demonstrate learning, not just activity


What Is NIST CSF 2.0 Improvement (ID.IM)?

ID.IM focuses on identifying opportunities for improvement in cybersecurity governance, risk management, and controls based on:

  • Incidents and near misses

  • Risk assessments

  • Audits and assessments

  • Exercises and testing

  • Threat intelligence and environmental changes

The key shift in CSF 2.0 is that improvement is positioned as a governance responsibility, not a SOC or GRC afterthought.

If lessons are learned but not acted on, improvement does not exist.


Why Cybersecurity Improvement Commonly Stalls

Across enterprises of every maturity level, the same friction points appear:

  1. Lessons learned, lessons forgotten
    Reports are written but never tracked to completion.

  2. No ownership
    Findings exist, but accountability does not.

  3. Competing priorities
    Improvement work loses to urgent operational demands.

  4. No measurement
    Organizations cannot demonstrate that they are getting better—only busier.

  5. Fear of visibility
    Teams avoid surfacing weaknesses to prevent scrutiny.

CSF 2.0 ID.IM directly counters these behaviors by making intentional learning cycles an explicit, expected outcome.


How to Implement ID.IM in a Practical, Scalable Way

1. Treat Every Major Event as an Improvement Input

Improvement inputs should include:

  • Security incidents and near misses

  • Tabletop and simulation exercises

  • Penetration tests and red team activities

  • Internal audits and external assessments

  • Material risk register changes

The goal is not perfection—it is pattern recognition.


2. Formalize a Cybersecurity Improvement Backlog

Mature organizations maintain a visible, prioritized improvement backlog that:

  • Tracks findings across all sources

  • Assigns clear owners and deadlines

  • Maps each improvement to a CSF category or control area

  • Aligns improvements to risk reduction

This turns learning into managed execution, not good intentions.
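
To make this concrete, here is a minimal sketch of what a single backlog entry might capture, assuming a simple Python data model. The field names, status values, and Source categories are illustrative only; CSF 2.0 does not prescribe a schema.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class Source(Enum):
    """Where a finding originated (the improvement inputs from step 1)."""
    INCIDENT = "incident"
    EXERCISE = "exercise"
    PENTEST = "pentest"
    AUDIT = "audit"
    RISK_REGISTER = "risk_register"


@dataclass
class ImprovementItem:
    """One entry in the cybersecurity improvement backlog."""
    title: str
    source: Source
    csf_category: str             # e.g. "ID.IM" -- maps the item to a CSF category
    owner: str                    # a named, accountable owner, not a team alias
    due_date: date
    risk_reduced: str             # which risk this improvement reduces
    business_outcome: str         # which business outcome it protects
    status: str = "open"          # open | in_progress | blocked | done
    logged_on: date = field(default_factory=date.today)
    closed_on: date | None = None
```

Whether this lives in a GRC platform, a ticketing system, or a spreadsheet matters less than the fact that every field has an answer.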


3. Link Improvements to Risk and Business Outcomes

Improvements should clearly answer:

  • Which risk does this reduce?

  • Which business outcome does this protect?

  • What happens if it is delayed?

This framing ensures improvement work competes fairly for funding and attention.
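
As a quick illustration, a backlog built on the ImprovementItem sketch above can be screened for entries that cannot yet answer the first two questions. This is an assumed helper, not anything CSF 2.0 defines.

```python
def unlinked_items(backlog: list[ImprovementItem]) -> list[ImprovementItem]:
    """Return backlog items missing either a risk linkage or a business-outcome
    linkage -- candidates for rework before they compete for funding."""
    return [
        item for item in backlog
        if not item.risk_reduced.strip() or not item.business_outcome.strip()
    ]
```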


4. Ensure Improvement Is Governed, Not Just Executed

Strong ID.IM programs include:

  • Regular review of improvement status at governance forums

  • Executive visibility into overdue or blocked actions

  • Clear escalation paths when improvement stalls

Improvement without oversight is optional.
Improvement with governance is inevitable.
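
Continuing the same illustrative model, a governance forum might automatically flag items for escalation when they are blocked or well past due. The 14-day grace period is an assumption, not guidance.

```python
from datetime import date, timedelta


def needs_escalation(item: ImprovementItem,
                     today: date | None = None,
                     grace: timedelta = timedelta(days=14)) -> bool:
    """Flag open items that are blocked, or overdue past a grace period,
    so they land on the next governance forum's agenda."""
    today = today or date.today()
    if item.status == "done":
        return False
    return item.status == "blocked" or today > item.due_date + grace
```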


5. Feed Improvements Back Into Strategy and Architecture

ID.IM is not only tactical.

Mature organizations:

  • Adjust security roadmaps based on recurring themes

  • Refine risk appetite statements

  • Update policies and standards

  • Evolve architecture patterns

This is where improvement compounds over time.


Metrics That Show Real Improvement

ID.IM metrics must demonstrate learning velocity and execution discipline.

Foundational Metrics

  • % of incidents with completed post-incident reviews

  • % of exercises resulting in documented improvement actions

  • Time to log improvement items after discovery

  • Improvement backlog size by category

These show process existence.
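
As a sketch of how two of these might be computed: the first assumes incidents are tracked as simple records with a pir_completed flag (an illustrative field name), and the second reuses the ImprovementItem backlog sketch from earlier.

```python
from collections import Counter


def post_incident_review_rate(incidents: list[dict]) -> float:
    """% of incidents with a completed post-incident review."""
    if not incidents:
        return 0.0
    reviewed = sum(1 for i in incidents if i.get("pir_completed"))
    return 100.0 * reviewed / len(incidents)


def backlog_by_category(backlog: list[ImprovementItem]) -> Counter:
    """Open improvement backlog size, grouped by CSF category."""
    return Counter(i.csf_category for i in backlog if i.status != "done")
```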


Execution & Accountability Metrics

  • % of improvement actions completed on time

  • Average age of open improvement items

  • % of overdue improvement actions escalated

  • Improvement actions with assigned executive owners

These show management seriousness.
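
Two of these fall directly out of the backlog sketch above; the status values and field names remain assumptions.

```python
from datetime import date
from statistics import mean


def on_time_completion_rate(backlog: list[ImprovementItem]) -> float:
    """% of completed improvement actions closed on or before their due date."""
    done = [i for i in backlog if i.status == "done" and i.closed_on]
    if not done:
        return 0.0
    on_time = sum(1 for i in done if i.closed_on <= i.due_date)
    return 100.0 * on_time / len(done)


def average_open_age_days(backlog: list[ImprovementItem],
                          today: date | None = None) -> float:
    """Average age, in days, of improvement items that are still open."""
    today = today or date.today()
    open_items = [i for i in backlog if i.status != "done"]
    if not open_items:
        return 0.0
    return mean((today - i.logged_on).days for i in open_items)
```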


Outcome & Maturity Metrics

  • Reduction in repeat findings year over year

  • Reduction in repeat incident root causes

  • Control effectiveness improvement trends

  • Risk reduction tied to completed improvements

These prove the organization is actually getting better.


What Good Looks Like

A CSF 2.0-aligned ID.IM capability means:

  • The organization learns from incidents instead of repeating them

  • Weaknesses are surfaced early, not hidden

  • Improvement work is visible and tracked

  • Strategy evolves based on evidence, not assumptions

  • Security maturity increases year over year

In these environments, improvement becomes cultural—not forced.


Final Thoughts from the CISO Chair

Cybersecurity is a contact sport.
You only improve by engaging, observing, and adjusting.

NIST CSF 2.0 makes improvement explicit because resilience depends on it.

Controls may stop yesterday’s attack.
Learning stops tomorrow’s failure.

If your program does not systematically improve, it is quietly decaying—no matter how many tools are deployed.
