
NIST CSF 2.0 – Identify Function Deep Dive: Improvement (ID.IM)


Most cybersecurity programs don’t fail because they lack controls.

They fail because they never learn.

Incidents happen. Audits surface gaps. Assessments reveal weaknesses.
Yet many organizations treat these moments as interruptions instead of inputs.

That is exactly why Improvement (ID.IM) exists in the NIST Cybersecurity Framework (CSF) 2.0 Identify function. ID.IM ensures the organization systematically learns from experience and uses that learning to strengthen governance, risk management, and strategic execution.

In CSF 2.0, improvement is no longer implied—it is explicit, measurable, and expected.

This post covers:

  • What ID.IM is in NIST CSF 2.0

  • How mature organizations operationalize continuous improvement

  • Metrics that demonstrate learning, not just activity


What Is NIST CSF 2.0 Improvement (ID.IM)?

ID.IM focuses on identifying opportunities for improvement in cybersecurity governance, risk management, and controls based on:

  • Incidents and near misses

  • Risk assessments

  • Audits and assessments

  • Exercises and testing

  • Threat intelligence and environmental changes

The key shift in CSF 2.0 is that improvement is positioned as a governance responsibility, not a SOC or GRC afterthought.

If lessons are learned but not acted on, improvement does not exist.


Why Cybersecurity Improvement Commonly Stalls

Across enterprises of every maturity level, the same friction points appear:

  1. Lessons learned, lessons forgotten
    Reports are written but never tracked to completion.

  2. No ownership
    Findings exist, but accountability does not.

  3. Competing priorities
    Improvement work loses to urgent operational demands.

  4. No measurement
    Organizations cannot demonstrate that they are getting better—only busier.

  5. Fear of visibility
    Teams avoid surfacing weaknesses to prevent scrutiny.

CSF 2.0 ID.IM directly counters these behaviors by requiring intentional learning cycles.


How to Implement ID.IM in a Practical, Scalable Way

1. Treat Every Major Event as an Improvement Input

Improvement inputs should include:

  • Security incidents and near misses

  • Tabletop and simulation exercises

  • Penetration tests and red team activities

  • Internal audits and external assessments

  • Material risk register changes

The goal is not perfection—it is pattern recognition.


2. Formalize a Cybersecurity Improvement Backlog

Mature organizations maintain a visible, prioritized improvement backlog that:

  • Tracks findings across all sources

  • Assigns clear owners and deadlines

  • Maps each improvement to a CSF category or control area

  • Aligns improvements to risk reduction

This turns learning into managed execution, not good intentions.
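To make this concrete, here is a minimal sketch of what a tracked backlog item could look like, assuming an in-house tracker. The names (`ImprovementItem`, `csf_category`, `risk_reduced`, and so on) are illustrative, not defined by NIST CSF 2.0 or any particular tool:

```python
# Illustrative improvement-backlog item with the attributes described
# above: source, owner, deadline, CSF mapping, and risk linkage.
from dataclasses import dataclass
from datetime import date

@dataclass
class ImprovementItem:
    title: str
    source: str        # e.g. "incident", "tabletop", "pen test", "audit"
    csf_category: str  # CSF 2.0 category this maps to, e.g. "ID.IM"
    owner: str         # an accountable individual, not a team name
    due: date
    risk_reduced: str  # hypothetical risk-register ID this addresses
    done: bool = False

    def overdue(self, today: date) -> bool:
        # Open past its deadline -> candidate for governance escalation.
        return not self.done and today > self.due

item = ItemExample = ImprovementItem(
    "Tighten SSO session timeout", "audit", "PR.AA",
    "IAM lead", date(2025, 3, 1), "RR-014",
)
print(item.overdue(date(2025, 4, 1)))  # True: open and past due
```

Even a spreadsheet with these same columns works; the point is that every finding, regardless of source, lands in one visible queue with an owner and a date.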


3. Link Improvements to Risk and Business Outcomes

Improvements should clearly answer:

  • Which risk does this reduce?

  • Which business outcome does this protect?

  • What happens if it is delayed?

This framing ensures improvement work competes fairly for funding and attention.


4. Ensure Improvement Is Governed, Not Just Executed

Strong ID.IM programs include:

  • Regular review of improvement status at governance forums

  • Executive visibility into overdue or blocked actions

  • Clear escalation paths when improvement stalls

Improvement without oversight is optional.
Improvement with governance is inevitable.


5. Feed Improvements Back Into Strategy and Architecture

ID.IM is not only tactical.

Mature organizations:

  • Adjust security roadmaps based on recurring themes

  • Refine risk appetite statements

  • Update policies and standards

  • Evolve architecture patterns

This is where improvement compounds over time.


Metrics That Show Real Improvement

ID.IM metrics must demonstrate learning velocity and execution discipline.

Foundational Metrics

  • % of incidents with completed post-incident reviews

  • % of exercises resulting in documented improvement actions

  • Time to log improvement items after discovery

  • Improvement backlog size by category

These show process existence.


Execution & Accountability Metrics

  • % of improvement actions completed on time

  • Average age of open improvement items

  • % of overdue improvement actions escalated

  • Improvement actions with assigned executive owners

These show management seriousness.
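Two of these metrics can be computed directly from the backlog. A hedged sketch, assuming actions are tracked with opened, due, and closed dates (field names are illustrative):

```python
# Compute "% completed on time" and "average age of open items"
# from a simple list of improvement actions.
from datetime import date

actions = [
    {"opened": date(2025, 1, 10), "due": date(2025, 2, 1),  "closed": date(2025, 1, 28)},
    {"opened": date(2025, 1, 15), "due": date(2025, 2, 15), "closed": None},
    {"opened": date(2024, 11, 1), "due": date(2025, 1, 1),  "closed": date(2025, 2, 5)},
]
today = date(2025, 3, 1)

closed = [a for a in actions if a["closed"]]
pct_on_time = 100 * sum(a["closed"] <= a["due"] for a in closed) / len(closed)

open_items = [a for a in actions if a["closed"] is None]
avg_age_days = sum((today - a["opened"]).days for a in open_items) / len(open_items)

print(f"{pct_on_time:.0f}% completed on time; avg open age {avg_age_days:.0f} days")
# → 50% completed on time; avg open age 45 days
```

Trending these numbers quarter over quarter is what turns the metric into a governance signal rather than a snapshot.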


Outcome & Maturity Metrics

  • Reduction in repeat findings year over year

  • Reduction in repeat incident root causes

  • Control effectiveness improvement trends

  • Risk reduction tied to completed improvements

These prove the organization is actually getting better.


What Good Looks Like

A CSF 2.0-aligned ID.IM capability means:

  • The organization learns from incidents instead of repeating them

  • Weaknesses are surfaced early, not hidden

  • Improvement work is visible and tracked

  • Strategy evolves based on evidence, not assumptions

  • Security maturity increases year over year

In these environments, improvement becomes cultural—not forced.


Final Thoughts from the CISO Chair

Cybersecurity is a contact sport.
You only improve by engaging, observing, and adjusting.

NIST CSF 2.0 makes improvement explicit because resilience depends on it.

Controls may stop yesterday’s attack.
Learning stops tomorrow’s failure.

If your program does not systematically improve, it is quietly decaying—no matter how many tools are deployed.

Blog Series: Your First 90 Days as a CISO Post 4 of 4 A Plain-English Guide for New, Aspiring, and Future Security Leaders Here's a truth that many talented security professionals discover too late: you can be technically brilliant, deeply experienced, and genuinely committed to protecting the organization — and still fail as a CISO if you don't have executive support. Security programs require funding. They require organizational authority. They require the ability to make decisions that sometimes create friction for other business units. They require the backing to hold lines when the pressure to cut corners for speed or convenience is intense. None of that happens without the support of the people at the top of the organization. And yet, earning and keeping executive support is exactly the area where security leaders most often struggle. The technical skills that make someone a great security professional don't automatically translate into the c...