Posts

Stop Scanning. Start Managing Exposure: The CISO's Guide to Continuous Threat Exposure Management

Picture this: It is a Tuesday afternoon. Your vulnerability management team pulls up the weekly report. Sixty-three thousand open vulnerabilities across your environment. Your patch team closes out five hundred this week — a solid sprint by any measure. Everyone nods. The meeting ends. You walk out feeling like you are making progress.

Three weeks later, an attacker exfiltrates six months of customer data through a misconfigured cloud storage bucket. No CVE assigned. Not on any scan report. Not even on your radar.

That gap — the one between what your vulnerability scanner sees and what an attacker actually exploits — is exactly the problem that Continuous Threat Exposure Management is designed to close. And if you are leading a security program today without a CTEM strategy in place, you are managing the wrong list.

What CTEM Actually Is (And What It Isn’t)

Gartner introduced the term Continuous Threat Exposure Management in 2022, and the security industry has been both energized and c...
Recent posts

White House National AI Policy Framework: What CISOs Need to Know and Do Now

The White House released its National Policy Framework for Artificial Intelligence on March 20, 2026, and every CISO needs to read past the headlines. The document is not a law. It is not a regulation. It is a set of legislative recommendations directed at Congress — non-binding by design — outlining how the Trump administration believes the federal government should approach AI governance. What it is, practically speaking, is the clearest signal yet of where federal AI policy is headed and how that trajectory should reshape your organization’s approach to AI risk management, compliance planning, and governance program design.

The framework follows Executive Order 14365, signed in December 2025, which directed federal agencies to identify and challenge state AI laws that conflict with national AI strategy. Together, these actions set up the central tension that enterprise security leaders now have to navigate: a federal posture that is explicitly moving toward preempting state-level AI...

IAM Metrics in Practice: Real Numbers, Real Scenarios, Real Conversations

A companion post to: IAM Metrics That Actually Matter: Proving Risk Reduction and Value to Every Level of the Organization

The previous post laid out the framework: which IAM metrics matter, why they matter, and how to use them to tell a risk reduction and value story that resonates at every level of the organization. But frameworks without numbers are just theory. Security leaders need to see what these metrics actually look like when you run them against a real environment — the before states, the after states, the calculations, and the language you use to present them.

This post walks through each major metric category with concrete examples drawn from the kinds of environments I have seen across more than two decades in this field. The numbers are composites — realistic representations of what organizations at different maturity levels actually look like — not a single case study. But they are close enough to reality that you should be able to map them directly to your own en...

OpenClaw and Personal AI Assistants: Emerging Threats and What CISOs Need to Do Now

OpenClaw became the fastest-growing GitHub project in history almost overnight. It crossed 300,000 stars in early 2026, surpassing milestones that took Linux and React years to reach. That kind of adoption velocity is a signal security teams cannot afford to miss — because it means OpenClaw is almost certainly already running inside your organization, on devices you manage, connected to accounts and data your security program is responsible for protecting.

The security community has described OpenClaw as “an absolute nightmare” from a risk perspective. That assessment is accurate, and understanding why requires understanding what OpenClaw actually is and how it operates — because it is not a chatbot. It is something with fundamentally different security implications.

What OpenClaw Actually Is

OpenClaw markets itself as “the AI that actually does things.” That description is technically precise and should raise immediate flags for any security practitioner. Where traditional AI tools an...