
Shadow AI — What New CISOs Need to Do Before It Bites Them


It was a Tuesday afternoon when one of our business analysts dropped a file into the shared drive with a note attached: “Used Claude to summarize all the customer complaint data — saved me three hours!” She was thrilled. I was not.

Not because she had done something wrong. She hadn’t — not intentionally. But sitting inside that AI summary was a condensed version of thousands of customer records, support tickets, and PII that had just been sent to an external large language model through a free consumer account. No data processing agreement. No approved vendor. No logging. No way to know what had been retained on the other side.

That moment was my first real encounter with Shadow AI at scale. And if you’re a new CISO, I can almost guarantee you’ve already got it happening in your organization right now — you just might not know it yet.

Shadow AI Is the New Shadow IT, and It Moves Faster

We’ve been talking about Shadow IT for two decades. Employees spinning up personal Dropbox accounts, using Gmail for work docs, running their own SaaS tools without IT approval. Most organizations eventually got their arms around it — at least enough to reduce the blast radius when something went wrong.

Shadow AI is a different animal. The tools are free or nearly free. They’re embedded in browsers, in productivity suites, in coding environments. They work invisibly. And they’re genuinely useful — that’s what makes them so dangerous from a governance standpoint. People aren’t being reckless; they’re being efficient. The business analyst who processed your customer data thought she was doing the right thing for the team.

The speed of adoption is the problem. Shadow IT took years to reach critical mass inside most organizations. Shadow AI is already there. According to multiple enterprise surveys in the past year, somewhere between 60 and 75 percent of employees are using AI tools at work — many of which have never been reviewed, approved, or even acknowledged by security or IT. That number is almost certainly higher in knowledge-worker-heavy organizations.

As a new CISO, you don’t have the luxury of getting comfortable before this hits you. You need a plan on day one.

What’s Actually at Risk When Employees Use Unsanctioned AI

Let me be concrete about what Shadow AI exposure looks like in practice, because it’s easy to gesture at the problem in general terms without ever addressing the specific risks.

Data exfiltration through prompts. When an employee pastes a sales contract, a financial model, or a patient record into an AI tool to get a summary or rewrite, that data may be sent to a third-party model provider, stored in training logs, or retained by the vendor under terms the employee never read. Your DLP tools almost certainly aren’t watching for this. Most are tuned for email and USB drives, not AI prompt submissions.

Intellectual property leakage. Source code, product roadmaps, go-to-market strategies — your engineers and product teams are feeding this into AI tools every day. Some of those tools are free consumer versions with no enterprise data protection guarantees. You may be losing trade secrets without a single security alert firing.

Regulatory and compliance exposure. If you’re in healthcare, finance, or any other regulated vertical, your employees sending protected data to an unapproved AI vendor may be creating HIPAA, GDPR, or PCI violations without realizing it. Regulators are actively developing guidance on AI data handling, and “we didn’t know” won’t be a defense for long.

AI-generated outputs used as fact. This one doesn’t get enough attention. Employees are using AI tools to draft reports, summarize research, and make recommendations. When those outputs contain hallucinations or biased results and get acted on, the downstream consequences fall on your organization. Legal, financial, reputational — all of it.

Third-party model risk you can’t see. Your approved vendor list is your control surface. Shadow AI blows a hole right through it. When your employees are using tools you’ve never vetted, you have no visibility into where those models run, what data they retain, whether they’ve been trained on appropriately sourced data, or what their incident response posture looks like.

How to Find Out What’s Already Happening in Your Organization

Before you can govern something, you have to see it. Here’s how to start building visibility without creating a culture of paranoia or burning political capital on day one.

Start with your network and proxy logs. Most AI tools hit recognizable domains — api.openai.com, claude.ai, gemini.google.com, copilot.microsoft.com, and dozens of others. Pull a week of outbound traffic and filter for known AI endpoints. What you find will surprise you. In my experience, organizations that think they have “a few people using ChatGPT” often have hundreds of daily active users hitting multiple AI services.
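
To make that concrete, here’s a minimal sketch of the filtering step in Python, assuming a CSV export from your proxy with user and destination-host columns. The file name and field names are hypothetical; adjust them to whatever your proxy actually emits, and extend the domain list with the dozens of other AI endpoints in the wild.

```python
# Sketch: tally outbound requests to known AI endpoints from a proxy log.
# Assumes a CSV export with "user" and "dest_host" columns -- hypothetical
# field names; map them to your proxy's actual export format.
import csv
from collections import Counter

AI_DOMAINS = {
    "api.openai.com", "chat.openai.com", "claude.ai",
    "gemini.google.com", "copilot.microsoft.com",
    # ...extend with the other AI endpoints relevant to your environment
}

def ai_traffic_summary(log_path: str) -> Counter:
    """Count requests per (user, AI domain) pair over the log window."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["dest_host"].lower()
            # Match the domain itself and any subdomains.
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                hits[(row["user"], host)] += 1
    return hits

if __name__ == "__main__":
    for (user, host), n in ai_traffic_summary("proxy_export.csv").most_common(20):
        print(f"{user:<20} {host:<28} {n}")
```

Even a crude tally like this is usually enough to turn “a few people use ChatGPT” into per-user, per-service counts you can take to the executive team.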

Run a quick anonymous employee survey. Frame it as understanding how people are doing their work, not catching anyone. You want people to tell you the truth. Ask which tools they use regularly, which ones they use for work tasks, and whether they’ve ever entered any kind of business or customer data. The candor you get from an anonymous survey will tell you more than any technical scan.

Talk to your most productive departments. Shadow AI concentrates in your highest-output teams — sales, engineering, finance, marketing, legal. The people who’ve found AI genuinely useful are often the ones using it most freely. Have a few conversations. Listen without judgment. You’ll get a clearer picture of the real use cases than you’ll get from logs alone.

Check your SaaS portfolio for embedded AI. Tools you already pay for — Microsoft 365, Salesforce, Slack, Google Workspace — have been adding AI capabilities at a rapid pace. Some of those are covered under your existing agreements. Many are not. Pull the release notes from your major SaaS vendors over the past 12 months and audit what AI capabilities have been added and whether they’re within scope of your data agreements.

Building an AI Use Policy That People Will Actually Follow

Here’s the trap most new CISOs fall into: they respond to the Shadow AI problem with a blanket prohibition. “No AI tools without explicit approval.” Full stop.

I understand the instinct. It’s clean, it’s defensible, and it eliminates ambiguity. But it doesn’t work. Employees who are already getting real productivity value from these tools will route around a blanket ban. They’ll use their personal devices. They’ll work from home without VPN. You’ll push the behavior underground, lose whatever visibility you had, and still have all the risk.

The policy you need is one that channels behavior rather than blocking it. Establish clear data classification rules for AI use — define what data is and isn’t appropriate to use with AI tools, and communicate it in plain language. A tiered model works well: public information and internal non-sensitive data can go into approved AI tools; anything customer-facing, regulated, or confidential stays off those platforms until you have appropriate vendor agreements in place.
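
One way to keep that tiered model actionable is to express it as data rather than prose, so a single definition can drive DLP rules, training content, and approval workflows. A minimal sketch, with illustrative tier names; map them to your organization’s actual classification scheme.

```python
# Sketch: a tiered AI-use policy expressed as data. Tier names and rules
# are illustrative, not prescriptive -- align them with your own
# classification scheme and vendor agreements.
AI_DATA_POLICY = {
    "public":       "any approved AI tool",
    "internal":     "approved AI tools only",
    "confidential": "no AI use until an enterprise agreement and DPA are in place",
    "regulated":    "no AI use until an enterprise agreement and DPA are in place",
}

def ai_rule_for(classification: str) -> str:
    """Return the plain-language rule for a given data classification."""
    # Default to the most restrictive tier when the classification is unknown.
    return AI_DATA_POLICY.get(classification.lower(), AI_DATA_POLICY["regulated"])
```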

Build an approved AI tools list — and make it easy to use. One of the reasons people reach for unapproved tools is that the approved path is too slow or too painful. If your procurement and security review process takes four months, employees aren’t going to wait. Streamline the review process for AI tools, create a fast-track path for low-risk use cases, and actively communicate what’s approved and why.

💡 Pro Tip: Make the policy about data, not about tools. “Don’t use ChatGPT” is easy to route around. “Don’t enter customer PII into any AI tool that isn’t on our approved list” is about a specific behavior with a specific risk. People understand the why. They’re more likely to comply — and more likely to flag a concern when they’re unsure.

The Technical Controls That Actually Move the Needle

Policy is essential, but controls are what catch the cases where policy fails.

Update your DLP rules for AI destinations. If your Data Loss Prevention tooling isn’t monitoring for data being sent to AI endpoints, fix that now. Add the major consumer AI domains to your DLP watchlist. Flag uploads and large text submissions to those domains. You don’t necessarily need to block everything — but you need to know what’s happening.
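
As a sketch of the rule logic: most DLP platforms express this in their own policy language, so treat the event fields below as hypothetical stand-ins for whatever your tooling actually emits.

```python
# Sketch: a minimal DLP-style check for outbound AI traffic. Real DLP
# platforms express this as vendor-specific rules; the event fields here
# ("dest_host", "body_bytes") are hypothetical stand-ins.
PROMPT_SIZE_THRESHOLD = 4_000  # bytes; tune to what a pasted document looks like

def should_flag(event: dict, ai_domains: set) -> bool:
    """Flag large submissions to known AI endpoints for review, not blocking."""
    host = event["dest_host"].lower()
    to_ai = any(host == d or host.endswith("." + d) for d in ai_domains)
    return to_ai and event.get("body_bytes", 0) >= PROMPT_SIZE_THRESHOLD
```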

Implement browser-level controls for unmanaged AI access. On corporate-managed devices, browser extensions and endpoint management tools can restrict access to unapproved AI services. This is especially useful for blocking the consumer versions of tools that have enterprise-grade alternatives you’ve already approved. If you’ve paid for Microsoft Copilot, there’s no reason employees need to be using the free ChatGPT tier with no data protections.
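
To illustrate the shape of such a control: Chrome Enterprise, for example, exposes URLBlocklist and URLAllowlist policies that your endpoint management can push to managed browsers. The sketch below mirrors that shape in Python for readability; verify the exact policy names and pattern syntax against your browser vendor’s documentation before deploying anything.

```python
# Illustrative shape of a managed-browser policy (modeled loosely on
# Chrome Enterprise's URLBlocklist / URLAllowlist). Hostnames are examples;
# confirm policy names and pattern syntax with your browser vendor.
MANAGED_BROWSER_POLICY = {
    "URLBlocklist": ["chat.openai.com", "claude.ai", "gemini.google.com"],
    "URLAllowlist": ["copilot.microsoft.com"],  # the approved enterprise tier
}
```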

Build AI into your vendor risk management program. Every AI tool that touches any company data needs to go through your VRM process. That means understanding where data goes, what’s retained, whether the vendor has signed a DPA, and what their security posture looks like. Create an AI-specific addendum to your standard vendor questionnaire that covers model training data, output logging, and incident response procedures.

🔑 Key Tip: Get logging on your approved AI tools. For AI tools you’ve sanctioned, make sure you have usage logs that you can query — who’s using what, when, what types of prompts, what data classifications are involved. This isn’t surveillance. It’s the same audit trail you’d want for any other business-critical application.
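
A sketch of what such a query might look like, assuming a hypothetical event schema (user, tool, data_class, ISO 8601 timestamp with a UTC offset). Real sanctioned tools export different fields, so treat this as the shape of the audit rather than the implementation.

```python
# Sketch: an audit query over approved-tool usage logs. The event schema
# is hypothetical -- map it to whatever your sanctioned tools export.
# Timestamps are assumed ISO 8601 with offset, e.g. "2025-06-03T14:02:11+00:00".
from datetime import datetime, timedelta, timezone

RESTRICTED = {"confidential", "regulated"}

def restricted_usage(events: list, days: int = 30) -> list:
    """Return events in the last `days` where restricted data met an AI tool."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    return [
        e for e in events
        if e["data_class"] in RESTRICTED
        and datetime.fromisoformat(e["timestamp"]) >= cutoff
    ]
```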

Getting the Organization Behind You — From Employees to the Board

Shadow AI isn’t just a security problem you can solve from the security team. It’s a business behavior problem that requires buy-in at every level of the organization.

With the board, frame Shadow AI as business risk, not technology risk. Directors respond to liability, regulatory exposure, and reputational damage — not to technical explanations about prompt injection or data retention policies. Tell them that employees are actively using AI tools the company hasn’t vetted, that this creates potential regulatory and IP exposure, and that you have a plan to get it under control. Quantify what you can: “We’ve identified X employees sending data to unapproved AI services monthly.” Make it concrete.

With the executive team, position yourself as an enabler. Your CEO and business unit leaders want AI productivity gains — that’s the whole conversation in most boardrooms right now. If you walk in as the person trying to slow AI adoption down, you’ll lose. Walk in as the person who’s going to help the business adopt AI safely and at scale. Propose an approved AI program that gives the organization more of what it wants while reducing the risk. Be the yes — just a structured yes.

With employees, lead with empathy. Most of the people using Shadow AI tools aren’t trying to create security problems. They’re trying to do their jobs better. Acknowledge that. Make the approved path genuinely easier than the unapproved one. Train people on the specific risks using real stories, not abstract warnings. Real examples land.

💡 Pro Tip: With HR and Legal, make them partners from the start. Shadow AI policy enforcement is a lot harder without HR alignment. Cases that require discipline need HR involved early. Legal needs to be part of developing your policy so it’s defensible. Don’t treat this as a security program in isolation — build the coalition.

Key Points

  • Shadow AI is already present in your organization. The question isn’t whether it’s happening — it’s whether you have visibility into it.
  • The real risks are data exfiltration through prompts, IP leakage, regulatory exposure, and third-party model risk you can’t audit.
  • Blanket bans don’t work. Build policy that channels behavior toward approved tools rather than prohibiting AI outright.
  • Update your DLP and vendor risk management programs specifically for AI — your existing controls almost certainly have blind spots here.
  • Getting organizational support requires different framing for different audiences: business risk for the board, enablement for the business, empathy and clarity for employees.

Pro Tips

  • Run a Shadow AI discovery sprint before you build your policy. Two weeks of network log review and a few candid employee conversations will tell you more than a year of theoretical risk modeling.
  • Partner with your most AI-enthusiastic business leaders first. Find the VP of Engineering or Sales leader who’s already pushing AI adoption hard. Make them your champion for the approved AI program, not your adversary.
  • Negotiate enterprise data agreements proactively. For any AI tool gaining real traction, get ahead of the vendor conversation. Negotiate zero-retention agreements, data processing addenda, and audit rights before the tool becomes too embedded to move off of.
  • Treat AI hallucination risk as a business process risk. Work with department heads to identify where AI-generated outputs are being used in decisions and build human review checkpoints into those workflows.
  • Create a safe reporting channel for AI-related incidents. Build a non-punitive path for employees to flag mistakes. You want to know about problems before they become breaches.

Pitfalls to Avoid

  • Don’t conflate approved AI tools with safe AI tools. Approval is the beginning of the work, not the end. Every approved tool still requires proper configuration, data classification alignment, and usage governance.
  • Don’t build your AI policy in a vacuum. If Legal, HR, and the business aren’t in the room when you write it, you’ll write something they’ll undermine the moment they see it.
  • Don’t ignore AI embedded in tools you already use. The biggest Shadow AI risk for most organizations isn’t employees signing up for ChatGPT — it’s the AI capabilities baked into your existing SaaS stack that nobody noticed getting turned on.
  • Don’t make the approved AI process harder than using a consumer tool. If getting a tool approved takes four months and seven signatures, you’ve already lost.
  • Don’t let this become a one-time project. The AI tool landscape changes faster than almost any other technology sector. Build an ongoing review cadence into your program from day one.

💭 Final Thought

Shadow AI isn’t a problem you’re going to eliminate. The tools are too useful, too accessible, and too embedded in how people work now. Your job isn’t to stop it — it’s to shape it. The CISOs who handle this well will be the ones who lean into AI adoption rather than against it. They’ll build programs that give employees more of what they’re already reaching for, with guardrails that protect the organization without killing productivity. You have a window right now. Most organizations are still in the early, messy phase of AI adoption where behaviors haven’t fully calcified. Get your visibility, build your policy, and get your controls in place before the problem outgrows your ability to manage it. Because trust me — it will outgrow you fast if you wait.

If this gave you something useful, subscribe to InfoSec Made Easy for more straight-talk on what it actually takes to lead in security today. Share it with a peer who’s just stepped into a CISO role — they need this conversation now, not six months from now. And drop a comment below: what’s your biggest Shadow AI challenge right now? I read every one.
