Risk type: Shadow AI
Shadow AI
Shadow AI is the use of unapproved AI tools (chatbots, extensions, copilots) for work, often by pasting sensitive data into prompts without visibility or controls.
Quick answer
The fastest way to reduce AI risk is to control what can be typed, pasted, and uploaded in the browser. Combine governance (approved tools and data boundaries) with browser-layer enforcement. When users browse unknown destinations as part of AI workflows, isolation reduces endpoint exposure by running web content in an isolated container and streaming only rendered output; sessions are deleted after use.
When you need this
- Employees paste internal data into AI prompts to move faster.
- You need policy enforcement in the browser, not just training and documents.
- You want to allow AI productivity while preventing sensitive data loss.
Last updated
2026-01-29
Affected tools
- Browser AI extensions
- Unapproved chatbots
- Embedded AI widgets
- Free-tier AI tools
How it usually happens in the browser
- Employees install AI browser extensions that summarize pages, write emails, or generate code.
- Teams use free AI tools in the browser because they’re fast and don’t require procurement.
- Sensitive data is pasted into prompts to speed up tasks (support tickets, logs, contracts).
- Outputs are copied into production systems without review, creating security and compliance risks.
- Admins lack visibility because usage happens in the browser across many domains and tools.
What traditional defenses miss
- Procurement and SaaS management tools don’t easily see browser extensions or ad-hoc web usage.
- Network-based controls can’t reliably classify prompt content inside encrypted sessions.
- Training doesn’t stop behavior when the productivity gain is immediate and the risk feels abstract.
- Policies aren’t enforceable without technical controls at the browser layer.
Mitigation checklist
- Define an allowlist of approved AI tools and workflows; treat everything else as untrusted.
- Restrict extension installs and enforce a strict allowlist for browser extensions (see the policy sketch after this list).
- Use browser-layer controls to block sensitive paste/upload into unapproved AI tools.
- Provide an approved alternative that’s easy to access (so teams don’t bypass controls).
- Measure adoption: track which tools people try to use and where exceptions are needed.
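For managed Chrome, the extension and destination controls above map to existing enterprise policies. The sketch below shows a minimal managed-policy file in the Linux JSON format (the same policies can be deployed via GPO on Windows or a configuration profile on macOS); the extension ID and hostname are placeholders, not recommendations.

```json
{
  "ExtensionInstallBlocklist": ["*"],
  "ExtensionInstallAllowlist": [
    "aaaabbbbccccddddeeeeffffgggghhhh"
  ],
  "URLBlocklist": [
    "unapproved-ai-tool.example.com"
  ]
}
```

Pair the block-everything-then-allowlist pattern with a documented exception process, so teams can request new tools instead of sideloading them.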
How isolation helps
- Isolation helps by enforcing safer defaults for unknown web destinations and reducing endpoint risk when users explore new tools.
- It can be used as a control boundary for unapproved AI sites, while approved tools remain available under policy.
- Disposable sessions reduce residual state from risky exploration and help avoid long-lived session drift.
FAQs
Why do employees use shadow AI?
Because it’s fast and easy. If the approved path is slower, people will bypass it—especially under deadlines.
Should we ban AI entirely?
Most orgs get better results by approving a safe set of tools and enforcing guardrails than by banning AI outright.
What’s the first control to implement?
Restrict AI-related browser extensions and block sensitive data paste/upload into unapproved AI tools.
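Paste and upload blocking is usually delivered by an enterprise browser, DLP agent, or managed extension rather than hand-written code. Purely as an illustration of the mechanism, the TypeScript content-script sketch below cancels paste events on hosts outside an approved list when the clipboard text matches crude sensitive-data patterns; the host allowlist and regexes are hypothetical placeholders, not a production rule set.

```ts
// Illustrative content-script sketch, not a product feature:
// block pastes that look like sensitive data on hosts outside the approved list.
const APPROVED_AI_HOSTS = new Set(["approved-ai.example.com"]); // placeholder allowlist

// Naive detectors for demonstration only; real DLP needs richer classification.
const SENSITIVE_PATTERNS: RegExp[] = [
  /\b\d{3}-\d{2}-\d{4}\b/,                // SSN-like number
  /-----BEGIN [A-Z ]*PRIVATE KEY-----/,   // private key material
  /\b(?:AKIA|ASIA)[0-9A-Z]{16}\b/,        // AWS access key ID pattern
];

document.addEventListener(
  "paste",
  (event: ClipboardEvent) => {
    if (APPROVED_AI_HOSTS.has(window.location.hostname)) return; // approved tool, allow
    const text = event.clipboardData?.getData("text/plain") ?? "";
    if (SENSITIVE_PATTERNS.some((p) => p.test(text))) {
      event.preventDefault(); // stop the paste before the prompt receives it
      console.warn("Paste blocked: possible sensitive data on an unapproved AI site.");
    }
  },
  true // capture phase, so page scripts don't see the event first
);
```

The same interception point is where an enterprise browser can log the attempt, redact the match, or point the user to the approved alternative.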
Does isolation give visibility into prompts?
Isolation primarily changes where web content runs. Visibility and enforcement for prompt content usually require additional browser-layer controls and governance.
References
- NIST AI Risk Management Framework (AI RMF) — NIST
- Chrome Enterprise: Policies — Google
- Cloudflare: Browser Isolation — Cloudflare