
What Is Shadow AI? Why Your Company's Biggest Security Threat Is the Browser Tab

Shadow AI is the unauthorized use of AI tools like ChatGPT, Claude, and Gemini inside organizations. Employees paste proprietary data into browser-based AI tools with no security controls. This guide explains the risks and how browser-level controls address them.

Estimated reading time: 12 minutes

Right now, someone in your organization is pasting proprietary source code into ChatGPT to debug a function. Someone else is uploading a confidential financial model to Claude to summarize the key findings. A third person is feeding customer support transcripts into Gemini to draft response templates. They are doing this in browser tabs. Your security team has no visibility into any of it.

This is shadow AI. It is not a theoretical future risk. It is the most common data leakage vector in knowledge-work organizations today. And the reason it is so difficult to control is that it happens entirely within the browser, the one surface where most security tools have the least visibility.

What Shadow AI Is

Shadow AI is the use of artificial intelligence tools within an organization without the knowledge, approval, or oversight of IT and security teams. It typically involves employees accessing browser-based AI services (ChatGPT, Claude, Gemini, Copilot, Perplexity, and others) to improve their productivity, using these tools to process sensitive data that the organization has not authorized for AI consumption.

The term borrows from "shadow IT," which described the unauthorized use of software and cloud services within organizations. Shadow AI follows the same pattern but with higher stakes, because AI tools process, learn from, and in some cases retain the data that users input.

Why Shadow AI Is Exploding in 2026

The conditions for shadow AI are nearly perfect in 2026. Every factor that made shadow IT difficult to control applies to AI, amplified by the speed of AI tool adoption.

  • AI tools are browser-native. No software to install. No IT approval needed. An employee opens a browser tab, navigates to chat.openai.com, and starts pasting data. The barrier to entry is zero.
  • Free tiers are generous. ChatGPT, Claude, Gemini, and Perplexity all offer free tiers that are powerful enough for most work tasks. Employees do not need to expense anything or request procurement approval.
  • Productivity gains are immediate. An employee who discovers they can draft a report in 10 minutes instead of 2 hours is not going to stop because of an AI policy they may not have read. The incentive to use the tool is too strong.
  • Enterprise AI governance lags behind adoption. Most organizations are still drafting AI usage policies while their employees are already using AI tools daily. The governance framework has not caught up with the reality on the ground.
  • Personal devices and remote work. Employees working from personal laptops on home networks bypass corporate network controls entirely. The AI usage happens outside the organization's security perimeter.

A 2024 Microsoft study found that 78% of AI users were bringing their own AI tools to work. By 2026, that percentage has only grown as AI capabilities have expanded and more specialized tools have entered the market.

The Real Risks of Shadow AI

The risk is not that employees are using AI. The risk is that they are using AI without any controls around what data enters the AI system.

Data Leakage Through Prompts

When an employee pastes source code, financial data, customer records, or internal documents into an AI chat, that data is transmitted to the AI provider's servers. Depending on the provider's data retention policy and the user's plan tier, that data may be used to train future models, stored in logs, or accessible to the provider's employees for quality assurance. Samsung, for example, banned ChatGPT in 2023 after employees inadvertently leaked proprietary semiconductor source code through AI prompts.

Intellectual Property Exposure

Proprietary algorithms, unreleased product features, competitive analyses, and trade secrets pasted into AI prompts become data that the organization no longer exclusively controls. The legal implications of IP exposure through AI prompts are still evolving, but the risk is concrete.

Compliance Violations

Regulated industries face specific obligations around data handling. Patient health information (HIPAA), financial data (SOC 2, PCI DSS), personal data of EU residents (GDPR), and legal communications (attorney-client privilege) all have restrictions on where data can be processed and stored. Pasting regulated data into an AI tool that processes it on third-party servers can constitute a compliance violation regardless of the employee's intent.

No Audit Trail

When employees use AI tools through personal accounts in browser tabs, the organization has no record of what data was sent, which AI tool received it, or what response was generated. In the event of a data breach investigation or compliance audit, there is no log to examine.

For research on how AI conversations can be intercepted even when encrypted, see Your Encrypted AI Conversations Aren't as Private as You Think.

Why This Is a Browser Problem

Shadow AI happens in the browser. Not in a native application. Not through an API integration. In a browser tab. An employee opens chat.openai.com or claude.ai in Chrome, types or pastes sensitive data, and hits enter. The data leaves the organization through an HTTPS connection that looks identical to any other web traffic.

This means the control point for shadow AI is the browser. Not the network (the traffic is encrypted). Not the endpoint (no software was installed). Not the email gateway (no email was sent). The browser is where the data enters the AI tool, and the browser is where controls need to exist.
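To make the visibility gap concrete, the sketch below shows (in TypeScript, with a hypothetical endpoint and payload shape) what a prompt submission amounts to: an ordinary HTTPS POST. On the wire, a network appliance sees only a TLS session to an AI domain; the pasted content inside is invisible to it.

```typescript
// Hypothetical sketch of a prompt submission. Real AI chat frontends differ,
// but the principle holds: the request body travels inside TLS, so network
// tools see only "HTTPS to an AI domain", never the pasted content.
async function submitPrompt(sensitiveText: string): Promise<void> {
  await fetch("https://chat.example-ai.com/api/conversation", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // The proprietary data leaves the organization on this line.
    body: JSON.stringify({ prompt: sensitiveText }),
  });
}

// At the network layer, this is indistinguishable from any other web request.
void submitPrompt("function pricingModel() { /* proprietary logic */ }");
```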

For the comprehensive guide on AI security at the browser level, see the AI security guide library. For the dedicated shadow AI security page, see Shadow AI: Browser-Level Controls.

What Traditional Security Tools Miss

  • VPNs encrypt traffic between the device and the VPN server. They do not inspect the contents of HTTPS requests. An employee pasting source code into ChatGPT through a VPN looks like normal encrypted web traffic.
  • Traditional DLP tools monitor file downloads, email attachments, and USB transfers. They do not monitor text pasted into browser-based AI chat interfaces. The data exfiltration vector is a copy-paste operation in a browser tab, which most DLP tools do not intercept.
  • Firewalls and DNS filtering can block access to AI domains entirely. But this creates a game of whack-a-mole: employees switch to lesser-known AI tools, use personal devices, or access AI through proxied URLs. Blocking also eliminates the legitimate productivity benefits that AI tools provide.
  • CASBs (cloud access security brokers) provide visibility into sanctioned cloud applications. Shadow AI tools are, by definition, unsanctioned and often accessed through personal accounts, putting them outside CASB visibility.

How Organizations Are Responding

Policy-Only Approaches

Many organizations start with an AI acceptable use policy: a document that defines which AI tools are approved, what data can be shared with them, and what the consequences of violations are. Policies are necessary but insufficient. Without enforcement mechanisms, policies rely on employee compliance, which history shows is unreliable when the incentive to use the tool is strong.

Blocking AI Domains

Some organizations block access to AI tool domains at the firewall or DNS level. This stops the most obvious usage but creates several problems: employees use personal devices to bypass the block, new AI tools appear faster than blocklists can be updated, and legitimate AI usage (which many organizations want to encourage) is eliminated along with the unauthorized usage.

Browser-Level Controls

The emerging approach is to address shadow AI at the layer where it happens: the browser. Browser-level controls can monitor AI tool usage, enforce data loss prevention policies on browser-based interactions (blocking paste operations into specific sites, preventing file uploads to AI tools), and isolate AI sessions in contained environments where data exfiltration is controlled.
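As an illustration of what a browser-level control can do that network tools cannot, here is a minimal sketch of a paste restriction as a Chrome extension content script. The domain scoping (set in the extension's manifest), the patterns, and the isSensitive() helper are assumptions for illustration, not Legba's implementation.

```typescript
// content-script.ts — minimal sketch of browser-level paste control.
// Assumes an extension manifest that injects this script on AI chat
// domains (e.g. chat.openai.com, claude.ai). Patterns are illustrative.
const SENSITIVE_PATTERNS: RegExp[] = [
  /BEGIN (RSA|EC|OPENSSH) PRIVATE KEY/, // pasted credentials
  /\b\d{3}-\d{2}-\d{4}\b/,              // US SSN-shaped strings
];

function isSensitive(text: string): boolean {
  return SENSITIVE_PATTERNS.some((pattern) => pattern.test(text));
}

document.addEventListener(
  "paste",
  (event: ClipboardEvent) => {
    const text = event.clipboardData?.getData("text/plain") ?? "";
    if (isSensitive(text)) {
      event.preventDefault();   // stop the paste before the page sees it
      event.stopPropagation();
      console.warn("Paste blocked by browser-level DLP policy");
    }
  },
  { capture: true },            // run ahead of the page's own handlers
);
```

A real deployment would report the event to a policy service rather than just logging it, but the control point is the same: the event fires in the browser, before any encrypted request leaves the machine.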

For the foundational technology behind browser-level controls, see What Is Browser Isolation? The Complete 2026 Guide.

Browser Isolation as a Shadow AI Control

Browser isolation provides a control framework for AI tool usage that addresses the limitations of policy-only and blocking approaches.

  • Isolate AI tool sessions. When employees access AI tools through an isolated browser session, the interaction happens in a controlled environment. The organization gains visibility into which AI tools are being used without blocking them outright.
  • Enforce DLP at the browser layer. Browser-level DLP can control clipboard operations (paste restrictions), file upload restrictions, and download controls within AI tool sessions. This prevents the most common data exfiltration vector (copy-paste of sensitive content) without blocking AI usage entirely.
  • Create audit trails. Isolated AI sessions can be logged, providing the organization with a record of which tools were accessed, by whom, and when; a sketch of such a record follows this list. This addresses the audit gap that shadow AI creates.
  • Destroy session data on close. When the isolated AI session ends, all session data (cached prompts, responses, authentication state) is destroyed with the environment. No AI session residue persists on the device.
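To illustrate the audit trail mentioned above, here is a sketch of the kind of record an isolated AI session could emit. The field names and values are hypothetical, not Legba's actual schema.

```typescript
// Hypothetical shape for an AI-session audit record. In practice these
// events would be shipped to the organization's SIEM or log pipeline.
interface AiSessionAuditEvent {
  timestamp: string; // ISO 8601
  user: string;      // the directory identity, not the personal AI account
  tool: string;      // e.g. "claude.ai"
  action: "session_start" | "paste_blocked" | "upload_blocked" | "session_destroyed";
  policy?: string;   // which DLP rule fired, if any
}

const auditEvent: AiSessionAuditEvent = {
  timestamp: new Date().toISOString(),
  user: "jdoe@example.com",
  tool: "claude.ai",
  action: "paste_blocked",
  policy: "no-source-code-paste",
};

console.log(JSON.stringify(auditEvent));
```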

This is not about banning AI. It is about enabling AI usage with appropriate controls. The goal is to let employees access the productivity benefits of AI tools while preventing the data leakage, compliance violations, and IP exposure that uncontrolled usage creates.

For the technical architecture behind browser isolation, see How Legba's Browser-Native Isolation Actually Protects You.

Where Legba Fits

Legba provides browser-native isolation through a Chrome extension. For shadow AI specifically, it offers two relevant capabilities:

  • Isolated browsing sessions for AI tools. Employees can access AI tools through Legba's isolated sessions, which provide ephemeral environments that are destroyed on close. Session data, prompts, and authentication state do not persist.
  • OpenClaw sandbox for AI agents. For teams running autonomous AI coding agents (like OpenClaw), Legba provides a fully isolated cloud sandbox. The agent executes in a contained environment with no access to the user's local machine. See How to Run OpenClaw Safely for the detailed guide.

Legba costs $10 per month for individual use. An MSP platform is available for organizations managing AI security across teams. No infrastructure changes are required.

For further reading, see the Whisper Leak research, the browser isolation pillar guide, and the OpenClaw sandbox guide.

Control Shadow AI at the Browser

Legba isolates AI tool sessions so your team gets the productivity without the data leakage. Ephemeral sessions. Browser-level DLP. $10 per month.
