AI security is the fastest-growing attack surface in enterprise IT. As organisations deploy AI tools at scale, attackers are adapting — and the data tells a clear story. Key statistics on prompt injection, AI data breaches, shadow AI exposure, and enterprise response, sourced from IBM, OWASP, Gartner, Forrester, and more.
of enterprise employees now use AI tools regularly — making AI the fastest-adopted enterprise technology category in history.
of organisations have experienced at least one AI-related security incident, up sharply as AI adoption outpaces security controls.
employees per team use shadow AI tools without IT approval on average — creating invisible data flows across every department.
projected size of the global AI security market by 2028, driven by enterprise demand for AI governance and threat detection.
of data breaches in 2025 involved an AI tool or AI-generated content — making AI a primary vector for modern enterprise breaches.
Prompt injection is the top LLM security risk per OWASP's 2025 edition — an attack class with no equivalent in traditional software security.
of LLM applications tested in 2025 were vulnerable to some form of prompt injection — meaning most AI deployments are exploitable today.
increase in AI agent hijacking attacks year-over-year in 2025, as autonomous AI agents create new attack surfaces without traditional defences.
customer-facing AI chatbots leaked sensitive information when tested adversarially — a critical risk for enterprises deploying public-facing AI.
of employees admit to pasting work documents into public AI tools — exposing confidential data to third-party AI providers with no enterprise data controls.
unsanctioned AI tools are in use at the average enterprise — the vast majority invisible to IT, security, and compliance teams.
of incidents involve source code shared with external AI tools, making it the #1 category of data exposed via shadow AI.
of employees can have their data exposed simultaneously in a single shadow AI incident — the blast radius is far larger than traditional data leaks.
of organisations will face AI-specific regulatory requirements by end of 2026, covering data handling, transparency, and AI system accountability.
EU AI Act enforcement begins for high-risk AI systems — carrying fines of up to 7% of global annual revenue for non-compliance.
average cost of an AI-related data breach, reflecting regulatory notification obligations, investigation complexity, and reputational damage.
of organisations have a formal AI governance policy in place — leaving 69% exposed to regulatory action as AI-specific laws take effect.
of CISOs rank AI security as a top-3 priority for 2026, reflecting the rapid shift of AI threats from theoretical to operational.
average time to detect an AI security incident — nearly twice the detection window for traditional cyberattacks, due to lack of AI-native monitoring.
reduction in AI security incidents for organisations with AI governance platforms, compared to those relying on policy alone.
CAGR for AI security budget through 2028, as enterprises respond to growing AI threat volumes and incoming regulatory mandates.
The data is unambiguous: AI security is no longer a future concern — it is a present operational reality. 65% of organisations have already experienced an AI-related security incident (IBM, 2025), and 43% of data breaches now involve an AI tool or AI-generated content. The attack surface is expanding faster than enterprise defences can adapt.
The technical threat landscape has shifted fundamentally. OWASP's LLM Top 10 (2025 edition) identifies prompt injection as the primary risk — and academic testing confirms 86% of LLM applications are currently vulnerable. As organisations deploy AI agents that take autonomous actions, the consequences of a single compromised prompt can cascade across entire systems.
Meanwhile, the insider risk from shadow AI remains severe. With 97 unsanctioned AI tools in use at the average enterprise and 73% of employees admitting to pasting work documents into public AI tools, data leakage is happening continuously — largely undetected. Average detection time for AI security incidents is 197 days.
The regulatory window is closing. EU AI Act enforcement begins August 2026, carrying fines up to 7% of global annual revenue. Only 31% of organisations have formal AI governance policies. The organisations that act now — with real-time AI visibility and governance platforms — will avoid both the financial and reputational cost of AI security failures.
Statistics on this page are sourced from publicly available research, analyst reports, vendor studies, and regulatory publications from 2024–2026. Primary sources include IBM, OWASP, Gartner, Forrester, Cyberhaven, MarketsandMarkets, and Aona AI's own platform data. Where multiple data points exist for a topic, the most recent or most widely cited figure is used. All figures relate to enterprise usage unless otherwise stated. Projected figures are noted as such.
Last updated: March 2026 — This page is updated quarterly. Next update: June 2026.
Real-time AI visibility, prompt injection defence, shadow AI detection, and compliance reporting — all in one platform. Deployed in under 5 minutes.