The attacker has new tools.
So should you.
AI-augmented attacks are not theoretical. Deepfake-enabled BEC, autonomous vulnerability discovery, and shadow-AI data leakage are happening to UK firms this quarter. Here's what's changing in the threat landscape and how to respond.
AI-aided phishing and BEC
Phishing remains the dominant initial-access vector against UK firms, and LLM-drafted lures have measurably increased attacker yield. Deepfake voice and video have moved from research demos into production fraud, with AI-supported social engineering and synthetic-media vishing both rising sharply through 2025. Business email compromise proceeds also continue to flow through UK banking infrastructure, which remains a primary intermediary destination for fraudulent transfers.
$2.77B in BEC losses, 2024
Source: FBI IC3 2024 Annual Report, Apr 2025
Require out-of-band callback verification on any payment or credential-change request, regardless of channel, paired with phishing-resistant FIDO2 MFA and DMARC enforced at p=reject.
Autonomous vulnerability scanning and exploit generation
AI is now embedded across the attack lifecycle. Threat actors have moved beyond coding assistance to integrating LLM APIs directly into malware for just-in-time code generation that defeats signature-based detection. Offensive AI platforms now compete with elite human researchers on public bug-bounty leaderboards, and the NCSC assesses that AI will almost certainly increase both the volume and impact of attacks through 2027.
204 nationally significant incidents — a 129% increase year-on-year
Source: NCSC Annual Review 2025, Oct 2025
Favour behaviour-based detection (EDR/XDR) over signature-based tooling, prioritise patching against the CISA Known Exploited Vulnerabilities (KEV) catalogue with a 72-hour SLA on critical CVEs affecting externally facing systems, and support both with continuous attack-surface management.
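The KEV-plus-SLA triage above is straightforward to automate. A minimal sketch with hypothetical CVE identifiers and a hard-coded KEV subset (a real pipeline would pull the CISA catalogue feed and the vulnerability scanner's findings):

```python
from datetime import datetime, timedelta, timezone

SLA = timedelta(hours=72)  # clock on critical CVEs facing external systems

# Hypothetical open critical findings on externally facing assets.
open_criticals = [
    {"cve": "CVE-2025-0001", "detected": datetime(2025, 11, 1, 9, 0, tzinfo=timezone.utc)},
    {"cve": "CVE-2025-0002", "detected": datetime(2025, 11, 4, 9, 0, tzinfo=timezone.utc)},
]

# Hypothetical subset of the CISA KEV catalogue, fetched separately in practice.
kev_catalogue = {"CVE-2025-0001"}

def triage(findings, kev, now):
    """Split findings into KEV-listed (patch first) and SLA breaches."""
    kev_hits = [f for f in findings if f["cve"] in kev]
    breaches = [f for f in findings if now - f["detected"] > SLA]
    return kev_hits, breaches

now = datetime(2025, 11, 5, 9, 0, tzinfo=timezone.utc)
kev_hits, breaches = triage(open_criticals, kev_catalogue, now)
print([f["cve"] for f in kev_hits])   # KEV-listed: patch first
print([f["cve"] for f in breaches])   # past the 72-hour SLA
```

Anything appearing in both lists is the day's first patch job.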
Prompt injection against agentic systems
Prompt injection is OWASP's top LLM risk and is named by Microsoft as one of four primary attack vectors against AI systems. Indirect prompt injection — where malicious instructions are hidden inside retrieved emails, documents or web pages rather than user input — has been confirmed exploitable in production environments. Blast radius scales with agent capability: once an LLM can browse, retrieve, write or execute code, embedded instructions become real exploit primitives.
Prompt injection present in 73%+ of production AI deployments assessed in 2025
Source: Redbot Security analysis, 2025
Treat retrieved content as untrusted: enforce least-privilege scoping on agent tools (read-only by default, writes require human confirmation), sandbox external content before it enters the context window, and monitor agent action logs as privileged-user sessions.
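The "read-only by default, writes require human confirmation" rule can be enforced at the tool-dispatch layer rather than trusted to the model. A minimal sketch, assuming a hypothetical agent framework where every tool is registered with a `mutates` flag:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    func: Callable[[str], str]
    mutates: bool  # any write, send, or execute capability

def run_tool(tool: Tool, arg: str, confirm: Callable[[str], bool]) -> str:
    """Read-only tools run freely; mutating tools need human sign-off."""
    if tool.mutates and not confirm(f"Agent wants to run {tool.name}({arg!r})"):
        return "DENIED: human confirmation withheld"
    return tool.func(arg)

# Hypothetical tools: a read-only search and a mutating file write.
search = Tool("search_docs", lambda q: f"results for {q}", mutates=False)
write = Tool("write_file", lambda p: f"wrote {p}", mutates=True)

auto_deny = lambda prompt: False  # default stance: no unattended writes
print(run_tool(search, "invoice policy", auto_deny))  # runs normally
print(run_tool(write, "/tmp/report", auto_deny))      # blocked without a human
```

Because the gate sits outside the context window, an injected instruction can ask for a write but cannot grant itself the confirmation.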
Model and supply-chain weaponisation
The AI model supply chain is a primary attack surface. Public model registries host hundreds of thousands of unsafe or suspicious artefacts, with techniques including pickle deserialisation exploits, namespace hijacking of deleted maintainer accounts, and coordinated uploads of malicious agent skills. Slopsquatting — weaponising LLM hallucination of nonexistent package names — gives attackers predictable, pre-registerable targets in `requirements.txt` and `package.json` files.
~352,000 unsafe findings across 51,700 Hugging Face models
Source: Protect AI, 2025
Pin models and packages by hash rather than name, maintain an allow-listed internal mirror for both code dependencies and AI model artefacts, and require human review on every AI-suggested dependency before it enters a manifest file.
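Hash pinning reduces to one comparison at load time. A minimal sketch with a hypothetical pin manifest (in practice the expected digests come from a reviewed manifest file, and pip enforces the same idea for packages via hash-checking mode):

```python
import hashlib

# Hypothetical pin manifest: artefact name -> expected SHA-256 hex digest.
# Here the pin is computed from known-good bytes for demonstration; in
# practice it is recorded at review time and committed alongside the code.
pins = {
    "model.safetensors": hashlib.sha256(b"trusted model bytes").hexdigest(),
}

def verify(name: str, payload: bytes) -> bool:
    """Refuse any artefact whose digest does not match its recorded pin."""
    expected = pins.get(name)
    return expected is not None and hashlib.sha256(payload).hexdigest() == expected

print(verify("model.safetensors", b"trusted model bytes"))  # True
print(verify("model.safetensors", b"tampered bytes"))       # False
print(verify("unpinned-model.bin", b"anything"))            # False: no pin, no load
```

The same policy applies to code dependencies: `pip install --require-hashes -r requirements.txt` rejects any package whose digest is missing or mismatched, which also neutralises a slopsquatted name the moment someone tries to add it without a reviewed hash.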
AI-assisted insider data exfiltration
AI assistants collapse the friction of insider exfiltration — pasting a contract, customer list or source code into a public LLM is a five-second action with no email attachment, no USB event, and no DLP signature on most current configurations. Shadow-AI breaches now carry a measurable cost premium over baseline incidents, and a majority of breached organisations lack proper AI access controls. The Samsung ChatGPT leaks remain the canonical reference incident in ENISA, NCSC and ICO guidance.
Shadow-AI breaches cost +$670k on average versus baseline
Source: IBM Cost of a Data Breach 2025, Jul 2025
Sanction enterprise-tier AI tools (M365 Copilot with UK/EU data residency, ChatGPT Enterprise, Anthropic for Work), block public LLM domains at the egress proxy except through the enterprise broker, and deploy browser-extension DLP for paste events — with an approved alternative for staff so bans do not drive usage underground.
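The egress rule above is a default-allow proxy with a block list for public LLM endpoints and an explicit exception for the enterprise broker. A minimal sketch; the broker hostname is hypothetical, and a production proxy would match far more domains:

```python
from urllib.parse import urlparse

# Public LLM endpoints blocked at the proxy (illustrative, not exhaustive).
BLOCKED_SUFFIXES = ("chatgpt.com", "chat.openai.com", "gemini.google.com")
# Hypothetical sanctioned enterprise broker, exempt from the block.
ALLOWED_HOSTS = {"copilot.contoso.example"}

def egress_decision(url: str) -> str:
    host = (urlparse(url).hostname or "").lower()
    if host in ALLOWED_HOSTS:
        return "ALLOW"  # sanctioned enterprise tier
    if any(host == s or host.endswith("." + s) for s in BLOCKED_SUFFIXES):
        return "BLOCK"  # public LLM outside the broker
    return "ALLOW"

print(egress_decision("https://chatgpt.com/c/123"))             # BLOCK
print(egress_decision("https://copilot.contoso.example/chat"))  # ALLOW
```

The allow-list exception is what keeps the block from driving usage underground: staff still have a working, sanctioned tool.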
Shadow-AI risk inside organisations
Shadow AI is the dominant AI risk for mid-market firms because adoption is outpacing governance. Around a third of UK businesses are using, adopting or considering AI, but only around a quarter of those have any cyber security practices in place to manage AI risk. The majority of paste events to GenAI tools originate from unmanaged personal accounts, placing most data flow outside any DLP control plane — and most organisations report being effectively blind to AI data flows.
68% year-on-year increase in shadow GenAI usage inside enterprises
Source: Menlo Security 2025 State of Browser Security
Run a four-step control loop: discover existing AI usage via egress logging and SaaS discovery; sanction an approved enterprise tier with UK/EU data residency; block unsanctioned domains at the egress proxy; and train staff on what data is acceptable, with a no-blame route for accidental disclosure.
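The discovery step of the loop is a tally of GenAI domains per user out of egress logs. A minimal sketch over a toy `user,domain` log format (real proxy logs need real parsing, and the domain list is illustrative):

```python
from collections import Counter

# Hypothetical egress-log lines in "user,domain" form.
log_lines = [
    "alice,chatgpt.com",
    "alice,chatgpt.com",
    "bob,claude.ai",
    "carol,intranet.example",
]

GENAI_DOMAINS = {"chatgpt.com", "claude.ai", "gemini.google.com"}

def discover(lines):
    """Step 1 of the loop: tally GenAI traffic per user from egress logs."""
    hits = Counter()
    for line in lines:
        user, domain = line.split(",")
        if domain in GENAI_DOMAINS:
            hits[user] += 1
    return hits

usage = discover(log_lines)
print(dict(usage))  # {'alice': 2, 'bob': 1}
```

The output is the shadow-AI baseline: who is already using what, which is the evidence base for choosing the sanctioned tier before any blocking begins.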