Your employees are using AI.
Do you know which tools?
Most teams don't. They don't know what data is flowing to which model providers, what the vendor terms of service actually say, or what their regulator would think.
The AI Usage Audit answers all three — in a written report your board can read in ten minutes.
Speak to a consultant
What's in every AI Usage Audit
Nine finding categories. Mapped to your sector's regulatory requirements. Calibrated against the current AI threat landscape.
Shadow tool discovery
Network and endpoint scans to identify every AI tool in active use across the organisation — sanctioned and unsanctioned.
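At its simplest, the network side of this discovery looks like the sketch below: matching egress or DNS logs against a list of known AI provider domains. The domain list, log format, and function name are illustrative assumptions, not the audit's actual tooling.

```python
# Minimal sketch: flag log lines whose destination is a known AI provider.
# Domain list and log format are illustrative assumptions.
AI_DOMAINS = {
    "openai.com", "chatgpt.com", "anthropic.com", "claude.ai",
    "gemini.google.com", "perplexity.ai",
}

def flag_ai_destinations(log_lines):
    """Yield (client, domain) pairs for requests to AI providers.

    Assumes whitespace-separated lines of: timestamp client domain.
    """
    for line in log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue
        _, client, domain = parts[:3]
        # Match the provider domain itself or any subdomain of it.
        if any(domain == d or domain.endswith("." + d) for d in AI_DOMAINS):
            yield client, domain

logs = [
    "2024-05-01T09:12Z 10.0.4.17 chatgpt.com",
    "2024-05-01T09:13Z 10.0.4.22 intranet.local",
    "2024-05-01T09:14Z 10.0.4.17 api.anthropic.com",
]
print(list(flag_ai_destinations(logs)))
```

In practice this is one signal among several — endpoint scans and browser-extension inventories catch tools that never touch the corporate network path.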
Data egress mapping
Trace where employee prompts, attachments, and outputs are going. Identify regulated data flowing to model providers.
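A crude sketch of the classification step — pattern-matching outbound prompt text for markers of regulated data. The patterns and example text are illustrative assumptions; real DLP classification is considerably more sophisticated.

```python
import re

# Minimal sketch: flag prompt text containing patterns that suggest
# regulated personal data. Patterns are illustrative assumptions only.
PATTERNS = {
    "uk_ni_number": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify_prompt(text):
    """Return the names of regulated-data patterns found in a prompt."""
    return sorted(name for name, rx in PATTERNS.items() if rx.search(text))

print(classify_prompt("Summarise the complaint from jo@example.com, NI AB123456C"))
```

The point of the mapping is not the matcher itself but joining its output to the destination: *which* data classification went to *which* model provider, under *which* vendor terms.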
Vendor T&Cs review
Read what your employees clicked through. Commercial terms, IP retention, training-data rights, data-residency clauses.
Employee usage patterns
Anonymised behaviour analysis — which roles use AI most, for which tasks, and with what data classifications.
Regulatory mapping
EU AI Act categorisation, ICO guidance alignment, sector overlays (FCA, NHS DSPT, SRA) — what each finding triggers.
AI-attack readiness
Where your existing controls handle AI-augmented threats (deepfake BEC, prompt injection against agent integrations) — and where they don't.
Training-gap analysis
Which employee groups need AI safety training first, prioritised by data sensitivity and current usage volume.
Internal policy gap
Compare current AI policies (if any) to what your sector and regulators expect. Concrete drafting recommendations.
Controls inventory
Technical controls in place to constrain AI use — DLP, network egress rules, conditional access. What's working, what's missing.
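The gap analysis here reduces to comparing what the controls claim with what the logs show. A minimal sketch, with assumed domain sets standing in for real policy output and egress data:

```python
# Minimal sketch of a controls-gap check: compare destinations observed
# in egress logs against what the egress policy claims to block.
# The domain sets below are illustrative assumptions, not real data.
BLOCKED_BY_POLICY = {"chatgpt.com", "claude.ai"}
AI_DOMAINS = {"chatgpt.com", "claude.ai", "perplexity.ai"}

def control_gaps(observed):
    """Split observed AI destinations into two kinds of gap."""
    hits = {d for d in observed if d in AI_DOMAINS}
    return {
        # Blocked on paper, reached in practice: the control isn't working.
        "enforcement_gap": sorted(hits & BLOCKED_BY_POLICY),
        # AI tool not covered by the policy at all.
        "coverage_gap": sorted(hits - BLOCKED_BY_POLICY),
    }

print(control_gaps({"chatgpt.com", "perplexity.ai", "intranet.local"}))
```

Distinguishing the two gap types matters for remediation: an enforcement gap is a configuration fix, while a coverage gap is a policy decision.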
For firms where the regulator will ask
The AI Usage Audit is built for regulated UK mid-market firms where "we don't know" is not a defensible answer when AI use shows up in an inspection, a client questionnaire, or a contract review.
"Most boards I speak to could not, today, name three AI tools their employees are using. The AI Usage Audit closes that gap before the next regulator visit does."
How an AI Usage Audit works
Six steps. Designed to be minimally disruptive to your operations. Typical duration: five to eight business days from kickoff to debrief.
Discovery call
A call with a senior consultant to map the AI landscape your firm already has — known tools, suspected shadow usage, regulatory drivers.
Scoping & agreement
A written scope covering which systems we inspect, which employee groups we interview, and which data classifications are in play.
Discovery & inspection
Network telemetry analysis, endpoint scans, an anonymised employee survey, and vendor T&Cs collection.
Risk categorisation
Every finding is mapped to the data classification involved, its regulatory exposure, vendor T&Cs implications, and a severity rating.
Report & debrief
An executive-ready inventory of AI tools in use, a risk-ranked exposure register, and a live debrief with the consultant who ran the audit.
Remediation roadmap
Prioritised next steps to act on the audit's findings.
What clients do with the findings
Most clients flow into our ongoing AI Governance & Compliance engagement to execute the remediation roadmap. Some take the report and run it internally. Both work — we'll tell you which we think fits.
See AI Governance & Compliance