This report examines the current state of AI governance in UK law firms, drawing on the SRA Compliance Officers Thematic Review (December 2025), Clio Legal Trends Report 2024, Thomson Reuters data, The Law Society Annual Statistics Report 2024, and regulatory developments through Q1 2026.
The picture is stark: AI adoption is near-universal, governance is not, and the regulatory window is closing.
Executive Summary
| Finding | Data Point | Source |
|---------|-----------|--------|
| AI adoption is near-universal | 96% of UK firms use AI[^1] | Clio Legal Trends Report 2024 |
| Compliance officers are unprepared | 1 in 36 COLPs can describe all general regulatory obligations[^2] | SRA Compliance Officers Thematic Review, Dec 2025 |
| The majority of firms lack adequate governance | ~69% of COLPs could not describe more than half their general obligations[^2] | SRA Compliance Officers Thematic Review, Dec 2025 |
| The market is massive | GBP 44 billion UK legal services market[^3] | Law Society 2024 |
| Regulatory pressure is escalating | ICO consultation on AI guidance launched March 2026[^4] | ICO |
| EU enforcement is imminent | Full EU AI Act enforcement 2 August 2026[^5] | Regulation (EU) 2024/1689 |
| No governance tools exist | Zero purpose-built AI governance tools for legal | Market analysis, Q1 2026 |
The core finding: UK law firms have adopted AI at extraordinary speed but have not built the governance infrastructure to demonstrate compliance. The gap between adoption and governance is the defining risk of 2026.
Part 1: The Adoption Picture
96% of UK Firms Now Use AI
According to the Clio Legal Trends Report 2024 (UK data)[^1], 96% of UK law firms now use AI in some capacity. This includes:
- General-purpose tools (ChatGPT, Claude, Copilot) used informally by fee earners
- Embedded AI features in existing legal technology (research, document management, practice management)
- Purpose-built legal AI tools (research platforms, contract review, drafting assistants)
- Experimental pilot programmes evaluating agentic AI capabilities
The 96% figure includes all forms of AI use, from casual ChatGPT queries to enterprise-grade deployments. What it doesn't capture is whether that use is governed.
The Adoption Spectrum
Not all AI use is equal. Based on our analysis of industry data and conversations with UK law firms, we estimate firms broadly fall into four tiers[^6]:
Tier 1: Ad-hoc use (estimated 40-50% of firms) — Fee earners using general-purpose AI (ChatGPT, Claude) for research, drafting, and summarisation without firm-level governance. No AI policy, no oversight, no audit trail. The firm may not even know the extent of AI use.

Tier 2: Sanctioned tools, minimal governance (estimated 30-35% of firms) — Firms that have approved specific AI tools (CoCounsel, Lexis+ AI, Harvey) but lack systematic governance. There may be a usage policy, but no verification of AI outputs, no compliance checking, and no audit trail. The COLP knows AI is being used but has limited visibility into how.

Tier 3: Formal AI policy with manual governance (estimated 10-15% of firms) — Firms with written AI policies, designated AI champions, and manual review processes. Governance exists on paper but relies on human diligence rather than systematic infrastructure. Audit trails are manual — assembled before inspections, not generated automatically.

Tier 4: Systematic AI governance (estimated 1-5% of firms) — Large firms with dedicated innovation teams, formal AI risk registers, staff training programmes, and some level of automated governance. Even at this tier, independent verification of AI outputs and regulatory compliance checking are typically manual or absent.
What Firms Are Using AI For
Based on industry data and market analysis:
| Use Case | Adoption Level | Governance Maturity |
|----------|---------------|---------------------|
| Legal research | Very high | Low — most firms trust AI outputs without independent verification |
| Document review / summarisation | High | Low — no systematic checking of AI analysis accuracy |
| Contract drafting and review | High | Low-Medium — some firms use playbook-based review, but no regulatory compliance checking |
| Email and correspondence drafting | Very high | Very low — largely ungoverned ad-hoc use |
| Due diligence | Medium | Low — source tracing exists in some tools, but no governance layer |
| Compliance monitoring | Low | Very low — most firms still rely on manual monitoring |
| Client-facing AI use | Low | Variable — client consent and transparency practices emerging |
Part 2: The Governance Gap
The SRA Compliance Officers Thematic Review: Key Findings
The SRA's Compliance Officers Thematic Review (December 2025) is the most significant recent indicator of governance readiness across the profession[^2]. The SRA visited 25 firms and interviewed 36 compliance officers about their general regulatory obligations — not AI-specific ones. The findings are striking:
Only 1 in 36 COLPs fully understands their general obligations. Just one compliance officer of the 36 interviewed could outline all of their general regulatory responsibilities as a COLP. If compliance officers cannot describe their baseline obligations, AI governance — which layers additional complexity on top — is even further behind.
The majority could not describe even half their general obligations. Approximately 69% of COLPs interviewed could not describe more than half of the compliance requirements set out in the SRA Code of Conduct for Firms.
Common gaps identified by the SRA:
- No formal AI usage policy
- No risk assessment for AI tools
- No process for verifying AI outputs
- No training on AI-specific regulatory obligations
- No audit trail of AI-assisted work
- No client transparency about AI use
- No assessment of AI impact on confidentiality
Mapping the Gap to SRA Rules
The governance failures identified in the Thematic Review map directly to existing SRA Code of Conduct for Firms requirements[^9]:
| SRA Rule | Requirement | Common AI Governance Failure |
|----------|------------|------------------------------|
| Rule 2.1 | Effective governance structures | No AI-specific governance framework or responsible person |
| Rule 2.2 | Records demonstrating compliance | No audit trail of AI-assisted work or governance decisions |
| Rule 2.5 | Identification and management of material risks | No AI risk assessment or AI risk register |
| Rule 4.2 | Competent, timely, appropriate service | AI outputs not verified for accuracy before delivery to clients |
| Rule 4.3 | Staff competence and up-to-date knowledge | No AI-specific training or competence assessment |
| Rule 6.3 | Duty of confidentiality to current clients | Client data entered into AI tools without adequate safeguards |
| Rule 6.4 | Duty of confidentiality to former clients | AI training data or context windows may expose former client information |
| Rule 6.5 | Confidentiality when acting for two or more clients | AI tools may surface information across matter boundaries |
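Firms that want to operationalise this mapping can encode it as a machine-readable checklist. The sketch below is a minimal illustration, assuming a firm tracks evidence items per rule; the rule summaries are paraphrased, and the structure and names are hypothetical rather than any existing tool's schema.

```python
# Minimal sketch: the SRA rule-to-failure mapping as a machine-readable
# checklist. Rule summaries are paraphrased; names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class GovernanceCheck:
    rule: str          # SRA Code of Conduct for Firms rule reference
    requirement: str   # paraphrased obligation
    evidence: list[str] = field(default_factory=list)  # what a COLP would show

CHECKLIST = [
    GovernanceCheck("2.1", "Effective governance structures",
                    ["AI governance framework", "named responsible person"]),
    GovernanceCheck("2.2", "Records demonstrating compliance",
                    ["audit trail of AI-assisted work"]),
    GovernanceCheck("2.5", "Identify and manage material risks",
                    ["AI risk register, reviewed quarterly"]),
    GovernanceCheck("4.2", "Competent, timely, appropriate service",
                    ["verification log for AI outputs"]),
    GovernanceCheck("4.3", "Staff competence and up-to-date knowledge",
                    ["AI training records"]),
    GovernanceCheck("6.3", "Confidentiality to current clients",
                    ["data-handling assessment per AI tool"]),
]

def gaps(evidence_on_file: set[str]) -> list[GovernanceCheck]:
    """Return checks with no supporting evidence currently on file."""
    return [c for c in CHECKLIST
            if not any(e in evidence_on_file for e in c.evidence)]
```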
The COLP Readiness Problem
The finding that only 1 in 36 COLPs can fully describe their general regulatory obligations deserves particular attention. COLPs are the individuals the SRA holds personally responsible for their firm's compliance. If the COLP doesn't understand even their baseline obligations, AI governance — which adds further complexity — is a significant blind spot.
What COLPs need:
- Clear understanding of which SRA rules apply to AI use
- Real-time visibility into AI-assisted work across the firm
- Ability to demonstrate compliance to the SRA on demand
- Evidence that is generated systematically, not assembled ad-hoc
- Risk-based prioritisation of AI governance issues
What COLPs typically have:
- A vague sense that AI governance is important
- An AI policy document that may or may not reflect actual practice
- No visibility into the extent or nature of AI use across the firm
- No systematic audit trail
- No way to generate compliance evidence quickly
Part 3: The Regulatory Landscape
Already in Force
SRA Code of Conduct for Firms — Rules 2.1, 2.2, 2.5, 4.2, 4.3, and 6.3-6.5 are all directly applicable to AI use. These are not new rules. What's new is enforcement focus.
UK GDPR / Data Protection Act 2018 — Client data processed by AI tools must comply with existing data protection requirements. Automated decision-making provisions (Article 22 UK GDPR)[^10] may apply depending on how AI outputs are used.
2026 Timeline
| Date | Development | Impact |
|------|------------|--------|
| December 2025 | SRA Compliance Officers Thematic Review published[^2] | Established baseline — majority of COLPs unprepared for general compliance, let alone AI governance |
| March 2026 | ICO consultation on AI automated decision-making guidance launched[^4] | Consultation open until 29 May 2026; final guidance expected Summer 2026 |
| H1 2026 | SRA dedicated supervision of ~75 largest firms includes AI focus[^7] | Proactive supervisory oversight at major firms increasingly covers AI governance |
| 2 August 2026 | EU AI Act full enforcement[^5] | High-risk AI obligations mandatory; penalties up to EUR 15M or 3% of turnover for non-compliance |
| Late 2026 | Expected SRA follow-up supervisory activity | Firms identified in the thematic review may face follow-up |
EU AI Act: Why UK Firms Are Affected
The EU AI Act has extraterritorial reach (Article 2). UK firms are affected if:
- They serve EU-based clients
- They deploy AI systems that produce outputs used in the EU
- They provide legal services related to EU law
Under Annex III, paragraph 8, AI systems used in the "administration of justice" — including legal research, case analysis, and applying law to facts — can be classified as high-risk. This classification triggers:
- Conformity assessment (Article 43)
- Risk management system (Article 9)
- Technical documentation (Article 11)
- Record-keeping/audit trails (Article 12)
- Transparency obligations (Article 13)
- Human oversight (Article 14)
Maximum penalties[^8] (in each case the cap is the fixed amount or the turnover percentage, whichever is higher):

- Prohibited AI practices (Article 5): up to EUR 35 million or 7% of total worldwide annual turnover
- Other high-risk AI violations (including the Annex III obligations most relevant to legal AI): up to EUR 15 million or 3% of turnover
- Providing incorrect information to authorities: up to EUR 7.5 million or 1% of turnover
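To make the "whichever is higher" mechanics concrete, the following minimal sketch computes the applicable cap for a given worldwide turnover. The figures are the Article 99 maximums; the function and tier names are illustrative, and SMEs are instead capped at the lower of the two figures.

```python
# Illustrative only: EU AI Act Article 99 caps apply as the higher of the
# fixed amount and the turnover percentage (SMEs: the lower of the two).
def max_penalty_eur(worldwide_turnover_eur: float, tier: str) -> float:
    tiers = {
        "prohibited_practices": (35_000_000, 0.07),   # Article 5 violations
        "high_risk_obligations": (15_000_000, 0.03),  # incl. Annex III duties
        "incorrect_information": (7_500_000, 0.01),   # misleading authorities
    }
    fixed_cap, pct = tiers[tier]
    return max(fixed_cap, pct * worldwide_turnover_eur)

# A firm with EUR 600M worldwide turnover: 3% = EUR 18M, above the EUR 15M floor.
print(max_penalty_eur(600_000_000, "high_risk_obligations"))  # 18000000.0
```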
Part 4: The Insurance and Client Dimension
PII Insurers Are Asking Questions
Professional indemnity insurers are beginning to include AI governance in their risk assessment frameworks. Questions appearing in PII renewal questionnaires and panel assessments include:
- Does your firm have a formal AI usage policy?
- How do you verify AI-generated legal outputs before delivery to clients?
- What audit trail exists for AI-assisted work?
- How is client confidentiality protected when using AI tools?
- What training have staff received on AI-specific risks?
Firms that cannot answer these questions satisfactorily face potential premium increases, coverage restrictions, or — in extreme cases — coverage gaps for AI-related claims.
Client Procurement Is Driving Adoption
Panel tender questionnaires from major corporate clients and financial institutions increasingly include AI governance sections. Questions include:
- What AI tools does your firm use for our matters?
- How do you govern AI-assisted legal work?
- What oversight exists for AI outputs before they reach us?
- Can you provide an audit trail of AI use on our matters?
For firms competing for panel positions, AI governance is becoming a competitive differentiator — not just a compliance requirement.
Part 5: The Market Gap
No Purpose-Built Governance Tools
As of Q1 2026, our market analysis identifies no purpose-built AI governance tools designed specifically for law firms. The market splits into three categories:
Legal AI productivity tools — Research platforms, contract review, drafting assistants, practice management AI. These make lawyers more productive but include no governance infrastructure: no independent verification, no regulatory compliance checking, no audit trails mapped to SRA requirements, no COLP dashboards.
Enterprise AI governance platforms — General-purpose AI governance and risk management tools designed for any industry. These exist but aren't built for legal: they don't understand SRA rules, legal citation formats, BAILII, legislation.gov.uk, or the specific regulatory environment of UK law firms.
In-house governance efforts — Some Top 20 firms have built internal governance processes and tooling. These are manual, expensive to maintain, and not available to mid-market firms.
The Mid-Market Squeeze
The governance gap is sharpest at UK mid-market firms (50-500 fee earners):
- Enough AI use to create governance risk — these firms use AI extensively for research, contracts, and drafting
- Enough regulatory exposure — SRA-regulated, potentially EU AI Act affected
- Not enough budget or headcount to build governance infrastructure internally
- Not served by existing tools — too sophisticated for basic AI policies, too small for enterprise consulting engagements
This is the segment where the regulatory risk is highest and the commercial opportunity for governance tooling is clearest.
Part 6: Recommendations
For COLPs and Compliance Officers
- Audit current AI use across the firm — You cannot govern what you don't know exists. Many firms underestimate the extent of informal AI use by fee earners.
- Map AI use to SRA rules — Use the table in Part 2 to identify which rules are engaged by your firm's AI use.
- Establish an AI risk register — Document each AI tool, its use case, who uses it, what data it processes, and what governance is in place; a minimal entry sketch follows this list.
- Implement verification processes — Until automated tools exist, establish manual processes for verifying AI outputs against primary legal sources; an illustrative citation-triage sketch also follows this list.
- Prepare for the SRA GenAI Good Practice Note — When it arrives, you'll need to demonstrate compliance quickly. Start building the evidence base now.
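A risk register need not be elaborate to be useful. The sketch below shows one possible structure for a single entry, assuming the register is kept as simple structured records; all field names and the example values are illustrative, not a prescribed format.

```python
# A minimal sketch of one AI risk register entry; field names illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass
class AIToolEntry:
    tool: str                    # e.g. "Lexis+ AI", "general-purpose chatbot"
    use_cases: list[str]         # research, drafting, summarisation...
    users: str                   # team or role, not named individuals
    data_processed: str          # client data? personal data? privileged material?
    safeguards: str              # DPA in place, hosting location, verification step
    sra_rules_engaged: list[str] # from the mapping in Part 2
    owner: str                   # who reviews and updates this entry
    last_reviewed: date

register = [
    AIToolEntry(
        tool="General-purpose chatbot (web)",
        use_cases=["first-draft correspondence", "summarisation"],
        users="Fee earners, all departments",
        data_processed="Should be none; informal use suspected",
        safeguards="Usage policy only; no technical controls",
        sra_rules_engaged=["2.2", "2.5", "6.3"],
        owner="COLP",
        last_reviewed=date(2026, 1, 15),
    ),
]
```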
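For verification, even a lightweight triage step helps: extract candidate citations from an AI draft so a human can check each against primary sources such as BAILII. The sketch below uses a deliberately simplified neutral-citation pattern and is illustrative only; it is not a verification tool and will miss other citation formats.

```python
# Illustrative triage step: flag what look like UK neutral citations in an
# AI draft for manual verification against primary sources (e.g. BAILII).
# The pattern is a simplification and will not catch every citation format.
import re

NEUTRAL_CITATION = re.compile(
    r"\[\d{4}\]\s+(UKSC|UKHL|UKPC|EWCA\s+(Civ|Crim)|EWHC)\s+\d+"
)

def citations_to_verify(ai_draft: str) -> list[str]:
    """Return candidate citations that must be manually verified."""
    return [m.group(0) for m in NEUTRAL_CITATION.finditer(ai_draft)]

draft = "As held in [2023] UKSC 42 and applied in [2021] EWCA Civ 305 ..."
for citation in citations_to_verify(draft):
    print("VERIFY AGAINST PRIMARY SOURCE:", citation)
```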
For Innovation Directors and IT Leaders
- Evaluate governance alongside productivity — When assessing AI tools, ask about audit trails, compliance checking, and verification — not just speed and accuracy.
- Plan for EU AI Act compliance — If your firm serves EU clients, the August 2026 deadline requires conformity assessment infrastructure.
- Assess data sovereignty requirements — Establish where each AI tool processes and stores UK client data, and whether UK-only infrastructure is required by your clients, regulators, or risk appetite.
- Budget for governance infrastructure — AI productivity tools are only part of the investment. Budget separately for governance.
For Managing Partners
- Treat AI governance as a strategic priority — This is not an IT project. It's a regulatory compliance, insurance, and client-retention issue.
- Review PII implications — Discuss AI governance with your PII broker before renewal.
- Use governance as a competitive advantage — Firms that can demonstrate governed AI use will win panel positions and client confidence.
- Act before the regulator — The SRA GenAI Good Practice Note and ICO Statutory Code are coming. Being ahead of regulation is cheaper than reacting to it.
Methodology and Sources
This report synthesises data from publicly available sources. Where tier percentages or adoption estimates are provided, these represent LegalAI Space's own analysis based on available data and should be treated as indicative rather than definitive. All primary source citations are footnoted below.
This report is published by LegalAI Space as part of our commitment to transparency about the AI governance landscape. LegalAI Space is building AI agents for legal teams with a governance layer that makes every output verifiable, compliant, and audit-ready.
We welcome feedback and additional data points. Contact hello@legalaispace.com or book a research conversation with founder Daman Kaur.
Sources
[^1]: Clio, Legal Trends Report 2024 (UK data). The 96% figure includes all forms of AI integration, from general-purpose tools to purpose-built legal AI platforms.
[^2]: SRA, Compliance officers: A thematic review, December 2025. The SRA visited 25 firms and interviewed 36 compliance officers about their general regulatory obligations (not AI-specific). Only one COLP could outline all of their regulatory responsibilities. The majority (approximately 69%) could not describe more than half the relevant compliance requirements. We cite this finding because it demonstrates the baseline compliance readiness gap — if COLPs struggle with general obligations, AI-specific governance is even less mature. See also SRA Risk Outlook: AI in the Legal Market (November 2023).
[^3]: The Law Society, Annual Statistics Report 2024. GBP 44 billion represents the total value of UK legal services.
[^4]: ICO, Consultation on Draft Guidance About Automated Decision-Making, March 2026. Consultation open until 29 May 2026; final guidance expected Summer 2026. Statutory footing under the Data (Use and Access) Act 2025. See also ICO AI and Biometrics Strategy Update, March 2026.
[^5]: EU AI Act, Regulation (EU) 2024/1689, published OJ L 2024/1689, 12 July 2024. Article 113 sets out the phased enforcement timeline with full Annex III high-risk enforcement from 2 August 2026.
[^6]: Tier percentages are LegalAI Space estimates based on analysis of available industry data, SRA findings, and research conversations with UK law firms. These are indicative and should not be cited as authoritative survey data.
[^7]: The SRA maintains dedicated supervisory relationships with approximately 75 of its largest and highest-profile regulated firms as part of its ongoing regulatory oversight. AI governance is emerging as a focus within this supervisory framework. See Compare the Cloud: UK Mid-Market Legal Firms and the SRA.
[^8]: EU AI Act, Article 99 — Penalties. Three tiers: (a) Prohibited AI practices (Article 5): up to EUR 35M or 7% of worldwide turnover; (b) Other obligations including high-risk AI (Annex III): up to EUR 15M or 3% of turnover; (c) Incorrect information to authorities: up to EUR 7.5M or 1% of turnover. In each tier the cap is the fixed amount or the turnover percentage, whichever is higher (for SMEs, whichever is lower).
[^9]: SRA Code of Conduct for Firms, effective 25 November 2019. SRA Standards and Regulations. Rule references: 2.1 (governance), 2.2 (compliance records), 2.5 (risk management), 4.2 (competent service), 4.3 (staff competence), 6.3-6.5 (confidentiality).
[^10]: UK GDPR, retained EU Regulation 2016/679 as amended by the Data Protection Act 2018. Article 22 governs automated individual decision-making including profiling.
[^11]: Eurostat, "20% of EU enterprises use AI technologies", December 2025. EU enterprise-wide AI adoption figure; legal sector-specific EU data is limited.