Most conversations about AI governance in law firms start with the SRA. They should also include your professional indemnity insurer.
Here's why: the SRA operates on a principles-based framework — it sets standards and investigates when things go wrong. Your PII insurer operates on a risk-based framework — it prices coverage based on how likely things are to go wrong and how much it will cost when they do. And AI is rapidly changing the insurer's calculation.
If your firm uses AI for legal work — and at this point, most UK firms do[^1] — then your PII renewal is about to get more interesting. The questions are getting more specific, the expectations are getting more demanding, and the distance between "we have an AI policy" and "we can demonstrate governance" is starting to show up in premium calculations.
The "Silent AI" Coverage Gap
Most law firm professional indemnity policies were written before AI became a routine part of legal practice. They neither explicitly include nor explicitly exclude AI-related losses. This is what the insurance industry calls "silent AI" — and it's a problem for both firms and insurers[^2].
What "silent AI" means in practice:
If a client suffers a loss because your firm relied on an AI-hallucinated case citation, and the client brings a professional negligence claim, the question becomes: does your PII policy respond?
The answer is almost certainly yes under the current SRA Minimum Terms and Conditions — the underlying duty being breached is the duty of care, competence, or confidentiality, regardless of what tool was involved. The minimum terms do not currently contain AI-specific exclusions[^3].
But "almost certainly yes under current terms" is not the same as "definitely yes going forward." The insurance market is in transition, and the direction of travel is clear.
What Insurers Are Doing Now
The shift in PII underwriting is happening across several dimensions simultaneously.
New Questions at Renewal
PII renewal questionnaires are evolving to include AI-specific sections. Based on published industry commentary and insurer guidance, the questions appearing in 2025–2026 renewals include[^4]:
- Does your firm have a formal AI usage policy? (The baseline question — most firms can answer yes, even if the policy is basic.)
- Which AI tools are approved for use within the firm? (Moving from "do you have a policy?" to "what specifically have you sanctioned?")
- How do you verify AI-generated legal outputs before delivery to clients? (The verification question — this is where most firms struggle to give a specific, evidence-based answer.)
- What audit trail exists for AI-assisted work? (The evidence question — can you show, not just describe, what happened?)
- What training have staff received on AI-specific risks? (The competence question — are your people equipped for this?)
- How is client confidentiality protected when using AI tools? (The data question — where does client data go?)
- What happens when an AI output is flagged as uncertain or incorrect? (The escalation question — is there a documented process, or is it ad hoc?)
The first two questions are easy. The next five are where governance separates from policy.
Supplemental Documentation
Some insurers are now requiring supplemental AI documentation at renewal — not just questionnaire answers, but evidence of controls. This mirrors what happened in cyber insurance: the initial questions were about whether you had a security policy, then they became about whether you could demonstrate specific controls (multi-factor authentication, endpoint protection, incident response plans)[^5].
Risk-Tier Implications
Insurers use risk tiers to price coverage. Firms with stronger governance controls get better terms; firms with weaker controls pay more or face coverage restrictions. AI governance is entering this calculation.
As Legal Futures reported, insurers expect to see evidence of how firms are adapting to AI and preparing for the future. Firms that can demonstrate governance may secure insurance on better terms; firms that cannot may find themselves outside preferred-risk tiers[^4].
This doesn't mean your firm will be refused coverage. It means your firm may pay more for it — or face conditions that require governance improvements before the next renewal.
The Claims Scenarios Insurers Are Modelling
Insurance underwriters think in claims scenarios. What could go wrong, how likely is it, and how much would it cost? For AI in legal practice, the scenarios are becoming well-defined.
Scenario 1: The Hallucinated Citation
A lawyer relies on an AI-generated research memo containing fabricated case authorities. The fabricated authorities are not independently verified. The research memo informs advice to a client. The client acts on the advice and suffers a loss. The client brings a professional negligence claim.
Why insurers care: Over 1,200 cases of AI hallucination in legal proceedings have been documented[^6]. This is not a theoretical risk — it's a documented, accelerating pattern. The claims arising from hallucination-related negligence are starting to materialise.
Scenario 2: The Confidentiality Breach
A fee earner inputs client-identifying information into a general-purpose AI tool that is not approved for use with confidential data. The data is processed on servers outside the firm's control, potentially used for model training, and the firm cannot demonstrate data was handled in accordance with the client's confidentiality expectations.
Why insurers care: This scenario engages both professional negligence (breach of confidentiality) and data protection liability (GDPR breach). The SRA rules on confidentiality (Rules 6.3–6.5) are strict, and the remediation costs for a confidentiality breach involving AI are potentially significant[^7].
Scenario 3: The Competence Failure
A firm relies on AI-assisted due diligence for a transaction. The AI misidentifies a material liability. The error is not caught because the firm's review process relies on the AI's own confidence scoring rather than independent verification. The deal completes, the liability crystallises, and the client claims professional negligence.
Why insurers care: The quantum of loss in transactional work can be very large. An AI error in a due diligence review that leads to an undiscovered liability could generate a claim running into millions.
Scenario 4: The Regulatory Investigation
The SRA investigates a firm's AI governance following a complaint or a thematic review. The firm cannot produce records demonstrating compliance with Rules 2.1, 2.2, or 2.5. The regulatory outcome — a fine, conditions on the firm's authorisation, or a public rebuke — increases the firm's risk profile for its next PII renewal.
Why insurers care: Regulatory action is a leading indicator of claims risk. A firm that has been found wanting by the SRA is a firm that insurers will price differently.
The Cyber Insurance Parallel
If you want to understand where PII and AI governance are heading, look at what happened in cyber insurance between 2015 and 2022.
Phase 1 (2015–2017): Generic questions. "Do you have a cybersecurity policy?" Nearly everyone said yes. Premiums were low. Claims were rising but hadn't reached crisis levels.
Phase 2 (2018–2019): Specific questions. "Do you use multi-factor authentication? Do you have endpoint detection and response? Do you have an incident response plan?" Firms that couldn't answer specifically started paying more.
Phase 3 (2020–2021): Evidence-based underwriting. Insurers required evidence, not just answers. Security assessments, penetration test reports, third-party audits. Firms without documented controls faced coverage restrictions or exclusions.
Phase 4 (2022–present): Coverage conditions. Specific cybersecurity controls became coverage conditions. If you don't have MFA and you have a breach related to stolen credentials, your claim may be denied or reduced[^8].
Where does the PII market sit on this curve? We estimate it is currently in Phase 2 — moving from generic policy questions to specific governance questions. The trajectory toward Phase 3 (evidence-based underwriting) and Phase 4 (coverage conditions) is underway. Firms that build governance infrastructure now will be ahead of the curve when the requirements tighten.
The Business Case for AI Governance (Beyond Compliance)
If regulatory compliance doesn't concentrate the mind, perhaps the financial case will.
Lower Premiums / Better Terms
Firms that can demonstrate AI governance — not just describe it — are better insurance risks. Better insurance risks get better terms. As AI governance becomes a standard underwriting factor, the premium differential between governed and ungoverned firms will widen.
Coverage Certainty
In a "silent AI" environment, coverage for AI-related claims is uncertain. Firms with documented governance have a stronger basis for arguing that their AI use was prudent and governed, which strengthens their position if a claim arises and coverage is disputed.
Client and Panel Tender Advantage
Clients ask "how does your firm govern AI?" in panel tenders partly because their own insurers are probing their supply-chain governance. A firm that can demonstrate AI governance to a client is helping that client satisfy its own risk management obligations.
Regulatory Positioning
Firms that can show governance to their PII insurer can show governance to the SRA. The evidence is the same — audit trails, verification records, compliance dashboards. Building for one audience satisfies both.
What to Do Before Your Next PII Renewal
1. Know What You'll Be Asked
Review the seven questions listed above. For each one, assess whether your firm can provide a specific, evidence-based answer — not just a description of intent, but proof of operation.
2. Conduct a Governance Audit
Map every AI tool used across the firm — including general-purpose tools that fee earners may be using without formal approval. For each tool, document: what it's used for, what data it processes, what verification occurs, and what audit trail exists.
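For firms that want to keep this register in a structured form rather than a spreadsheet, the audit record can be sketched in a few lines of code. This is purely illustrative — the tool names, field names, and classifications below are hypothetical, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    """One row in a firm's AI tool register (all fields are illustrative)."""
    name: str               # e.g. "DraftAssist" (hypothetical)
    approved: bool          # has the firm formally sanctioned this tool?
    used_for: list[str]     # tasks: research memos, first drafts, due diligence
    data_processed: str     # e.g. "no client data" / "client-identifying data"
    verification_step: str  # how outputs are checked before client delivery
    audit_trail: str        # where the usage records live

def unapproved_tools(register: list[AIToolRecord]) -> list[str]:
    """Surface shadow-IT tools that need a governance decision before renewal."""
    return [t.name for t in register if not t.approved]

register = [
    AIToolRecord("DraftAssist", True, ["first drafts"], "no client data",
                 "partner review against source documents", "DMS matter file"),
    AIToolRecord("Public chatbot", False, ["ad-hoc research"], "unknown",
                 "none documented", "none"),
]
print(unapproved_tools(register))  # → ['Public chatbot']
```

The point of the structure is the gaps it exposes: any tool whose verification step or audit trail reads "none" is a renewal-questionnaire answer you cannot yet give.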
3. Establish a Verification Process
If your firm doesn't have a process for independently verifying AI-generated legal outputs, establish one before renewal. This doesn't have to be automated (yet) — even a manual verification checklist documented per matter is better than nothing.
4. Generate Evidence
Start keeping records now. Every verification check, every training session, every policy review, every AI risk assessment — document it. SRA Rule 2.2 requires records demonstrating compliance. Your PII insurer will want the same evidence[^7].
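The mechanics of record-keeping can be as simple as an append-only, timestamped log that is never edited after the fact. The sketch below is one possible shape, not a mandated format — the file name, event types, and fields are assumptions for illustration:

```python
import datetime
import json

def log_evidence(path: str, event_type: str, detail: dict) -> dict:
    """Append one timestamped governance event (verification check, training
    session, policy review, AI risk assessment) to an append-only JSONL log."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "event_type": event_type,
        **detail,
    }
    with open(path, "a") as f:          # append-only: prior entries untouched
        f.write(json.dumps(entry) + "\n")
    return entry

# Example: recording a citation-verification check on a (hypothetical) matter
log_evidence("governance_log.jsonl", "verification_check",
             {"matter": "M-0001", "tool": "DraftAssist",
              "checked_by": "supervising partner", "citations_verified": True})
```

A log like this is the raw material for both audiences at once: the same entries that answer the insurer's audit-trail question are the compliance records Rule 2.2 expects.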
5. Talk to Your Broker
Have a proactive conversation with your PII broker about AI governance before your renewal comes around. Ask what questions to expect, what evidence will be most valuable, and what governance steps would most improve your risk profile.
6. Treat This as Infrastructure, Not a Project
The firms that will be best positioned — for their PII renewal, for SRA scrutiny, and for client confidence — are the ones that generate governance evidence as a by-product of normal operations. That requires infrastructure, not a one-off compliance exercise.
LegalAI Space is building AI agents for legal teams with a governance layer that makes every output verifiable, compliant, and audit-ready — generating the evidence your COLP, your insurer, and your clients need. Join the waitlist or book a research conversation with Founder Daman Kaur.
Sources
[^1]: Clio, Legal Trends Report 2024 (UK data). 96% of UK law firms now use AI in some capacity, including general-purpose tools, embedded AI features, and purpose-built legal AI platforms.
[^2]: Kennedys, "Silent AI cover: the unforeseen risks for insurers", 2025. Discusses the "silent AI" phenomenon where policies neither explicitly include nor exclude AI-related losses, and the challenges this creates for both insurers and policyholders.
[^3]: The SRA Minimum Terms and Conditions of Professional Indemnity Insurance set the baseline coverage requirements for SRA-regulated firms. As of early 2026, these minimum terms do not contain AI-specific exclusions. However, individual insurers may impose additional conditions or exclusions beyond the minimum terms. Firms should verify current coverage terms with their broker.
[^4]: Legal Futures, AI and law firm risk — the view of professional indemnity insurers. Insurers expect evidence of how firms are adapting to AI, including identified accountable persons, documented procedures, and governance controls. Firms that demonstrate proactive governance may secure insurance on better terms. See also IA Magazine, "How Generative AI Is Reshaping Professional Liability Risk for Law Firms", March 2026.
[^5]: The cyber insurance parallel is well-documented in insurance industry literature. For a summary of how cyber insurance underwriting evolved from generic questions to evidence-based controls, see the trajectory described in industry publications from Marsh, Beazley, and CFC Underwriting between 2018 and 2023.
[^6]: Damien Charlotin, AI Hallucination Cases Database. Over 1,200 documented court cases where AI-generated hallucinated content has been identified. See our analysis in 1,200 AI Hallucination Cases and Counting.
[^7]: SRA Code of Conduct for Firms, effective 25 November 2019 (current version effective 11 April 2025). SRA Standards and Regulations. Rules 2.1 (governance), 2.2 (compliance records), 2.5 (risk management), 6.3–6.5 (confidentiality). These obligations are directly relevant to both SRA compliance and PII underwriting assessments.
[^8]: Kennedys, "From innovation to indemnity — AI promises efficiency but has potential to leave solicitors with increased risk and costs", 2025. Discusses how AI exposes solicitors and insurers to new risks and the trajectory toward more specific AI-related underwriting requirements.