
Why an AI Policy Is Not AI Governance

Every legal AI vendor tells you to write an AI policy. That's necessary — but it's not governance. Here's the difference, why it matters, and what your firm actually needs in order to demonstrate compliance.

By Daman Kaur

Ask a law firm managing partner how they govern AI, and you'll almost always get the same answer:

"We have an AI policy."

It's the right instinct. It's also radically insufficient.

An AI policy is a document that says what should happen. AI governance is the infrastructure that proves what did happen. The gap between those two things is where regulatory risk, insurance risk, and client confidence all break down.

This matters now because the SRA, PII insurers, and clients are no longer asking "do you have a policy?" They're asking "can you demonstrate compliance?" Those are fundamentally different questions — and a policy document answers only the first one.

What an AI Policy Does

Let's be clear: an AI policy is necessary. Every firm that uses AI should have one. A well-drafted AI policy:

  • Sets boundaries — which tools are approved, which are prohibited, which require additional safeguards
  • Establishes expectations — how AI outputs should be reviewed, what training staff need, who is responsible
  • Creates a paper trail of intent — evidence that the firm has considered AI governance at a strategic level
  • Satisfies the first question — from the SRA, from insurers, from clients in panel tenders

If your firm doesn't have an AI policy, write one. There are good templates available from the Law Society, Clio, and others[^1]. This is table stakes.

But here's the problem: a policy tells people what to do. It does not tell you whether they're doing it.

What an AI Policy Does Not Do

An AI policy does not:

  • Enforce itself. It sits in a shared drive or an intranet page. Fee earners may read it once. Compliance depends entirely on individual diligence, every time, on every matter.
  • Verify AI outputs. A policy that says "all AI-generated citations must be checked" does not check them. It relies on every individual lawyer, on every piece of work, independently verifying every citation against primary sources.
  • Generate compliance evidence. When the SRA asks your COLP to demonstrate compliance with Rule 2.2 — records demonstrating compliance[^2] — a policy document is not a record. It's a statement of what records should exist.
  • Provide real-time visibility. Your COLP cannot look at a policy and know how many AI-assisted matters were processed this week, what verification occurred, or which items were flagged.
  • Prevent shadow AI. More than half of legal professionals report that their firm has no AI policy or that they are unaware of one, and only 40% are using legal-specific AI solutions — the rest rely on general-purpose tools that may or may not comply with firm policy[^3].

The uncomfortable truth: most AI policies describe a governance standard that the firm cannot currently prove it meets.

The Gap Between "We Have a Policy" and "We Can Prove Compliance"

This gap maps directly to what regulators, insurers, and clients actually require.

What the SRA Requires

The SRA Code of Conduct for Firms doesn't ask for policies. It asks for something more demanding:

| SRA Rule | What It Requires | What a Policy Provides | The Gap |
|----------|------------------|------------------------|---------|
| Rule 2.1 | Effective governance structures, arrangements, systems and controls | A document describing intended structures | No evidence the structures exist or function |
| Rule 2.2 | Records to demonstrate compliance with the firm's obligations | A document describing what records should exist | No actual records of AI-assisted work |
| Rule 2.5 | Identify, monitor and manage all material risks | A section on AI risk identification | No continuous monitoring, no risk register updates |
| Rule 4.2 | Competent, timely, appropriate service | A section on AI output review | No evidence that review actually occurs |
| Rule 4.3 | Staff competence and up-to-date knowledge | A section on AI training requirements | No training records, no competence assessment |
| Rule 6.3 | Duty of confidentiality to current clients | A section on data handling | No data flow monitoring, no access controls evidence |

The SRA's December 2025 Compliance Officers Thematic Review[^4] found that just 1 in 36 COLPs could fully describe their firm's general regulatory obligations. If compliance officers struggle to describe baseline obligations, the gap between an AI policy and demonstrable AI governance is even wider.

What PII Insurers Require

Professional indemnity insurers are moving beyond "do you have a policy?" to asking specific, evidence-based questions[^5]:

  • How do you verify AI-generated legal outputs before delivery to clients?
  • What audit trail exists for AI-assisted work?
  • What training have staff received on AI-specific risks?
  • What happens when an AI output is flagged as uncertain — who is notified, and is that documented?

An AI policy can describe the answers to these questions. It cannot prove the answers are true.

What Clients Require

Panel tender questionnaires increasingly include AI governance sections. The question is not "send us your AI policy" — it's "demonstrate how you govern AI-assisted work on our matters." Clients want evidence of controls, not descriptions of intentions.

Policy-as-PDF vs. Policy-as-Code

Here's a useful way to think about the distinction.

Policy-as-PDF is a document that describes rules. Someone reads it, understands it, and is expected to follow it. Compliance depends on human diligence. Evidence is assembled after the fact — often right before an audit or inspection.

Policy-as-code is rules encoded into the system that processes AI work. The rules are checked automatically. Every check generates a record. Compliance evidence is a by-product of normal operations, not a separate exercise[^6].

Consider a specific example: your AI policy says "All AI-generated case citations must be verified against primary legal databases before being included in client work."

  • Under policy-as-PDF: Each lawyer is expected to check each citation. Whether they did or didn't is between them and their conscience. If the COLP is asked to prove this happens, they'd need to interview every fee earner or hope someone kept notes.
  • Under policy-as-code: Every AI-generated citation is automatically checked against BAILII and legislation.gov.uk. A verification record is generated for each check. The COLP dashboard shows pass/fail rates by team, by practice area, in real time.

Both approaches start with the same rule. One relies on human diligence; the other enforces the rule and proves it was enforced.
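To make the contrast concrete, here is a minimal sketch, in Python, of what the policy-as-code version of that citation rule could look like. Everything in it is illustrative: BAILII and legislation.gov.uk do not expose a standard public API, so the lookup is a stub standing in for whatever primary-source access a firm actually has, and the record fields are assumptions rather than a prescribed schema.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

# Stand-in for a real primary-source lookup. A production system would
# query the firm's licensed legal databases; these are placeholder values.
KNOWN_CITATIONS = {"[2023] UKSC 1", "[2019] EWCA Civ 1"}

def lookup_citation(citation: str) -> bool:
    return citation in KNOWN_CITATIONS

@dataclass
class VerificationRecord:
    citation: str
    source_checked: str
    verified: bool
    checked_at: str      # ISO 8601 timestamp
    matter_id: str
    fee_earner: str

def verify_citation(citation: str, matter_id: str, fee_earner: str) -> VerificationRecord:
    """Enforce the rule: no citation passes without a check, and every
    check leaves a record. Evidence is a by-product of operation."""
    record = VerificationRecord(
        citation=citation,
        source_checked="BAILII / legislation.gov.uk (stubbed)",
        verified=lookup_citation(citation),
        checked_at=datetime.now(timezone.utc).isoformat(),
        matter_id=matter_id,
        fee_earner=fee_earner,
    )
    with open("verification_log.jsonl", "a") as log:  # append-only audit trail
        log.write(json.dumps(asdict(record)) + "\n")
    return record
```

The schema is not the point. The point is that the check and the record are inseparable: a COLP dashboard showing pass/fail rates by team or practice area is then just an aggregation over the log, with no separate evidence-gathering exercise.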

The Governance Maturity Spectrum

In our conversations with UK law firms, we see four broad levels of AI governance maturity:

Level 1: No governance
AI is used ad hoc. No policy, no oversight, no visibility. The firm may not know the extent of AI use. This is where the majority of firms started — and where some remain.

Level 2: Policy exists
A written AI policy has been drafted and circulated. Staff know (or should know) what's expected. But compliance relies on individual diligence. No audit trail, no verification process, no COLP dashboard. This is where most firms sit today.

Level 3: Processes enforced
Manual governance processes are in place: verification checklists, sign-off workflows, training logs, periodic audits. This is significantly better than Level 2, but it's resource-intensive. Evidence is assembled rather than generated. It works at small scale but breaks down as AI use grows.

Level 4: Infrastructure governs
Governance is built into the system. AI outputs are automatically verified, compliance-checked, and audit-trailed. The COLP has real-time visibility. Evidence is a by-product, not a project. The gap between "what should happen" and "what does happen" is closed by design.

The jump from Level 2 to Level 3 requires process design and management attention. The jump from Level 3 to Level 4 requires tooling — governance infrastructure that automates what manual processes cannot sustain at scale.

What Your Firm Should Do

1. Keep your policy. It's necessary. It sets expectations. It satisfies the first question from regulators and insurers. Just don't mistake it for governance.

2. Audit the gap. Ask your COLP: "If the SRA asked you to demonstrate — not describe, but demonstrate — compliance with Rule 2.2 for AI-assisted work, what evidence could you produce today?" The answer reveals the gap between policy and governance.

3. Identify where runtime enforcement changes outcomes. Look at the specific rules your policy describes: citation verification, confidentiality safeguards, competence checks. For each one, ask: "Is compliance currently dependent on individual diligence, or is it enforced by a system?"

4. Move toward evidence-by-default. The goal is not to eliminate human judgement — it's to ensure that when human judgement is applied, there's a record. When verification occurs, there's a certificate. When compliance is checked, there's a log.
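To illustrate what evidence-by-default can look like mechanically, here is a small sketch of a hash-chained, append-only compliance log. The function name, event fields, and file format are assumptions made for illustration, not a prescribed standard; the idea is simply that each judgement, verification, or check writes a tamper-evident record at the moment it happens.

```python
import hashlib
import json
from datetime import datetime, timezone

def record_event(event_type: str, matter_id: str, actor: str, detail: dict,
                 log_path: str = "compliance_log.jsonl") -> dict:
    """Append a tamper-evident compliance event to an append-only log."""
    event = {
        "event": event_type,      # e.g. "human_review" or "citation_check"
        "matter_id": matter_id,
        "actor": actor,
        "detail": detail,
        "at": datetime.now(timezone.utc).isoformat(),
    }
    # Chain each entry to the previous one so after-the-fact edits break the chain.
    prev_hash = "0" * 64
    try:
        with open(log_path) as f:
            *_, last_line = f
            prev_hash = json.loads(last_line)["hash"]
    except (FileNotFoundError, ValueError):
        pass  # no log yet, or an empty file: start the chain fresh
    event["hash"] = hashlib.sha256(
        (json.dumps(event, sort_keys=True) + prev_hash).encode()
    ).hexdigest()
    with open(log_path, "a") as f:
        f.write(json.dumps(event) + "\n")
    return event

# Usage: a fee earner's sign-off leaves a record the moment it happens.
record_event("human_review", matter_id="M-1042", actor="j.smith",
             detail={"document": "advice_note_v3.docx", "outcome": "approved"})
```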

The firms that will be best positioned for the SRA's GenAI Good Practice Note[^7], for their next PII renewal, and for their next panel tender are the ones that can show governance, not just describe it.


LegalAI Space is building AI agents for legal teams with a governance layer that makes every output verifiable, compliant, and audit-ready. Join the waitlist or book a research conversation with Founder Daman Kaur.


Sources

[^1]: Multiple organisations provide AI policy templates and guides for law firms, including the Law Society's generative AI essentials guide, Clio's law firm AI policy template, DISCO's how to build a defensible AI policy, and the Association for AI in Legal (A4L). These are valuable starting points for firms that don't yet have an AI policy.

[^2]: SRA Code of Conduct for Firms, effective 25 November 2019 (current version effective 11 April 2025). SRA Standards and Regulations. Rule 2.2 requires firms to "keep and maintain records to demonstrate compliance with your obligations under the SRA's regulatory arrangements."

[^3]: ACEDS + Secretariat, 2025 Legal AI Report, 2025. The report found that 53% of legal professionals say their firm has no AI policy or are unaware of one, and only 40% are using legal-specific AI solutions — down from 58% in 2024 — indicating increasing reliance on general-purpose tools that may not comply with firm policies.

[^4]: SRA, Compliance officers: A thematic review, December 2025. The SRA visited 25 firms and interviewed 36 compliance officers about their general regulatory obligations (not AI-specific). Only one COLP could outline all of their regulatory responsibilities. We cite this finding because it illustrates the baseline compliance readiness gap — if COLPs struggle with general obligations, AI-specific governance maturity is likely even lower.

[^5]: Legal Futures, "AI and law firm risk — the view of professional indemnity insurers". Insurers now expect evidence of how firms are adapting to AI, including identified accountable persons, documented procedures, and governance controls. See also IA Magazine, "How Generative AI Is Reshaping Professional Liability Risk for Law Firms", March 2026.

[^6]: "Policy-as-code" is a concept drawn from infrastructure and security governance, where compliance rules are encoded as machine-readable specifications that can be evaluated automatically. In the legal AI context, this means encoding regulatory requirements (such as SRA rules) as programmatic rules that AI outputs are automatically evaluated against — generating compliance evidence as a by-product of normal operation.

[^7]: The SRA is developing a GenAI FAQ and Good Practice Note on AI use and client data, as outlined in the SRA's AI Policy and Regulation webinar, 4 February 2026, hosted by the Society of Asian Lawyers. These resources were described as forthcoming "in the coming months." In the interim, the SRA encourages firms to document all AI use, conduct regular risk assessments, and ensure staff are trained in responsible AI use.