
The COLP's AI Governance Checklist: Every SRA Rule Mapped to AI

A practical, rule-by-rule checklist for COLPs to assess and demonstrate AI governance compliance. Mapped to SRA Code of Conduct for Firms rules 2.1, 2.2, 2.5, 4.2, 4.3, and 6.3–6.5 — with specific actions for each obligation.

By Daman Kaur

If the SRA visited your firm tomorrow and asked your COLP to demonstrate — not describe, but demonstrate — how the firm governs its use of AI, what evidence could you produce?

This is not a hypothetical. The SRA's December 2025 Compliance Officers Thematic Review found that just 1 in 36 COLPs could fully describe their general regulatory obligations[^1]. If compliance officers struggle with baseline obligations, AI-specific governance is almost certainly a blind spot.

This checklist maps every relevant SRA Code of Conduct for Firms rule to specific AI governance actions. It's designed to be practical: something a COLP can work through systematically to identify gaps and prioritise remediation.

How to use this checklist: Work through each section rule by rule. For each item, assess whether your firm can currently demonstrate compliance with evidence, not just intent. Items you cannot evidence are governance gaps that need addressing.


Rule 2.1 — Effective Governance Structures

"You have effective governance structures, arrangements, systems and controls in place that ensure… you comply with all the SRA's regulatory arrangements"[^2]

What this means for AI: Your firm needs a defined governance structure for AI — not just a policy document, but identified people, processes, and systems responsible for ensuring AI is used in compliance with regulatory requirements.

Checklist

  • [ ] AI governance lead identified. A named individual (or role) is responsible for AI governance across the firm. This may be the COLP, a dedicated AI lead, or a member of the management team — but the responsibility must be explicitly assigned.

  • [ ] AI governance reporting line established. The AI governance lead reports to senior management on AI governance matters. There is a defined frequency (quarterly at minimum) for governance reporting.

  • [ ] AI tools inventory maintained. A register of all AI tools used within the firm — both approved and known-unapproved — is maintained and updated at least quarterly. For each tool, the register records: what it's used for, who uses it, what data it processes, and what governance controls are in place. A sketch of one register entry follows this list.

  • [ ] Governance arrangements documented. The governance structures, arrangements, systems, and controls for AI are documented and accessible to relevant staff — not just in a policy document, but in operational procedures that people actually follow.

  • [ ] Regular governance review scheduled. AI governance arrangements are reviewed on a defined cycle (at least annually, more frequently given the pace of change). The review assesses whether existing controls remain appropriate and whether new AI tools or use cases have introduced unaddressed risks.
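
What might one register entry look like in practice? A minimal sketch follows. The field names and the tool are hypothetical, not an SRA-prescribed format, but the entry captures the four things the register should record for each tool.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative only: field names are a suggestion, not an SRA requirement.
# One entry per AI tool, approved or known-unapproved, reviewed quarterly.
@dataclass
class AIToolRegisterEntry:
    tool_name: str                  # the tool, as staff know it
    approved: bool                  # approved vs known-unapproved
    use_cases: list[str]            # what it's used for
    users: list[str]                # who uses it (teams or roles)
    data_processed: str             # what data it processes
    governance_controls: list[str]  # what governance controls are in place
    last_reviewed: date             # register updated at least quarterly

entry = AIToolRegisterEntry(
    tool_name="DraftAssist (hypothetical)",
    approved=True,
    use_cases=["first drafts of routine correspondence"],
    users=["commercial property team"],
    data_processed="client-identifying matter data",
    governance_controls=["senior review before use", "vendor DPA in place"],
    last_reviewed=date(2026, 3, 1),
)
```

The same fields work just as well as columns in a spreadsheet; what matters is that every tool has an entry and every entry has a review date.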


Rule 2.2 — Records Demonstrating Compliance

"You keep and maintain records to demonstrate compliance with your obligations under the SRA's regulatory arrangements"[^2]

What this means for AI: You need records that prove your AI governance actually operates — not just that it was designed. This is where the gap between policy and governance is most visible: a policy is a statement of intent, a record is evidence of action.

Checklist

  • [ ] Audit trail of AI-assisted work. Every matter where AI was used in a material way has a record of: which AI tool was used, what task it performed, what output it produced, and what human review occurred. A sketch of one such record follows this list.

  • [ ] Verification records. Where AI outputs include legal citations, case references, or statutory references, there is a record of whether and how these were verified against primary sources (BAILII, legislation.gov.uk, or equivalent).

  • [ ] Staff AI training records. Training delivered on AI governance, approved tools, and AI-specific risks is documented — including who attended, when, and what was covered. Training is not a one-off; records reflect ongoing competence development.

  • [ ] AI incident log. Instances where AI produced incorrect, misleading, or non-compliant output are logged — including what went wrong, how it was detected, what remediation occurred, and whether any client impact resulted.

  • [ ] Governance review records. Minutes or notes from AI governance reviews are maintained, showing what was assessed, what gaps were identified, and what actions were taken.

  • [ ] Compliance evidence is exportable. Records can be assembled and exported in a format suitable for an SRA inspection within a reasonable timeframe — not assembled from scratch over several weeks.
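
To make the audit-trail and exportability items concrete, here is a minimal sketch of one record, assuming a structured store under the firm's control. The matter reference, tool, and field names are hypothetical; the point is that the tool, task, output, and human review are all captured in a form that can be exported on demand.

```python
import json
from datetime import datetime, timezone

# Hypothetical audit-trail record for one material AI use on one matter.
# Field names are a suggestion, not a prescribed schema.
record = {
    "matter_ref": "M-2026-0412",                 # hypothetical reference
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "tool": "DraftAssist (hypothetical)",
    "task": "first draft of completion letter",
    "output_ref": "doc-4471-v1",                 # pointer to the stored output
    "human_review": {
        "reviewer": "supervising partner",
        "checks": [
            "citations verified against primary sources",
            "client-specific instructions reflected",
        ],
        "outcome": "approved with amendments",
    },
}

# Structured records like this can be exported for an SRA inspection in
# minutes, rather than assembled from scratch over several weeks.
print(json.dumps(record, indent=2))
```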


Rule 2.5 — Identification and Management of Material Risks

"You identify, monitor and manage all material risks to your business, including… to the business itself"[^2]

What this means for AI: AI use introduces material risks that must be formally identified, assessed, monitored, and managed — not treated as an IT project or left to individual fee earners to navigate.

Checklist

  • [ ] AI risk assessment completed. A formal risk assessment has been conducted for each AI tool used within the firm, covering: accuracy and hallucination risk, confidentiality and data handling risk, regulatory compliance risk, over-reliance and competence risk, and vendor/supply chain risk.

  • [ ] AI risks in the firm-wide risk register. AI-specific risks are recorded in the firm's risk register alongside other material risks, with assigned owners, assessed likelihood and impact, and defined mitigation measures. A sketch of one register entry follows this list.

  • [ ] Data processing impact assessment. Where AI tools process personal or client-sensitive data, a data protection impact assessment (DPIA) has been completed in accordance with UK GDPR Article 35 where applicable[^7].

  • [ ] Risk mitigation measures documented. For each identified AI risk, specific mitigation measures are documented — e.g., independent verification for hallucination risk, approved-tools-only policy for data handling risk, supervision requirements for over-reliance risk.

  • [ ] Ongoing monitoring in place. AI risks are monitored on an ongoing basis — not just assessed once. Monitoring includes tracking AI incident frequency, reviewing verification pass/fail rates, and assessing whether mitigation measures remain effective.

  • [ ] Emerging risks assessed. The risk assessment process includes a mechanism for identifying new AI-related risks as the technology, the regulatory landscape, and the firm's AI use evolve.
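
As a sketch of what an AI entry in the firm-wide risk register might look like, assuming a simple 1-to-5 likelihood and impact scale (the scale and field names are our assumptions, not a prescribed format):

```python
# Hypothetical risk register entry for hallucination risk.
hallucination_risk = {
    "risk": "AI tool fabricates case law or statutory references",
    "owner": "COLP",
    "likelihood": 4,   # 1 (rare) to 5 (almost certain)
    "impact": 5,       # 1 (negligible) to 5 (severe)
    "mitigations": [
        "independent verification of all citations against primary sources",
        "approved-tools-only policy for client work",
    ],
    "monitoring": "verification pass/fail rate, reviewed quarterly",
    "last_assessed": "2026-03-01",
}

# A likelihood x impact score keeps AI risks comparable with the other
# material risks recorded in the same register.
priority = hallucination_risk["likelihood"] * hallucination_risk["impact"]
print(f"Priority score: {priority} of 25")  # 20: a high-priority risk
```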


Rule 4.2 — Competent Service

"You ensure that the service you provide to clients is competent, delivered in a timely manner, and takes account of your clients' needs and circumstances"[^2]

What this means for AI: AI-assisted legal work must meet the same competence standard as non-AI work. That means AI outputs need to be verified for accuracy, appropriateness, and relevance to the specific client's matter before delivery.

Checklist

  • [ ] AI output verification process documented. There is a defined, documented process for reviewing and verifying AI-generated legal work before it is used in or delivered as part of client work. The process specifies who reviews, what they check, and how they record the review.

  • [ ] Independent citation verification. AI-generated legal citations (case references, statutory references, regulatory references) are verified against primary sources — not just accepted based on the AI tool's own confidence scoring. Over 1,200 court cases have documented AI hallucination of legal authorities[^3]; independent verification is not optional. A sketch of a basic automated existence check follows this list.

  • [ ] Jurisdictional accuracy checked. AI outputs are reviewed for jurisdictional accuracy — ensuring that authorities cited are from the correct jurisdiction and have not been overruled, repealed, or superseded.

  • [ ] Supervision arrangements for AI use. Junior staff and trainees using AI tools are supervised in their AI use, with senior review of AI-assisted work appropriate to the risk level of the matter.

  • [ ] Client-specific tailoring verified. AI-generated work is reviewed to ensure it accounts for the specific client's needs, circumstances, and instructions — not just generic legal analysis.
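
Verification is ultimately a human task, but the crudest failure mode, a citation that does not exist at all, can be caught automatically. The sketch below checks whether a statutory reference resolves on legislation.gov.uk, using the site's public type/year/number URL scheme. It is our illustration rather than a prescribed method, and it confirms existence only: a reviewer must still read the provision to confirm the AI characterised it correctly, and case citations on BAILII still call for a person opening and reading the judgment.

```python
from urllib.error import HTTPError, URLError
from urllib.request import urlopen

# Crude existence check: does a statutory reference resolve on
# legislation.gov.uk? Uses the site's public type/year/number URL
# scheme (e.g. ukpga/2018/12 is the Data Protection Act 2018).
# A 200 response confirms the instrument exists at that reference,
# nothing more: this catches fabricated references, not
# mischaracterised ones.
def legislation_exists(doc_type: str, year: int, number: int) -> bool:
    url = f"https://www.legislation.gov.uk/{doc_type}/{year}/{number}"
    try:
        with urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except (HTTPError, URLError):
        return False

print(legislation_exists("ukpga", 2018, 12))   # expected: True
print(legislation_exists("ukpga", 2018, 999))  # expected: False (no such Act)
```

The outcome of each check, pass or fail, belongs in the verification records described under Rule 2.2.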


Rule 4.3 — Staff Competence

"You ensure that your managers and employees are competent to carry out their role, and keep their professional knowledge and skills, as well as understanding of their legal, ethical and regulatory obligations, up to date"[^2]

What this means for AI: Staff using AI tools must understand their capabilities, limitations, and risks — including the specific failure modes relevant to legal work (hallucination, fabrication, jurisdictional errors, confidentiality risks).

Checklist

  • [ ] AI-specific training programme in place. All staff who use AI tools in their work receive training on: approved tools and their appropriate use, AI hallucination and verification requirements, data handling and confidentiality obligations, the firm's AI governance procedures, and how to report AI incidents.

  • [ ] Training is role-appropriate. Training content is tailored to different roles — fee earners need to understand verification and competence obligations; support staff need to understand data handling; the COLP needs to understand the full governance framework.

  • [ ] Training is ongoing. AI technology and the regulatory landscape are evolving rapidly. Training is delivered on a regular cycle (at least annually) and when new tools are introduced or significant regulatory developments occur.

  • [ ] Competence is assessed. Completion of training is not sufficient on its own. There is a mechanism for assessing whether staff can apply their AI training in practice — through supervision, spot checks, or competence assessments.

  • [ ] Understanding of limitations is documented. Staff who use AI tools have confirmed (in writing or through assessment) that they understand: AI can fabricate legal authorities, AI outputs require independent verification, client data handling in AI tools requires specific safeguards, and AI-assisted work must meet the same competence standard as non-AI work.


Rules 6.3, 6.4, 6.5 — Confidentiality

Rule 6.3: "You keep the affairs of current clients confidential." Rule 6.4: "You keep the affairs of former clients confidential." Rule 6.5: "You do not act for a client… in a matter where that client has an interest adverse to the interest of another current or former client…"[^2]

What this means for AI: AI tools process data. If that data includes client information, confidentiality obligations apply with full force — including to former clients and to conflicts between clients.

Checklist

  • [ ] Data flow mapping for AI tools. For every AI tool used in the firm, the data flow is documented: what data is input, where it is processed, where it is stored, whether it leaves the firm's infrastructure, and whether it is used for model training by the AI vendor. A sketch of one map entry follows this list.

  • [ ] Approved tools for confidential data. Only AI tools that have been assessed and approved for use with client-confidential data are used for client matters. General-purpose AI tools (ChatGPT free tier, etc.) are not used with client-identifying information unless specifically assessed and approved.

  • [ ] Client data handling protocols. Staff know and follow specific protocols for handling client data in AI tools — including what information can and cannot be input, how to anonymise or pseudonymise data where appropriate, and what to do if client data is inadvertently input into a non-approved tool.

  • [ ] Information barriers in AI context. Where the firm acts for clients with potentially adverse interests, AI tools do not surface information from one client's matters in the context of another's. This requires understanding how AI tools handle context, memory, and prior interactions.

  • [ ] Data processing agreements with AI vendors. Appropriate data processing agreements (DPAs) are in place with all AI tool vendors, covering: data handling, retention and deletion, sub-processing, breach notification, and confirmation that data is not used for model training without consent.

  • [ ] Client notification or consent process. Where appropriate, clients are informed about the firm's use of AI in their matters. The approach to client notification (proactive disclosure, consent, or opt-out) is defined and consistently applied.

  • [ ] Data sovereignty. Client data processed by AI tools is hosted on infrastructure in the appropriate jurisdiction — UK data on UK infrastructure, EU data on EU infrastructure — in compliance with UK GDPR and any client-specific data handling requirements.
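
To pin down the data-flow item, here is a sketch of one data-flow map entry, with hypothetical field names, answering the five questions listed above.

```python
# Hypothetical data-flow map entry for one AI tool. Field names are a
# suggestion, not a prescribed format.
data_flow = {
    "tool": "DraftAssist (hypothetical)",
    "data_input": ["matter correspondence", "client names"],
    "processed_at": "vendor cloud, UK region",
    "stored_at": "vendor cloud, UK region; 30-day retention",
    "leaves_firm_infrastructure": True,
    "used_for_model_training": False,  # must be contractually confirmed
}

# One gate a firm might enforce before a tool reaches the approved list:
# anything that sends client data off the firm's infrastructure needs a
# signed DPA and a confidentiality assessment first.
if data_flow["leaves_firm_infrastructure"]:
    print("Gate: signed DPA and confidentiality assessment required")
```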


Beyond SRA: Additional Considerations

The SRA Code of Conduct is the core framework, but AI governance in 2026 extends beyond it, and the SRA's own AI-specific guidance is still in development[^6].

EU AI Act (for firms with EU clients)

If your firm serves EU-based clients or provides legal services related to EU law, note that the EU AI Act (Regulation (EU) 2024/1689) applies in full to Annex III high-risk systems from 2 August 2026[^4]. AI systems used in the administration of justice are classified as high-risk under Annex III, paragraph 8, triggering requirements for:

  • [ ] Conformity assessment (Article 43)
  • [ ] Risk management system (Article 9)
  • [ ] Technical documentation (Article 11)
  • [ ] Record-keeping and audit trails (Article 12)
  • [ ] Human oversight (Article 14)

PII Renewal Preparation

Professional indemnity insurers are increasingly asking AI-specific questions[^5]. Before your next renewal:

  • [ ] Can you provide specific, evidence-based answers to AI governance questions?
  • [ ] Do you have documented verification processes and audit trails?
  • [ ] Can you show how client confidentiality is protected in AI use?

Client and Panel Tender Readiness

Clients asking about AI governance in panel tenders want evidence, not descriptions:

  • [ ] Can you describe your AI governance framework in a tender response?
  • [ ] Can you provide evidence (audit trail excerpts, verification rates, training records) to support your description?

From Checklist to Infrastructure

This checklist is a starting point — a way to identify gaps and prioritise remediation. But a checklist completed once is not governance. Governance is the ongoing operation of the systems and controls this checklist describes.

The difference between a checklist and governance infrastructure is the difference between:

  • Assembling evidence before an inspection versus generating evidence automatically as a by-product of normal operations
  • Relying on individual diligence versus enforcing standards through systems
  • Describing what should happen versus proving what did happen

If you work through this checklist and find that most items require manual processes, ad-hoc evidence assembly, and individual compliance — that's the signal that governance infrastructure would transform your COLP's ability to demonstrate compliance with confidence.


LegalAI Space is building the governance infrastructure this checklist points to — automated verification against BAILII and legislation.gov.uk, real-time COLP dashboards, and audit trails generated by default. Join the waitlist or book a research conversation with Founder Daman Kaur.


Sources

[^1]: SRA, Compliance officers: A thematic review, December 2025. The SRA visited 25 firms and interviewed 36 compliance officers about their general regulatory obligations (not AI-specific). Only one COLP could outline all of their regulatory responsibilities. We cite this finding because it illustrates the baseline compliance readiness gap — if COLPs struggle with general obligations, AI-specific governance maturity is likely even lower. See also SRA Risk Outlook: AI in the Legal Market (November 2023).

[^2]: SRA Code of Conduct for Firms, effective 25 November 2019 (current version effective 11 April 2025). SRA Standards and Regulations. Rule references: 2.1 (effective governance structures), 2.2 (records demonstrating compliance), 2.5 (identification and management of material risks), 4.2 (competent service), 4.3 (staff competence), 6.3 (current client confidentiality), 6.4 (former client confidentiality), 6.5 (conflicts). Text quoted is from the SRA Code of Conduct for Firms as currently published.

[^3]: Damien Charlotin, AI Hallucination Cases Database. Over 1,200 documented court cases where AI-generated hallucinated content has been identified. See our analysis in 1,200 AI Hallucination Cases and Counting.

[^4]: EU AI Act, Regulation (EU) 2024/1689, published OJ L 2024/1689, 12 July 2024. Article 113 sets out the phased enforcement timeline with full Annex III high-risk enforcement from 2 August 2026. Annex III, paragraph 8 classifies AI systems used to "research and interpret facts and the law and to apply the law to a concrete set of facts" as high-risk. Penalties under Article 99: up to EUR 15M or 3% of worldwide turnover for high-risk AI violations. See our EU page for full details.

[^5]: Legal Futures, AI and law firm risk — the view of professional indemnity insurers. Insurers expect evidence of AI governance, including identified accountable persons and documented procedures. See our analysis in Your Professional Indemnity Insurer Is About to Ask About AI.

[^6]: The SRA is developing a GenAI FAQ and Good Practice Note on AI use and client data, as outlined at the SRA's AI Policy and Regulation webinar, 4 February 2026. In the interim, the SRA encourages firms to document all AI use, conduct regular risk assessments, and ensure staff are trained in responsible AI use. This checklist is designed to be compatible with anticipated SRA guidance — but firms should verify requirements against the published guidance when it becomes available.

[^7]: ICO, Consultation on Draft Guidance About Automated Decision-Making, March 2026. Consultation open until 29 May 2026; final guidance expected Summer 2026. Data protection impact assessments (DPIAs) for AI tools that process personal data are already required under UK GDPR Article 35 where the processing is likely to result in a high risk to individuals.