There is a regulatory deadline approaching that many UK law firms have not yet reckoned with. On 2 August 2026, the bulk of the EU AI Act's obligations become enforceable, including the requirements for high-risk AI systems. That is four months from now.
The common assumption at UK firms is that this is European legislation and therefore somebody else's problem. That assumption is mistaken, and the consequences of acting on it are significant.
The EU AI Act has extraterritorial reach. Under Article 2, it applies to providers and deployers of AI systems regardless of where they are established, if the output produced by those systems is used in the EU. A UK law firm using AI tools to advise clients with EU operations, process documents involving EU contracts, or generate outputs that affect EU-based parties is likely within scope.
> **2 August 2026**
>
> The date when high-risk AI system obligations, transparency requirements, and the full enforcement framework of the EU AI Act become applicable.
>
> — European Commission, EU AI Act Implementation Timeline
What actually happens in August
The EU AI Act did not arrive all at once. It entered into force on 1 August 2024 and has been phased in over two years. Some provisions are already live. The ban on prohibited AI practices, such as social scoring and certain uses of real-time biometric identification, took effect on 2 February 2025. Obligations for general-purpose AI models became applicable on 2 August 2025.
What arrives on 2 August 2026 is the core of the regulation: the obligations for high-risk AI systems, the transparency requirements for certain AI systems, and the full enforcement mechanism including penalties. This is when the Act gets teeth.
A high-risk AI system, under the Act, is defined not by the technology itself but by its use: Annex III lists the use cases that attract the high-risk classification, including AI used in the administration of justice. In a legal context, this could capture AI tools used for legal research that informs case strategy, document review systems that determine which evidence is relevant, or contract analysis tools that flag or assess risk.
Why UK firms are in scope
The extraterritorial provisions of the EU AI Act follow a similar logic to GDPR. There are three main triggers that can bring a UK law firm within scope.
First, the "output used in the EU" rule. Article 2(1)(c) of the Act applies to providers and deployers located in a third country where the output produced by the AI system is used in the Union. If your firm uses an AI tool to draft advice, review contracts, or analyse documents for a client with EU operations, and that output informs decisions affecting EU-based parties, the Act likely applies.
Second, the "placed on the market" rule. If your firm has developed any proprietary AI tooling, even internal tools, and those tools are used by or for clients in the EU, the firm could be classified as a provider placing an AI system on the EU market.
Third, the client obligation chain. Even if your firm's own AI use falls outside scope, your EU-based clients may have obligations under the Act that flow back to you. In-house legal teams at EU companies will increasingly need to know whether their outside counsel are using AI, what safeguards are in place, and whether those tools comply with the Act. Firms that cannot answer these questions will lose work to firms that can.
The penalty structure is serious
The EU AI Act establishes three tiers of administrative fines under Article 99. These are not theoretical. The enforcement mechanism becomes fully operational on 2 August 2026.
| Violation | Maximum Fine |
|---|---|
| Prohibited AI practices (Article 5) | Up to EUR 35 million or 7% of global annual turnover, whichever is higher |
| High-risk AI system obligations and other provisions | Up to EUR 15 million or 3% of global annual turnover, whichever is higher |
| Supplying incorrect information to authorities | Up to EUR 7.5 million or 1% of global annual turnover, whichever is higher |
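The "whichever is higher" rule in the table is simple arithmetic, but it is worth seeing how quickly the percentage limb overtakes the fixed cap for larger organisations. A minimal sketch (the tier figures come from Article 99; the turnover figures are hypothetical examples):

```python
def max_fine_eur(fixed_cap_eur: int, turnover_pct: float, global_turnover_eur: int) -> float:
    """Ceiling of an Article 99 fine: the fixed amount or the given
    percentage of global annual turnover, whichever is higher."""
    return max(fixed_cap_eur, turnover_pct * global_turnover_eur)

# Tier 1 (prohibited practices): up to EUR 35m or 7% of turnover.
# With EUR 800m global turnover, 7% (EUR 56m) exceeds the fixed cap.
print(round(max_fine_eur(35_000_000, 0.07, 800_000_000)))  # 56000000

# With EUR 100m global turnover, 7% is only EUR 7m, so the EUR 35m figure applies.
print(round(max_fine_eur(35_000_000, 0.07, 100_000_000)))  # 35000000
```

The point the sketch makes: for any group with global turnover above EUR 500 million, the percentage limb of tier 1 is the binding one.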
For SMEs, including most UK mid-size law firms, Article 99(6) caps each fine at the fixed amount or the turnover percentage, whichever is lower, but the exposure is still substantial. The important point is not the maximum penalty figure. It is the reputational and commercial damage of being found non-compliant with a regulation that your firm should have been advising clients on.
The Colorado AI Act adds a second deadline
The EU is not the only jurisdiction tightening AI regulation this year. Colorado's AI Act, originally scheduled for February 2026, was pushed to 30 June 2026 following a special legislative session. It requires developers and deployers of high-risk AI systems to use reasonable care to protect consumers from algorithmic discrimination.
For UK firms with US clients, this creates a second compliance conversation. The Colorado Act defines high-risk AI systems as those that make or substantially contribute to consequential decisions, including decisions related to legal services. The enforcement mechanism gives the Colorado Attorney General exclusive authority to enforce the requirements, with civil penalties of up to $20,000 per violation under the Colorado Consumer Protection Act.
The pattern is clear. AI regulation is not slowing down. It is accelerating across jurisdictions, and law firms sit at a unique intersection: they are both users of AI tools and advisors to clients navigating these regulations.
How the SRA's approach intersects with the EU framework
The UK has taken a fundamentally different approach to AI regulation than the EU. Where the EU AI Act is prescriptive and rules-based, categorising AI systems by risk level and imposing specific obligations for each category, the UK has opted for a principles-based, sector-led model. The government empowers existing regulators like the SRA and the ICO to interpret core AI principles in ways that make sense for their industries.
For UK law firms, this creates a dual compliance landscape. The SRA regulates on an outcomes-based model. AI falls under the existing requirement in the Code of Conduct for Firms to have effective governance structures, arrangements, systems and controls. The core SRA principles that apply to AI use include acting in the best interests of each client (Principle 7), upholding public trust in the profession (Principle 2), and acting with integrity (Principle 5).
The SRA's approach is flexible by design. There is no SRA-mandated AI risk classification system, no mandatory registration of AI tools, no prescribed audit process. The expectation is that firms will apply professional judgement and maintain standards proportionate to the risk.
The EU AI Act, on the other hand, leaves little room for interpretation. It prescribes specific obligations: conformity assessments for high-risk systems, technical documentation requirements, human oversight mechanisms, record-keeping for at least six months, and mandatory registration in an EU database for certain AI systems.
For firms with EU-exposed clients, this means meeting both frameworks simultaneously. The SRA's principles-based approach does not exempt you from the EU's rules-based requirements. If anything, the SRA's expectation that firms maintain "effective governance structures" when using new technology means that ignoring a major piece of international AI regulation affecting your clients would itself be a compliance concern.
Three things your firm should do before August
The firms that act between now and August will have a significant advantage over those that do not. This does not require a massive investment. It requires focused attention across three areas.
A practical pre-August action plan
First, map every AI tool the firm uses: who uses it, what data it processes, and where the outputs go. Include tools used by individual lawyers, not just firm-licensed platforms. You cannot assess EU AI Act exposure without knowing what tools you have and how they are being used. This inventory also satisfies the SRA's expectation of effective governance systems when new technology is introduced.
Second, review your client list and identify every client with EU operations, EU-based customers, EU contracts, or EU regulatory obligations. For each of these clients, assess whether your firm's AI-assisted work could produce outputs that are used in the EU. This is the trigger under Article 2(1)(c) of the Act, and it is broader than most firms expect.
Third, write down how the firm governs AI. Even a basic AI governance document, covering your acceptable use policy, output verification process, and data handling protocols, puts you ahead of most mid-size firms. If a client or regulator asks how you govern AI use, having a documented answer is the difference between credibility and scrambling. If you already have a framework from following SRA guidance, review it against the EU Act's requirements for high-risk systems and update where needed.
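The tool inventory does not need specialist software; a structured record per tool is enough. A minimal sketch of what one entry might look like (the field names are illustrative, not prescribed by the SRA or the Act):

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    """One row of a firm-wide AI tool inventory (illustrative schema)."""
    tool_name: str
    used_by: list[str]               # teams or individual fee earners
    data_processed: list[str]        # e.g. client contracts, disclosure documents
    output_destinations: list[str]   # where outputs end up, e.g. client advice
    eu_output_exposure: bool         # could outputs be used in the EU? (Article 2(1)(c))

inventory = [
    AIToolRecord("Contract review platform", ["Commercial team"],
                 ["client contracts"], ["client advice"], eu_output_exposure=True),
    AIToolRecord("Internal drafting assistant", ["All fee earners"],
                 ["precedent bank"], ["internal first drafts"], eu_output_exposure=False),
]

# Tools whose outputs may be used in the EU, and so need an
# EU AI Act assessment first:
flagged = [t.tool_name for t in inventory if t.eu_output_exposure]
print(flagged)  # ['Contract review platform']
```

A spreadsheet with the same columns works just as well; what matters is that the EU-exposure question is asked of every tool, not just the firm-licensed ones.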
None of this requires hiring an AI compliance team or purchasing new software. It requires someone at the firm, ideally the COLP or a senior partner, taking ownership of this before August and dedicating the time to do it properly.
The competitive angle
There is a business case here beyond compliance. According to a survey by the Association of Corporate Counsel and Everlaw, 59% of in-house counsel say they do not know whether their outside counsel is using generative AI on their matters. That is going to change as the EU AI Act enforcement date approaches. In-house teams at EU-regulated companies will start asking questions about AI governance, data handling, and compliance. The firms that can answer those questions confidently will win work. The firms that cannot will be passed over.
Building an AI governance framework before August 2026 is not just about avoiding regulatory risk. It is about being the firm that can demonstrate responsible AI use when clients start asking, which they will.
We are building governance tools for this
LegalAI Space is creating AI governance tools designed for UK mid-size firms navigating both SRA expectations and international AI regulation. We are currently in pre-launch. If you want early access, join the waitlist.
Resources
For firms that want to go deeper, these are the primary sources referenced in this post:
- The EU AI Act full text and implementation timeline: artificialintelligenceact.eu
- Article 2 (scope, including extraterritorial provisions): artificialintelligenceact.eu/article/2
- Article 99 (penalties): artificialintelligenceact.eu/article/99
- SRA compliance tips on AI and technology: sra.org.uk/solicitors/resources/innovate/compliance-tips-for-solicitors
- Colorado AI Act (SB 24-205): leg.colorado.gov/bills/sb24-205