Every senior solicitor in the City has had this conversation in 2026.
The client — usually a GC at a tech company, sometimes a head of legal at a fund — runs the supplier agreement through Claude or ChatGPT. It comes back with three issues the partner missed. A missing IP indemnity on a platform built on third-party open source libraries. A liability cap one-tenth of what the deal warrants. A jurisdiction clause that would land any dispute in the wrong forum.
The kind of thing that costs millions when it actually goes wrong.
The senior partner the company pays a six-figure annual retainer drafted right past it. The AI catches it in 40 seconds.
The GC does not fire the firm. Not yet. But she renegotiates the retainer. And starts wondering if there are better options.
Now multiply that conversation by every general counsel at every UK tech company, every European fund, every regulated business with a procurement team that has discovered the same trick.
The asymmetry: clients use AI, firms won't say if they do
In November 2025, Above the Law ran a headline that said the quiet part out loud: "New Report On AI Use In-House Spells Trouble For Outside Lawyers." The underlying study was the ACC and Everlaw 2025 survey of 657 in-house legal professionals across 30 countries. Two findings stood out. 67% of in-house lawyers are now using generative AI. 59% say they do not know whether their outside counsel uses it on their matters.
The asymmetry is stark. The client uses AI. The client does not know if the firm does. The client checks the firm's work with AI anyway.
A study from the University of Southampton, led by Dr Eike Schneiders and published in April 2025, ran 288 participants through hypothetical legal scenarios. Some saw advice generated by ChatGPT. Some saw advice from real lawyers. When the source was hidden, participants were significantly more willing to rely on the ChatGPT advice. When the source was revealed, they were equally willing to rely on either. ChatGPT's advice was shorter and more confident; the lawyer's was longer and more nuanced. The shorter, more confident answer won the trust round.
When AI invents law that doesn't exist
Now flip the page.
In June 2025, the High Court issued a ruling in Ayinde v London Borough of Haringey and Al-Haroun v Qatar National Bank ([2025] EWHC 1383 (Admin)). Dame Victoria Sharp, President of the King's Bench Division, called out "a lamentable failure to comply with the basic requirement to check the accuracy of material that is put before the court." Citations to cases that did not exist. Submissions filed without verification. The ruling carried a clear warning — solicitors and barristers exposed to wasted-costs orders, regulatory referrals, and in serious cases, contempt of court.
It was not the first such ruling. In Harber v HMRC ([2023] UKFTT 1007 (TC)), the appellant relied on nine cases generated by AI in front of the First-tier Tax Tribunal. None were real. Tribunal Judge Anne Redston accepted that Harber herself did not know they were fabricated. Across the Atlantic, Mata v Avianca (S.D.N.Y. 2023) saw counsel sanctioned $5,000 for filing six fictitious AI-generated authorities. ChatGPT had assured the lawyers the cases "indeed exist" and could be found "in reputable legal databases such as LexisNexis and Westlaw." They could not.
The regulatory pressure: EU AI Act and the SRA Code
Across the Channel, the EU AI Act is now law. Regulation (EU) 2024/1689. Articles 6, 13, and 14 set out obligations on risk classification, transparency, and human oversight for high-risk AI systems. The bulk of the high-risk obligations apply on 2 August 2026. Penalties for non-compliance with operator obligations go up to EUR 15 million or 3% of global annual turnover, whichever is higher. Article 2 gives the Act extraterritorial reach. If your firm uses AI on a matter that touches the EU market, the Act touches you.
The SRA has not been silent either. The SRA Code of Conduct for Firms already applies to AI use:
- Rule 2.1 — effective governance, systems, and controls.
- Rule 2.2 — records to demonstrate compliance.
- Rule 2.5 — identify, monitor, and manage material risks.
- Rule 4.3 — staff competent to do their work.
- Principle 7 — act in each client's best interests.
Every one of these is in play the moment a solicitor pastes a client matter into a public chatbot.
Both things are true at the same time
AI catches real things real lawyers miss.
AI also invents law that does not exist.
For thirty years, the asymmetry between solicitor and client held: the solicitor understood the law and the client did not. The fee was paid for that gap. The client could not check the work in real time. The solicitor could not be second-guessed mid-sentence.
That world is gone.
Not because clients suddenly understand law. They do not. They understand a lot more than they did before. Enough to ask uncomfortable questions. Not enough to know when the AI is hallucinating.
This is the most dangerous zone in any professional relationship. The client knows enough to second-guess but not enough to be right. The solicitor is being audited by a tool the solicitor does not fully understand either. Both sides are flying half-blind.
The numbers underneath are not comforting. Clio's UK and Ireland Legal Insights Report 2026 found 89% of legal professionals are now using AI in some form, and 17% of firms permit AI use without any formal policy at all. A Thomson Reuters study at the end of 2024 found only 10% of firms had formal guidelines for generative AI within their wider technology policies. And a VinciWorks survey of 230 UK compliance, legal, and IT professionals found just 3.5% felt their organisation was fully prepared for AI regulation, while 63% did not describe themselves as prepared at all.
That is the gap.
What's missing: the layer in the middle
Most firms are responding to it in the worst possible way. They are getting defensive. Telling clients AI is unreliable. Refusing to engage with the AI memo. Charging for the time it takes to debunk it.
That does not work. The trend is not going away. The clients are not going to stop. The GCs are not going to stop. The High Court is now writing rulings about it. The EU AI Act will not pause for any firm that is not ready on 2 August 2026.
What is missing is a layer in the middle. A way to use AI on legal work without the failure modes — the hallucinations, the fabricated citations, the missed indemnities, the wrong liability caps, the inability to prove to a regulator that any of it was governed.
That is what LegalAI Space is.
How LegalAI Space closes the gap
LegalAI Space was founded by Daman Kaur — ex-Microsoft, ex-HPE, a decade building cloud and AI infrastructure for Fortune 500 enterprises, co-author of Implementing Hybrid Cloud with Azure Arc. The team has spent the last several months in conversations with COLPs, managing partners, compliance officers, and innovation directors at firms across the UK and EU. The pattern was the same in every one of those calls — lawyers who wanted to use AI but could not trust it, regulators who would not wait, and no infrastructure underneath any of it.
So Daman built one.
LegalAI Space is a SaaS platform with purpose-built agents — Research, Contract, Compliance Monitor, Audit & Risk — sitting on a governance engine. Every output flows through three stages.
Verify. Independent citation checking against BAILII, legislation.gov.uk, EUR-Lex, and a proprietary case database. Before the answer reaches the user.
Comply. SRA Code Rules and EU AI Act Articles encoded as policy-as-code, evaluated on every output, not bolted on at the end.
Prove. Immutable, cryptographically verified audit trails. Exportable as compliance certificates for SRA inspections and PII renewals.
If a citation cannot be confirmed against a real authority, it does not reach the user dressed up as fact. It gets flagged.
If an output cannot be tied to an SRA rule or an AI Act article, the audit trail says so.
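To make those three stages concrete, here is a minimal sketch of what a verify-comply-prove loop can look like. Everything in it is an assumption for illustration, not LegalAI Space's actual code or API: the `Output` shape, the rule labels, the hardcoded authority set standing in for live BAILII and EUR-Lex lookups, and the hash-chained trail.

```python
import hashlib
import json
from dataclasses import dataclass, field

# Stand-in for lookups against BAILII, legislation.gov.uk, EUR-Lex, etc.
# A real verifier would query those sources live; this set is illustrative.
KNOWN_AUTHORITIES = {
    "[2025] EWHC 1383 (Admin)",   # Ayinde v Haringey
    "[2023] UKFTT 1007 (TC)",     # Harber v HMRC
}


@dataclass
class Output:
    """One AI answer plus the governance metadata attached to it."""
    text: str
    citations: list[str]
    flags: list[str] = field(default_factory=list)
    rule_results: dict[str, bool] = field(default_factory=dict)


def verify(out: Output) -> None:
    """Verify: flag any citation that does not resolve to a real authority."""
    for cite in out.citations:
        if cite not in KNOWN_AUTHORITIES:
            out.flags.append(f"unverified citation: {cite}")


# Comply: obligations as policy-as-code. Each entry is a predicate the
# output must satisfy; the rule IDs are illustrative labels, not legal text.
POLICIES = {
    "SRA 2.5 / manage material risks": lambda o: not o.flags,
    "SRA 2.2 / keep records": lambda o: o.citations is not None,
}


def comply(out: Output) -> None:
    """Comply: evaluate every policy on every output, not bolted on at the end."""
    for rule_id, predicate in POLICIES.items():
        out.rule_results[rule_id] = predicate(out)


def prove(trail: list[dict], out: Output) -> None:
    """Prove: append a hash-chained, tamper-evident audit record."""
    prev_hash = trail[-1]["hash"] if trail else "0" * 64
    record = {
        "text": out.text,
        "flags": out.flags,
        "rule_results": out.rule_results,
        "prev": prev_hash,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    trail.append(record)


audit_trail: list[dict] = []
answer = Output(
    text="The cap is unenforceable, see Smith v Jones.",
    citations=["[2019] EWHC 9999 (Ch)"],   # fabricated; will not verify
)
verify(answer)
comply(answer)
prove(audit_trail, answer)
print(answer.flags)          # ['unverified citation: [2019] EWHC 9999 (Ch)']
print(answer.rule_results)   # {'SRA 2.5 / manage material risks': False, ...}
```

The design point survives the toy scale: verification runs before the user sees the answer, every policy runs on every output, and each audit record commits to the hash of the one before it, so a doctored history breaks the chain.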
Cloud or self-hosted — your firm's choice. UK data stays in the UK. EU data stays in the EU. Bring your own LLM contracts (Azure OpenAI, Anthropic, Mistral). The governance is the moat, not the model.
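As a sketch of what that choice can look like in practice, residency and model are just independent configuration decisions. The field names and values below are assumptions for illustration, not the platform's real schema:

```python
# Illustrative only: names and values are assumptions, not
# LegalAI Space's actual configuration schema.
DEPLOYMENT = {
    "hosting": "self-hosted",          # or "cloud"
    "data_region": "uk",               # UK matters stay on UK soil
    "llm_provider": "azure-openai",    # swappable: "anthropic", "mistral"
    "llm_contract": "bring-your-own",  # your commercial terms with the vendor
    "governance_engine": "always-on",  # wraps whichever model is plugged in
}
```

Swapping the model changes one line. The governance around it does not move.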
This is not anti-AI. The platform uses AI heavily under the hood. The point is that no single model is trusted to be right on its own. The engine is built to catch the model when it is wrong, the way a good editor catches a writer.
The product is live. There is a five-day trial. Pricing starts at $49 per seat.
What we are looking for now is firms and lawyers willing to use it on real matters and tell us where the governance engine should bite harder — which playbooks to encode, which agents to build next, what audit artefacts your COLP actually has to defend in front of the regulator. We are actively collaborating with early firms to shape the next layer of features. The product needs to be built with the people who do the work, not at them.
The trust boundary has moved
The trust boundary in legal work has moved. The infrastructure has to move with it.
If you are a solicitor, COLP, GC, or innovation director and the AI-trust problem is showing up in your week, come build it with us.
Book a 15-minute meeting with Daman — no pitch. Or see how the pricing works first, including the five-day trial.