There is a database that every lawyer using AI should know about.
Damien Charlotin, a French legal researcher and data scientist, maintains a public database tracking court decisions where AI-generated hallucinated content has been identified[^1]. As of early 2026, it contains over 1,200 cases — spanning multiple jurisdictions, court levels, and practitioner types.
That number is not a curiosity. It's a pattern. And the pattern is accelerating.
This is not a theoretical risk assessment. These are real cases, in real courts, where fabricated legal authorities were submitted and caught. Each one represents a failure that a verification process should have prevented. Taken together, they tell us something important about where the legal profession stands on AI governance — and what needs to change.
What 1,200+ Cases Tell Us
The Scale
When Charlotin's database first attracted public attention in mid-2025, it documented just over 100 cases[^2]. Within months, that number passed 200. By early 2026, it exceeded 1,200. The acceleration reflects both increasing AI use and increasing judicial scrutiny — courts are now looking for AI hallucinations in a way they weren't two years ago.
The Types of Hallucination
Not all AI hallucinations are the same. The database reveals several distinct failure modes:
Fabricated citations — Cases that do not exist at all. The AI generates a plausible-sounding case name, neutral citation, and court, but no such case was ever decided. This is the most common type and the easiest to detect — if someone checks.
Fabricated holdings — Real cases cited for propositions they do not support. The case exists, but the AI misrepresents what it decided. This is harder to catch because a surface-level check confirms the case is real; only reading the actual judgment reveals the misattribution.
Fabricated quotations — Real cases with fabricated direct quotations attributed to the judgment. The California Court of Appeal described this pattern in Noland v. Land of the Free (2025): "nearly all of the quotations… ha[d] been fabricated"[^3].
Outdated authorities — Cases or statutes cited without noting they have been overruled, repealed, or superseded. The authority was real at some point but is no longer good law.
Jurisdictional errors — Authorities from the wrong jurisdiction cited as if they apply. A Scottish case cited in English proceedings, a federal statute applied in state court, or EU law cited where it has no application.
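For teams building automated checks, these failure modes map naturally onto a small taxonomy that a verification pipeline can report against. The sketch below is illustrative only; the category names are my own labels for the list above, not terms drawn from Charlotin's database.

```python
from enum import Enum

class HallucinationType(Enum):
    """Failure modes a citation check might report, mirroring the list above."""
    FABRICATED_CITATION = "cited authority does not exist"
    FABRICATED_HOLDING = "real case cited for a proposition it does not support"
    FABRICATED_QUOTATION = "real case with an invented quotation"
    OUTDATED_AUTHORITY = "authority overruled, repealed, or superseded"
    JURISDICTIONAL_ERROR = "authority from the wrong jurisdiction"
```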
Who Submits Hallucinated Content
A Stanford Cyber Policy Center analysis of AI-tainted court filings found that many cases involve pro se litigants — people representing themselves who use AI tools without understanding their limitations[^4]. But the database also includes cases involving practising lawyers, experienced barristers, and attorneys at established firms. This is not just a problem of unsophisticated users.
UK Cases in Detail
The UK has produced several of the most significant hallucination cases — each with different characteristics and consequences.
Harber v HMRC [2023] UKFTT 1007 (TC)
The wake-up call. A litigant in person, Ms Harber, submitted nine AI-fabricated case citations to the First-tier Tribunal (Tax Chamber). The tribunal accepted she did not know the authorities were fabricated, but Judge Redston noted that "providing authorities which are not genuine and asking a court or tribunal to rely on them is a serious and important issue"[^5].
What this case shows: Even well-intentioned AI use can produce fabricated authorities. Ms Harber was not trying to deceive the tribunal — she simply trusted the AI's output. The fabrications were detected because the judge checked.
Ayinde v London Borough of Haringey [2025]
A pupil barrister's submissions to the High Court contained multiple fictitious case citations suspected to have been generated by AI. The court held that providing fake case descriptions "qualifies quite clearly as professional misconduct" and that the barrister "should have reported herself to the Bar Council"[^6].
What this case shows: The professional consequences are escalating. Where Harber resulted in a judicial warning, Ayinde resulted in a finding of professional misconduct and referral to the BSB. The trajectory is clear: courts are treating AI hallucinations more seriously over time, not less.
Al-Haroun v Qatar National Bank [2025]
A solicitor's submissions to the High Court contained 18 non-existent case authorities, generated using AI tools by the client and filed without independent verification. The solicitor was referred to the SRA. The court described "a lamentable failure to check the accuracy of material put before the court"[^7].
What this case shows: The lawyer's responsibility extends to everything they file — including AI-generated content provided by clients. "My client gave me this" is not a defence. And 18 fabricated authorities in a single submission demonstrates that AI doesn't hallucinate occasionally — it can hallucinate systematically.
The UK Pattern
These three cases, spanning 2023 to 2025, show a clear escalation:
| Case | Year | Hallucinations | Outcome |
|------|------|----------------|---------|
| Harber v HMRC | 2023 | 9 fabricated citations | Judicial warning; litigant in person |
| Ayinde v Haringey | 2025 | Multiple fabricated citations | Professional misconduct finding; BSB referral |
| Al-Haroun v Qatar National Bank | 2025 | 18 fabricated citations | SRA referral |
The direction is unmistakable. Early cases were treated with relative leniency — an acknowledgement that AI hallucination was a new phenomenon. By 2025, professional regulators had become involved, and "I didn't know AI could fabricate" is no longer an acceptable explanation.
International Cases with UK Implications
UK firms should pay attention to international developments, because UK courts and regulators watch them closely — and because UK firms with international practices may encounter these standards directly.
Noland v. Land of the Free, L.P. (California, 2025)
California's first published appellate opinion on AI hallucinations[^3]. Attorney Amir Mostafavi used four different AI tools — ChatGPT, Claude, Gemini, and Grok — to draft appellate briefs. Of 23 case quotations in the opening brief, 21 were fabricated. Several cited cases did not exist at all.
The California Court of Appeal imposed a $10,000 sanction and referred counsel to the State Bar. The court published the opinion specifically as a warning: "No brief, pleading, motion, or any other paper filed in any court should contain any citations — whether provided by generative AI or any other source — that the attorney responsible for submitting the pleading has not personally read and verified."
The detail that matters: Mostafavi used four different AI tools and each one hallucinated. This is not a single-vendor problem. It's a fundamental characteristic of how large language models generate text.
Pennsylvania Commonwealth Court (2025–2026)
Spotlight PA reported that filings in at least 13 Pennsylvania cases contained confirmed or implied AI hallucinations in 2025[^8]. In one high-profile case before Commonwealth Court, experienced attorneys from a public-interest law firm presented a brief with numerous citation errors — including misquotes, attribution errors, and quotations that did not exist. In a sex discrimination case, a pro se plaintiff was fined $1,000 and their suit was dismissed.
What this shows: AI hallucination is not concentrated in a single jurisdiction or court level. It spans from first-tier tribunals to appellate courts, from pro se litigants to experienced counsel.
Mata v. Avianca (US, 2023)
The case that started global awareness. Two New York lawyers submitted a brief containing six entirely fictitious case citations generated by ChatGPT. Both were sanctioned by the federal court[^9]. The case has been cited in judicial opinions and legal ethics guidance worldwide, including in UK commentary on AI in legal practice.
A New Development: Sanctions for Not Catching AI Hallucinations
A notable development in 2025 was courts sanctioning lawyers not for submitting AI hallucinations, but for failing to detect them in opposing counsel's filings. LawNext reported on attorneys being sanctioned for failing to identify fake citations in an opponent's brief[^10]. This raises the bar further: it's not enough to govern your own AI use. Practitioners may be expected to verify AI-generated content from any source.
Why Existing Tools Don't Solve This
Every legal AI tool on the market — research platforms, contract review systems, drafting assistants — can produce hallucinated content. The question is what happens next.
Most tools rely on self-verification: the same model that generated the output reviews its own work. This is the "checking your own homework" problem. A model that hallucinated a citation in the first place may not catch its own error on review — because it has no independent reference point. It is assessing plausibility, not checking facts against primary sources.
Independent verification requires a different approach: a separate process that retrieves and validates citations against authoritative legal databases — BAILII and legislation.gov.uk for UK law, EUR-Lex for EU law — independently of the model that generated the original output. The verification agent doesn't ask "does this sound right?" It asks "does this exist in the database, and does it say what the original output claims?"
This distinction matters because it's the difference between a confidence check and a fact check. The 1,200+ cases in Charlotin's database are evidence that confidence checks alone are insufficient.
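As a minimal sketch of that distinction, the function below resolves a citation against an authoritative index and, where a quotation is claimed, checks that the quoted words actually appear in the judgment. The in-memory `AUTHORITATIVE_INDEX` dictionary is a hypothetical stand-in for a real retrieval layer (for example a local mirror of BAILII judgment texts); the structural point is that the check consults a source the generating model cannot influence.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical stand-in for an authoritative index (e.g. a mirror of BAILII
# metadata and judgment texts). In practice this would be a database or
# search service, not an in-memory dict.
AUTHORITATIVE_INDEX: dict[str, str] = {
    "[2023] UKFTT 1007 (TC)": "...full text of Harber v HMRC...",
}

@dataclass
class VerificationResult:
    citation: str
    exists: bool                    # does the citation resolve in the index?
    quote_verified: Optional[bool]  # None if no quotation was claimed
    note: str = ""

def verify_citation(citation: str, claimed_quote: Optional[str] = None) -> VerificationResult:
    """Fact check: resolve the citation against primary sources, not the model."""
    judgment_text = AUTHORITATIVE_INDEX.get(citation)

    if judgment_text is None:
        return VerificationResult(citation, exists=False, quote_verified=None,
                                  note="fabricated citation: no such authority found")

    if claimed_quote is not None and claimed_quote not in judgment_text:
        return VerificationResult(citation, exists=True, quote_verified=False,
                                  note="fabricated quotation: passage not in judgment")

    return VerificationResult(citation, exists=True,
                              quote_verified=claimed_quote is not None)
```

Note what the function never does: it never asks a model whether the citation "sounds right". Its only inputs are the citation, the claimed quotation, and the primary source.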
What Firms Should Do Now
1. Accept the Base Rate
AI hallucination is not a bug that will be fixed in the next model release. It is a fundamental characteristic of how large language models work. They generate statistically plausible text — and sometimes plausible text is false. Every AI tool your firm uses can and will produce hallucinated content. The question is whether you catch it before it reaches a court or client.
2. Implement Independent Verification
Establish a process — manual or automated — where AI-generated legal citations are checked against primary sources independently of the AI tool that generated them. For UK law, this means checking against BAILII for case law and legislation.gov.uk for statutes, including amendment and repeal status. For cross-border work, EUR-Lex and jurisdiction-specific databases.
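For statutes, legislation.gov.uk publishes machine-readable versions of each instrument, which makes an automated existence check straightforward. The sketch below assumes the URL pattern `https://www.legislation.gov.uk/{type}/{year}/{number}/data.xml` — an assumption worth confirming against the site's developer documentation — and deliberately treats repeal and amendment status as a separate, later step, since that requires the "changes to legislation" data rather than the base text.

```python
import urllib.request
import urllib.error

# Assumed URL pattern for legislation.gov.uk machine-readable content; confirm
# against the site's developer documentation before relying on it.
LEGISLATION_URL = "https://www.legislation.gov.uk/{doc_type}/{year}/{number}/data.xml"

def statute_exists(doc_type: str, year: int, number: int, timeout: float = 10.0) -> bool:
    """Existence check only: does legislation.gov.uk serve this instrument?

    A 200 response confirms the instrument exists as enacted. It does NOT
    confirm the provision is still in force; repeal and amendment status must
    be checked separately against the 'changes to legislation' data.
    """
    url = LEGISLATION_URL.format(doc_type=doc_type, year=year, number=number)
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.status == 200
    except urllib.error.HTTPError:
        return False  # 404 and similar: no such instrument at this reference

# Example: the Data Protection Act 2018 is ukpga/2018/12.
# statute_exists("ukpga", 2018, 12)
```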
3. Document the Verification
Under SRA Rule 2.2, your firm needs records demonstrating compliance[^11]. A verification process that generates no records is invisible to regulators. Whether verification is manual or automated, it should produce a timestamped record of what was checked, against which source, and what the result was.
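A minimal record per check might look like the following. The field names are illustrative rather than prescribed by the SRA, but they capture the elements above: what was checked, against which source, when, by whom, and with what result.

```python
import json
from datetime import datetime, timezone

def verification_record(citation: str, source: str, result: str, checked_by: str) -> str:
    """Produce a timestamped log entry for a single citation check."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "citation": citation,      # what was checked
        "source": source,          # e.g. "BAILII", "legislation.gov.uk", "EUR-Lex"
        "result": result,          # e.g. "verified", "not found", "quote mismatch"
        "checked_by": checked_by,  # person or system responsible for the check
    }
    return json.dumps(record)

# Example entry, appended to a verification log:
# verification_record("[2023] UKFTT 1007 (TC)", "BAILII", "verified", "associate-review")
```

Whether the log lives in a matter file, a document management system, or a dedicated audit store matters less than the fact that it exists and can be produced on request.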
4. Train for Scepticism
Every fee earner using AI should understand that AI can fabricate legal authorities confidently, cite real cases for propositions they don't support, quote judgments with words the judge never wrote, and miss that a statute has been repealed or a case overruled. This is not about being anti-AI. It's about being a competent professional who understands the tool's limitations — which SRA Rule 4.3 already requires[^11].
5. Watch the Database
Charlotin's database is updated regularly and is freely accessible[^1]. If you're a COLP, an innovation director, or a managing partner responsible for AI governance, it should be on your reading list. The pattern it reveals is the most compelling case for verification infrastructure that exists.
LegalAI Space is building AI agents for legal teams with a governance layer that independently verifies every citation against authoritative legal databases — BAILII, legislation.gov.uk, EUR-Lex — before it reaches a lawyer. Join the waitlist or book a research conversation with Founder Daman Kaur.
Sources
[^1]: Damien Charlotin, AI Hallucination Cases Database. A publicly maintained database tracking legal decisions where the use of AI-generated hallucinated content has been identified by courts or tribunals. The count of 1,200+ cases reflects the database as of early 2026. Charlotin also maintains a related AI Evidence Database tracking broader judicial treatment of AI-generated evidence.
[^2]: The database's growth trajectory is documented in coverage from eDiscovery Today (June 2025, reporting 112+ cases), Simon Willison (May 2025), and Lowering the Bar (June 2025). By early 2026, the count exceeded 1,200.
[^3]: Noland v. Land of the Free, L.P., No. B331918 (Cal. Ct. App. 2d Dist. Sept. 12, 2025). California's first published appellate opinion on AI hallucinations. The court imposed a $10,000 sanction on attorney Amir Mostafavi and referred him to the State Bar after finding that 21 of 23 case quotations in the opening brief were fabricated. The attorney used ChatGPT, Claude, Gemini, and Grok to draft the briefs. See McGuireWoods analysis and Proskauer reporting.
[^4]: Stanford Cyber Policy Center, "Who's Submitting AI-Tainted Filings in Court?", October 2025. Analysis of the demographics and contexts of AI-tainted court filings.
[^5]: Harber v Commissioners for HMRC [2023] UKFTT 1007 (TC). A litigant in person submitted nine AI-fabricated case citations to the First-tier Tribunal (Tax Chamber). The tribunal accepted she did not know the authorities were fabricated. See Law Gazette reporting.
[^6]: Ayinde v London Borough of Haringey [2025]. High Court judgment in which a barrister's submissions contained multiple fictitious case citations suspected to have been generated by AI. The court found the conduct constituted professional misconduct and that the barrister should have self-reported to the Bar Council. See Law Gazette reporting and BIICL analysis.
[^7]: R (Ayinde) v London Borough of Haringey, Al-Haroun v Qatar National Bank QPSC [2025] EWHC 1383 (Admin). The solicitor was referred to the SRA after filing 18 non-existent case authorities generated using AI tools. The AI-generated content had been provided by the client and filed without independent verification.
[^8]: Spotlight PA, "Judges find suspected AI hallucinations in PA court cases", January 2026. Based on Charlotin's database filtered by Pennsylvania jurisdiction. The reporting notes that the majority of cases involved pro se litigants, but also included experienced attorneys before Commonwealth Court.
[^9]: Mata v. Avianca, Inc., No. 22-cv-1461 (S.D.N.Y. 2023). Two lawyers were sanctioned after filing a brief containing six entirely fictitious case citations generated by ChatGPT. This case is widely cited as the first high-profile example of AI hallucination in legal proceedings and has been referenced in judicial guidance worldwide.
[^10]: LawNext, "A New Wrinkle in AI Hallucination Cases: Lawyers Dinged for Failing to Detect Opponent's Fake Citations", September 2025.
[^11]: SRA Code of Conduct for Firms, effective 25 November 2019 (current version effective 11 April 2025). SRA Standards and Regulations. Rule 2.2 requires records demonstrating compliance. Rule 4.3 requires staff to be competent and maintain up-to-date knowledge — which in 2026 includes understanding the capabilities and limitations of AI tools used in legal work.