Let’s stop blaming the hallucinations and focus on the real problem:
Lawyers who don’t do their job because they are too busy, too lazy, or too incompetent.
The lawyer who cites a hallucinated AI case and the lawyer who cites a real case without reading it have committed the same ethical failure. Yet today, usually only one of them gets disciplined.
AI didn’t invent the fake citation. It just automated it.
Long before ChatGPT, lawyers were citing cases they’d never actually read. I know, because as a law clerk to a U.S. District Court judge, I read the cases they didn’t. The citations were real enough — the cases existed — but they had nothing to do with the argument being made. Every time I found one, I discounted everything else in the brief.
This wasn’t evenly distributed. Large firms, with their layers of associates and research infrastructure, rarely had this problem. The less institutional support a lawyer had, the more likely I was to find phantom relevance in their citations. That’s not an indictment of any particular lawyers — it’s an indictment of a profession that has always tolerated sloppy research as long as no one checked.
AI didn’t create a malpractice problem. It just made the existing one impossible to ignore: a case that doesn’t exist is harder to explain away than a real one you obviously never read.
The standard hasn’t changed: if you cite it, you’d better have read it and understood it. The only thing that’s changed is that the shortcuts are getting caught.