The new book AI Snake Oil calls out the major “hype superspreaders” fueling today’s AI bubble:

  • Big Tech Companies: Eager to attract investment, tech companies frequently overstate AI’s capabilities. From software firms touting the latest “gen AI” tool as a revolution, to cloud providers bragging about infinite AI compute power, corporate marketing sets unrealistic expectations.
  • Researchers and Benchmark Gaming: The academic AI community is not blameless. Pressures to publish and get media attention tempt researchers to overstate findings or game benchmarks. For example, dozens of papers claimed to predict court decisions with high accuracy, but many exploited “data leakage” – e.g., using words from a judge’s opinion that only appear after the decision is known. Once that leak was patched, the predictive power vanished. (A minimal sketch of this problem appears just after this list.) Always dig into how an AI was evaluated – was it a controlled, rigorous test, or are we looking at inflated numbers?
  • Journalists and Media: Sensational headlines often amplify AI myths. Reporters sometimes uncritically reprint company press releases or anthropomorphize AI for clicks. One New York Times column went viral by describing a chatbot that “wanted to be alive.” Narayanan and Kapoor argue that such stories sow public confusion about sentient algorithms that don’t exist. They also criticize “access journalism,” in which tech reporters soften coverage to stay in companies’ good graces. The result: every incremental lab result is hailed as world-changing, while lurid tales of AI misbehavior (often misunderstood) go viral.
  • Public Figures and Pundits: Celebrities, CEOs, and even policymakers sometimes spread misleading narratives. Flashy keynotes proclaim that AI will “transform everything,” while doomsayers warn of an imminent robot apocalypse. Grandiose rhetoric usually serves the speaker’s agenda – attracting funding, shaping legislation, or simply grabbing attention.
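
To see how data leakage inflates an evaluation, here is a minimal, hypothetical Python sketch (my own illustration, not code from the book or the studies it describes). A “model” allowed to peek at words from the judge’s opinion scores perfectly; the same task restricted to pre-decision information does not.

# Hypothetical illustration of "data leakage" in case-outcome prediction.
# The leaky "model" peeks at the judge's opinion, which only exists after
# the outcome is decided, so its "prediction" is circular.
cases = [
    # (text available before the decision, judge's opinion, actual outcome)
    ("plaintiff alleges breach of contract",
     "we find for the plaintiff and award damages", "plaintiff"),
    ("defendant moves to dismiss",
     "the motion is granted and the case is dismissed", "defendant"),
]

def predict_with_leak(filing, opinion):
    # Cheats: the opinion names the winner outright.
    return "plaintiff" if "for the plaintiff" in (filing + " " + opinion) else "defendant"

def predict_without_leak(filing):
    # Honest baseline: only pre-decision text, which here says nothing about
    # the outcome, so it guesses the same class every time.
    return "plaintiff"

leaky = sum(predict_with_leak(f, o) == y for f, o, y in cases) / len(cases)
honest = sum(predict_without_leak(f) == y for f, _, y in cases) / len(cases)
print(f"accuracy with leakage: {leaky:.0%}, without leakage: {honest:.0%}")
# Prints: accuracy with leakage: 100%, without leakage: 50%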

The full review of AI Snake Oil is available at LLRX.com.


I was late to the AI party. In 2019, a legal journal editor asked me a question about AI. I replied, “I’m 67 years old and have developed a little expertise in a few areas. At this point in my life it’s too late for me to try to learn about something new and as complicated as AI. I don’t have the time. I’ll leave that for younger, more ambitious lawyers.”

That was then. This is now.

ChatGPT, powered by GPT-3.5, was released in November 2022. It and other AI apps are changing the world. I realized I could no longer ignore AI. Since then, learning to use AI has been my top professional priority. It’s just that important.

Skepticism about AI is not only justified—it’s evidence of good judgment. There are indeed pitfalls to AI use. Inept use of AI won’t help you, but my experience has been that in the hands of skilled lawyers with good judgment, AI is essential to obtaining the best results, for one simple reason:

AI is only as good as the question it’s given. This is where senior lawyers excel. Knowing what issue to frame, what clause to focus on, what fact might tip the case—this is precisely what you’ve spent your career developing.

AI can assist. But it still needs someone to think.

Hyping the Risk of Out-of-Control AI

Many purveyors of AI snake oil delight in forecasting that Skynet is right around the corner. They suggest that Generative AI is so close to AGI (artificial general intelligence, meaning AI apps that can perform most or all tasks as effectively as any human being) that we should expect a Terminator-style revolt any day now. This is the flip side of the AI-Is-Our-Savior pitch.

Narayanan and Kapoor have a more realistic view in their new book AI Snake Oil:

We’re not saying that AGI will never be built, or that there is nothing to worry about if it is built. But we think AGI is a long-term prospect, and that society already has the tools to address its risks calmly. We shouldn’t let the bugbear of existential risk distract us from the more immediate harms of AI snake oil.

AGI is definitely a possibility we should take seriously. The questions we should ask are when we might reach it, what it will look like, and what we can do to steer it in a more beneficial direction.

I agree with the authors that we can get giant benefits from AI safely if we build in the right safeguards. My only concern is whether our polarized politics will allow us to implement the safeguards the authors recommend.

Richard Susskind’s new book How to Think About AI: A Guide for the Perplexed explains how thought patterns affect openness to AI:

The “process vs. outcome” distinction.


Chapter 3, “Process-thinking and Outcome-thinking,” sets the stage for the following chapters by contrasting the views of two heavyweight public intellectuals: Henry Kissinger and Noam Chomsky. Kissinger praises AI to the heavens. Chomsky thinks it’s basically worthless. Susskind’s explanation for the contrast is that Kissinger focuses on outputs, while Chomsky focuses on process:

  • Process-thinkers are interested in how complex systems work. 
  • Outcome-thinkers are interested in the results they bring. 
  • Process-thinkers are interested in the architecture of systems. 
  • Outcome-thinkers concentrate on their function. 
  • Outcome-thinkers also tend to be “top-down” thinkers, preoccupied with overall impact.

This key distinction explains much of the disagreement about AI. Since AI apps don’t think the way humans think, process-thinkers tend to dismiss them as useless. Outcome-thinkers are more pragmatic, focusing on the demonstrable practical benefits. They understand that “machines don’t need to copy us to deliver the outcomes or outputs that customers, clients and users want from their providers.” Lawyers, trained to analyze process, may be predisposed to dismiss AI’s unfamiliar logic, missing the forest (useful results) for the trees (alien methods).

How much disruption can we expect from adoption of AI? A new report assesses the issue:

Generative AI and Jobs: A Refined Global Index of Occupational Exposure is a comprehensive analysis from the International Labour Organization (ILO) and Poland’s National Research Institute (NASK) of the possible effects of Generative AI on the job market. The report combines information on 30,000 occupational tasks with expert validation, AI-assisted scoring, and ILO harmonized microdata.

The authors conclude that full job automation will probably remain limited. Even when AI makes workers more efficient, some human involvement will be necessary.

However, 1 in 4 jobs worldwide is exposed to at least some disruption from AI deployment. Women are predicted to be affected disproportionately. Clerical and administrative jobs are particularly vulnerable, and women held 93 to 97% of those jobs in recent years.

The future is coming, whether we are ready or not.

Here’s the conclusion of my LLRX.com review of AI Snake Oil. In case you couldn’t tell, I liked it:

The Bottom Line

From predicting case outcomes to drafting legal documents, AI promises abound. But as Narayanan and Kapoor compellingly argue, separating AI fact from fiction is now a critical skill for professionals.

AI Snake Oil has a few warts. The authors will never be candidates for the Nobel Prize in Literature: the book is repetitive and at times seems too negative.

Despite its stylistic shortcomings, AI Snake Oil is a crucial guide for navigating the complex and often misleading landscape of artificial intelligence. It is essential reading for lawyers and policymakers struggling to make sense of how to deal with AI. It belongs on the shortlist with Ethan Mollick’s Co-Intelligence: Living and Working with AI and Richard Susskind’s How to Think About AI: A Guide for the Perplexed.

Purchase Info:

Arvind Narayanan and Sayash Kapoor, AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference (Princeton University Press, Princeton and Oxford, September 2024). Available from Princeton University Press, Barnes and Noble, Amazon, Google Play (ebook), and Audible (audio recording).

With the selection of Big Thinker Cory Doctorow this year, ABA Techshow continued its practice of featuring brilliant keynote speakers. Longtime Internet users will recognize the pattern Doctorow described. AOL became worthless. Yahoo became irrelevant. The content-to-ad ratio on Facebook is overwhelming. Doctorow calls this predictable degradation “En****tification.”

Danielle Braff’s article in ABA Journal has a summary of Doctorow’s key insights:

Initially, tech platforms are created to be good to their users, or else no one would use them, he said. They suck you in, Doctorow said.

Take Google, for example. It spent an unbelievable amount of money to snag users, offering them the opportunity to use Google to receive instant answers to searches in a way they had never experienced in the past. Google was magical, Doctorow said. . . .

As soon as the public was hooked on Google, the company began abusing the end users to attract businesses, such as advertisers and web publishers. This is when ads started popping up on Google, he said.

But by 2019, Google’s search engine had grown as much as it possibly could. So they rigged the ad market, making searches worse on purpose to reduce the system’s accuracy so users must try multiple searches to get the answers they need, he said. “We’re all still using Google,” Doctorow pointed out.

The Kennedy-Mighell Report podcast has more on Doctorow’s concept of “En****tification.”

A warning from Richard Susskind’s How to Think About AI: A Guide for the Perplexed:

We’re still warming up. In not many years, our current technologies will look primitive, much as our 1980s kit appears antiquated today. [The current wave of AI apps] are our faltering first infant steps. Most predictions about the future are in my view irredeemably flawed because they ignore the not yet invented technologies.

He notes that Ray Kurzweil’s “law of accelerating returns” appears to be coming into play: “Information technologies like computing get exponentially cheaper because each advance makes it easier to design the next stage of their own evolution.”
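
One minimal way to write that claim down (my own illustration, not a formula from Susskind or Kurzweil) is as compound improvement: if each design generation cuts the cost of a unit of computing by a constant factor, then

\[ C(t) = C_0 \cdot 2^{-t/T} \]

where \(C_0\) is today’s cost and \(T\) is the assumed halving period. The “accelerating returns” claim is that \(T\) itself shrinks over time, because each round of tools speeds up the design of the next.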

This is probably part of the reason why even top computer scientists, including Stephen Wolfram, cannot fully explain how generative AI works. Susskind quotes Wolfram: “It’s complicated in there, and we don’t understand it — even though in the end it’s producing recognizable human language.”

The implication? We’re not just on a new road — we may be building a new kind of vehicle while already driving at speed.

In the Kennedy-Mighell Report, Episode 300 (“What’s Happening in LegalTech Other than AI?”), Dennis Kennedy and Tom Mighell had some good ideas in response to my recent question about where to find prompt libraries. It’s good to have starting points, but I strongly agree with Dennis that “roll your own” is the way to go.

Here is an edited transcript of that section of their podcast, with hypertext links to key resources mentioned:

Jerry Lawson: I was wondering if you had any suggestions for where I might find some prompt libraries. Thanks.

Tom Mighell: Well, first, I want to say thank you very much for your question, and thank you for breaking the long drought in time since we had a question. I think it’s a great question because I had to go and look at it too, because it’s something that I don’t have a lot of familiarity with, and I’m hoping that Dennis has better answers than I do. When I started to look for prompt libraries on the Internet, most of the libraries I found were about generating prompts for image AIs, like MidJourney, like Dall-E [Ed.: This link goes to the Dall-E developer group. Many other Dall-E prompt examples show up in response to a Google search], and they’re really cool, but I don’t think that, Jerry, is what you’re asking about.

Anthropic (developers of Claude) has a prompt library. They have one that you can look at, and it’s got some good tools. Google has a prompting essentials website where you can learn more about prompting, but it doesn’t really have a library in it.

I have found somewhat useful the Copilot prompt library. So if you have Microsoft Copilot, you can take advantage of that prompt library, which is nice to have, but I’m really interested, Dennis, to see what you have to say about this, because in my opinion, my advice would be, go out and take advantage of some of these prompt libraries to get an idea of what a prompt should look like, and then create your own, and then build what you have to use for yourself, but don’t necessarily rely on a library to have what you need. Use it as inspiration, and then find a way to build your own library.

Dennis, am I wrong about that?

Dennis Kennedy: No, I would say it a little bit differently. I used to look at prompt libraries, but I think what prompt libraries are really useful for is giving you ideas of what AI might be able to do for you that you hadn’t thought of yourself. So I’ve actually, coincidentally, given this a lot of thought, because we discussed it in my AI and law class very specifically, because I had my students do two prompt creation projects.

And then my friend, our friend, Jerry Lawson, also said, back in the early days of blogging, people gave all kinds of content away. Why don’t people give away, like, all these great prompts in the same way? And I thought it was like a fair question from Jerry.

And so I thought about my own approach, and I’ll tell you what came up in the class as well. So I wrote a column on law department innovation for Legal Tech Hub, and for probably at least the last year and a half, I’ve ended every one of those columns with the suggested prompt that people can use. I’ve also written a paper that’s on SSRN that will tell you exactly how I structure prompts as of a year and a half ago.

And I’ve made other material available about prompting. But I think what’s more valuable is to teach people the framework and how to structure prompts rather than the prompts themselves. So the reason that I say that is when we discussed it in class, people were concerned about a couple of things.

So if I give you a prompt and I don’t know what AI tool you’re going to use, I have no idea whether you’re going to get the same thing I got. And so there can be a lot of differences out there. Things can change.

I would say, and I’ve said this before, that in the last two months, the AI tools have changed more dramatically than at any point I’ve seen. So some of the prompts I used to use don’t work as well anymore. So you have that.

And then in the class, we basically kind of relived the whole open-source discussion. And so I asked the students whether they would want to publish for free and make freely available the prompts they did in class. And none of them did.

And their concerns were, they were worried about not getting attribution. They didn’t want to be liable if something went wrong. And they didn’t want to be like the helpdesk if people tried their prompts and didn’t get the same results, which basically recaptures all the discussions around open-source licensing.

So there are some things out there. So Tom, you mentioned a few things. Ethan Mollick has a website where he has some prompts.

Jennifer Wondracek, who is a law librarian, has some prompts for lawyers [Ms. Wondracek shares at least some of her work at the AI Law Librarians group]. You could find some things out there. I just see them as starting points.

And as we might talk about later in the podcast, I may do some writing about this; I sort of think we’re reaching the point where the AIs themselves can do a better job of prompting, actually creating and optimizing the prompts, than we as humans can. So that’s where I’m at. So there are some things out there.

They’re going to be out of date. They might be helpful. You might get unexpected results.

I would say: look around for them, and use them mainly to get ideas of the things you might try with AI that you had never thought of.

From The Kennedy-Mighell Report Episode 300, May 2, 2025: What’s Happening in LegalTech Other than AI?
