With the selection of Big Thinker Cory Doctorow, this year’s ABA Techshow continued its practice of featuring brilliant keynote speakers. Longtime Internet users will recognize the pattern Doctorow described: AOL became worthless. Yahoo became irrelevant. The content-to-ad ratio on Facebook is overwhelming. Doctorow calls this predictable degradation “En****tification.”

Danielle Braff’s article in the ABA Journal summarizes Doctorow’s key insights:

Initially, tech platforms are created to be good to their users, or else no one would use them, he said. They suck you in, Doctorow said.

Take Google, for example. It spent an unbelievable amount of money to snag users, offering them the opportunity to use Google to receive instant answers to searches in a way they had never experienced in the past. Google was magical, Doctorow said. . . .

As soon as the public was hooked on Google, the company began abusing the end users to attract businesses, such as advertisers and web publishers. This is when ads started popping up on Google, he said.

But by 2019, Google’s search engine had grown as much as it possibly could. So they rigged the ad market, making searches worse on purpose to reduce the system’s accuracy so users must try multiple searches to get the answers they need, he said. “We’re all still using Google,” Doctorow pointed out.

The Kennedy-Mighell Report podcast has more on Doctorow’s concept of “En****tification.”

A warning from Richard Susskind’s How to Think About AI: A Guide for the Perplexed:

We’re still warming up. In not many years, our current technologies will look primitive, much as our 1980s kit appears antiquated today. [The current wave of AI apps] are our faltering first infant steps. Most predictions about the future are in my view irredeemably flawed because they ignore the not yet invented technologies.

He notes that Ray Kurzweil’s “law of accelerating returns” appears to be coming into play: “Information technologies like computing get exponentially cheaper because each advance makes it easier to design the next stage of their own evolution.”

This may help explain why even top computer scientists, including Stephen Wolfram, cannot fully explain how generative AI works. Susskind quotes Wolfram: “It’s complicated in there, and we don’t understand it — even though in the end it’s producing recognizable human language.”

The implication? We’re not just on a new road — we may be building a new kind of vehicle while already driving at speed.

In the Kennedy-Mighell Report Episode 300, “What’s Happening in LegalTech Other than AI?,” Dennis Kennedy and Tom Mighell had some good ideas in response to my recent question about where to find prompt libraries. It’s good to have starting points, but I strongly agree with Dennis that “roll your own” is the way to go (a minimal sketch of that idea follows the transcript below).

Here is an edited transcript of that section of their podcast, with hypertext links to key resources mentioned:

Jerry Lawson: I was wondering if you had any suggestions for where I might find some prompt libraries. Thanks.

Tom Mighell: Well, first, I want to say thank you very much for your question, and thank you for breaking the long drought in time since we had a question. I think it’s a great question because I had to go and look at it too, because it’s something that I don’t have a lot of familiarity with, and I’m hoping that Dennis has better answers than I do. When I started to look for prompt libraries on the Internet, most of the libraries I found were about generating prompts for image AIs, like MidJourney, like Dall-E [Ed: This link goes to the Dall-E developer group. Many other Dall-E prompt examples show up in response to a Google search], and they’re really cool, but I don’t think that, Jerry, is what you’re asking about.

Anthropic (developers of Claude) has a prompt library. They have one that you can look at, and it’s got some good tools. Google has a prompting essentials website where you can learn more about prompting, but it doesn’t really have a library in it.

I have found somewhat useful the Copilot prompt library. So if you have Microsoft Copilot, you can take advantage of that prompt library, which is nice to have, but I’m really interested, Dennis, to see what you have to say about this, because in my opinion, my advice would be, go out and take advantage of some of these prompt libraries to get an idea of what a prompt should look like, and then create your own, and then build what you have to use for yourself, but don’t necessarily rely on a library to have what you need. Use it as inspiration, and then find a way to build your own library.

Dennis, am I wrong about that?

Dennis Kennedy: No, I would say it a little bit differently. I used to look at prompt libraries, but I think what prompt libraries are really useful for is giving you ideas of what AI might be able to do for you that you hadn’t thought of yourself. So I’ve actually, coincidentally, given this a lot of thought, because we discussed it in my AI and law class very specifically, because I had my students do two prompt creation projects.

And then my friend, our friend, Jerry Lawson, also said, back in the early days of blogging, people gave all kinds of content away. Why don’t people give away, like, all these great prompts in the same way? And I thought it was a fair question from Jerry.

And so I thought about my own approach, and I’ll tell you what came up in the class as well. So I write a column on law department innovation for Legal Tech Hub, and for probably at least the last year and a half, I’ve ended every one of those columns with a suggested prompt that people can use. I’ve also written a paper that’s on SSRN that will tell you exactly how I structured prompts as of a year and a half ago.

And I’ve made other material available about prompting. But I think what’s more valuable is to teach people the framework and how to structure prompts rather than the prompts themselves. So the reason that I say that is when we discussed it in class, people were concerned about a couple of things.

So if I give you a prompt and I don’t know what AI tool you’re going to use, I have no idea whether you’re going to get the same thing I got. And so there can be a lot of differences out there. Things can change.

I would say, and I’ve said this before, that in the last two months the AI tools have changed more dramatically than at any point I’ve seen. So some of the prompts I used to use don’t work as well anymore. So you have that.

And then in the class, we basically kind of relived the whole open-source discussion. And so I asked the students whether they would want to publish for free and make freely available the prompts they did in class. And none of them did.

And their concerns were these: they were worried about not getting attribution. They didn’t want to be liable for something that went wrong. And they didn’t want to be the helpdesk if people tried their prompts and didn’t get the same results, which basically recaptures all the discussions around open-source licensing.

So there are some things out there. So Tom, you mentioned a few things. Ethan Mollick has a website where he has some prompts.

Jennifer Wondracek, who is a law librarian, has some prompts for lawyers [Ed: Ms. Wondracek shares at least some of her work at the AI Law Librarians group]. You could find some things out there. I just see them as starting points.

And as we might talk about later in the podcast, I may do some writing about this. I sort of think we’re reaching the point where the AIs themselves can do a better job of prompting, actually creating and optimizing the prompts, than we as humans can. So that’s where I’m at. So there are some things out there.

They’re going to be out of date. They might be helpful. You might get unexpected results.

I would say look around for them, and use them mainly to get ideas of the things you might try with AI that you had never thought of.

From The Kennedy-Mighell Report Episode 300, May 2, 2025: What’s Happening in LegalTech Other than AI?
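Since Tom and Dennis both land on “roll your own,” here is a minimal sketch, in Python, of what a personal prompt library might look like in practice. Everything in it is hypothetical: the prompts.json file name, the field names, and the fill() helper are illustrative assumptions, not anything recommended on the podcast. The idea is simply a small file of named templates with placeholders you fill in before pasting the prompt into whatever AI tool you happen to use.

```python
# Hypothetical "roll your own" prompt library: named templates stored as JSON,
# with {placeholders} filled in before the prompt is pasted into an AI tool.
# The file name, fields, and example prompt below are illustrative only.
import json
from pathlib import Path

LIBRARY_FILE = Path("prompts.json")

def load_library(path: Path = LIBRARY_FILE) -> dict:
    """Read the personal prompt library from disk."""
    return json.loads(path.read_text(encoding="utf-8"))

def fill(library: dict, name: str, **values: str) -> str:
    """Look up a prompt template by name and fill in its placeholders."""
    return library[name]["template"].format(**values)

if __name__ == "__main__":
    # Seed the library with one example template, then use it.
    library = {
        "summarize_document": {
            "template": (
                "You are assisting a practicing lawyer. Summarize the attached "
                "{document_type} in plain English, flag any deadlines, and list "
                "every authority cited so it can be independently verified."
            ),
            "notes": "Always verify citations before relying on the output.",
        }
    }
    LIBRARY_FILE.write_text(json.dumps(library, indent=2), encoding="utf-8")
    print(fill(load_library(), "summarize_document", document_type="FERC order"))
```

Because the library is just a text file, it is easy to version, annotate, and prune as the underlying AI tools change, which speaks to Dennis’s point that prompts go out of date quickly.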

My review of Arvind Narayanan and Sayash Kapoor’s new book, AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference, is available at LLRX.com. The bottom line:

 AI Snake Oil is a crucial guide for navigating the complex and often misleading landscape of artificial intelligence. It is essential reading for lawyers and policymakers struggling to make sense of how to deal with AI. It belongs on the shortlist with Ethan Mollick’s Co-Intelligence: Living and Working with AI and Richard Susskind’s How to Think About AI: A Guide for the Perplexed.

Reading law review articles is usually pretty low on my priority list, but I’m making an exception for Carolyn Elefant’s Energy Law Journal article “Generative AI for the Energy Law Practitioner.” There’s more info on LinkedIn. The pioneering Elefant included a bonus: a new Bluebook format for citing to phony cases.

She has good advice for all lawyers, not just energy lawyers. Her conclusion:

The adoption of GenAI in energy law practice marks a paradigm shift, offering unprecedented opportunities for efficiency and innovation. From automating complex legal research and contract analysis to streamlining regulatory compliance and permitting processes, GenAI has the potential to revolutionize the way energy practitioners engage with clients, agencies, and the public. At the same time, energy practitioners must approach GenAI with a balanced perspective, recognizing its capabilities while implementing best practices and safeguards described in this article to mitigate risks and protect clients.

My review of Richard Susskind’s excellent new book, How to Think About AI: A Guide for the Perplexed, is now available at AttorneyAtWork.com. Here is the conclusion:

With over 500 generative AI apps for lawyers cataloged by LegalTech Hub in March 2025, the proliferation of such tools shows no signs of slowing. In this environment, nuanced understanding is more critical than simply another application. What the world needs is clear explanations of the ways AI is changing our world now — and what we can expect tomorrow. 

Whether you’re writing briefs, litigating high-stakes matters, lobbying policymakers or just trying to future-proof a career, Susskind’s book aims to give you enough clarity to steer rather than drift. And in the AI era, that might be the most practical gift of all.

“How to Think About AI” is the literary equivalent of a well-lit observation deck overlooking a stormy sea. It is as much about society, ethics and identity as it is about neural networks. For attorneys plotting strategy in a generative-AI world, this book is required reading.

It was an honor to recently sit down with the ABA Senior Lawyers Division’s Experience magazine to discuss my career path, the evolution of legal practice, and the invaluable role of mentorship. The best part was the opportunity to acknowledge some of the many people who have mentored or influenced me, including key early teachers: Freida Riley of Big Creek High School, who taught me the importance of clear thinking and accurate analysis; J.B. Shrewsbury of Concord College, who taught me to write; and Bobby Gene Lawson of the University of Kentucky Law School, who taught me how to analyze legal issues.

Other key lawyer influencers and mentors included Burgess Allison, the most influential voice in the early lawyer adoption of the Internet; Richard Granat, winner of the ABA Legal Rebel Award for his work in improving access to justice; Greg Siskind, a top immigration lawyer also known for his leadership in lawyer marketing and innovative use of technology; Kevin O’Keefe, the uber lawyer/blogger; and Dennis Kennedy, longtime author of the ABA Journal IT column, now a podcaster and law school professor.

A non-paywalled version of the full interview is now available on LLRX.com.

Excerpt:

Sometimes, I think it’s a wonder that I became a lawyer at all. I grew up in the West Virginia coal fields. As the New York Times said of my home, “McDowell County, the poorest in West Virginia, has been emblematic of entrenched American poverty for more than a half-century.” An academic study concluded that of 3,142 counties in the United States, McDowell County ranked last in life expectancy.

Coal mining is not a lucrative line of work, and the cyclical nature of the business meant that whenever my father was laid off, we survived on welfare and food stamps. We did not have an indoor bathtub or toilet until I was 14 years old. I don’t remember seeing a dentist until I left the coal fields and got a job.  

McDowell County is not the most promising launching pad for a professional career, but I was blessed to have an extraordinary high school teacher, Freida Riley. One of her students, Homer Hickam, became a NASA engineer and wrote about her in his memoir Rocket Boys. It was later made into the 1999 movie October Sky, with Laura Dern playing this inspirational teacher. The National Museum of Education’s Freida J. Riley Teacher Award annually recognizes “an American teacher who overcomes adversity or makes an enormous sacrifice in order to positively impact students.”

She certainly positively impacted me. I may never have attended college, let alone become a lawyer, without her influence.

With the help of Sabrina Pacifici, my article AI In High-Stakes Litigation: The Critical Role of Experienced Attorneys will be published this month at LLRX.com. Here is an excerpt:


The “Centaur” Approach Is the Optimal Model (For Now)

Today’s chess-playing computers can crush the best human players without breaking a sweat. This wasn’t always true. Twenty years ago, teams of the strongest humans and the most powerful computers were stronger than either humans or computers alone. These teams were sometimes called “centaurs.” They combined the strength of a mighty beast with human judgment.

For at least the next few years, legal centaur teams—combining the experience of the best lawyers and the best AI apps—will consistently outperform the best human lawyers or the best AI apps working alone.

Today’s best legal AI experts (including Richard Susskind) believe that this may not always be true. They speculate that eventually computers will reach a stage of “hyperintelligence” in which AI systems become unfathomably more capable than humans. We are not there yet, and we may never get there. For the foreseeable future, experienced lawyers who know how to use AI will dominate.

Today, I have no problem asking an AI app a simple question about licensing of speech therapists. I would verify any of its analysis before relying on it for anything important, but AI is now my first choice for relatively simple questions where the stakes are low.

I would never dream of relying on an unassisted AI app for an important issue in $40 million litigation. Neither should you.

At the same time, today I would not trust my unassisted human judgment on a high-stakes matter.

It was fun to come across LexBlog founder Kevin O’Keefe’s archived mentions of Sabrina Pacifici, Dennis Kennedy and yours truly. It is flattering to think that the three of us may have inspired one of the top entrepreneurs (and nice guys) in legal tech history:

I like LLRX for its info on Internet marketing. It was discussion about blogs for marketing purposes between, I believe, Dennis Kennedy and Jerry Lawson, at LLRX’s roundtable feature that was a key in turning me onto the power of blogs a couple years ago.

Kevin’s additional comments about Sabrina and LLRX.com are even truer today than they were 20 years ago:

LLRX.com, an all round online legal resource with a bias towards Internet related information, has been a labor of love for Sabrina Pacifici for over 10 years. Sabrina attracts thought leaders to create original content and have archived discussions on various subjects. Should be on the short list of resources for all legal blog publishers.