From my review of How to Think About AI: A Guide for the Perplexed at Attorney At Work:

With over 500 generative AI apps for lawyers cataloged by LegalTech Hub in March 2025, the proliferation of such tools shows no signs of slowing. In this environment, nuanced understanding matters more than yet another application. What the world needs is clear explanations of how AI is changing our world now, and what we can expect tomorrow.

Whether you’re writing briefs, litigating high-stakes matters, lobbying policymakers or just trying to future-proof a career, Susskind’s book aims to give you enough clarity to steer rather than drift. And in the AI era, that might be the most practical gift of all.

“How to Think About AI” is the literary equivalent of a well-lit observation deck overlooking a stormy sea. It is as much about society, ethics and identity as it is about neural networks. For attorneys plotting strategy in a generative-AI world, this book is required reading.

Too many books today get the risks of AI all wrong. Many vendors and dreamers tell us there is no risk, that it's all going to be peachy and positive. Others screech that AI will inevitably lead to the Apocalypse.

Richard Susskind gets it right. His new book, How To Think About AI: A Guide for the Perplexed, analyzes the risks in a balanced and thoughtful way. It is one of the most thought-provoking and useful books in years. Here’s an excerpt from my review at LLRX.com:

In Chapters 8 and 9, Susskind analyzes AI risks using the following chart:

CATEGORIES OF AI RISK
Category 1: Existential Risks – Threats to the long-term survival or potential of humanity.
Category 2: Risks of Catastrophe – Large-scale disasters or societal disruptions short of extinction.
Category 3: Political Risks – Impacts on democracy, governance, surveillance, and autonomy.
Category 4: Socio-Economic Risks – Effects on employment, inequality, social cohesion, and bias.
Category 5: Risks of Unreliability – Issues arising from AI errors, inaccuracies, or “hallucinations.”
Category 6: Risks of Reliance – Dangers of over-dependence or inappropriate trust in AI systems.
Category 7: Risks of Inaction – Negative consequences of failing to develop or deploy beneficial AI.

Having laid out the risks, Susskind provides suggestions for dealing with them. He emphasizes measured urgency rather than end-of-the-world hysteria. His message is that policymakers and the public need to grasp the size and speed of current AI shifts, not because disaster is inevitable, but because decisions made in the next few years will ripple for decades. The subtext: Burying your head in the sand isn’t a neutral act — it quietly hands the steering wheel to whoever is paying attention.

The final three chapters address philosophical ideas and speculation as to what the future may hold for AI — and humanity. Discussions of Plato’s allegory of the cave, umwelten and Kant’s distinctions between phenomena and noumena most likely won’t engage the attention of every lawyer, but Susskind’s conclusion most likely will:

“My guess is that we have at least a decade to decide what we want for humanity and then to act upon that decision — if necessary, emphatically and pre-emptively — through national and international law. [O]ur future will depend largely on how we react over the next few years.”


Richard Susskind’s new book How to Think About AI: A Guide for the Perplexed suggests three ways AI might change the workplace – and which one to worry about. He distinguishes three concepts:

  • Automation (task substitution) means finding a way to be more efficient at doing what we already do.
  • Innovation means delivering the outcomes clients want, using techniques or technology that support radically new underlying processes.
  • Elimination means not just solving a problem but eliminating it. 

Many analysts see automation as the biggest AI threat to jobs. Innovation and elimination may be bigger dangers. Susskind makes a convincing case — to me, at any rate — that using AI to implement different approaches to conflict resolution or prevention of legal problems could reduce or replace litigation as we know it.

Today’s chess-playing computers can crush the best human players without breaking a sweat. This wasn’t always true. A couple of decades ago teams of the strongest humans and the most powerful computers were stronger than either humans or computers alone. These teams were sometimes called “centaurs.” They combined the strength of a mighty beast with human judgment.

For at least the next few years, legal centaur teams combining the experience of the best lawyers with top AI apps will beat either human lawyers or AI apps working alone.

Today’s best legal AI experts (including Richard Susskind) believe that this may not always be true. They speculate that eventually computers will reach a stage of “hyperintelligence” in which AI systems become unfathomably more capable than humans. We are not there yet, and we may never get there. For the foreseeable future, experienced lawyers who know how to use AI will dominate.

Today, I have no problem asking an AI app a simple question about state licensing of music therapists. I would verify its analysis before relying on it for anything important, but AI is now my first choice for a relatively simple question where the stakes are low.

I would never dream of relying on an unassisted, unsupervised AI app for an important issue in $40 million litigation.

At the same time, today I no longer rely solely on my unassisted human judgment on a high-stakes matter.

The rules of thumb I explained in a January LLRX.com article describe the best approach:

  • Never rely on anything AI tells you about crucial issues.
  • Always ask AI for advice on crucial issues.

Richard Susskind’s new book How to Think About AI has a warning for professionals:


Professionals see much greater scope for AI in disciplines other than their own. Doctors are quick to suggest that AI has great potential in law, accounting and architecture, but instinctively they tend to resist its deployment in health care. Lawyers assert confidently that audit, journalism and management consulting are ripe for displacement but offer special pleadings on its very limited suitability in the practice of law and the administration of justice.

Talking about AI requires a new vocabulary. For example, AI skeptics are fond of saying that computers can never replace them because machines lack human judgment, empathy and creativity.

What’s overlooked is that computers can provide what we might call quasi-judgment, quasi-empathy and quasi-creativity. Susskind demonstrates — quite convincingly — that the computer versions of these biological traits can be superior in several ways to the human version. Skeptical about this? All I can say is check out Chapter 5 before becoming too confident that you are irreplaceable.

A complete review of How to Think About AI: A Guide for the Perplexed is available at LLRX.com.

The new book AI Snake Oil, by Arvind Narayanan and Sayash Kapoor, calls out the major “hype superspreaders” fueling today’s AI bubble:

  • Big Tech Companies: Eager to attract investment, tech companies frequently overstate AI’s capabilities. From software firms touting the latest “gen AI” tool as a revolution, to cloud providers bragging about infinite AI compute power, corporate marketing sets unrealistic expectations.
  • Researchers and Benchmark Gaming: The academic AI community is not blameless. Pressure to publish and attract media attention tempts researchers to overstate findings or game benchmarks. For example, dozens of papers claimed to predict court decisions with high accuracy, but many exploited “data leakage” – e.g., using words from a judge’s opinion that only appear after the decision is known. Once that leak was patched, the predictive power vanished (a toy illustration follows this list). Always dig into how an AI was evaluated – was it a controlled, rigorous test, or are we looking at inflated numbers?
  • Journalists and Media: Sensational headlines often amplify AI myths. Reporters sometimes uncritically reprint company press releases or anthropomorphize AI for clicks. One New York Times column went viral by describing a chatbot that “wanted to be alive.” Narayanan and Kapoor argue that such stories sow public confusion about sentient algorithms that don’t exist. They also criticize “access journalism,” in which tech reporters soften coverage to stay in companies’ good graces. The result: every incremental lab result is hailed as world-changing, while lurid tales of AI misbehavior (often misunderstood) go viral.
  • Public Figures and Pundits: Celebrities, CEOs, and even policymakers sometimes spread misleading narratives. Flashy keynotes proclaim that AI will “transform everything,” while doomsayers warn of an imminent robot apocalypse. Grandiose rhetoric usually serves the speaker’s agenda – attracting funding, shaping legislation, or simply grabbing attention.
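
To make the data-leakage point concrete, here is a minimal, hypothetical Python sketch. The cases, vocabulary and “classifier” are invented for illustration and are not drawn from the papers Narayanan and Kapoor critique; the point is simply that a model allowed to see words that only appear in the judge’s opinion looks near-perfect, while the same model restricted to pre-decision facts falls back to coin-flip accuracy.

```python
# Hypothetical illustration of benchmark "data leakage" (invented data,
# not taken from the court-prediction papers criticized in AI Snake Oil).
import random

random.seed(0)

FACT_WORDS = ["contract", "tort", "appeal", "damages", "statute"]

def make_case(outcome):
    """A toy case record: pre-decision facts plus the opinion text."""
    facts = random.sample(FACT_WORDS, 3)
    opinion = facts + [outcome]  # the opinion restates the result
    return {"facts": facts, "opinion": opinion, "outcome": outcome}

cases = [make_case(random.choice(["affirmed", "reversed"])) for _ in range(200)]

def predict(words):
    """A 'predictor' that just looks for outcome words in whatever text it sees."""
    for label in ("affirmed", "reversed"):
        if label in words:
            return label
    return random.choice(["affirmed", "reversed"])  # no signal: guess

def accuracy(feature_key):
    hits = sum(predict(case[feature_key]) == case["outcome"] for case in cases)
    return hits / len(cases)

# Leaky evaluation: features include the opinion, which names the result.
print("accuracy with leakage   :", accuracy("opinion"))  # ~1.00, looks impressive
# Honest evaluation: only facts available before the decision are used.
print("accuracy without leakage:", accuracy("facts"))    # ~0.50, no better than chance
```

Real prediction papers are far more sophisticated than this toy, but the evaluation trap is the same: if any feature quietly encodes the answer, the benchmark score says nothing about real predictive power.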

The full review of AI Snake Oil is available at LLRX.com.


I was late to the AI party. In 2019 a legal journal editor asked me a question about AI. I replied “I’m 67 years old and have developed a little expertise in a few areas. At this point in my life it’s too late for me to try to learn about something new and as complicated as AI. I don’t have the time. I’ll leave that for younger, more ambitious lawyers.”

That was then. This is now.

ChatGPT, built on GPT-3.5, was released in November 2022. It and other AI apps are changing the world. I realized I could no longer ignore AI. Since then, learning to use AI has been my top professional priority. It’s just that important.

Skepticism about AI is not only justified—it’s evidence of good judgment. There are indeed pitfalls to AI use. Inept use of AI won’t help you, but my experience has been that in the hands of skilled lawyers with good judgment, AI is essential to obtaining the best results, for one simple reason:

AI is only as good as the question it’s given. This is where senior lawyers excel. Knowing what issue to frame, what clause to focus on, what fact might tip the case—this is precisely what you’ve spent your career developing.

AI can assist. But it still needs someone to think.

Hyping the Risk of Out-of-Control AI

Many purveyors of AI snake oil delight in forecasting that Skynet is right around the corner. They suggest that generative AI is on the verge of AGI (artificial general intelligence, meaning AI apps that can perform most or all tasks as effectively as any human being) and that we should expect a Terminator-style revolt any day now. This is the flip side of the AI-Is-Our-Savior pitch.

Narayanan and Kapoor have a more realistic view in their new book AI Snake Oil:

We’re not saying that AGI will never be built, or that there is nothing to worry about if it is built. But we think AGI is a long-term prospect, and that society already has the tools to address its risks calmly. We shouldn’t let the bugbear of existential risk distract us from the more immediate harms of AI snake oil.

AGI is definitely a possibility we should take seriously. The questions we should ask are when we might reach it, what it will look like, and what we can do to steer it in a more beneficial direction.

I agree with the authors that we can get giant benefits from AI safely if we build in the right safeguards. My only concern is whether our polarized politics will allow us to implement the safeguards the authors recommend.

Richard Susskind’s new book How to Think About AI: A Guide for the Perplexed explains how thought patterns affect openness to AI:

The “process vs. outcome” distinction.


Chapter 3, “Process-thinking and Outcome-thinking,” sets the stage for the following chapters by contrasting the views of two heavyweight public intellectuals: Henry Kissinger and Noam Chomsky. Kissinger praises AI to the heavens. Chomsky thinks it’s basically worthless. Susskind’s explanation for the contrast is that Kissinger focuses on outputs, while Chomsky focuses on process:

  • Process-thinkers are interested in how complex systems work. 
  • Outcome-thinkers are interested in the results they bring. 
  • Process-thinkers are interested in the architecture of systems. 
  • Outcome-thinkers concentrate on their function. 
  • Outcome-thinkers also tend to be “top-down” thinkers, preoccupied with overall impact.

This is a key distinction that explains a lot about differences of opinion about AI. Since AI apps don’t think the way humans think, process-thinkers tend to dismiss them as useless. Outcome-thinkers are more pragmatic, focusing on the demonstrable practical benefits. They understand that “machines don’t need to copy us to deliver the outcomes or outputs that customers, clients and users want from their providers.” Lawyers, trained to analyze process, may be predisposed to dismiss AI’s unfamiliar logic, missing the forest (useful results) for the trees (alien methods).

How much disruption can we expect from adoption of AI? A new report assesses the issue:

Generative AI and Jobs: A Refined Global Index of Occupational Exposure is a comprehensive analysis from the International Labour Organization (ILO) and Poland’s National Research Institute (NASK) of the possible effects of Generative AI on the job market. The report combines information on 30,000 occupational tasks with expert validation, AI-assisted scoring, and ILO harmonized microdata.

The authors conclude that full job automation will probably remain limited: even when AI makes workers more efficient, some human involvement will still be necessary.

However, 1 in 4 jobs worldwide are exposed to at least some disruption from AI deployment. Women are predicted to be affected disproportionately. Clerical and administrative jobs are particularly vulnerable, and women held 93 to 97% of those jobs in recent years.

The future is coming, whether we are ready or not.