Implementing AI is challenging for professions where progress often means figuring out how to get your printer to work. Many lawyers are waiting until they have developed a perfect implementation strategy. They wind up doing nothing.

Lawyers can and should benefit from AI today rather than waiting to develop the world’s greatest policy. Until that perfect policy arrives, follow two rules:

  1. Never rely on anything AI tells you about crucial issues.
  2. Always ask AI for advice on crucial issues.

AI does not merely save time. It can make your work product better. The time to get these benefits is now.

At the same time, safety matters. It helps to think of AI as a sort of super research assistant. Wouldn’t you always verify a research assistant’s work product before using it for anything crucial? You don’t want your memo to be a Shakespearean sonnet about the existential angst of a CPA.

Can it be malpractice to fail to use AI, as people as wise as Carolyn Elefant have suggested? Not today, and probably never. It’s just a giant missed opportunity.

Pro Tip: Explore specialized AI-powered legal research tools like Lexis+ AI or Westlaw Edge. These tools can help you find what you need faster and more safely.

More tips in my LLRX article 10 Tips for Prosperity and Pleasure in 2025.

The American Bar Association’s Formal Opinion 512, “Generative Artificial Intelligence Tools,” gets some things right, but goes astray when considering alternative billing models like flat fees:

The factors set forth in Rule 1.5(a) also apply when evaluating the reasonableness of charges for GAI tools when the lawyer and client agree on a flat or contingent fee. For example, if using a GAI tool enables a lawyer to complete tasks much more quickly than without the tool, it may be unreasonable under Rule 1.5 for the lawyer to charge the same flat fee when using the GAI tool as when not using it. “A fee charged for which little or no work was performed is an unreasonable fee.” [Footnotes omitted].

What is actually unreasonable is penalizing lawyers for efficiency.

I can’t explain it better than Greg Siskind, so I won’t try:

You select a car based on a lot of subjective factors, including the reputation of the manufacturer and the perceived quality of the vehicle. Drivers usually don’t care whether the car is completely handmade or built with robots. But they do tend to care about things like the buying experience, the car’s look and feel, and the reputation and trustworthiness of the manufacturer. The buyers don’t care about how much automation was used in making the car. If a manufacturer produces a great product at a much lower cost to make the product because of superior technology, buyers are happy to reward them with higher profits. We would think it absurd if the government said that a car manufacturer had to lower their prices because of the savings they achieved through robots and automation. That would be the role of a competitive market. Why is law different?

More in my LLRX.com article Artificial Intelligence, ABA Formal Opinion 512 And Access To Justice.

Pro Tip: Take AI’s advice on how to reduce the possibility of errors. ChatGPT hallucinated when I asked it to give examples of brilliant chess games played by a famous player (Lubomir Kavalek). It made up a game that never happened. When I questioned the answer, it apologized. I followed up with a question about how I could make future errors less likely:

Question: I am particularly interested in the erroneous answer you provided earlier concerning the non-existent Kavalek-Fischer simultaneous game. What can I do to reduce the risk that you might give me similar bad information in the future?

ChatGPT Answer:

I appreciate your concern about ensuring accurate responses, and I value your desire for high-quality, reliable information. Here’s how you can reduce the risk of encountering similar inaccuracies:

  1. Request Sources or Citations: When asking for historical, biographical, or factual details, you can explicitly request sources or citations. For example, ask, “Can you provide evidence or a citation for this claim?” This will prompt me to clarify the basis for my response.
  2. Cross-Check with External Sources: If a topic is critical, consider verifying the information against trusted external sources like books, articles, or databases. I can assist by helping you identify credible resources.
  3. Phrase Questions for Verification: Instead of asking open-ended questions, frame queries to prompt a more precise or researched answer. For example, “Is there evidence of a simultaneous exhibition involving Kavalek and Fischer?”
  4. Ask for Current Information: If you’re seeking updated details, mention it explicitly, and I can use tools to gather the latest information where appropriate.
  5. Highlight Critical Accuracy: Let me know when the stakes for accurate information are high. For example, “This is an important topic to me. Please ensure the answer is as precise as possible.”
  6. Ask for Context or Nuance: If a claim sounds too definitive, you can ask me to explore its context or address potential exceptions. For instance, “Can you elaborate on any controversies or misconceptions related to this claim?”
  7. Feedback and Clarification: If you notice a discrepancy or error in my answer, provide immediate feedback. This will help me refine my responses or clarify misunderstandings.

If you have additional strategies in mind or specific concerns about how I process information, feel free to share them. My goal is to ensure our conversations meet your standards for accuracy and usefulness.

Dennis Kennedy’s top lesson learned from 2024 is right on target:


Oh my god, there are so many legal AI experts everywhere. There’s a lot of talk, a lot of people claim that they’re experts. I saw that Stanford is doing a conference in a couple months about best practices in Generative AI. It’s been around a year. It’s like the best practice would be don’t play with matches and burn the house down. What do we know at this point? The lesson always with legal tech is to have sort of an experimental approach, a humble approach, and don’t think that you know everything from the beginning, but just try to learn it. And especially if something as significant as generative AI seems to be, I think we should spend a little less time saying that we were experts a little more time just rolling up our sleeves and doing some experiments. [Lightly edited version of Kennedy Mighell podcast.]

Yep. Way too many lawyers are forgetting that “A man’s gotta know his limitations.”

Magnum Force, 1973.

Carolyn Elefant has some good advice: “Check everything ChatGPT does.”

I have found two rules of thumb useful:

* Never trust ChatGPT advice on any important issue.

* Always ask ChatGPT for advice on every important issue.

I treat AI as a sort of super law clerk. I would always verify a law clerk’s work product before using it for anything important.

At the same time, AI frequently suggests ideas that would not have occurred to me, or comes up with improvements on my work.

My biggest problem is realizing when I’m dealing with something AI can help me with. Whenever I do, I ask.

Stephen Embry’s article Billable Hour Tiers for Associates: Progressive or Lipstick on a Pig? has the diagnosis right:

Bloomberg Law recently reported that the venerable firm Steptoe would allow associates to choose their own billable hour targets. The program will start next year. An associate can choose to bill 2200 hours and receive top pay, or they can opt for 2000 hours and make less, or 1800 hours and make even less. Associates reportedly elect to move to a different tier. Associates are allowed to bill fewer than 1800 hours and have their pay pro-rated accordingly. …

The Steptoe approach is just another disguised way to put lipstick on a pig. Law firms need to take a hard look at their demands and expectations of associates. Instead of demanding and then rewarding unrealistic billable hours, firms should set reasonable expectations and encourage innovation. But as long as the billable hour forms the basis of your business model, that’s not likely to happen.

Can’t argue with that. Gotta part ways with Steve on his thinking about how AI fits into the situation:

And there’s a whole other issue when it comes to AI. Associates should be rewarded for figuring out how to do more in less time using today’s (and tomorrow’s) tools. Yet under the Steptoe approach, the most innovative associates would be penalized. That’s already true in most firms, but recognizing a caste system would provide even fewer incentives than there already are. A better—and hopefully more forward-looking—approach would be to encourage innovation and efficiency. The approach does little more than exacerbate the problems with the billable hour model when firms need to be looking at ways to move away.

The current legal billing system is ugly. However, it’s a dream to imagine AI will kill the billable hour model. The system will never change until clients make it clear they will not accept abusive and often unethical billing practices.

The 2024 election results make Glenn Fine’s new book Watchdogs: Inspectors General and the Battle for Honest and Accountable Government timely and essential reading. It deals with a relatively little-known group that may provide some limits on corrupt or overreaching Presidential actions in these difficult times. LLRX.com has my detailed review.

No doubt Trump will attempt to get the IGs in line with his program, but controlling the institution won’t be as easy as some people think. It will be interesting to see how the situation plays out. Here’s a promotional interview with Fine:

Michael Ravnitzky is a national treasure. He earned my respect during my civil service days as a counsel to Inspectors General at multiple agencies.

He is a FOIA expert and a relentless seeker of important government information that the public should know. He filed many FOIA requests and tenaciously followed up on the most important ones.

Other agency lawyers hated him. They resented the fact that his many FOIA requests caused them extra work, sometimes significantly more work.

I’ll never forget the time I mentioned his name at an interagency meeting. I suggested that rather than fight with him, reflexively resisting every request, they should cooperate with him, working with him to let him refine his requests. They could help him get the information he needed in a timely fashion, while minimizing the work they would need to do. This suggestion annoyed many in the audience, who moaned or hooted when I said his name.

My view was different. I always respected him. His requests demonstrated the wisdom of Congress in enacting the Freedom of Information Act.

Mike’s new article The New Administration: A Boon for Investigative Journalism is the best explanation I have seen of the way forward for journalists in these troubled times.