The AI landscape is changing so rapidly that even top experts struggle to keep up.

Leading AI expert Ethan Mollick is up for the challenge. The author of the best-selling book Co-Intelligence: Living and Working with AI regularly updates his assessments of various AI apps. As I write, the most recent version (late 2025) is An Opinionated Guide to Using AI Right Now.

The April 2025 edition of the Kennedy-Mighell Report doesn’t cover the very latest developments, including significant improvements to Gemini, but its lawyer-specific approach has its benefits.

Tip of the day (and maybe the year):

Always consult more than one AI app when dealing with critical issues.

Monogamy is not the best strategy for AI. Multiple AI perspectives help with high-stakes questions, unsettled law, or anything involving tax regulations (which remain confusing even to the IRS). When two models agree, you gain confidence. When they disagree, you gain a warning sign. Either way, you avoid the embarrassment of citing a hallucinated case that appears in no reporter known to humankind.

Checking multiple AI systems isn’t about indecision. It’s about competence—and mildly hedging against robots who occasionally sound certain about things that simply aren’t true. A little redundancy never hurt anyone, especially lawyers.
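For readers who want to operationalize this tip, here is a toy sketch in Python of what a two-model cross-check might look like. It assumes you have already collected answers from two different AI apps (pasted in by hand or fetched through their APIs), and the citation pattern below is my own rough illustration covering a few federal reporters, not a real citation parser.

```python
# Toy cross-check of two AI answers: flag case citations that only one
# model produced. The regex is a rough illustration covering a few
# federal reporters, not a real citation parser.
import re

CITE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\.\s?Ct\.|F\.\s?(?:2d|3d|4th)|F\.\s?Supp\.\s?(?:2d|3d)?)\s+\d{1,4}\b"
)

def citations(text: str) -> set[str]:
    """Extract the reporter citations mentioned in an answer."""
    return set(CITE.findall(text))

def cross_check(answer_a: str, answer_b: str) -> None:
    a, b = citations(answer_a), citations(answer_b)
    for cite in sorted(a ^ b):   # cited by only one model: a warning sign
        print(f"WARNING: only one model cited {cite}; verify before relying on it")
    for cite in sorted(a & b):   # cited by both: more confidence, still verify
        print(f"Both models cited {cite}; agreement helps, but check the reporter")

cross_check(
    "The leading case is Roe v. Wade, 410 U.S. 113 (1973).",
    "See Roe v. Wade, 410 U.S. 113 (1973), and Smith v. Jones, 999 F. Supp. 2d 1.",
)
```

Agreement does not make a citation real, of course; it only tells you where to spend your verification time first.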

It’s scary out there. Ethan Mollick is on the mark:

Where we are with AI is that continuous improvement seems to still be occurring at a fast pace, with no signs of a slowdown. However, since major AI releases have accelerated and seem to be happening monthly or faster, any one release can feel incremental, yet looking back 6-8 months reveals massive improvements. This confuses the two major groups of AI commentators:

1) If you follow every release like a sport, then each individual model change feels small.

2) If you haven’t really followed AI and just use it occasionally, you don’t realize how much things have changed in 6 months because you don’t bother to use the latest models or try them on hard tasks.

OpenAI’s Sam Altman has recently reacted with irritation and defensiveness when confronted in podcasts and interviews about the massive contrast between OpenAI’s annual revenue, reportedly $13–20 billion, and its commitment to spend over $1.4 trillion on compute and data center contracts over the next several years.

In a recent interview, Altman dismissed understandable, recurring skepticism, sharply interjecting, “If you’re looking to sell your shares, I can help you find a buyer,” and expressed fatigue at having to continually reassure doubters that OpenAI isn’t on the brink of collapse because of these commitments. He repeatedly deflected requests for detailed financial explanations, attacked critics’ motivations, and essentially challenged them to “short” OpenAI stock if they’re so concerned about the risk, insisting that he remains confident in the company’s growth trajectory.

My latest article at LLRX, The Imminent AI Bubble Crash (and Why It Won’t Matter in the Long Run), explains why Altman has good reason to be so defensive about the extreme mismatch between current income and forward-looking debt.

Alex Kantrowitz (who is on his way to becoming my favorite podcaster) has much more.

AI founders seem to have a never-ending list of reasons — and hyperventilated pitch decks — explaining why their financial losses don’t matter. Some are hopeful, some are delusional, and some are just echoes of arguments that would-be billionaires floated in the dot-com era—updated with better graphic design.

A new article at LLRX.com, titled The Imminent AI Bubble Crash (and Why It Won’t Matter in the Long Run), explains some of the most common excuses, but space limitations there prohibited a complete list. Here are some of the other most common attempts to justify the bubble, along with brief rebuttals and a bit of good-natured skepticism:

1. “We’re prioritizing growth over profits.”

Rebuttal: Growth is great, but not when it’s the financial equivalent of gaining weight by eating subsidized ice cream. Users acquired through free money tend to vanish once the free money does.

2. “This is a land-grab moment.”

Rebuttal: That assumes there’s valuable land—and that someone will eventually pay rent. Plenty of dot-com veterans can point to the “land” they grabbed; it now serves as a digital ghost town with excellent parking.

3. “Monetization will come once we turn on premium features.”

Rebuttal: This is the startup version of “I’ll start my diet on Monday.” It sounds good, but the conversion rate from free users to paying customers often ends up in the single digits, as in one digit, and not a high one.

4. “Compute costs are high today, but they’re falling fast.”

Rebuttal: True, but usage is rising even faster. Sam Altman reports that ChatGPT loses more money on $200-a-month premium subscriptions than on $20 subscriptions.

5. “Our unit economics are improving.”

Rebuttal: Losing less money per user is not the win that founders think it is. It’s like a restaurant bragging that it now loses only $9 on a $10 burger.

6. “Every user interaction creates proprietary data.”

Rebuttal: Proprietary data is valuable—if your competitors don’t have nearly identical data and access to the same base models. Many so-called “data moats” turn out to be kiddie pools.

7. “We’re building defensibility through R&D.”

Rebuttal: R&D is important, but it’s not a moat if everyone else is also spending aggressively on R&D—especially when the competitors are named Google, Meta, or “OpenAI’s Entire Microsoft-Funded Budget.”

8. “We’re becoming the indispensable platform layer.”

Rebuttal: True platform companies get adopted because others depend on them—not because the founders wish really hard. With 50 nearly interchangeable AI layers, the market looks less like a platform race and more like speed dating.

9. “Our burn rate is intentional, and we have plenty of runway.”

Rebuttal: Runway only tells you the plane hasn’t crashed yet. If the business model doesn’t change, all that “runway” guarantees is a longer, more scenic descent.

10. “Regulation will wipe out weaker competitors.”

Rebuttal: Possibly. But regulation has a long history of hitting everyone—and sometimes hitting the big players hardest. Banking on regulators to save your business is a strategy that has rarely survived contact with regulators.

11. “All important AI companies went through this phase.”

Rebuttal: Survivorship bias is strong. For every success story, there’s a small graveyard of companies that burned cash with equal enthusiasm but did not leave memoirs.

12. “The total addressable market is enormous.”

Rebuttal: A huge TAM is comforting, but it doesn’t guarantee anyone’s survival. The ocean is enormous, too—and full of shipwrecks.

There’s lots of big talk about the best approach to improving access to justice for self-represented litigants. Dennis Kennedy is not big on theory and bloviation. His most recent article on the topic, which argues for cheap, practical, and effective approaches, is available for free republishing.

Dennis ran several prompts through Gemini, with good results. I decided to run his key prompt through ChatGPT 5.1 as well, with these results. The output looks pretty good.

Here’s Dennis’s original prompt:

I’m a family court judge dealing with high volumes of self-represented litigants in family law cases (e.g., custody/child support/divorce). My biggest challenge is the vast majority of SRLs do not file accurate and complete forms, generating large amounts of extra work for my staff and me and making each hearing and step in the process longer and more cumbersome for everyone than it should be. Assume that you are a world-class expert in the application of AI to assist courts with SRL issues. What are 5 practical, low-cost ways I could address this issue, particularly using technology or process improvements? Consider solutions that could be implemented within 3-6 months with a very limited budget.
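If you would rather run this kind of experiment programmatically than paste the prompt into each chat window, a minimal sketch using the official OpenAI Python client might look like this. The model names are illustrative placeholders (swap in whatever models you have access to), and reusing Dennis’s prompt verbatim is my assumption about how to replicate the comparison.

```python
# Minimal sketch: send the same prompt to two models and print both answers
# side by side. Assumes the official `openai` package and an OPENAI_API_KEY
# environment variable; the model names are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

SRL_PROMPT = """I'm a family court judge dealing with high volumes of \
self-represented litigants... (Dennis's full prompt, pasted verbatim)"""

def ask(model: str, prompt: str) -> str:
    """Send one prompt to one model and return the reply text."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

for model in ("gpt-4o", "gpt-4o-mini"):
    print(f"--- {model} ---")
    print(ask(model, SRL_PROMPT))
```

The same loop extends naturally to the multi-model cross-checking recommended earlier: run the prompt through clients from more than one vendor and compare the answers.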

In a profession defined by billable hours, finding time for professional development—let alone personal enrichment—can feel like an impossible task. I used to think podcasts were just another distraction. I was wrong.

Podcasts have not only helped me professionally but also added some joy to my life. They offer a rare opportunity for busy lawyers: the chance to learn and grow even while doing routine tasks. It’s learning that fits into your life, not the other way around.

The appeal of podcasts comes from their versatility. You can listen to advanced legal analysis during your commute, catch up on industry news while walking the dog, or pick up a new hobby while folding laundry. For me, they turned a daily exercise routine from a chore into a valued hour of learning and entertainment. Podcasts have even made battling traffic on I-395 and raking leaves almost tolerable.

My recent article at LLRX.com explains the why and how of listening to podcasts.

The headlines are alarming. Reports detail patients being harmed, misled, or outright failed by popular AI apps. Stories like these are emotionally charged, and my preliminary assessment of the seven high-profile cases recently documented by Information Age is that at least some may have genuine merit.

It’s easy to read about a chatbot giving harmful advice and immediately conclude that AI in this space is inherently dangerous.

However, to truly understand whether AI poses a threat, we must stop comparing it to a myth and start asking the question that is too rarely raised: compared to what?

The Flawed Comparison: AI vs. The Perfect Doctor

The common fallacy is to benchmark AI-driven results against a false model: the perfect, tireless, and unbiased human clinician.

The comparison, however, should be between AI and real-world doctors. This comparison is complex. Doctors are not perfect, and neither are AI apps. Both have profound strengths and undeniable risks.

Let’s look at the facts and the potential. An article published by the American Psychoanalytic Association concluded:

Utilizing machine learning algorithms which predict suicide attempts via analysis of patient self-report data and EHR data may significantly enhance a clinician’s ability to identify high-risk individuals who arrive to the ED. Such enhanced predictive value may offer potential for closer monitoring of high-risk patients and earlier intervention in order to prevent suicide attempts.

In high-stakes, time-sensitive environments like the Emergency Department (ED), AI is showing a concrete ability to flag risks that humans might miss.
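To make the quoted finding concrete, here is a deliberately toy sketch of the pattern the study describes: a classifier over structured, EHR-style features that flags high-risk arrivals for human review. Everything in it (the features, the synthetic data, the threshold) is invented for illustration and bears no resemblance to a validated clinical model.

```python
# Toy illustration of ML-based risk flagging in an ED triage workflow.
# All features, data, and thresholds are synthetic and invented for
# illustration; this is nothing like a validated clinical model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic features per patient: [prior_attempts, recent_ed_visits, screening_score]
X = rng.integers(0, 10, size=(500, 3)).astype(float)
# Synthetic label, loosely correlated with the features.
y = (X @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 1, 500) > 4.5).astype(int)

model = LogisticRegression().fit(X, y)

# The point is the workflow, not the prediction: the model flags,
# and a human clinician makes the actual judgment call.
incoming = np.array([[2.0, 3.0, 8.0]])
risk = model.predict_proba(incoming)[0, 1]
if risk > 0.8:
    print(f"FLAG for clinician review (model risk score {risk:.2f})")
else:
    print(f"No automatic flag (model risk score {risk:.2f})")
```

The division of labor matters more than the model: the classifier supplies the tireless screening, and the flag routes the patient to a clinician for contextual judgment.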

The Real Question: Human vs. Machine Failure

When it comes to mental-health safety, no system—human or artificial—is inherently safe.

  • Human clinicians miss warning signs every day because they are, well, human: tired, biased, overwhelmed, or simply limited by the caseload.
  • AI systems fail less often, but when they do, their errors can be jarringly and bizarrely disconnected from common sense or empathetic understanding.

Critically, each system compensates for the other’s weaknesses. The AI provides tireless data analysis; the human provides contextual judgment and empathy.

So the right question isn’t “Can AI harm patients?” (The answer is clearly yes, just as humans can.)

It’s “Compared to what level of harm?”

Imperfect but Valuable Supplements

Judged fairly, today’s AI tools look less like an existential threat and more like imperfect but indispensable supplements to an overburdened healthcare system.

The path forward is not prohibition, but integration. With transparent design, proper regulation, and rigorous ethical oversight, these tools could help make mental-health care not just more accessible, but arguably safer than it has ever been.

The debate shouldn’t be about replacing doctors, but about empowering them.

Much more on the ethics, regulation, and specific case studies in our upcoming posts. Be sure to subscribe so you don’t miss them!

Multiple reputable sources are reporting that OpenAI CEO Sam Altman recently told employees—via a leaked internal memo—that Google’s newest advances in AI could “create some temporary economic headwinds” for OpenAI, even as he tried to reassure staff that the company is “catching up fast” and remains well-positioned for the long run.

Altman’s message reportedly acknowledged that the external environment may be “difficult for some time.” His framing was classic morale management: describe the turbulence as short-term, emphasize confidence in the engineering teams, and signal that better days lie ahead.

Notably, OpenAI has not publicly commented on the memo. But that doesn’t make the reporting less credible. Altman is many things, but politically naïve is not one of them. He surely knew that such a memo—at this moment, with competitive pressure rising—would leak with thermonuclear speed.

My read is that the memo served two audiences. Yes, it was written to employees, but it was also written for investors: a controlled admission that the winds have shifted, coupled with a promise that OpenAI will ride out the storm and reassert its competitive edge.

I’m not convinced this optimism is warranted. The newest release of Google’s Gemini app demonstrates that generative AI is rapidly becoming commoditized. If these tools converge toward similar capabilities, the market begins to resemble cloud storage or web hosting: a handful of powerful players offering roughly interchangeable products, leaving very little room for profit.

The race is no longer about who can wow the public first; it’s about who can operate at scale, sustainably, and without burning billions. That is a very different contest.

Buckle up. Things are about to get fun.