The Ambition Effect

The prevailing narrative surrounding Generative AI in the legal sector is one of unprecedented efficiency. The sales pitch is seductive in its simplicity: automate routine drafting and research, compress hours into minutes, and liberate attorneys for higher-value strategic thinking.

Yet, as the initial wave of adoption settles, a distinct counter-narrative is emerging from inside law firms. Many practitioners are discovering that integrating AI “powertools” into their workflow is not resulting in earlier departures from the office. In fact, they are often spending more time on a given project, not less.

The immediate, practical assumption is that the technology is simply immature. As documented recently by outlets like Quartz, the “hallucination” problem requires significant human intervention. The output is fast, but cleaning up the errors—fact-checking citations and smoothing out robotic prose—incurs a heavy “cleanup tax.” While experts like Carolyn Elefant have rightly pointed out strategies to mitigate this friction, the reality remains that current AI models are often time sinks disguised as accelerators.

However, if debugging were the primary issue, efficiency would presumably win out as the models improve. Instead, a more nuanced, psychological shift is occurring, one that suggests AI may never truly reduce total working hours for top-tier professionals.

Call it the “Ambition Effect.”

When you equip a highly motivated professional with a tool that dramatically lowers the barrier to execution, their horizon shifts. They do not look at the tool and think, “I can now do my usual standard of work in half the time.” They think, “I can now achieve a vastly superior standard of work in the same amount of time.”

In a pre-AI world, “sufficient” research or a “competent” first draft was often dictated by the sheer constraints of time and budget. Now, with an AI powertool, those constraints have loosened. Suddenly, it is feasible to explore three obscure alternative legal theories instead of just the primary one. It is possible to run deeper semantic analyses on opposing counsel’s previous filings, or to refine the rhetorical structure of a brief until it is not just accurate, but compelling.

The goalposts haven’t been moved by clients or courts; the attorneys themselves have voluntarily pushed them further back.

The irony of this “labor-saving” revolution is that it is fueling an escalation in quality. The end product delivered to the client is undeniably better, deeper, and more polished. But for the lawyers themselves, these tools are not a shortcut to the weekend. They are simply an engine that allows them to drive much further down the same demanding road.

Obtaining good healthcare is not always easy. Not every doctor is a Marcus Welby clone. And as the old joke goes, 50% of doctors graduated in the lower half of their classes, right? Burnout and pressure to meet daily patient-volume quotas mean many patients don’t receive the attention they deserve and expect.

Lawyers who want to stay healthy should be proactive. Increasingly, the first place I go for health advice is an AI app. Don’t let fear of mistakes prevent you from supplementing the advice you get from your doctor by seeking help from your favorite AI app. If you are smart enough to pass the bar exam, you should be able to recognize whether advice is “self-authenticating,” so that you can usually distinguish good advice from bad. Seek guidance from other sources if you are unsure.

Harvard Medical School’s collection of products aimed at lay audiences is my go-to source for supplemental healthcare advice. These include the Harvard Health Annual book, newsletters, an online resource index on common conditions, an online collection of articles on various topics, and even a blog.

I find their Special Health Reports particularly valuable. They are available in print ($20) or as ebooks ($18). Are they worth it? I have collected at least 10, including Knee and Hip Pain, Functional Fitness, and Pain Relief Without Drugs or Surgery. Their Pickleball report is on my reading list.

Pro Tip: If you are a bargain hunter who prefers reading online publications to print media, Harvard Medical School’s digital resource bundles are a bargain.

The AI landscape is changing so rapidly that even top experts struggle to keep up.

Leading AI expert Ethan Mollick is up for the challenge. The author of the best-selling book Co-Intelligence: Living and Working with AI regularly updates his assessments of various AI apps. As I write, the most recent version (late 2025) is An Opinionated Guide to Using AI Right Now.

The April 2025 edition of the Kennedy-Mighell Report doesn’t cover the absolute most recent developments, including significant improvements to Gemini, but its lawyer-specific approach has its benefits.

Tip of the day (and maybe the year):

Always consult more than one AI app when dealing with critical issues.

Monogamy is not the best strategy for AI. Multiple AI perspectives help with high-stakes questions, unsettled law, or anything involving tax regulations (which remain confusing even to the IRS). When two models agree, you gain confidence. When they disagree, you gain a warning sign. Either way, you avoid the embarrassment of citing a hallucinated case that appears in no reporter known to humankind.

Checking multiple AI systems isn’t about indecision. It’s about competence—and mildly hedging against robots who occasionally sound certain about things that simply aren’t true. A little redundancy never hurt anyone, especially lawyers.
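For readers who like to see the habit spelled out, here is a deliberately minimal sketch of the cross-checking idea in Python. Everything in it is hypothetical: the lambdas stand in for whatever AI apps or APIs you actually use, and the “agreement” test is a crude string comparison, nothing like real verification of a legal citation.

```python
def cross_check(question, models):
    """Pose the same question to several models and report agreement.

    `models` maps a model name to a callable returning that model's
    answer as a string. Real integrations would call vendor APIs here.
    """
    answers = {name: ask(question) for name, ask in models.items()}
    # Crude agreement test: identical answers after normalization.
    distinct = {a.strip().lower() for a in answers.values()}
    agreed = len(distinct) == 1
    return answers, agreed

# Hypothetical stand-ins for two different AI apps.
models = {
    "model_a": lambda q: "Smith v. Jones, 123 F.3d 456",
    "model_b": lambda q: "No controlling case found",
}
answers, agreed = cross_check("Controlling case on issue X?", models)
if not agreed:
    print("Warning: models disagree; verify before citing.")
```

The point of the sketch is the workflow, not the code: agreement adds confidence, disagreement is your cue to hit the books before anything lands in a brief.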

It’s scary out there. Ethan Mollick is on the mark:

Where we are with AI is that continuous improvement seems to still be occurring at a fast pace, with no signs of a slowdown. However, since major AI releases have accelerated and seem to be happening monthly or faster, any one release can feel incremental, yet looking back 6-8 months reveals massive improvements. This confuses the two major groups of AI commentators:

1) If you follow every release like a sport, then each individual model change feels small.

2) If you haven’t really followed AI and just use it occasionally, you don’t realize how much things have changed in 6 months because you don’t bother to use the latest models or try them on hard tasks.

OpenAI’s Sam Altman has recently reacted with irritation and defensiveness when confronted in podcast and interview settings about the massive contrast between OpenAI’s annual revenue, reportedly between $13 billion and $20 billion, and its commitment to spend over $1.4 trillion on compute and data center contracts over the next several years.

In a recent interview, Altman dismissed understandable recurring skepticism, sharply interjecting, “If you’re looking to sell your shares, I can help you find a buyer,” and expressing fatigue with having to continually reassure doubters that OpenAI isn’t on the brink of collapse because of these commitments. He repeatedly deflected requests for detailed financial explanations, attacking critics’ motivations and essentially challenging them to “short” OpenAI stock if they’re so concerned about the risk, insisting on his confidence in the company’s growth trajectory.

My latest article at LLRX, The Imminent AI Bubble Crash (And Why It Won’t Matter in the Long Run), explains why Altman has good reason to be so defensive about the extreme mismatch between current income and forward-looking debt.

Alex Kantrowitz (who is on his way to becoming my favorite podcaster) has much more.

AI founders seem to have a never-ending list of reasons — and hyperventilated pitch decks — explaining why their financial losses don’t matter. Some are hopeful, some are delusional, and some are just echoes of arguments that would-be billionaires floated in the dot-com era—updated with better graphic design.

My article at LLRX.com, The Imminent AI Bubble Crash (and Why It Won’t Matter in the Long Run), explains some of the most common excuses, but space limitations there prohibited a complete list. Here are some of the other most common attempts to justify the bubble, along with brief rebuttals and a bit of good-natured skepticism:

1. “We’re prioritizing growth over profits.”

Rebuttal: Growth is great, but not when it’s the financial equivalent of gaining weight by eating subsidized ice cream. Users acquired through free money tend to vanish once the free money does.

2. “This is a land-grab moment.”

Rebuttal: That assumes there’s valuable land—and that someone will eventually pay rent. Plenty of dot-com veterans can point to the “land” they grabbed; it now serves as a digital ghost town with excellent parking.

3. “Monetization will come once we turn on premium features.”

Rebuttal: This is the startup version of “I’ll start my diet on Monday.” It sounds good, but the conversion rate from free users to paying customers often ends up in the single digits, as in one digit, and not a high one.

4. “Compute costs are high today, but they’re falling fast.”

Rebuttal: True, but usage is rising even faster. Sam Altman reports that ChatGPT loses more money on $200-a-month premium subscriptions than on $20 subscriptions.

5. “Our unit economics are improving.”

Rebuttal: Losing less money per user is not the win that founders think it is. It’s like a restaurant bragging that it now loses only $9 on a $10 burger.

6. “Every user interaction creates proprietary data.”

Rebuttal: Proprietary data is valuable—if your competitors don’t have nearly identical data and access to the same base models. Many so-called “data moats” turn out to be kiddie pools.

7. “We’re building defensibility through R&D.”

Rebuttal: R&D is important, but it’s not a moat if everyone else is also spending aggressively on R&D—especially when the competitors are named Google, Meta, or “OpenAI’s Entire Microsoft-Funded Budget.”

8. “We’re becoming the indispensable platform layer.”

Rebuttal: True platform companies get adopted because others depend on them—not because the founders wish really hard. With 50 nearly interchangeable AI layers, the market looks less like a platform race and more like speed dating.

9. “Our burn rate is intentional, and we have plenty of runway.”

Rebuttal: Runway only tells you the plane hasn’t crashed yet. If the business model doesn’t change, all that “runway” guarantees is a longer, more scenic descent.

10. “Regulation will wipe out weaker competitors.”

Rebuttal: Possibly. But regulation has a long history of hitting everyone, and sometimes hitting the big players hardest. Banking on regulators to save your business is a strategy that has rarely survived contact with regulators.

11. “All important AI companies went through this phase.”

Rebuttal: Survivorship bias is strong. For every success story, there’s a small graveyard of companies that burned cash with equal enthusiasm but did not leave memoirs.

12. “The total addressable market is enormous.”

Rebuttal: A huge TAM is comforting, but it doesn’t guarantee anyone’s survival. The ocean is enormous, too—and full of shipwrecks.

There’s lots of big talk about the best approach to improving access to justice for self-represented litigants. Dennis Kennedy is not big on theory and bloviation. His most recent article on this topic argues in favor of cheap, practical, and effective approaches, and it is available for free republishing.

Dennis ran several prompts through Gemini, with good results. I decided to run his key prompt (below) through ChatGPT 5.1 as well, with these results, which also look pretty good.

Here’s Dennis’s original prompt:

I’m a family court judge dealing with high volumes of self-represented litigants in family law cases (e.g., custody/child support/divorce). My biggest challenge is the vast majority of SRLs do not file accurate and complete forms, generating large amounts of extra work for my staff and me and making each hearing and step in the process longer and more cumbersome for everyone than it should be. Assume that you are a world-class expert in the application of AI to assist courts with SRL issues. What are 5 practical, low-cost ways I could address this issue, particularly using technology or process improvements? Consider solutions that could be implemented within 3-6 months with a very limited budget.

In a profession defined by billable hours, finding time for professional development—let alone personal enrichment—can feel like an impossible task. I used to think podcasts were just another distraction. I was wrong.

Podcasts have not only helped me professionally but also added some joy to my life. They offer a rare opportunity for busy lawyers: the chance to learn and grow even while doing routine tasks. It’s learning that fits into your life, not the other way around.

The appeal of podcasts comes from their versatility. You can listen to advanced legal analysis during your commute, catch up on industry news while walking the dog, or pick up a new hobby while folding laundry. For me, they turned a daily exercise routine from a chore into a valued hour of learning and entertainment. Podcasts have even made battling traffic on I-395 and raking leaves almost tolerable.

My recent article at LLRX.com explains the why and how of listening to podcasts.

The headlines are alarming. Reports detail patients being harmed, misled, or outright failed by popular AI apps. Stories like these are emotionally charged, and my preliminary assessment of the seven high-profile cases recently documented by Information Age is that at least some may have genuine merit.

It’s easy to read about a chatbot giving harmful advice and immediately conclude that AI in this space is inherently dangerous.

However, to truly understand whether AI poses a threat, we must stop comparing it to a myth and start asking the comparative question that is too rarely raised: What is it compared to?

The Flawed Comparison: AI vs. The Perfect Doctor

The common fallacy is to benchmark AI-driven results against a false model: the perfect, tireless, and unbiased human clinician.

The comparison, however, should be between AI and real-world doctors. This comparison is complex. Doctors are not perfect, and neither are AI apps. Both have profound strengths and undeniable risks.

Let’s look at the facts and the potential. An article published by the American Psychoanalytic Association concluded:

Utilizing machine learning algorithms which predict suicide attempts via analysis of patient self-report data and EHR data may significantly enhance a clinician’s ability to identify high-risk individuals who arrive to the ED. Such enhanced predictive value may offer potential for closer monitoring of high-risk patients and earlier intervention in order to prevent suicide attempts.

In high-stakes, time-sensitive environments like the Emergency Department (ED), AI is showing a concrete ability to flag risks that humans might miss.
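To make the flagging idea concrete, here is a deliberately toy sketch of what such a system does at its core: score a patient record and flag it when the score crosses a threshold. The feature names, weights, and cutoff below are invented for illustration; real systems learn their weights from large EHR datasets and are clinically validated, not hand-coded.

```python
# Toy risk flagger. These weights and the threshold are illustrative
# inventions, not values from any real clinical model.
WEIGHTS = {
    "prior_attempts": 3.0,
    "recent_ed_visits": 1.5,
    "reported_ideation": 2.5,
}
THRESHOLD = 4.0  # invented cutoff for "flag for closer monitoring"

def risk_score(record):
    """Weighted sum over the features present in a patient record."""
    return sum(WEIGHTS[k] * record.get(k, 0) for k in WEIGHTS)

def flag_high_risk(record):
    """True if the score crosses the (illustrative) threshold."""
    return risk_score(record) >= THRESHOLD

patient = {"prior_attempts": 1, "recent_ed_visits": 1, "reported_ideation": 0}
print(flag_high_risk(patient))  # 3.0 + 1.5 = 4.5, so this record is flagged
```

Even this cartoon version shows why the ED use case is attractive: the scoring never gets tired, and every record gets the same scrutiny. What it cannot do is supply the contextual judgment that decides what the flag actually means.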

The Real Question: Human vs. Machine Failure

When it comes to mental-health safety, no system—human or artificial—is inherently safe.

  • Human clinicians miss warning signs every day because they are, well, human: tired, biased, overwhelmed, or simply limited by the caseload.
  • AI systems may fail less often, but when they do, their errors can be jarringly and bizarrely disconnected from common sense or empathetic understanding.

Critically, each system compensates for the other’s weaknesses. The AI provides tireless data analysis; the human provides contextual judgment and empathy.

So the right question isn’t “Can AI harm patients?” (The answer is clearly yes, just as humans can.)

It’s “Compared to what level of harm?”

Imperfect but Valuable Supplements

Judged fairly, today’s AI tools look less like an existential threat and more like imperfect but indispensable supplements to an overburdened healthcare system.

The path forward is not prohibition, but integration. With transparent design, proper regulation, and rigorous ethical oversight, these tools could help make mental-health care not just more accessible, but arguably safer than it has ever been.

The debate shouldn’t be about replacing the doctor, but about empowering them.

Much more on the ethics, regulation, and specific case studies in our upcoming posts. Be sure to subscribe so you don’t miss them!