AI founders seem to have a never-ending list of reasons — and hyperventilated pitch decks — explaining why their financial losses don’t matter. Some are hopeful, some are delusional, and some are just echoes of arguments that would-be billionaires floated in the dot-com era—updated with better graphic design.

A new article at LLRX.com, entitled The Imminent AI Bubble Crash (and Why It Won’t Matter in the Long Run), explains some of the most common excuses, but space limitations there prohibited a complete list. Here are some of the other most common attempts to justify the bubble, along with brief rebuttals and a bit of good-natured skepticism:

1. “We’re prioritizing growth over profits.”

Rebuttal: Growth is great, but not when it’s the financial equivalent of gaining weight by eating subsidized ice cream. Users acquired through free money tend to vanish once the free money does.

2. “This is a land-grab moment.”

Rebuttal: That assumes there’s valuable land—and that someone will eventually pay rent. Plenty of dot-com veterans can point to the “land” they grabbed; it now serves as a digital ghost town with excellent parking.

3. “Monetization will come once we turn on premium features.”

Rebuttal: This is the startup version of “I’ll start my diet on Monday.” It sounds good, but the conversion rate from free users to paying customers often ends up in the single digits, as in one digit, and not a high one.

4. “Compute costs are high today, but they’re falling fast.”

Rebuttal: True, but usage is rising even faster. Sam Altman reports that ChatGPT loses more money on $200-a-month premium subscriptions than on $20 subscriptions.

5. “Our unit economics are improving.”

Rebuttal: Losing less money per user is not the win that founders think it is. It’s like a restaurant bragging that it now loses only $9 on a $10 burger.

6. “Every user interaction creates proprietary data.”

Rebuttal: Proprietary data is valuable—if your competitors don’t have nearly identical data and access to the same base models. Many so-called “data moats” turn out to be kiddie pools.

7. “We’re building defensibility through R&D.”

Rebuttal: R&D is important, but it’s not a moat if everyone else is also spending aggressively on R&D—especially when the competitors are named Google, Meta, or “OpenAI’s Entire Microsoft-Funded Budget.”

8. “We’re becoming the indispensable platform layer.”

Rebuttal: True platform companies get adopted because others depend on them—not because the founders wish really hard. With 50 nearly interchangeable AI layers, the market looks less like a platform race and more like speed dating.

9. “Our burn rate is intentional, and we have plenty of runway.”

Rebuttal: Runway only tells you the plane hasn’t crashed yet. If the business model doesn’t change, all that “runway” guarantees is a longer, more scenic descent.

10. “Regulation will wipe out weaker competitors.”

Rebuttal: Possibly. But regulation has a long history of hitting everyone equally—and sometimes hitting the big players harder. Banking on regulators to save your business is a strategy that has rarely survived contact with regulators.

11. “All important AI companies went through this phase.”

Rebuttal: Survivorship bias is strong. For every success story, there’s a small graveyard of companies that burned cash with equal enthusiasm but did not leave memoirs.

12. “The total addressable market is enormous.”

Rebuttal: A huge TAM is comforting, but it doesn’t guarantee anyone’s survival. The ocean is enormous, too—and full of shipwrecks.

There’s lots of big talk about the best approach to improving access to justice for self-represented litigants. Dennis Kennedy is not big on theory and bloviation; he favors cheap, practical, and effective approaches. His most recent article on this topic is available for free republishing.

Dennis ran several prompts through Gemini, with good results. I decided to run his key prompt (below) through ChatGPT 5.1 as well, with similarly good results.

Here’s Dennis’s original prompt:

I’m a family court judge dealing with high volumes of self-represented litigants in family law cases (e.g., custody/child support/divorce). My biggest challenge is the vast majority of SRLs do not file accurate and complete forms, generating large amounts of extra work for my staff and me and making each hearing and step in the process longer and more cumbersome for everyone than it should be. Assume that you are a world-class expert in the application of AI to assist courts with SRL issues. What are 5 practical, low-cost ways I could address this issue, particularly using technology or process improvements? Consider solutions that could be implemented within 3-6 months with a very limited budget.

In a profession defined by billable hours, finding time for professional development—let alone personal enrichment—can feel like an impossible task. I used to think podcasts were just another distraction. I was wrong.

Podcasts have not only helped me professionally but also added some joy to my life. They offer a rare opportunity for busy lawyers: the chance to learn and grow even while doing routine tasks. It’s learning that fits into your life, not the other way around.

The appeal of podcasts comes from their versatility. You can listen to advanced legal analysis during your commute, catch up on industry news while walking the dog, or pick up a new hobby while folding laundry. For me, they turned a daily exercise routine from a chore into a valued hour of learning and entertainment. Podcasts have even made battling traffic on I-395 and raking leaves almost tolerable.

My recent article at LLRX.com explains the why and how of listening to podcasts.

The headlines are alarming. Reports detail patients being harmed, misled, or outright failed by popular AI apps. Stories like these are emotionally charged, and my preliminary assessment of the seven high-profile cases recently documented by Information Age is that at least some may have genuine merit.

It’s easy to read about a chatbot giving harmful advice and immediately conclude that AI in this space is inherently dangerous.

However, to truly understand whether AI poses a threat, we must stop comparing it to a myth and start asking a question that is too rarely raised: Compared to what?

The Flawed Comparison: AI vs. The Perfect Doctor

The common fallacy is to benchmark AI-driven results against a false model: the perfect, tireless, and unbiased human clinician.

The comparison, however, should be between AI and real-world doctors. This comparison is complex. Doctors are not perfect, and neither are AI apps. Both have profound strengths and undeniable risks.

Let’s look at the facts and the potential. An article published by the American Psychoanalytic Association concluded:

Utilizing machine learning algorithms which predict suicide attempts via analysis of patient self-report data and EHR data may significantly enhance a clinician’s ability to identify high-risk individuals who arrive to the ED. Such enhanced predictive value may offer potential for closer monitoring of high-risk patients and earlier intervention in order to prevent suicide attempts.

In high-stakes, time-sensitive environments like the Emergency Department (ED), AI is showing a concrete ability to flag risks that humans might miss.

The Real Question: Human vs. Machine Failure

When it comes to mental-health safety, no system—human or artificial—is inherently safe.

  • Human clinicians miss warning signs every day because they are, well, human: tired, biased, overwhelmed, or simply limited by the caseload.
  • AI systems may fail less often, but when they do, their errors can be jarringly and bizarrely disconnected from common sense or empathetic understanding.

Critically, each system compensates for the other’s weaknesses. The AI provides tireless data analysis; the human provides contextual judgment and empathy.

So the right question isn’t “Can AI harm patients?” (The answer is clearly yes, just as humans can.)

It’s “Compared to what level of harm?”

Imperfect but Valuable Supplements

Judged fairly, today’s AI tools look less like an existential threat and more like imperfect but indispensable supplements to an overburdened healthcare system.

The path forward is not prohibition, but integration. With transparent design, proper regulation, and rigorous ethical oversight, these tools could help make mental-health care not just more accessible, but arguably safer than it has ever been.

The debate shouldn’t be about replacing the doctor, but about empowering them.

Much more on the ethics, regulation, and specific case studies in our upcoming posts. Be sure to subscribe so you don’t miss them!

Multiple reputable sources are reporting that OpenAI CEO Sam Altman recently told employees—via a leaked internal memo—that Google’s newest advances in AI could “create some temporary economic headwinds” for OpenAI, even as he tried to reassure staff that the company is “catching up fast” and remains well-positioned for the long run.

Altman’s message reportedly acknowledged that the external environment may be “difficult for some time.” His framing was classic morale management: describe the turbulence as short-term, emphasize confidence in the engineering teams, and signal that better days lie ahead.

Notably, OpenAI has not publicly commented on the memo. But that doesn’t make the reporting less credible. Altman is many things, but politically naïve is not one of them. He surely knew that such a memo—at this moment, with competitive pressure rising—would leak with thermonuclear speed.

My read is that the memo served two audiences. Yes, it was written to employees, but it was also written for investors: a controlled admission that the winds have shifted, coupled with a promise that OpenAI will ride out the storm and reassert its competitive edge.

I’m not convinced this optimism is warranted. The newest release of Google’s Gemini app demonstrates that generative AI is rapidly becoming commoditized. If these tools converge toward similar capabilities, the market begins to resemble cloud storage or web hosting: a handful of powerful players offering roughly interchangeable products, leaving very little room for profit.

The race is no longer about who can wow the public first; it’s about who can operate at scale, sustainably, and without burning billions. That is a very different contest.

Buckle up. Things are about to get fun.

The relentless, frenetic excitement surrounding Artificial Intelligence feels familiar. For anyone who remembers the turn of the millennium, it’s a clear echo of the dot-com bubble, a time of speculation about a new technology completely detached from business fundamentals.

Back then, optimism for an internet-driven economy sent stock prices for companies like AOL to astronomical levels. The problem was that few of these companies were actually profitable. When the bubble burst, the losses were staggering: AOL Time Warner posted a $98.7 billion annual loss in 2002, the largest in U.S. corporate history at the time, largely from writing down the value of its internet division.

Today, we see a parallel. The market is flooded with AI hype, yet very few companies—outside of key infrastructure players like chipmaker Nvidia—are turning a profit. This is unsustainable. These unprofitable AI app-developing “miners” are the biggest customers for Nvidia’s “shovels.” If they run out of cash and collapse, the demand will inevitably falter.

This doesn’t mean the AI revolution isn’t real. It is. But, as with the internet, the first wave is rarely the one that lasts. The true, world-changing success of the internet didn’t come from the bubble-era giants; it came from the companies built after the crash, like Google and Facebook.

Bill Gates articulated this exact point in a recent interview, directly comparing the current “frenzy” to the internet bubble. He clarified that while AI is “profound”—the “biggest technical thing ever” in his lifetime—it will follow a similar pattern. Just as with the internet, many companies will fail, and a “ton of these investments… will be dead ends.”

History suggests we are in the speculative phase. The real, lasting AI revolution will likely be built on the ashes of this first, over-hyped wave.

Platforms like Substack and Medium have made publishing easier than ever—but if you rely solely on them, you’re renting space on someone else’s land. Owning your own domain gives you independence, credibility, long-term control, and many other benefits:

1. Professional Credibility and Branding

  • A custom domain name projects authority and legitimacy.
  • Think about it—who would you trust more: smithlaw.medium.com or smithlaw.com?

Pro Tip: Journalists, lawyers, consultants, and authors who use custom domains are taken more seriously than those who don’t.

2. Long-Term Ownership

  • Your domain is yours, regardless of what publishing platform you use.
  • If Medium or Substack shuts down, changes its rules, loses popularity, or becomes less desirable for any reason, you can move your content elsewhere without changing your web address.
  • Imagine losing access to your writing archive because a platform shut down or changed its terms overnight.

Pro Tip: Think of your domain as a digital land deed. You control the address; the platform is just a tenant.

3. Better SEO Control

  • Search engines (like Google) reward content that lives under a consistent domain.
  • With your own domain, all search authority and backlinks accrue to your brand—not to Medium, Substack, or another host. When your posts rank on Google, you want the traffic going to your site—not to someone else’s domain.

Pro Tip: If you ever switch platforms, keeping your domain means you don’t lose your Google rankings.

4. More Customization Options

  • With a custom domain, you can eventually build out a multi-part web presence:
    • blog.yoursite.com for articles
    • docs.yoursite.com for whitepapers
    • www.yoursite.com for your main site
  • You’re not stuck within the limits of a publishing platform. As your content grows, your site can evolve into a knowledge hub—not just a blog.

5. Email Address Integration

  • You can set up custom email addresses like you@yourdomain.com, which boosts trust in business and networking contexts.
  • An email from jane@lawandstrategy.com signals professionalism; an email from janestrategy@gmail.com does not.

Pro Tip: Readers are more likely to open emails from a custom domain than from a generic address like gmail.com or outlook.com.

6. Avoids Platform Lock-In

  • If you build your brand around yourname.medium.com, you are dependent on Medium’s policies.
  • If they add fees, start injecting ads, or ban content types, your options are limited.

Pro Tip: A custom domain ensures you—not the platform—own your audience and content location.

Convinced of the Need for a Domain Name? Here’s How to Get Started

  1. Choose a short, memorable domain (ideally under 15 characters).
  2. Register it through a reputable provider such as Namecheap or Hover (Google Domains has since been discontinued and folded into Squarespace).
  3. Connect it to your Substack, Medium, or WordPress site; most platforms walk you through the required DNS settings.
  4. Use it consistently in your email signature, social media bios, and marketing materials.

Pro Tip: Register common variants or misspellings of your domain to protect your brand.
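To make the "connect your domain" step less abstract, here is a hypothetical sketch of what the DNS records typically look like at your registrar. The host names and targets below are placeholders, not real values—always copy the actual entries from your platform’s custom-domain settings page.

```
; Hypothetical zone-file entries -- copy the real targets from your
; platform's "custom domain" settings page, not from this sketch.
www   IN  CNAME  your-platform-target.example.com.   ; points www.yoursite.com at the platform
@     IN  A      203.0.113.10                        ; apex record, only if the platform supplies an IP
blog  IN  CNAME  your-platform-target.example.com.   ; optional subdomain, e.g. blog.yoursite.com
```

Most registrars expose these same fields through a simple web form, so you rarely need to edit a raw zone file—but knowing what the records mean makes the setup far less mysterious.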

Final Thoughts

Platforms come and go—but your domain is digital real estate that stays yours for as long as you renew it. Professionals who invest in their own address today will build stronger brands and avoid painful migrations tomorrow.

Professional branding matters. If you haven’t claimed your domain yet, do it today—it’s the cheapest credibility boost you’ll ever buy.

We’re all busy, right? I get it. It’s all about knowing what to prioritize.

If you want (or need) thoughtful analysis of AI developments, MIT Sloan Management Review is one of the best places to invest a little time. A case in point: their October 7 article, “Cut Through GenAI Confusion: Eight Definitive Reads.”

Here’s a summary:

Despite its ubiquity, generative AI is still young. It’s no surprise, then, that many leaders are struggling with important questions about using the tools efficiently, responsibly, and skeptically. The questions are significant:

  • Can we trust it?
  • What should our strategy be in deploying it and scaling success?
  • Who should be using it?
  • How will we measure its ROI?
  • When should we use other types of AI, like machine learning, instead?

MIT Sloan Management Review authors on this topic — including academics, researchers, and practitioners — are reporting from the front lines about the best GenAI questions to ask and how to think about answering them.

This is exactly the kind of practical, expert-driven insight the Review delivers consistently.

If you’re looking to stay ahead, it’s easy to get access. As a starting point, you can become a “member” for free, which puts you on their mailing list and lets you unlock a couple of free articles a month.

When you’re ready for full access, a one-year digital subscription is $69. It’s a small investment for staying informed on one of the most significant topics in business today.

Why Expensive Designs Often Fail and How Smart Lawyers Can Fix It

In law, we’re trained to believe that you get what you pay for. But when it comes to law firm websites, the opposite is often true. Some of the most expensive sites perform the worst—especially when they rely on proprietary systems that lock you into a single vendor. Conrad Saam documents one instance in which moving to a bespoke approach resulted in a 44% drop in website traffic.

Instead of paying for “elite” proprietary platforms that only your designer understands, most firms are far better off with WordPress, the open-source powerhouse that quietly runs over 40% of the internet—including The New York Times and countless major law firms.

WordPress isn’t just cheap—it’s fast, flexible, secure, and supported by a global developer community that keeps improving it daily. Contrast that with boutique digital agencies that charge $10,000 to $50,000 (or more) for custom websites that may look polished but often:

  • Require you to contact the agency for every minor update
  • Use obscure or proprietary code that other developers won’t touch
  • Ignore essential SEO principles

What Lawyers Should Prioritize Instead

1. SEO Beats Eye Candy

A beautiful website is meaningless if no one sees it. SEO (Search Engine Optimization) drives traffic—and most clients begin their search for a lawyer on Google.

You don’t need a $25,000 design to rank on search engines. You need:

  • Fast load times
  • Clear and keyword-focused headlines
  • Useful, original content, published regularly
  • Mobile optimization

Tools like Yoast SEO, Rank Math, or Google Search Console integrate easily into WordPress and help you improve visibility without hiring an SEO “guru.”

Pro tip: Spend your budget on a skilled legal copywriter, not a homepage video that slows your site.


2. Control, Don’t Depend

Custom “bespoke” sites often make you dependent on a single agency for every small change.

Pro tip: Investing $18 in a WordPress guidebook can save thousands of dollars in consulting fees.


3. Security: Strength in Numbers

WordPress’s open ecosystem means continuous updates and peer-reviewed plugins.

Pro tip: Enable automatic minor updates and weekly cloud backups to minimize risk.


4. Clients Want Speed, Clarity, and Confidence

Prospective clients care less about animations and more about answers:

  • What kind of law do you practice?
  • Are you any good?
  • How can I contact you?

Pro tip: A site that loads in under 3 seconds is golden.
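The 3-second rule of thumb is easy to sanity-check yourself. A minimal Python sketch that times how long a page’s raw HTML takes to download (this ignores images, scripts, and rendering, so it understates the full load time a visitor experiences):

```python
import time
import urllib.request

def measure_load_time(url: str) -> float:
    """Return the seconds taken to download a page's raw HTML."""
    start = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        resp.read()  # pull the full response body
    return time.monotonic() - start

# Example (substitute any URL you control):
# print(f"{measure_load_time('https://example.com'):.2f}s")
```

Free tools like Google PageSpeed Insights give a far more complete picture, including render time and mobile performance; this is just a quick first check.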


Final Thought: Make Smart, Not Showy, Choices

There’s nothing wrong with investing in marketing—but make sure your investment earns attention, not just admiration.

ABA Formal Opinion 512 provides welcome guidance on ethical obligations for lawyers, demanding competence in understanding AI’s benefits and risks (Model Rule 1.1), diligence in protecting client confidentiality (Model Rule 1.6), clarity in client communications (Model Rule 1.4), candor toward tribunals (Model Rule 3.3), effective supervision of AI use (Model Rules 5.1, 5.3), and reasonableness in fees (Model Rule 1.5).

While not an ethics compliance manual, Richard Susskind’s new book How to Think About AI: A Guide for the Perplexed offers precisely the conceptual tools — the “mental models” — needed to navigate these practical obligations. For example:

  • Susskind’s discussions on AI capabilities, limitations, and the difficulty in explaining how some systems work (Chapters 1, 2 and 5) directly inform the duty of competence under Rule 1.1, which requires lawyers to understand the benefits and risks of associated technology. 
  • His structured analysis of AI risks (Chapters 8 and 9) provides a framework for assessing potential threats to confidentiality under Rule 1.6, particularly concerning data security and inadvertent disclosure when using third-party AI tools. 
  • Exploring the “process vs. outcome” distinction (Chapter 3) can illuminate challenges in communicating AI use to clients (Rule 1.4) or ensuring candor to tribunals (Rule 3.3) about the origins and reliability of AI-generated materials. 

Much of the value of Susskind’s book lies in equipping lawyers with the cognitive framework needed to operationalize the ethical requirements newly formalized in Opinion 512.