In a profession defined by billable hours, finding time for professional development—let alone personal enrichment—can feel like an impossible task. I used to think podcasts were just another distraction. I was wrong.

Podcasts have not only helped me professionally but also added some joy to my life. They offer a rare opportunity for busy lawyers: the chance to learn and grow even while doing routine tasks. It’s learning that fits into your life, not the other way around.

The appeal of podcasts comes from their versatility. You can listen to advanced legal analysis during your commute, catch up on industry news while walking the dog, or pick up a new hobby while folding laundry. For me, they turned a daily exercise routine from a chore into a valued hour of learning and entertainment. Podcasts have even made battling traffic on I-395 and raking leaves almost tolerable.

My recent article at LLRX.com explains the why and how of listening to podcasts.

The headlines are alarming. Reports detail patients being harmed, misled, or outright failed by popular AI apps. Stories like these are emotionally charged, and my preliminary assessment of the seven high-profile cases recently documented by Information Age is that at least some may have genuine merit.

It’s easy to read about a chatbot giving harmful advice and immediately conclude that AI in this space is inherently dangerous.

However, to truly understand whether AI poses a threat, we must stop comparing it to a myth and ask the question that is too rarely raised: compared to what?

The Flawed Comparison: AI vs. The Perfect Doctor

The common fallacy is to benchmark AI-driven results against a false model: The perfect, tireless, and unbiased human clinician.

The comparison, however, should be between AI and real-world doctors. This comparison is complex. Doctors are not perfect, and neither are AI apps. Both have profound strengths and undeniable risks.

Let’s look at the facts and the potential. An article published by the American Psychoanalytic Association concluded:

Utilizing machine learning algorithms which predict suicide attempts via analysis of patient self-report data and EHR data may significantly enhance a clinician’s ability to identify high-risk individuals who arrive at the ED. Such enhanced predictive value may offer potential for closer monitoring of high-risk patients and earlier intervention in order to prevent suicide attempts.

In high-stakes, time-sensitive environments like the Emergency Department (ED), AI is showing a concrete ability to flag risks that humans might miss.

The Real Question: Human vs. Machine Failure

When it comes to mental-health safety, no system—human or artificial—is inherently safe.

  • Human clinicians miss warning signs every day because they are, well, human: tired, biased, overwhelmed, or simply limited by the caseload.
  • AI systems may fail less often, but when they do, their errors can be jarring: bizarrely disconnected from common sense or empathetic understanding.

Critically, each system compensates for the other’s weaknesses. The AI provides tireless data analysis; the human provides contextual judgment and empathy.

So the right question isn’t “Can AI harm patients?” (The answer is clearly yes, just as humans can.)

It’s “Compared to what level of harm?”

Imperfect but Valuable Supplements

Judged fairly, today’s AI tools look less like an existential threat and more like imperfect but indispensable supplements to an overburdened healthcare system.

The path forward is not prohibition, but integration. With transparent design, proper regulation, and rigorous ethical oversight, these tools could help make mental-health care not just more accessible, but arguably safer than it has ever been.

The debate shouldn’t be about replacing doctors, but about empowering them.

Much more on the ethics, regulation, and specific case studies in our upcoming posts. Be sure to subscribe so you don’t miss them!

Multiple reputable sources are reporting that OpenAI CEO Sam Altman recently told employees—via a leaked internal memo—that Google’s newest advances in AI could “create some temporary economic headwinds” for OpenAI, even as he tried to reassure staff that the company is “catching up fast” and remains well-positioned for the long run.

Altman’s message reportedly acknowledged that the external environment may be “difficult for some time.” His framing was classic morale management: describe the turbulence as short-term, emphasize confidence in the engineering teams, and signal that better days lie ahead.

Notably, OpenAI has not publicly commented on the memo. But that doesn’t make the reporting less credible. Altman is many things, but politically naïve is not one of them. He surely knew that such a memo—at this moment, with competitive pressure rising—would leak with thermonuclear speed.

My read is that the memo served two audiences. Yes, it was written to employees, but it was also written for investors: a controlled admission that the winds have shifted, coupled with a promise that OpenAI will ride out the storm and reassert its competitive edge.

I’m not convinced this optimism is warranted. The newest release of Google’s Gemini app demonstrates that generative AI is rapidly becoming commoditized. If these tools converge toward similar capabilities, the market begins to resemble cloud storage or web hosting: a handful of powerful players offering roughly interchangeable products, leaving very little room for profit.

The race is no longer about who can wow the public first; it’s about who can operate at scale, sustainably, and without burning billions. That is a very different contest.

Buckle up. Things are about to get fun.

The relentless, frenetic excitement surrounding Artificial Intelligence feels familiar. For anyone who remembers the turn of the millennium, it’s a clear echo of the dot-com bubble, a time of speculation about a new technology completely detached from business fundamentals.

Back then, optimism for an internet-driven economy sent stock prices for companies like AOL to astronomical levels. The problem was that few of these companies were actually profitable. When the bubble burst, the losses were staggering. AOL Time Warner posted a $98.7 billion annual loss in 2002, the largest in U.S. corporate history at the time, largely from writing down the value of its internet division.

Today, we see a parallel. The market is flooded with AI hype, yet very few companies—outside of key infrastructure players like chipmaker Nvidia—are turning a profit. This is unsustainable. These unprofitable AI app-developing “miners” are the biggest customers for Nvidia’s “shovels.” If they run out of cash and collapse, the demand will inevitably falter.

This doesn’t mean the AI revolution isn’t real. It is. But, as with the internet, the first wave is rarely the one that lasts. The true, world-changing success of the internet didn’t come from the bubble-era giants; it came from the companies built after the crash, like Google and Facebook.

Bill Gates articulated this exact point in a recent interview, directly comparing the current “frenzy” to the internet bubble. He clarified that while AI is “profound”—the “biggest technical thing ever” in his lifetime—it will follow a similar pattern. Just as with the internet, many companies will fail, and a “ton of these investments… will be dead ends.”

History suggests we are in the speculative phase. The real, lasting AI revolution will likely be built on the ashes of this first, over-hyped wave.

Platforms like Substack and Medium have made publishing easier than ever—but if you rely solely on them, you’re renting space on someone else’s land. Owning your own domain gives you independence, credibility, long-term control, and many other benefits:

1. Professional Credibility and Branding

  • A custom domain name projects authority and legitimacy.
  • Think about it—who would you trust more: smithlaw.medium.com or smithlaw.com?

Pro Tip: Journalists, lawyers, consultants, and authors who use custom domains are taken more seriously than those who don’t.

2. Long-Term Ownership

  • Your domain is yours, regardless of what publishing platform you use.
  • If Medium or Substack shuts down, changes its rules, loses popularity, or becomes less desirable for any reason, you can move your content elsewhere without changing your web address.
  • Imagine losing access to your writing archive because a platform shut down or changed its terms overnight.

Pro Tip: Think of your domain as a digital land deed. You control the address; the platform is just a tenant.

3. Better SEO Control

  • Search engines (like Google) reward content that lives under a consistent domain.
  • With your own domain, all search authority and backlinks accrue to your brand—not to Medium, Substack, or another host. When your posts rank on Google, you want the traffic going to your site—not to someone else’s domain.

Pro Tip: If you ever switch platforms, keeping your domain means you don’t lose your Google rankings.

4. More Customization Options

  • With a custom domain, you can eventually build out a multi-part web presence:
    • blog.yoursite.com for articles
    • docs.yoursite.com for whitepapers
    • www.yoursite.com for your main site
  • You’re not stuck within the limits of a publishing platform. As your content grows, your site can evolve into a knowledge hub—not just a blog.

5. Email Address Integration

  • You can set up custom email addresses like you@yourdomain.com, which boosts trust in business and networking contexts.
  • An email from jane@lawandstrategy.com signals professionalism; an email from janestrategy@gmail.com does not.

Pro Tip: Readers are more likely to open emails from a custom domain than from a generic gmail.com or outlook.com address.

6. Avoids Platform Lock-In

  • If you build your brand around yourname.medium.com you are dependent on Medium’s policies.
  • If they add fees, start injecting ads, or ban content types, your options are limited.

Pro Tip: A custom domain ensures you—not the platform—own your audience and content location.

Convinced of the Need for a Domain Name? Here’s How to Get Started

  1. Choose a short, memorable domain (ideally under 15 characters).
  2. Register it through a reputable provider like Namecheap, Cloudflare, or Hover.
  3. Connect it to your Substack, Medium, or WordPress site—most platforms make this a one-click setup.
  4. Use it consistently in your email signature, social media bios, and marketing materials.
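In practice, step 3 usually means adding a CNAME record at your registrar. Here is a hedged, illustrative zone-file fragment—the hostnames and target are placeholders, since each platform publishes its own target in its custom-domain settings:

```
; Point subdomains of yoursite.com at your publishing platform
; (illustrative values only -- copy the real target from your platform's
; custom-domain settings page)
www    CNAME    target-provided-by-your-platform.example.net.
blog   CNAME    target-provided-by-your-platform.example.net.
```

Once the record propagates, the platform’s dashboard typically verifies the connection automatically.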

Pro Tip: Register common variants or misspellings of your domain to protect your brand.

Final Thoughts

Platforms come and go—but your domain is permanent digital real estate. Professionals who invest in their own address today will build stronger brands and avoid painful migrations tomorrow.

Professional branding matters. If you haven’t claimed your domain yet, do it today—it’s the cheapest credibility boost you’ll ever buy.

We’re all busy, right? I get it. It’s all about knowing what to prioritize.

If you want (or need) thoughtful analysis of AI developments, MIT Sloan Management Review is one of the best places to invest a little time. A case in point: their October 7 article titled “Cut Through GenAI Confusion: Eight Definitive Reads.”

Here’s a summary:

Despite its ubiquity, generative AI is still young. It’s no surprise, then, that many leaders are struggling with important questions about using the tools efficiently, responsibly, and skeptically. The questions are significant:

  • Can we trust it?
  • What should our strategy be in deploying it and scaling success?
  • Who should be using it?
  • How will we measure its ROI?
  • When should we use other types of AI, like machine learning, instead?

MIT Sloan Management Review authors on this topic — including academics, researchers, and practitioners — are reporting from the front lines about the best GenAI questions to ask and how to think about answering them.

This is exactly the kind of practical, expert-driven insight the Review delivers consistently.

If you’re looking to stay ahead, it’s easy to get access. As a starting point, you can become a “member” for free, which puts you on their mailing list and lets you unlock a couple of free articles a month.

When you’re ready for full access, a one-year digital subscription is $69. It’s a small investment for staying informed on one of the most significant topics in business today.

Why Expensive Designs Often Fail and How Smart Lawyers Can Fix It

In law, we’re trained to believe that you get what you pay for. But when it comes to law firm websites, the opposite is often true. Some of the most expensive sites perform the worst—especially when they rely on proprietary systems that lock you into a single vendor. Conrad Saam documents one instance in which moving to a bespoke approach resulted in a 44% drop in website traffic.

Instead of paying for “elite” proprietary platforms that only your designer understands, most firms are far better off with WordPress, the open-source powerhouse that quietly runs over 40% of the internet—including The New York Times and countless major law firms.

WordPress isn’t just cheap—it’s fast, flexible, secure, and supported by a global developer community that keeps improving it daily. Contrast that with boutique digital agencies that charge $10,000 to $50,000 (or more) for custom websites that may look polished but often:

  • Require you to contact the agency for every minor update,
  • Use obscure or proprietary code that other developers won’t touch, and
  • Ignore essential SEO principles.

What Lawyers Should Prioritize Instead

1. SEO Beats Eye Candy

A beautiful website is meaningless if no one sees it. SEO (Search Engine Optimization) drives traffic—and most clients begin their search for a lawyer on Google.

You don’t need a $25,000 design to rank on search engines. You need:

  • Fast load times
  • Clear and keyword-focused headlines
  • Useful, original content, published regularly
  • Mobile optimization

Tools like Yoast SEO, RankMath, or Google Search Console integrate easily into WordPress and help you improve visibility without hiring an SEO “guru.”

Pro tip: Spend your budget on a skilled legal copywriter, not a homepage video that slows your site.
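If you want a quick self-audit before spending anything, a few basic on-page signals can be checked with a short script. A minimal sketch in Python—the function name and thresholds are illustrative conventions, not an official SEO standard or any particular tool’s API:

```python
import re

def seo_checklist(html: str) -> dict:
    """Check a few basic on-page SEO signals in a page's HTML:
    a present, reasonably sized <title>, a meta description,
    and exactly one <h1>."""
    title = re.search(r"<title>(.*?)</title>", html, re.I | re.S)
    title_text = title.group(1).strip() if title else ""
    meta_desc = re.search(
        r'<meta[^>]+name=["\']description["\'][^>]*>', html, re.I)
    h1_count = len(re.findall(r"<h1[\s>]", html, re.I))
    return {
        "title_present": bool(title_text),
        "title_length_ok": 10 <= len(title_text) <= 60,  # common guideline
        "meta_description": meta_desc is not None,
        "single_h1": h1_count == 1,
    }
```

Run it against your homepage’s HTML; any False in the result is a cheap fix that matters more than a homepage video.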


2. Control, Don’t Depend

Custom “bespoke” sites often make you dependent on a single agency for every small change.

Pro tip: Investing $18 in a WordPress guidebook can save thousands of dollars in consulting fees.


3. Security: Strength in Numbers

WordPress’s open ecosystem means continuous updates and peer-reviewed plugins.

Pro tip: Enable automatic minor updates and weekly cloud backups to minimize risk.
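On WordPress specifically, the auto-update behavior can be pinned in wp-config.php using a standard WordPress constant (backups, by contrast, still require a plugin or host-level service):

```php
// In wp-config.php: allow automatic minor releases (security and
// maintenance fixes) while requiring manual approval for major
// core upgrades.
define( 'WP_AUTO_UPDATE_CORE', 'minor' );
```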


4. Clients Want Speed, Clarity, and Confidence

Prospective clients care less about animations and more about answers:

  • What kind of law do you practice?
  • Are you any good?
  • How can I contact you?

Pro tip: A site that loads in under 3 seconds is golden.
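You can spot-check this yourself with nothing but Python’s standard library. A rough sketch—it times only the HTML download, which understates the full load time a visitor experiences once images, CSS, and scripts arrive, and the URL is whatever site you want to test:

```python
import time
import urllib.request

def fetch_seconds(url: str, timeout: float = 10.0) -> float:
    """Seconds to download a page's raw HTML -- a lower bound on the
    full page-load time a visitor sees in a browser."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()  # force the full body to transfer
    return time.monotonic() - start

# Example: fetch_seconds("https://www.yoursite.com")
```

If even the bare HTML takes more than a second or two, the fully rendered page will almost certainly miss the 3-second mark.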


Final Thought: Make Smart, Not Showy, Choices

There’s nothing wrong with investing in marketing—but make sure your investment earns attention, not just admiration.

ABA Formal Opinion 512 provides welcome guidance on ethical obligations for lawyers, demanding competence in understanding AI’s benefits and risks (Model Rule 1.1), diligence in protecting client confidentiality (Model Rule 1.6), clarity in client communications (Model Rule 1.4), candor toward tribunals (Model Rule 3.3), effective supervision of AI use (Model Rules 5.1, 5.3), and reasonableness in fees (Model Rule 1.5).

While not an ethics compliance manual, Richard Susskind’s new book How to Think About AI: A Guide for the Perplexed offers precisely the conceptual tools — the “mental models” — needed to navigate these practical obligations. For example:

  • Susskind’s discussions on AI capabilities, limitations, and the difficulty in explaining how some systems work (Chapters 1, 2 and 5) directly inform the duty of competence under Rule 1.1, which requires lawyers to understand the benefits and risks of associated technology. 
  • His structured analysis of AI risks (Chapters 8 and 9) provides a framework for assessing potential threats to confidentiality under Rule 1.6, particularly concerning data security and inadvertent disclosure when using third-party AI tools. 
  • Exploring the “process vs. outcome” distinction (Chapter 3) can illuminate challenges in communicating AI use to clients (Rule 1.4) or ensuring candor to tribunals (Rule 3.3) about the origins and reliability of AI-generated materials. 

The value proposition of Susskind’s book lies significantly in equipping lawyers with the cognitive framework necessary to operationalize the ethical requirements newly formalized in Opinion 512. 

Anna Guo reports on a study comparing the performance of lawyers and AI apps. Key takeaways:

1. Several AI tools matched, and in some cases outperformed, lawyers in producing reliable first drafts.

2. The top AI (Gemini 2.5 Pro) marginally outperformed the top individual human lawyer: 73.3% vs. 70% reliability rate.

3. Specialized legal AI tools surfaced material risks that human lawyers missed entirely.

My assessment:

The conclusions align closely with my subjective impression of comparative performance in several legal domains.

We should expect the relative performance of human lawyers and AI apps to vary with the legal domain and the facts at hand. Contract drafting is most likely a strong point for AI, and my guess is that AI apps will also have an advantage in estate planning.

Most important: Teams of lawyers working with AI apps will usually be superior to lawyers or AI apps working alone. It’s what I call the Centaur Approach.

There’s a significant debate on how soon computers may achieve artificial general intelligence. The AlphaZero Project, which led one of its leaders to receive the Nobel Prize in Chemistry, is an important data point in the discussion. Here’s the intro to my article at LLRX.com.

Will computers ever achieve the holy grail of artificial general intelligence (AGI)—an intelligence that matches or surpasses human abilities across virtually all cognitive tasks? Experts disagree not only on the feasibility but also on the desirability of such an outcome. Optimists envision an era of abundance. Pessimists fear an existential threat.

One case study suggests AGI may be closer than widely believed. In 2017, Google DeepMind’s AlphaZero taught itself more about chess in four hours than humans had managed to uncover in 1,500 years. That’s remarkable in itself, but the truly amazing part is that AlphaZero accomplished this with a level of style and creativity that even the best human players can’t understand, much less emulate.

AlphaZero’s success raises a provocative question: if a computer can teach itself a complex domain like chess in hours, what does that imply about how close we might be to machines that can teach themselves anything?

The implications of AlphaZero’s methods go far beyond board games. For example, in 2024, researchers using a similar self-learning approach won the Nobel Prize in Chemistry. The National Institutes of Health report that their findings promote the development of new vaccines, enhance disease prevention, support personalized medicine, and generally deepen our knowledge of how life works at a molecular level.