Monogamy is not a requirement—or even a good idea—when it comes to AI. Multiple AI perspectives help with high-stakes questions, unsettled law, or anything involving tax regulations (which remain confusing even to the IRS). When two models agree, you gain confidence. When they disagree, you gain a warning sign.

Pro Tip: Prioritize the best AI apps. Leading AI expert Ethan Mollick (author of the best-selling book Co-Intelligence: Living and Working with AI) regularly shares his assessments of AI app quality. As I write, the most recent version (late 2025) is An Opinionated Guide to Using AI Right Now. I agree with him that Gemini 3.0 is currently the best available. The April 2025 edition of the Kennedy-Mighell Report doesn’t cover the latest developments, including improvements to Gemini, but its general discussion of how lawyers should evaluate apps is excellent.

When people think about malware, they often imagine someone clicking a suspicious attachment or downloading a shady file. In reality, one of the most dangerous forms of infection requires no obvious mistake at all. It’s called a drive-by download, and it remains a quiet but serious threat.

The Threat

A drive-by download occurs when malicious code is installed on a device simply by visiting a compromised website—often without any prompt or warning. According to the U.S. Cybersecurity and Infrastructure Security Agency (CISA), these attacks exploit vulnerabilities in browsers, plugins, or operating systems to execute code automatically in the background.

This issue is tough even for top experts. Tom Mighell and Dennis Kennedy offered some suggestions in the Kennedy-Mighell Report (Part B, listener question) regarding unsubscribe links. Good ideas, but even their august minds could not devise a way to achieve complete security.

What makes drive-by downloads particularly dangerous is how effectively they bypass human judgment. Even cautious users can be exposed. The site involved may be well known and legitimate, but compromised without the owner’s knowledge—a tactic documented repeatedly by security researchers.

The consequences range from irritating to severe. Drive-by infections may install spyware that captures credentials and browser sessions, or ransomware that encrypts entire systems. The FBI has warned that ransomware attacks increasingly begin with silent exploitation rather than user-initiated downloads (FBI ransomware guidance). In professional environments such as law firms or healthcare organizations, these infections can lead to data breaches, ethical violations, and regulatory exposure.

Attackers favor drive-by techniques because they scale efficiently. A single compromised website can infect thousands of visitors in hours. The National Institute of Standards and Technology (NIST) has identified “watering hole” and drive-by attacks as particularly difficult to detect precisely because victims often never realize how the infection occurred (NIST SP 800-53).

How to Reduce the Risk

No defense is perfect, but layered precautions significantly reduce exposure.

Mac OS vs. MS Windows?

Mac users sometimes believe they don’t need to worry as much about such threats. There is something to this, but the issue is more complicated than commonly believed. Kaspersky has a good summary of the relevant considerations.

Windows can be very safe, but only if it is set up and monitored by skilled IT pros. Most small and solo law firms don’t have that support, so my sense is that macOS tends to be safer for them: it reduces risk by limiting user choices, while Windows reduces risk by enabling control that few small firms can actually exercise.

Summary

Drive-by downloads are dangerous not because users are careless, but because the attacks are engineered to exploit trust and invisibility. Awareness, combined with basic digital hygiene, remains the most reliable defense.

The Ambition Effect

The prevailing narrative surrounding Generative AI in the legal sector is one of unprecedented efficiency. The sales pitch is seductive in its simplicity: automate routine drafting and research, compress hours into minutes, and liberate attorneys for higher-value strategic thinking.

Yet, as the initial wave of adoption settles, a distinct counter-narrative is emerging from inside law firms. Many practitioners are discovering that integrating AI “powertools” into their workflow is not resulting in earlier departures from the office. In fact, they are often spending more time on a given project, not less.

The immediate, practical assumption is that the technology is simply immature. As documented recently by outlets like Quartz, the “hallucination” problem requires significant human intervention. The output is fast, but cleaning up the errors—fact-checking citations and smoothing out robotic prose—incurs a heavy “cleanup tax.” While experts like Carolyn Elefant have rightly pointed out strategies to mitigate this friction, the reality remains that current AI models are often timesinks disguised as accelerators.

However, if debugging were the primary issue, one could assume efficiency would inevitably win out as the models improve. There is a more nuanced, psychological shift occurring that suggests AI may never truly reduce total working hours for top-tier professionals.

Call it the “Ambition Effect.”

When you equip a highly motivated professional with a tool that dramatically lowers the barrier to execution, their horizon shifts. They do not look at the tool and think, “I can now do my usual standard of work in half the time.” They think, “I can now achieve a vastly superior standard of work in the same amount of time.”

In a pre-AI world, “sufficient” research or a “competent” first draft was often dictated by the sheer constraints of time and budget. Now, with an AI powertool, those constraints have loosened. Suddenly, it is feasible to explore three obscure alternative legal theories instead of just the primary one. It is possible to run deeper semantic analyses on opposing counsel’s previous filings, or to refine the rhetorical structure of a brief until it is not just accurate, but compelling.

No one moved the goalposts on them; the attorneys have voluntarily pushed the goalposts further back themselves.

The irony of this “labor-saving” revolution is that it is fueling an escalation in quality. The end product delivered to the client is undeniably better, deeper, and more polished. But for the lawyers themselves, these tools are not a shortcut to the weekend. They are simply an engine that allows them to drive much further down the same demanding road.

Obtaining good healthcare is not always easy. Not every doctor is a Marcus Welby clone. And as the old joke goes, 50% of doctors graduated in the lower half of their classes, right? Burnout and pressure to meet daily patient-volume quotas mean many patients don’t receive the attention they deserve and expect.

Lawyers who want to stay healthy should be proactive. Increasingly, the first place I go for health advice is an AI app. Don’t let fear of mistakes prevent you from supplementing the advice you get from your doctor by seeking help from your favorite AI app. If you are smart enough to pass the bar exam, you should be able to recognize whether advice is “self-authenticating,” so that you can usually distinguish good advice from bad. Seek guidance from other sources if you are unsure.

Harvard Medical School’s collection of products targeted for lay audiences is my go-to source for supplemental healthcare advice. These include the Harvard Health Annual book, newsletters, an online resource index on common conditions, an online collection of articles on various topics, and even a blog.

I find their Special Health Reports particularly valuable. They are available in print ($20) or as ebooks ($18). Are they worth it? I have collected at least 10, including Knee and Hip Pain, Functional Fitness, and Pain Relief Without Drugs or Surgery. Their Pickleball report is on my reading list.

Pro Tip: If you are a bargain hunter who prefers reading online publications to print media, Harvard Medical School’s digital resource bundles are a bargain.

The AI landscape is changing so rapidly that even top experts struggle to keep up.

Leading AI expert Ethan Mollick is up for the challenge. The author of the best-selling book Co-Intelligence: Living and Working with AI regularly updates his assessments of various AI apps. As I write, the most recent version (late 2025) is An Opinionated Guide to Using AI Right Now.

The April 2025 edition of the Kennedy-Mighell Report doesn’t cover the latest developments, including significant improvements to Gemini, but its lawyer-specific approach remains valuable.

Tip of the day (and maybe the year):

Always consult more than one AI app when dealing with critical issues.

Monogamy is not the best strategy for AI. Multiple AI perspectives help with high-stakes questions, unsettled law, or anything involving tax regulations (which remain confusing even to the IRS). When two models agree, you gain confidence. When they disagree, you gain a warning sign. Either way, you avoid the embarrassment of citing a hallucinated case that appears in no reporter known to humankind.

Checking multiple AI systems isn’t about indecision. It’s about competence—and mildly hedging against robots who occasionally sound certain about things that simply aren’t true. A little redundancy never hurt anyone, especially lawyers.
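The cross-check habit can even be sketched in a few lines of code. This is a minimal illustration, not a real integration: the model names and answers below are hypothetical placeholders standing in for responses you would gather manually (or via each vendor's API) from separate AI apps.

```python
# A toy version of "consult more than one AI app": gather each app's
# answer, normalize, and flag agreement vs. disagreement. The model
# names and answers are hypothetical placeholders for illustration.

def cross_check(answers: dict) -> str:
    """Compare answers from several models and report agreement status."""
    distinct = {a.strip().lower() for a in answers.values()}
    if len(distinct) == 1:
        return "agreement: higher confidence"
    return "disagreement: warning sign - verify before relying on any answer"

# Two hypothetical apps agree (modulo whitespace and case):
print(cross_check({"model_a": "Yes", "model_b": " yes "}))
# Two hypothetical apps disagree:
print(cross_check({"model_a": "Yes", "model_b": "No"}))
```

Real answers are rarely identical strings, of course; in practice the "comparison" happens in the lawyer's head. The point of the sketch is the decision rule, not the string matching.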

It’s scary out there. Ethan Mollick is on the mark:

Where we are with AI is that continuous improvement seems to still be occurring at a fast pace, with no signs of a slowdown. However, since major AI releases have accelerated and seem to be happening monthly or faster, any one release can feel incremental, yet looking back 6-8 months reveals massive improvements. This confuses the two major groups of AI commentators:

1) If you follow every release like a sport, then each individual model change feels small.

2) If you haven’t really followed AI and just use it occasionally, you don’t realize how much things have changed in 6 months because you don’t bother to use the latest models or try them on hard tasks.

OpenAI’s Sam Altman has recently reacted with irritation and defensiveness when confronted in podcast and interview settings about the massive contrast between OpenAI’s annual revenue, reportedly between $13–$20 billion, and its commitment to spend over $1.4 trillion on compute and data center contracts over the next several years.

In a recent interview, Altman dismissed understandable recurring skepticism, sharply interjecting, “If you’re looking to sell your shares, I can help you find a buyer,” and expressing fatigue with having to continually reassure doubters that OpenAI isn’t on the brink of collapse because of these commitments. He repeatedly deflected requests for detailed financial explanations, attacking critics’ motivations and essentially challenging them to “short” OpenAI stock if they’re so concerned about the risk, insisting on his confidence in the company’s growth trajectory.

My latest article at LLRX, The Imminent AI Bubble Crash (And Why It Won’t Matter in the Long Run), explains why Altman has good reason to be so defensive about the extreme mismatch between current income and forward-looking debt.
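A back-of-the-envelope calculation makes the scale of that mismatch concrete. It uses only the figures reported above, and deliberately ignores costs, revenue growth, financing, and the multi-year spread of the commitments; it is an order-of-magnitude check, not a financial model.

```python
# Rough scale comparison using the figures reported above.
annual_revenue_low = 13e9     # $13 billion (low end of reported range)
annual_revenue_high = 20e9    # $20 billion (high end of reported range)
compute_commitments = 1.4e12  # $1.4 trillion over the next several years

# Years of current revenue needed to cover the commitments in full,
# ignoring costs, growth, and financing entirely.
years_best = compute_commitments / annual_revenue_high   # about 70
years_worst = compute_commitments / annual_revenue_low   # about 108

print(f"{years_best:.0f} to {years_worst:.0f} years of current revenue")
```

Even on the rosiest reading, the commitments dwarf current income by well over an order of magnitude, which is exactly the gap the skeptics keep asking about.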

Alex Kantrowitz (who is on his way to becoming my favorite podcaster) has much more.

AI founders seem to have a never-ending list of reasons — and hyperventilated pitch decks — explaining why their financial losses don’t matter. Some are hopeful, some are delusional, and some are just echoes of arguments that would-be billionaires floated in the dot-com era—updated with better graphic design.

A new article at LLRX.com, entitled The Imminent AI Bubble Crash (and Why It Won’t Matter in the Long Run), explains some of the most common excuses, but space limitations there prohibited a complete list. Here are some of the other most common attempts to justify the bubble, along with brief rebuttals and a bit of good-natured skepticism:

1. “We’re prioritizing growth over profits.”

Rebuttal: Growth is great, but not when it’s the financial equivalent of gaining weight by eating subsidized ice cream. Users acquired through free money tend to vanish once the free money does.

2. “This is a land-grab moment.”

Rebuttal: That assumes there’s valuable land—and that someone will eventually pay rent. Plenty of dot-com veterans can point to the “land” they grabbed; it now serves as a digital ghost town with excellent parking.

3. “Monetization will come once we turn on premium features.”

Rebuttal: This is the startup version of “I’ll start my diet on Monday.” It sounds good, but the conversion rate from free users to paying customers often ends up in the single digits, as in one digit, and not a high one.

4. “Compute costs are high today, but they’re falling fast.”

Rebuttal: True, but usage is rising even faster. Sam Altman reports that ChatGPT loses more money on $200-a-month premium subscriptions than on $20 subscriptions.

5. “Our unit economics are improving.”

Rebuttal: Losing less money per user is not the win that founders think it is. It’s like a restaurant bragging that it now loses only $9 on a $10 burger.

6. “Every user interaction creates proprietary data.”

Rebuttal: Proprietary data is valuable—if your competitors don’t have nearly identical data and access to the same base models. Many so-called “data moats” turn out to be kiddie pools.

7. “We’re building defensibility through R&D.”

Rebuttal: R&D is important, but it’s not a moat if everyone else is also spending aggressively on R&D—especially when the competitors are named Google, Meta, or “OpenAI’s Entire Microsoft-Funded Budget.”

8. “We’re becoming the indispensable platform layer.”

Rebuttal: True platform companies get adopted because others depend on them—not because the founders wish really hard. With 50 nearly interchangeable AI layers, the market looks less like a platform race and more like speed dating.

9. “Our burn rate is intentional, and we have plenty of runway.”

Rebuttal: Runway only tells you the plane hasn’t crashed yet. If the business model doesn’t change, all that “runway” guarantees is a longer, more scenic descent.

10. “Regulation will wipe out weaker competitors.”

Rebuttal: Possibly. But regulation has a long history of hitting everyone equally—and sometimes hitting the big players harder. Banking on regulators to save your business is a strategy that has rarely survived contact with regulators.

11. “All important AI companies went through this phase.”

Rebuttal: Survivorship bias is strong. For every success story, there’s a small graveyard of companies that burned cash with equal enthusiasm but did not leave memoirs.

12. “The total addressable market is enormous.”

Rebuttal: A huge TAM is comforting, but it doesn’t guarantee anyone’s survival. The ocean is enormous, too—and full of shipwrecks.