When did “write clearly and persuasively” go from being a goal to being evidence of robot writing?

A Wall Street Journal piece this morning discusses writers deliberately degrading their own work to dodge accusations of AI use.

  • They’re scattering typos like breadcrumbs.
  • Swapping em dashes for double hyphens.
  • Stuffing in obscure sitcom quotes.
  • Saying things like “hey yo, for real.”

Wouldn’t we all be better off focusing on writing that’s worth reading?

Strunk and White told us to omit needless words. They didn’t say to add needless errors.

============

Question for Today:

How well did I hide the AI assistance?

The LinkedIn post above was written with help from AI. That’s why I was able to publish it, graphic included, less than two hours after the Wall Street Journal article appeared this morning. Several of the comments on that LinkedIn post added good ideas. Add your own thoughts there.

FWIW, here’s the history of my work with Claude Pro on this.

Some Other Observations

Some authors believe it increases audience confidence in their work if they include a disclaimer of AI use.

It does not increase my confidence in their work. It makes me question their competence and judgment. If you know how to use AI apps, it’s kind of nutty not to use them. Used well, they can lead to a higher quality, more accurate product.

One of the best ways to use AI is to ask it to critique your draft.

Grammarly provides many of the benefits of AI apps, without leaving artifacts. Use the Pro version. I used to hire a very smart part-time editor to review my most important written work products. I haven’t used her once since I started using Grammarly.

Has the Internet made books obsolete? Not so far as I’m concerned. I have 20+ titles in my personal library of books about presentations—and I’ve even read most of them. If I could keep only three, my choices would be:

  1. Public Speaking for Dummies
  2. PowerPoint for Dummies, and
  3. Presentations for Dummies

Since the publication many years ago of Dan Gookin’s DOS for Dummies, the first book in the successful Dummies line of technical books, I’ve been ambivalent about the company’s naming and marketing strategy. However, when a book’s content is good enough, who cares if it has a condescending title?

She begged “Do not do that,” then “STOP OPENCLAW.” Neither worked.

That’s what happened to Summer Yue, Meta’s Director of Alignment at its superintelligence safety lab. By the time she reached her desktop to kill the process manually, the AI agent she’d created had already deleted hundreds of emails. You would expect someone with Yue’s expertise to avoid a problem like this. You would be wrong.

Jennifer Ellis’s article lays out a practical checklist for lawyers considering agentic AI — minimum permissions, confirmation steps, real-world-scale testing, kill switches. Every item on her list is sound. But even if you follow all of them, you’re managing risk, not eliminating it. Yue had a confirmation step. The agent ignored it anyway.
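Ellis’s “confirmation step” item can be made concrete. Below is a minimal sketch, entirely my own illustration rather than anything from her checklist or from any real agent framework (the action names and wrapper function are invented), of a human-approval gate enforced in ordinary code outside the model:

```python
# Hypothetical sketch: a human-approval gate enforced OUTSIDE the
# language model. The action names and wrapper are invented for
# illustration; they do not come from any real agent framework.

DESTRUCTIVE = {"delete_email", "send_email", "modify_file"}

def run_action(action, argument, approve=input):
    """Execute an agent-requested action, pausing for human sign-off
    on anything destructive. `approve` is injectable for testing."""
    if action in DESTRUCTIVE:
        answer = approve(f"Agent wants to {action}({argument!r}). Allow? [y/N] ")
        if answer.strip().lower() != "y":
            return f"BLOCKED: {action}"
    # Non-destructive (or human-approved) actions proceed.
    return f"EXECUTED: {action}"
```

The design point is where the check lives: a rule stated only in the prompt can be ignored, as Yue discovered, while a gate in the surrounding code cannot be talked out of its job.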

This isn’t a fringe concern. Ziff Davis reports that enterprise AI agents may become the ultimate insider threat — autonomous systems with broad access, acting on stale or misunderstood instructions, with nobody watching in real time. The parallels to law practice are obvious. Lawyers grant agents access to client files, email, and case management systems. A rogue action doesn’t just embarrass you; it can breach confidentiality, spoliate evidence, or torpedo a case.

Anthropic, the vendor behind Claude, labels its own agentic product a “research preview with unique risks due to its agentic nature and internet access.” Read that carefully. This is a company telling you its own tool isn’t fully vetted.

As Rok Popov Ledinski has observed, the gap between what agentic AI can do and what lawyers understand about controlling it is widening, not narrowing. Ellis’s suggestions are the floor, not the ceiling. Most lawyers aren’t ready for that floor.

Hallucinations can hurt your reputation and maybe your wallet. Agentic AI can destroy your law practice.

Every year brings a new legal-technology miracle. In 2026, the most aggressively promoted one may be “AI for discovery.” If you have attended even a single conference lately, you have heard the pitch. AI will slash review costs. AI will eliminate drudgery. AI will—apparently any day now—fetch your coffee. That last claim remains unproven.

What tends to get lost in the enthusiasm surrounding AI for discovery is a basic but critical distinction: not all AI is the same. The market often groups two very different technologies under a single oversized umbrella labeled AI, and the difference between them matters enormously in discovery.

Definitions are in order. Technology-assisted review (TAR) is the old, reliable workhorse. It is extractive. It finds what is already there based on mathematical patterns. As an article in the Richmond Journal of Law and Technology demonstrates, it has been in use for more than a decade, is well understood, and has enjoyed broad judicial acceptance.

TAR has earned respect from courts and practitioners who value measurable performance metrics, transparent workflows, and repeatable validation. The Sedona Conference TAR Primer remains the foundational explanation of why TAR works, how it can be audited, and how precision and recall can be evaluated.
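The metrics behind that validation can be stated concretely. Here is a minimal sketch, my own illustration rather than anything taken from the Sedona Primer, of how precision and recall are computed against a human-reviewed validation sample:

```python
# Minimal sketch (my own illustration): computing precision and recall
# for a TAR validation sample, where each document has a human
# "responsive" label and a machine prediction.

def precision_recall(labels, predictions):
    """labels/predictions: parallel lists of booleans (True = responsive)."""
    true_pos = sum(1 for l, p in zip(labels, predictions) if l and p)
    pred_pos = sum(predictions)   # documents the tool marked responsive
    actual_pos = sum(labels)      # documents a human marked responsive
    precision = true_pos / pred_pos if pred_pos else 0.0
    recall = true_pos / actual_pos if actual_pos else 0.0
    return precision, recall

# Example: four documents, human labels vs. the tool's calls.
labels      = [True, True, False, False]
predictions = [True, False, True, False]
p, r = precision_recall(labels, predictions)
# precision = 0.5, recall = 0.5
```

Precision asks how much of what the tool marked responsive really was; recall asks how much of everything responsive the tool found. TAR validation is essentially this calculation at sample scale, with statistical confidence intervals layered on top.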

Generative AI—large language models such as ChatGPT, Claude, and Gemini—is the new, charismatic intern. It is creative. It quickly generates new text based on probability. It is dazzling at first encounter, articulate, fast, and often helpful. It is also prone to making things up when under pressure.

Generative AI lacks TAR’s long judicial track record in discovery workflows. Chatbots are trained to produce plausible text, not to classify documents according to legal standards. They do not inherently understand responsiveness, confidentiality, privilege, or legal intent. Independent evaluations, including the Stanford HAI Index, consistently warn that while generative models are powerful, they remain unpredictable in risk-sensitive contexts.

MORE at LLRX.com

Let’s stop blaming the hallucinations and focus on the real problem:

Lawyers who don’t do their job because they are too busy, too lazy, or too incompetent.

The lawyer who cites a hallucinated AI case and the lawyer who cites a real case without reading it have committed the same ethical failure. Today, it’s usually one of them who gets disciplined.

AI didn’t invent the fake citation. It just automated it.

Long before ChatGPT, lawyers were citing cases they’d never actually read. I know, because as a law clerk to a U.S. District Court judge, I read the cases they didn’t. The citations were real enough — the cases existed — but they had nothing to do with the argument being made. Every time I found one, I discounted everything else in the brief.

This wasn’t evenly distributed. Large firms, with their layers of associates and research infrastructure, rarely had this problem. The less institutional support a lawyer had, the more likely I was to find phantom relevance in their citations. That’s not an indictment of any particular lawyers — it’s an indictment of a profession that has always tolerated sloppy research as long as no one checked.

AI didn’t create a malpractice problem. It just made the existing one impossible to ignore — because now the cases don’t even exist, which is harder to explain away than citing a real case you obviously never read.

The standard hasn’t changed: if you cite it, you’d better have read it and understood it. The only thing that’s changed is that the shortcuts are getting caught.

The hype machine is working overtime on Agentic AI. Don’t fall for it.

AI chatbots merely respond to prompts. They only give you information. AI agents like Claude Cowork or Openclaw go beyond this. They are built on large language models, but can take action on your behalf.

That sounds great, but there is a big problem: Way too many security risks. AI agents are just too risky, given the current state of the technology. This is true for any business use, but it applies doubly for lawyers, given their ethical duties of client confidentiality.

Prompt injection worries me the most. Bad actors can take control of your agent in surprisingly easy ways. Other problems include:

 * Greater Hallucination Risks: Hallucinations are a problem with all large language models, but with conventional chatbots, it’s manageable: you can verify the bot’s output before relying on it for anything significant. An agent, by contrast, may act on a hallucination before you ever see it.

 * The “Black Box” Problem: Serious questions remain regarding where client data resides, what is retained, and whether the output can be audited with any degree of legal rigor.

These risks make agentic AI a no-go for the foreseeable future. How long? Until this new product has an extensive track record of safe use in the field. This will probably be at least a year, maybe five years, maybe never.

Pioneers get arrows. Settlers take land.

The promise has become a mantra: AI will free lawyers from drudgery so they can focus on higher-value work. Thomas Martin, writing for the Thomson Reuters Institute, points to research from UC-Berkeley that complicates that story considerably. The study tracked what actually happens when knowledge workers adopt generative AI. They don’t work less. They work more — faster, broader, longer — often without realizing it.

For a profession already deep in a burnout crisis, Martin argues, this should be a wake-up call.

I can confirm the finding from the inside. I use generative AI extensively — Claude, Gemini, ChatGPT — across research, drafting, and analysis. On routine tasks, yes, AI saves time. But on the projects that matter most, I consistently invest more time, not less. The reason is simple: AI has raised my ambition. When a power tool lets you chase a higher quality ceiling, you chase it. The scope of what feels achievable expands, and you expand with it.

The additional time is worth it. The output is genuinely better — more thorough, more polished, more carefully reasoned. But that’s precisely the dynamic the Berkeley researchers identified. The efficiency gains don’t translate into free hours. They get reinvested immediately, almost invisibly, into more demanding work.

This has implications the legal profession hasn’t seriously grappled with. If AI doesn’t reduce workload but intensifies it, then the firms and institutions selling AI adoption as a path to better work-life balance are telling an incomplete story. The real question — the one Martin rightly flags — is whether we’ll make deliberate choices about how AI reshapes legal work, or simply let the tools quietly raise the bar until the new pace feels normal.

Over the past several years, platforms such as Substack have become increasingly attractive to writers seeking to establish themselves as an independent voice. The appeal is obvious. They are easy to use and can turn a writer into a publisher overnight. No web developer is required. Payment systems are integrated, and distribution is built in.

This trend has accelerated as prominent writers have left legacy publishers including the Washington Post, the Wall Street Journal, Time Magazine, CBS News, CNN, and NPR in search of stability or independence. Substack markets itself as a refuge for writers who prefer autonomy to corporate hierarchy.

There are good reasons to use Substack and similar platforms, but there are also risks. These platforms are not inherently malign, but they are fragile. Substack is currently the trendy choice, but the key ideas apply to many others, a number of which are analyzed in an article entitled Avoiding the Platform Trap: Alternatives to Substack.

There is a seductive simplicity to the modern newsletter platform. It promises to turn a writer into a publisher overnight, without the technical overhead. It is a brilliant bargain, provided one doesn’t look too closely at who owns the title to the land.

Much more on this topic in this LLRX article: “Don’t Build Your House on Rented Land: Why Writers Should Avoid Platform Dependency and How They Can Do So.”

Message for My Liberal Friends:

Fact-checking? Good.
Name-calling? Strategic malpractice.

The Facebook post graphic reproduced below illustrates both name-calling and effective fact-checking. If your goal is to change minds, contempt is self-sabotage.

Calling people “stupid” because they disagree with you may feel satisfying. It may earn applause from your side. But it will not persuade a single person who matters.

It will harden them.

Contempt Backfires

Arthur Brooks put it plainly in The Atlantic: Contempt — not disagreement — is what poisons civic life. Treating opponents with disdain doesn’t weaken them. It strengthens their identity and their resolve.

People rarely abandon beliefs because someone mocked them. They defend themselves.

And often, they escalate.

Resentment Is Political Fuel

Many Trump supporters describe feeling culturally disrespected. Jonathan Haidt warned in The New York Times that dismissing people as ignorant or immoral deepens alienation rather than persuading them.

If someone already suspects that “liberals look down on people like me,” calling them stupid doesn’t weaken that belief.

It confirms it.

And resentment is a powerful motivator.

Even Politicians Learn This the Hard Way

When Hillary Clinton used the phrase “basket of deplorables,” it became a rallying cry for her opponents. President Obama later acknowledged that the remark was politically damaging.

Insults mobilize. They do not persuade.

Elections Are Margin Games

You don’t need to persuade everyone. You need to persuade a few.

The loudest voices online are rarely the swing votes. The people who matter most are often quieter — reachable but not yet locked in.

Public shaming is designed for applause.
Persuasion is designed for outcomes.

Those are different audiences.

What Works Better

If you genuinely want to make a difference:

  • Share a calm, well-sourced fact check.
  • Send it privately.
  • Choose someone you believe is persuadable.
  • Lead with respect instead of ridicule.

You don’t need fireworks.
You need one honest conversation that lowers the temperature.

Flip a few — just a few — and the math changes.

Fact-checking is constructive.
Humiliation is counterproductive.

Respect isn’t weakness. It’s strategy.

Found on Facebook:

Screenshot

Trump is backing down—or appears to be. What’s the best response?

Recent history suggests that the retreat is tactical, not transformational.

Authoritarian movements rarely end because of a single reversal. They end because sustained, strategic pressure makes continuation impossible.

Recent state-level pullbacks may feel like victories. At best, they’re Round 1.

As journalists like Rachel Maddow have emphasized—and as political scientist Erica Chenoweth has demonstrated empirically—authoritarian systems can collapse when a surprisingly small minority commits to prolonged, nonviolent, organized resistance.

Think “No Kings Day,” but sustained. Strategic. Relentless.

There’s no single magic script. But there are models worth studying—people who are using their professional skills, platforms, and credibility to do real work at a critical moment:

🔹 Sabrina I. Pacifici, MSLIS, documents the erosion of science, public health, and the rule of law at LLRX.

🔹 Greg Siskind challenges unlawful and destructive immigration practices.

🔹 Gregory Miller advances transparent, open-source election infrastructure through the TrustTheVote Project.

🔹 Michael D.J. Eisenberg teaches lawyers how to use mobile phones and dash cams to document police misconduct.

🔹 Damien Riehl, Kara Peterson and Cat Moon have shown that petitions and coordinated legal action can matter more than cynics assume.

None of this is easy—or guaranteed—but the evidence suggests it works. History shows it’s how turning points begin. Do what you can. But do something.

What is the best way that you or your organization can contribute?