The hype machine is working overtime on Agentic AI. Don’t fall for it.

AI chatbots merely respond to prompts. They only give you information. AI agents like Claude Cowork or Openclaw go beyond this. They are built on large language models, but can take action on your behalf.

That sounds great, but there is a big problem: Way too many security risks. AI agents are just too risky, given the current state of the technology. This is true for any business use, but it applies doubly for lawyers, given their ethical duties of client confidentiality.

Prompt injection worries me the most. Bad actors can take control of your agent in surprisingly easy ways.  Other problems include:

 * Greater Hallucination Risks: Hallucinations are a problem with all large language models, but with conventional chatbots, it’s manageable. You can verify the bot’s output before relying on it for anything significant.

 * The “Black Box” Problem: Serious questions remain regarding where client data resides, what is retained, and whether the output can be audited with any degree of legal rigor.

These risks make agentic AI a no-go for the foreseeable future. How long? Until this new product has an extensive track record of safe use in the field. This will probably be at least a year, maybe five years, maybe never.

Pioneers get arrows. Settlers take land.