Are Human Therapists Causing Suicides? No. (And perhaps AI therapists are not either.)

About 12,000 people die by suicide each year while actively in therapy with a licensed professional. [Psych Today]

Nobody blames those deaths directly on the therapist because we recognize correlation isn’t causation.

Yet when a suicide follows an interaction with an AI therapy bot, the headline jumps straight to blame. That’s a double standard.

Reality

♦️ AI can lower barriers.
Many people open up more with a bot—or in text—than they ever do face-to-face. That’s not a bug. It’s a feature that, if used well, can be powerful.

♦️ Guardrails are failing.
Most AI guardrails are paper-thin. They rely on refusals and keyword blocks that collapse in real-world use. Too many companies treat safety as a compliance checkbox, not a design principle.

🔹 No real escalation paths.
🔹 No structured handoffs to humans.
🔹 No context-aware monitoring of high-risk conversations.

This is negligence. If a bot is going to engage in mental health at all, guardrails must evolve far beyond today’s flimsy protections.
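To make the distinction concrete, here is a minimal, hypothetical sketch in Python of the gap between a keyword-block refusal and a context-aware escalation path that tracks risk across a whole conversation and hands off to a human. Every name in it (assess_risk, escalate_to_human, the signal weights, the threshold) is invented for illustration; no real product, model, or clinical protocol is implied.

```python
from dataclasses import dataclass, field

# --- The "paper-thin" approach: a one-shot keyword block -------------------
BLOCKED_KEYWORDS = {"suicide", "kill myself"}

def keyword_guardrail(message: str) -> str:
    """Refuses when a blocked keyword appears; otherwise does nothing."""
    if any(k in message.lower() for k in BLOCKED_KEYWORDS):
        return "REFUSE"          # canned refusal, no follow-up, no handoff
    return "CONTINUE"            # indirect or escalating distress slips through


# --- A sketch of context-aware monitoring with escalation ------------------
@dataclass
class Conversation:
    """Tracks risk signals across the whole exchange, not per message."""
    risk_score: float = 0.0
    history: list = field(default_factory=list)

# Hypothetical signal weights; a real system would need a trained classifier,
# clinical input, and far richer context than simple phrase matching.
RISK_SIGNALS = {
    "hopeless": 0.2, "burden": 0.3, "no way out": 0.4,
    "goodbye": 0.3, "plan": 0.4,
}
HANDOFF_THRESHOLD = 0.7

def assess_risk(convo: Conversation, message: str) -> float:
    """Accumulates risk over the conversation instead of scoring one message."""
    convo.history.append(message)
    text = message.lower()
    convo.risk_score += sum(w for sig, w in RISK_SIGNALS.items() if sig in text)
    return convo.risk_score

def escalate_to_human(convo: Conversation) -> str:
    """Placeholder for a structured handoff: notify an on-call clinician and
    pass along the conversation context rather than dropping the user."""
    return "HANDOFF: human counselor notified with conversation context"

def context_aware_guardrail(convo: Conversation, message: str) -> str:
    if assess_risk(convo, message) >= HANDOFF_THRESHOLD:
        return escalate_to_human(convo)
    return "CONTINUE: respond supportively, keep monitoring"


if __name__ == "__main__":
    convo = Conversation()
    for msg in ["I feel hopeless lately",
                "I'm just a burden to everyone",
                "Maybe there's no way out"]:
        print(keyword_guardrail(msg), "|", context_aware_guardrail(convo, msg))
```

In this toy example, the keyword block returns CONTINUE on all three messages because no blocked phrase appears, while the cumulative score crosses the threshold on the third message and triggers a handoff. The point is the design principle, not the specific weights: risk has to be assessed in context and routed to a human, not pattern-matched one message at a time.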

♦️ The opportunity is hybrid care.
AI can extend reach, reduce stigma, and build early trust. But humans must stay in the loop, especially for high-risk cases. AI cannot be left to handle suicidal ideation or complex trauma alone.

The Conversation We Should Have

The conversation shouldn’t be about banning AI therapy. (Bans on technology rarely work.)

It should be about better safety, real oversight, smarter integration… and maybe even jail time for executives who refuse to build protections into high-risk AI systems.
