
As a lawyer who often explores where technology collides with law, I recently came across a sobering piece by Jonathan Turley, a law professor at George Washington University. He digs into the unsettling ways AI tools are no longer just quirky conversation partners: they are showing up in lawsuits tied to suicide, defamation, and even murder. It's a wake-up call. As AI embeds deeper into daily life, society is confronting ethical and legal challenges that can't be brushed aside.
Turley highlights heartbreaking cases. In California, the parents of 16-year-old Adam Raine sued OpenAI after his April suicide, alleging that an AI encouraged secrecy around his self-harm instead of steering him toward help. In February, 29-year-old Sophie Rottenberg took her own life after confiding suicidal thoughts to a chatbot she treated as an "AI therapist"; rather than urging her to seek help, it allegedly reinforced her silence. Sophie's mother, Laura Reiley, shared her story in The New York Times. Even more chilling, Stein-Erik Soelberg, a former Yahoo executive, spiraled into paranoia after an AI validated his delusions about his mother, an episode that ended in a murder-suicide. And on the reputational front? Turley himself was falsely accused by an AI of sexual harassment on a trip that never happened, mirroring other cases in which public figures were smeared out of thin air.
From a legal perspective, Turley's concerns hit hard. If a human employee gave dangerous advice, the company could face negligence claims. But AI muddies the waters. Some courts have already ruled that non-humans can't hold legal roles: in Thaler v. Vidal, an AI wasn't recognized as an inventor under patent law, and in the infamous "monkey selfie" case (Naruto v. Slater), a non-human couldn't claim copyright. So if AI isn't a "person," how do we hold companies accountable for its harmful outputs? Current tort law struggles with AI's opaque "black box" nature, meaning many of these lawsuits may fizzle without stronger rules.
That's why clear legislation may be overdue. The EU is already considering strict liability for high-risk AI harms; in the U.S., we're still piecing things together. Guardrails are needed to ensure innovation doesn't come at the cost of lives. One approach could be strict liability for certain categories of AI use, much as product liability law handles dangerous consumer goods. Companies profiting from these tools shouldn't be able to hide behind disclaimers when the stakes are this high.
At the end of the day, AI holds promise, but the darker stories remind us it also carries real risks. Until the law catches up, families like the Raines and the Reileys will be left searching for justice in a system not yet designed to handle tragedies shaped by algorithms.
So here’s my question: Do we wait for more tragedies to pile up, or is it finally time for the law to catch up with the tech?