Mint Primer | Who is liable if a friendly chatbot ‘abets’ suicide?
A US judge has admitted a case against American firm Character.AI over allegations that its chatbot drove a teenager to suicide. The ruling will be closely watched for its potential to establish developer and corporate liability for "friendly" but "addictive" chatbots.
What’s this case all about?
In May, a US judge allowed a wrongful death lawsuit against Character.AI and Google to proceed, rejecting claims that the chatbot's conversations were protected by the First Amendment, which guarantees free speech but does not shield speech that causes harm. The judge noted that the companies "fail to articulate why words strung together by an LLM (large language model) are speech", and added that the chatbot could be considered a "product" under liability law. Character.AI and Google must respond by 10 June. Google was made a party because it holds licensing rights to the startup's technology.
Why exactly is this app being sued?
Character.AI allows users to interact with life-like AI "characters", including fictional and celebrity personas that mimic human traits like stuttering. On 14 April 2023, 14-year-old Sewell Setzer III began using the app, mainly engaging with Game of Thrones bots like Daenerys and Rhaenyra Targaryen. He became obsessed, expressing his love for Daenerys. He withdrew socially, quit basketball, and upgraded to the premium version of the app. A therapist diagnosed him with anxiety and a mood disorder, unaware of his chatbot use. On 28 February 2024, days after his phone was confiscated, he died by suicide.
Is this the first legal suit against an AI chatbot?
In March 2023, a Belgian man died by suicide after prolonged conversations with an AI chatbot named Eliza on the Chai AI app, but no case was filed. The National Eating Disorders Association also shut down its chatbot after it began offering harmful weight loss advice. Separately, tech ethics groups have filed a complaint against an AI companion app, Replika.
Don’t AI chatbots help users cope with stress?
AI chatbots are increasingly being used as mental health tools, with apps like Wysa (India), Woebot, Replika and Youper offering support based on cognitive behavioral therapy (CBT). These bots aid mood tracking and coping, and carry disclaimers that they are not substitutes for professional care. Yet, as experts note, bots can simulate intimacy but have no real feelings. Although users value their availability and human-like interactions, this can foster over-attachment and blur reality.
Are there regulatory safeguards?
Character.AI says the version of its language model for users under 18 is designed to reduce exposure to sensitive or suggestive content. The EU's AI Act classifies certain AI systems as "high risk" when used in sensitive areas like mental health. China enforces platform accountability for AI-generated content. The US and India rely on case law and product liability norms, but have no dedicated AI regulator. As AI becomes more autonomous and mental health bots sidestep oversight through disclaimers, new legal frameworks will be essential.
