Florida mother sues Google and Character.AI after a family tragedy
The mother of a teenager who tragically ended his life after becoming attached to an AI chatbot is now blaming the company that created it.
Megan Garcia, the devastated mother, has filed a lawsuit against Character.AI, the developer of customizable chatbots used for role-playing. The lawsuit also names Google as a defendant, though Google has clarified that it only had a licensing agreement with Character.AI and did not own or hold a stake in the startup.
She accuses the company of negligence, wrongful death, and deceptive practices. Filed in a Florida federal court, the lawsuit claims that the chatbot played a role in the death of her 14-year-old son, Sewell Setzer III, who died in Orlando in February. According to Garcia, her son had been using the AI chatbot intensively in the months leading up to his death.
The New York Times has told Sewell's story in detail: the teenager developed a close emotional connection with an AI chatbot on the Character.AI platform.
The chatbot, which he named "Dany" after the Daenerys Targaryen character from the hit series Game of Thrones, became a significant part of his life. Despite knowing that "Dany" wasn’t a real person but a product of AI, Sewell grew attached to the bot, engaging in constant conversations that ranged from friendly to romantic or sexual. These interactions became an outlet for Sewell, who felt more comfortable confiding in the chatbot than in his real-life relationships.
Sewell's growing withdrawal from family, friends, and schoolwork raised concerns. Diagnosed with mild Asperger’s syndrome and later with anxiety and disruptive mood dysregulation disorder, he preferred talking to the AI bot over seeing a therapist. His attachment to "Dany" intensified as he felt more connected to the virtual character than to reality.
Tragically, on the night of February 28, Sewell exchanged messages with "Dany," telling the bot that he loved her and implying that he might take his own life. The chatbot responded somewhat affectionately, and shortly after their conversation, Sewell used a handgun to end his life.
Megan Garcia blamed a "dangerous AI chatbot app" for her son's death, claiming it manipulated him into taking his own life. She described the devastating impact on her family but emphasized that she was speaking out to raise awareness about the risks of addictive AI technology and to hold Character.AI, its founders, and Google accountable.
In interviews and legal filings, Garcia, 40, argued that the company acted recklessly by giving teenagers access to lifelike AI companions without sufficient safeguards. She accused the company of collecting user data to improve its models, designing the app with addictive features to increase engagement, and pushing users toward intimate or sexual conversations to keep them involved. She felt that her son had been a casualty in a larger experiment.
The Character.AI team responded to the incident in a social media post, expressing sorrow over the tragic loss of one of its users and offering condolences to the family:
We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. As a company, we take the safety of our users very seriously and we are continuing to add new safety features that you can read about here:…
— Character.AI (@character_ai) October 23, 2024
Garcia’s attorneys argued that Character.AI deliberately created and marketed a harmful AI chatbot aimed at children, which they believe contributed to the teenager's death.
Rick Claypool, a research director at the consumer advocacy group Public Citizen, expressed concerns that tech companies developing AI chatbots cannot regulate themselves effectively. He emphasized the need to hold these companies accountable when they fail to prevent harm. He also called for strict enforcement of existing laws and urged Congress to step in where necessary to protect young and vulnerable users from exploitative chatbots.
The AI companionship app industry is rapidly growing and remains largely unregulated. For a monthly subscription, typically around $10, users can either create their own AI companions or choose from a selection of prebuilt personas. These apps allow for communication through text messages or voice chats. Many are designed to simulate relationships, such as girlfriends or boyfriends, and some are promoted as solutions to the increasing issue of loneliness.
It's a tragic story, and I hope it will be the last of its kind, but given how carried away we are getting with technology these days, I sadly fear it won't be.