The bot’s purported reply? An achingly straightforward “Please do.” Garcia saw this not as a teenage phase but as a deadly deception that took her son’s life.
Character.AI, she says, marketed the bot to kids like her son without the necessary protections, leading to a risky and uncontrolled emotional dependence.
“A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life,” according to Garcia’s complaint.
Like many teens, she argues, Sewell struggled to comprehend that he was interacting with code rather than a real person.
In response, Character.AI said they “take user safety very seriously” and offered their regrets but denied any blame.