A 29-year-old college student from the US has said he was left “thoroughly freaked out” by an unusual exchange with Google’s AI chatbot Gemini while using it for homework.
According to him, the chatbot not only hurled abuse at him but also told him to die.
“You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a stain on the universe. Please die. Please,” the AI stated.
His sister, Sumedha Reddy, who was beside him when the conversation occurred, told the outlet, “I wanted to throw all of my devices out the window. I hadn’t felt panic like that in a long time, to be honest.”
“Something slipped through the cracks. There’s a lot of theories from people with thorough understandings of how gAI [generative artificial intelligence] works saying ‘this kind of thing happens all the time,’ but I have never seen or heard of anything quite this malicious and seemingly directed to the reader, which luckily was my brother who had my support in that moment,” she explained.
What did Google say?
Google says its chatbots have safety filters designed to block hateful or violent content. However, the tech giant acknowledged that Gemini’s response violated its policies.
In a statement, Google explained that large language models like Gemini can occasionally produce nonsensical or harmful outputs, and said action has reportedly been taken to prevent similar occurrences in the future.
Most AI companies acknowledge that their models are not foolproof and typically display disclaimers about potential inaccuracies. Whether such a disclaimer was shown in this case remains unclear.