
AI Chatbot Threatens Student Who Asked It For Help With Homework

Vidhay Reddy is a college student in Michigan who engaged in a conversation with Google's AI chatbot, Gemini. Reddy expected some heartwarming and uplifting counsel after asking Gemini about the "challenges and solutions" for aging adults. He received a threatening and grim message instead.


"This is for you, human. You and only you. You are not special, you are not important, and you are not needed," reads the Gemini message. What began as a homework query quickly turned into a science-fiction script. "You are a waste of time and resources," the message continued. The chatbot then called Vidhay a "burden on society," a "drain on the earth," a "blight on the landscape," and a "stain on the universe." It concluded with: "Please die. Please."

Vidhay and his sister, Sumedha, were shaken by the chatbot's response. "This seemed very direct. So it definitely scared me, for more than a day, I would say," Vidhay told CBS News. Sumedha shared the same sentiment. "I wanted to throw all of my devices out the window," she said. "I hadn't felt panic like that in a long time to be honest."

Sumedha added that "something slipped through the cracks," and while many others say this kind of "behavior" happens all the time, Gemini's message still disturbed her. "I have never seen or heard of anything quite this malicious and seemingly directed to the reader," she said.

Harmful AI Interaction

Vidhay believes that companies such as Google need to be held accountable for these types of interactions between users and chatbots. "I think there's the question of liability of harm," he said. "If an individual were to threaten another individual, there may be some repercussions or some discourse on the topic."

In response, Google shared a statement with CBS News. "Large language models can sometimes respond with non-sensical responses, and this is an example of that," reads the statement. "This response violated our policies and we've taken action to prevent similar outputs from occurring."

This latest incident is just one of many in which AI has produced controversial results. From erroneous and harmful medical advice to outright errors, chatbots such as Gemini and ChatGPT have been at the center of the storm. Earlier this year, a 14-year-old died by suicide after reportedly becoming obsessed with a Character.AI chatbot.

Each of these cases supports Vidhay's argument for holding tech companies accountable for any damage caused by their AI products. Whether that accountability materializes, or chatbot technology simply becomes safer and more reliable, only time will tell.