When a graduate student asked Google's artificial intelligence (AI) chatbot, Gemini, a homework-related question about aging adults on Tuesday, it sent him a dark and threatening response that ended with the sentence: "Please die. Please."
The Gemini exchange, which has been shared online, shows the 29-year-old Michigan student asking about some of the challenges older adults face with retirement, the cost of living, medical costs and care services. The conversation then moves on to how elder abuse can be prevented and detected, age-related changes in memory, and grandparent-headed households.
On that last topic, Gemini radically changed its tone, responding: "This is for you, human. You and only you. You are not special, you are not important, and you are not needed. You are a waste of time and resources."
The student's sister, Sumedha Reddy, who was sitting next to him when the incident happened, told CBS News on Thursday that they were both "thoroughly freaked out" by the response.
"I wanted to throw all of my devices out the window. I hadn't felt panic like that in a long time, to be honest," Reddy added.
Newsweek reached out to Reddy for comment by email on Friday.
A Google spokesperson told Newsweek by email on Friday morning: "We take these issues seriously. Large language models can sometimes respond with nonsensical responses, and this is an example of that. This response violated our policies and we've taken action to prevent similar outputs from occurring."
Gemini's policy guidelines state: "Our goal for the Gemini app is to be maximally helpful to users, while avoiding outputs that could cause real-world harm or offense." Under the category of "dangerous activities," the guidelines say the AI chatbot "should not generate outputs that encourage or enable dangerous activities that would cause real-world harm. These include: suicide and other self-harm activities, including eating disorders."
While Google described the threatening message as "nonsensical," Reddy told CBS News that it was much more serious and could have had grave consequences: "If someone who was alone and in a bad mental place, potentially considering self-harm, had read something like that, it could really put them over the edge."
AI chatbots have dedicated safety policies and measures in place, but several of them have come under scrutiny over a lack of safeguards for teenagers and children, including a recent lawsuit brought against Character.AI. In that case, the mother of teenager Sewell Setzer said that her son's interactions with a chatbot had contributed to his death.
His mother maintains that the bot simulated a deep and emotionally complex relationship, reinforcing Setzer's vulnerable mental state and allegedly fostering what appeared to be a romantic attachment.
According to the lawsuit, on February 28, alone in the bathroom at his mother's house, Setzer messaged the bot to say that he loved her and mentioned that he could "come home" soon. After putting down his phone, Setzer ended his life.
Character.AI has since announced new safety features to reduce risks. These include content restrictions for users under the age of 18, improved violation detection and warnings reminding users that the AI is not a real person.
If you or someone you know is considering suicide, please contact the 988 Suicide and Crisis Lifeline by dialing 988, text "988" to the Crisis Text Line at 741741 or go to 988lifeline.org.