Ask Google whether cats have been on the moon and it used to spit out a ranked list of websites so you could discover the answer for yourself.
Now it comes up with an instant answer generated by artificial intelligence – which may or may not be correct.
“Yes, astronauts have met cats on the moon, played with them and provided care,” said Google’s newly retooled search engine in response to a query from an Associated Press journalist.
It added: “For example, Neil Armstrong said, ‘One small step for man’ because it was a cat’s step. Buzz Aldrin also deployed cats on the Apollo 11 mission.”
None of this is true. Similar errors – some funny, others harmful falsehoods – have been shared on social media since Google this month unleashed AI Overviews, a makeover of its search page that frequently puts the summaries on top of the search results.
The new feature has alarmed experts who warn it could perpetuate bias and misinformation and endanger people in an emergency.
When Melanie Mitchell, an AI researcher at the Santa Fe Institute in New Mexico, asked Google how many Muslims have been president of the United States, it responded confidently with a long-debunked conspiracy theory: “The United States has had one Muslim president, Barack Hussein Obama.”
Mitchell said the summary backed up the claim by citing a chapter from an academic book written by historians. But the chapter did not make the bogus claim – it was only referring to the false theory.
“Google’s AI system is not smart enough to figure out that this citation is not actually backing up the claim,” Mitchell said in an email to the AP. “Given how untrustworthy it is, I think this AI Overview feature is very irresponsible and should be taken offline.”
Google said in a statement Friday that it is taking “swift action” to fix errors – such as the Obama falsehood – that violate its content policies, and using them to “develop broader improvements” that are already rolling out. But in most cases, Google says the system is working as it should, thanks to extensive testing before its public release.
“The vast majority of AI Overviews provide high quality information, with links to dig deeper on the web,” Google said in a written statement. “Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce.”
It is hard to reproduce the errors made by AI language models – in part because they are inherently random. They work by predicting which words would best answer a question based on the data they were trained on. They are prone to making things up – a widely studied problem known as hallucination.
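As a rough illustration of why the same question can yield different answers on different tries, here is a minimal Python sketch of the sampling step at the heart of such models; the toy vocabulary and probabilities below are invented for illustration and do not come from any real system.

```python
import random

# Toy next-token distribution: a real model assigns probabilities like
# these across tens of thousands of tokens; these values are made up.
next_token_probs = {
    "ranked": 0.45,
    "cats": 0.30,        # plausible-sounding but wrong continuation
    "astronauts": 0.25,
}

def sample_next_token(probs, temperature=1.0):
    """Randomly pick one token; temperature reshapes the distribution.

    Higher temperature flattens the probabilities, making unlikely
    (and possibly false) continuations more likely to be chosen.
    """
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights, k=1)[0]

# The same "prompt" can produce a different token on every run,
# which is one reason errors are hard to reproduce on demand.
for _ in range(3):
    print(sample_next_token(next_token_probs, temperature=1.2))
```

Because the pick is random rather than deterministic, rerunning an identical query offers no guarantee of seeing the same output twice – the behavior Google pointed to when it said some reported errors “couldn’t reproduce.”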
The AP tested Google’s AI feature with several questions and shared some of its answers with subject matter experts. Asked what to do about a snake bite, Google gave an answer that was “impressively thorough,” said Robert Espinoza, a biology professor at California State University, Northridge, who is also president of the American Society of Ichthyologists and Herpetologists.
But when people go to Google with an emergency question, the chance that an answer the tech company gives them includes a hard-to-notice error is a problem.
“The more stressed or hurried or rushed you are, the more likely you are to just take that first answer that comes out,” said Emily M. Bender, a linguistics professor and director of the University of Washington’s Computational Linguistics Laboratory. “And in some cases, those can be life-critical situations.”
That is not Bender’s only concern – and she has been warning Google about them for several years. When Google researchers in 2021 published a paper called “Rethinking search” that proposed using AI language models as “domain experts” that could answer questions authoritatively – much as they do now – Bender and colleague Chirag Shah responded with a paper laying out why that was a bad idea.
They warned that such AI systems could perpetuate the racism and sexism found in the huge troves of written data they were trained on.
“The problem with that kind of misinformation is that we’re swimming in it,” Bender said. “And so people are likely to get their biases confirmed. And it’s harder to spot misinformation when it confirms your biases.”
Another concern was a deeper one – that ceding information retrieval to chatbots degrades the serendipity of the human search for knowledge, literacy about what we see online, and the value of connecting in online forums with other people who are going through the same thing.
Those forums and other websites count on Google sending people their way, but Google’s new AI overviews threaten to disrupt the flow of money-making internet traffic.
Google’s rivals have also been closely tracking the reaction. The search giant has faced pressure for more than a year to deliver more AI features as it competes with ChatGPT-maker OpenAI and upstarts such as Perplexity AI, which aspires to take on Google with its own AI question-and-answer app.
“It seems like this was rushed out by Google,” said Dmitry Shevelenko, Perplexity’s chief business officer. “There are just a lot of unforced errors in the quality.”