
Google has made changes to its new AI Overviews search feature after it gave strange and inaccurate results, such as telling people to put glue on pizza and eat rocks. The company has implemented new rules to try to prevent its AI from giving bad information.
Last week, Google rolled out AI Overviews to everyone in the United States. The new feature uses artificial intelligence to summarize information from websites and give direct answers to search queries. But many people quickly found examples of the AI giving very strange and incorrect answers.
In one case, when someone searched “how many rocks should I eat?”, the AI Overview said that eating rocks could offer health benefits, even though that is not true. In another example, the AI told people to use glue to make cheese stick to pizza better.
Liz Reid, the head of Google Search, wrote a blog post on Thursday explaining what went wrong. She said that, for the rocks question, almost no one had ever searched for it before. One of the only web pages on the subject was a joke article, but the AI treated it as serious.
“Prior to these screenshots going viral, practically no one asked Google that question,” Reid wrote. “There isn’t much web content that seriously contemplates that question, either. This is what is often called a ‘data void’ or ‘information gap,’ where there’s a limited amount of high quality content about a topic.”
Reid said the glue-on-pizza answer came from a discussion forum post. She explained that while forums often contain useful first-hand information, they can also contain bad advice that the AI picked up.
She also defended Google, saying that some of the worst supposed examples spreading on social media, such as the AI telling pregnant women they could smoke, were faked screenshots, which they indeed were.
The search executive said that Google’s AI Overviews are designed differently from chatbots, as they are integrated into the company’s core web ranking systems to surface high-quality results. For this reason, she argued, the AI generally does not “hallucinate” information the way other large language models do, and its accuracy rate is on par with featured snippets, another Google Search feature.
Reid acknowledged that “some odd, inaccurate or unhelpful AI Overviews certainly did show up,” highlighting areas for improvement. Google has now made more than a dozen changes to try to address these problems.
Google has improved its AI’s ability to recognize silly or, as Reid puts it, “nonsensical” queries that it should not answer. It has also made the AI rely less on forum posts and social media content that could be misleading.
For serious subjects such as health and news, Google already had stricter rules about when the AI could give direct answers. Now it has added even more restrictions, especially for health-related searches.
Going forward, Google says it will keep an eye on AI Overviews and quickly fix any problems. “We’ll continue to improve when and how we show AI Overviews and strengthen our protections, including for edge cases,” Reid wrote. “We are very grateful for the ongoing feedback.”