What you need to know
- Last week, Google's AI Overviews feature was spotted generating misleading answers, notably recommending eating rocks and glue.
- The company says a data void, or information gap, on certain topics on the web largely contributed to the cases where the feature generated misleading search results.
- Google has improved the tool with better detection mechanisms for nonsensical queries that should not display an AI Overview.
Every big tech company has jumped quickly into the great AI race. But what is often forgotten or ignored is that pace does not necessarily equate to flawless execution. Microsoft has mostly had smooth sailing with the technology after making a multibillion-dollar investment in OpenAI. Its success in the category is well reflected in its latest earnings call, and it is now the most valuable company in the world, ahead of Apple, with a market capitalization of more than $3 trillion.
Unfortunately, Google's blatant attempt to follow in Microsoft's footsteps and chase its AI success has apparently fallen short, if last week's AI Overviews debacle is anything to go by. The feature was spotted bizarrely recommending eating rocks and glue, and potentially even suicide, even though Google recently acquired exclusive rights to Reddit's content to feed its AI.
Did Google's AI Overviews go off the rails?
Google recently published a blog post explaining what happened with the AI Overviews feature and what contributed to its misleading information and recommendations for certain queries. Head of Google Search Liz Reid said AI Overviews are different from chatbots and other widely available LLM products, which generate responses based on their training data. Instead, the feature is powered by a customized language model integrated with Google's core web ranking systems. This way, the feature presents well-organized, high-quality search results to users, including relevant links.
According to the Google Search lead:
"This means that AI Overviews generally don't 'hallucinate' or make things up in the ways that other LLM products might. When AI Overviews get it wrong, it's usually for other reasons, such as not having a lot of great information available."
Addressing the erroneous and "wild" search results from the AI Overviews feature, Reid said the feature was optimized for accuracy and went through extensive testing, including robust red-teaming efforts, before it shipped. The search lead also indicated that the team had spotted "nonsensical" searches seemingly aimed at producing erroneous results.
Reid also indicated that some screenshots widely shared on social media platforms had been fabricated, leading users to believe Google had returned dangerous results for topics such as smoking during pregnancy. The company says this is not the case and recommends that users try running those searches with the tool themselves to confirm.
Google did admit that the feature served inaccurate results for certain searches. Interestingly, Google says "these were generally queries that people don't typically do." The company also indicated that before screenshots of people using the feature to ask "How many rocks should I eat?" went viral, practically no one asked that question.
In addition, Google says there is limited quality content covering such topics, referring to the phenomenon as a data void. "In other examples, we saw AI Overviews that featured sarcastic or troll-y content from discussion forums," such as the suggestion to use glue to get cheese to stick to pizza.
What is Google doing to fix these critical problems?
Google has highlighted several measures that will not necessarily fix queries one by one, but will instead address broad sets of queries through updates and "a dozen technical improvements" to its core search systems, including:
- Google has built better detection mechanisms for nonsensical queries that should not show an AI Overview, and has limited the inclusion of satire and humor content in search results.
- It is also limiting the use of user-generated content in responses to queries, to promote quality search results.
- Google has added triggering restrictions for queries where AI Overviews were not proving helpful.
- The company says it already has strong guardrails in place for news and health topics.
Finally, Google also said it will keep track of feedback on the tool's user experience, along with external reports, to inform its decisions on how to improve the experience further.