Topline
Google said Thursday that it has restricted its new AI-generated search results after the tool produced “strange, inaccurate or useless” summaries that went viral on social media, the latest in a series of high-profile AI flubs for the search giant after its Gemini AI image generator produced historically inaccurate results earlier this year.
Key Facts
Google’s head of Search, Liz Reid, said in a blog post that the company would scale back the use of its newly launched AI search tool, AI Overviews, and implement additional guardrails for the technology after it was automatically rolled out to U.S. users two weeks ago.
The tool, designed to improve search, uses generative AI to summarize search queries at the top of the results page, and while Reid said the feature has been a valuable addition to Google Search, she acknowledged several high-profile failures that went viral on social media and highlighted areas for improvement: the search engine told people to eat rocks, to put glue on their pizza and that Barack Obama was Muslim, to name a few.
Reid said Google had updated its systems to limit the AI’s use of user-generated content, such as social media and forum posts, when generating responses, because such content is more likely to “offer misleading advice.”
The search giant will also pause AI-generated summaries on certain subjects where the stakes are higher, notably health-related queries, and limit summaries for “nonsensical,” humorous and satirical queries that appear designed to elicit unsafe or unhelpful responses.
The company also “added triggering restrictions for queries where AI Overviews were not proving as helpful,” Reid said, adding that Google has “made more than a dozen technical improvements” overall.
Despite the viral flubs, Reid defended the feature, saying AI Overviews have led to “higher satisfaction” among users, with people asking “longer, more complex questions that they know Google can now help with.”
What Happened With Google’s AI Overviews?
Google, by far the world’s most popular search engine, automatically rolled out AI Overviews to U.S. users earlier this month. The rollout, which pushes the typical links associated with a search lower down the page, below an AI-generated answer, was automatic, and because the tool cannot be turned off, it sparked a degree of backlash among users. More concerning for Google were the inaccurate, strange and sometimes ridiculous summaries that began to spread on social media, much like the spread of inaccurate images produced by its Gemini tool earlier this year.

While many of the viral posts on social media were genuine, such as the tool telling people with kidney stones to drink liters of urine to help pass a kidney stone, or claiming that eating rocks is good for your health, a good number were not. Fact-checkers have debunked a variety of doctored or fake screenshots of summaries circulating on social media, such as answers saying doctors recommend pregnant people smoke 2-3 cigarettes per day, suggesting depressed users jump off the Golden Gate Bridge, or providing instructions for self-harm, and Reid said Google encourages “anyone encountering these screenshots to do a search themselves to check.” Many problematic queries are the result of a “data void” or “information gap,” Reid said, occurring in areas where there is little reliable information, which means satirical content, such as the recommendation to eat rocks, can slip through.
Crucial Quote
“At the scale of the web, with billions of queries coming in every day, there are bound to be some oddities and errors,” Reid said. “We’ve learned a lot over the past 25 years about how to build and maintain a high-quality search experience, including how to learn from these errors to make Search better for everyone. We’ll keep improving when and how we show AI Overviews and strengthening our protections, including for edge cases, and we’re very grateful for the ongoing feedback.”