Topline
One of the key new features unveiled at last week's Google I/O conference – AI-generated summaries of search results – has become the subject of controversy and jokes on social networks, after users appeared to show the feature displaying misleading answers and, in some cases, dangerous misinformation.
Alphabet CEO Sundar Pichai has acknowledged that AI hallucination is an unsolved problem.
Key facts
Several Google users, including journalists, have shared what appear to be multiple examples of the AI summary feature, called "AI Overview," citing questionable sources, such as Reddit posts written as jokes, or failing to understand that articles from The Onion are not factual.
Computer scientist Melanie Mitchell shared an example of the feature repeating, in one of its responses, a right-wing conspiracy theory claiming that President Barack Obama is Muslim, in what appears to be a failed attempt to summarize a book from the Oxford University Press research platform.
In other cases, the AI summary appears to plagiarize text from blogs without removing or modifying the authors' mentions of their children.
Many other examples shared on social networks show the feature making basic errors, such as failing to recognize African countries starting with the letter "K" and suggesting that pythons are mammals – results that Forbes was able to reproduce.
Other inaccurate results that went viral – such as the Obama claim or the suggestion to put glue on pizza – no longer display an AI summary, instead surfacing press articles about the AI search errors.
Crucial quote
Forbes contacted Google about the results, and a company spokesperson said the errors appeared on "generally very rare queries and are not representative of the experiences of most people."
What we don’t know
We do not know exactly what is causing the problem, how widespread it is, or whether Google may once again be forced to hit the brakes on the rollout of an AI feature. Large language models like OpenAI's GPT-4 or Google's Gemini – which power the AI search summary feature – sometimes tend to hallucinate: a phenomenon in which a language model generates completely false information without any warning, sometimes in the middle of otherwise accurate text. But the feature's problems could also stem from the sources Google chooses to summarize, such as satirical articles from The Onion and troll posts on social platforms like Reddit. In an interview published by The Verge this week, Google CEO Sundar Pichai addressed the question of hallucinations, calling them an "unsolved problem" and declining to commit to an exact timeline for a solution.
Key background
This is the second major Google AI launch this year to come under scrutiny for providing inaccurate results. Earlier this year, the company publicly rolled out Gemini, its competitor to ChatGPT and the image generator DALL-E. However, the image generation feature was quickly criticized for producing historically inaccurate images, such as Black Vikings, racially diverse Nazi soldiers and a female pope. Google was forced to issue an apology and paused Gemini's ability to generate images of people. After the controversy, Pichai sent an internal memo saying he knew that some of Gemini's responses had "offended our users and shown bias," adding: "To be clear, that's completely unacceptable and we got it wrong."