Google is apparently embarrassed by its AI Overviews. After a deluge of dunks and memes over the past week mocking the poor quality and outright misinformation coming from the tech giant's underbaked new AI-powered search feature, the company issued a mea culpa on Thursday. Google – a company whose name is synonymous with searching the web, whose brand centers on "organizing the world's information" and putting it at users' fingertips – actually wrote in a blog post that "some odd, inaccurate or unhelpful AI Overviews certainly did show up."
The admission of failure, penned by Google VP and Head of Search Liz Reid, reads as a testament to how the drive to cram AI technology into everything has now made Google worse.
In the post, titled "About last week" (did this get past PR?), Reid spells out the many ways its AI Overviews make mistakes. While they don't "hallucinate" or make things up the way other large language models (LLMs) may, she says, they can get things wrong "for other reasons," like "misinterpreting queries, misinterpreting a nuance of language on the web, or not having a lot of great information available."
Reid also noted that some of the screenshots shared on social media over the past week were faked, while others were for nonsensical queries, like "How many rocks should I eat?" – something no one had ever really searched for before. Since there's little factual information on the topic, Google's AI guided a user to satirical content. (In the case of the rocks, the satirical content had been published on the website of a geological software provider.)
It's worth pointing out that if you had googled "How many rocks should I eat?" and been presented with a set of unhelpful links, or even a jokey article, you wouldn't be surprised. What people are reacting to is the confidence with which the AI spouted back that "geologists recommend eating at least one small rock per day" as if it were a factual answer. It may not be a "hallucination," in technical terms, but the end user doesn't care. It's absurd.
What's also unsettling is that Reid says Google "tested the feature extensively before launch," including with "robust red-teaming efforts."
Does no one at Google have a sense of humor, then? Did no one think of prompts that would generate bad results?
In addition, Google downplayed the AI feature's reliance on Reddit user data as a source of knowledge and truth. Although people have regularly appended "reddit" to their searches for so long that Google finally made it a built-in search filter, Reddit is not a body of factual knowledge. And yet the AI would point to Reddit forum posts to answer questions, without understanding when first-hand Reddit knowledge is helpful and when it is not – or worse, when it's a troll.
Reddit today is making bank by offering its data to companies like Google, OpenAI and others to train their models, but that doesn't mean users want Google's AI deciding when to search Reddit for an answer, or suggesting that someone's opinion is a fact. There's nuance to learning when to search Reddit, and Google's AI doesn't understand that yet.
As Reid admits, "forums are often a great source of authentic, first-hand information, but in some cases can lead to less-than-helpful advice, like using glue to get cheese to stick to pizza," she said, referring to one of the AI feature's more spectacular failures over the past week.
Google AI overview suggests adding glue to get cheese to stick to pizza, and it turns out the source is an 11-year-old Reddit comment from user F*cksmith pic.twitter.com/udpabsakeo
– Peter Yang (@petergyang) May 23, 2024
If last week was a disaster, though, at least Google is iterating quickly as a result – or so it says.
The company says it has looked at examples from its AI Overviews and identified patterns where it could do better, including building better detection mechanisms for nonsensical queries, limiting the use of user-generated content in responses that could offer misleading advice, adding triggering restrictions for queries where AI Overviews were not proving helpful, not showing AI Overviews for hard news topics, "where freshness and factuality are important," and adding additional triggering refinements to its protections for health searches.
With AI companies building ever-improving chatbots every day, the question is not whether they will ever outperform Google Search at helping us understand the world's information, but whether Google Search will ever catch up on AI to challenge them in return.
As ridiculous as Google's errors may be, it's too soon to count it out of the race – especially given the massive scale of Google's beta-testing crew, which is essentially anybody who uses search.
"There's nothing quite like having millions of people using the feature with many novel searches," said Reid.