In the case of AI Overviews recommending a pizza recipe that contains glue, drawing from a joke post on Reddit, it is likely that the post seemed relevant to the user’s original query about cheese not sticking to pizza, but that something went wrong in the retrieval process, says Shah. “Just because it’s relevant doesn’t mean it’s right, and the generation part of the process doesn’t question that,” he says.
Likewise, if a RAG system comes across conflicting information, such as a policy handbook and an updated version of the same handbook, it is unable to determine which version to draw its response from. Instead, it may combine information from both to create a potentially misleading answer.
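To make the mechanism concrete, the following is a minimal Python sketch of a naive retrieval-augmented generation (RAG) pipeline. It is not Google’s system; the corpus, the keyword-overlap scoring, and the prompt format are all illustrative assumptions. The point it demonstrates is the one Shah and the example above describe: passages are selected purely because they look relevant to the query, and they are handed to the generator without any step that asks whether a source is a joke, outdated, or contradicted by another source.

```python
# Minimal sketch of a naive retrieval-augmented generation (RAG) pipeline.
# NOT Google's system; corpus, scoring, and prompt format are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Document:
    source: str
    text: str


# Toy corpus: a joke post and an ordinary cooking tip both mention the query terms.
CORPUS = [
    Document("reddit.com (joke post)",
             "You can also add about 1/8 cup of non-toxic glue to the sauce so the cheese sticks."),
    Document("cooking-site.example",
             "Let the pizza rest for a few minutes so the melted cheese sets before slicing."),
]


def retrieve(query, corpus, k=2):
    """Rank documents by crude keyword overlap with the query.
    Only relevance is measured; nothing flags satire or outdated advice."""
    terms = set(query.lower().split())
    return sorted(corpus,
                  key=lambda d: -len(terms & set(d.text.lower().split())))[:k]


def build_prompt(query, passages):
    """Stuff every retrieved passage into the prompt verbatim.
    The generator is asked to answer fluently from these sources;
    the generation step never questions whether they are correct."""
    context = "\n".join(f"- [{d.source}] {d.text}" for d in passages)
    return (f"Answer the question using only the sources below.\n"
            f"{context}\nQuestion: {query}\nAnswer:")


if __name__ == "__main__":
    query = "cheese not sticking to pizza"
    print(build_prompt(query, retrieve(query, CORPUS)))
```

A production system would rank passages with learned embeddings rather than keyword overlap, but the failure mode is the same: if the most “relevant” passage is a joke, or if two retrieved passages contradict each other, the prompt simply contains both and the model generates a fluent answer from whatever it was given.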
“The large language model generates fluent language based on the provided sources, but fluent language is not the same as correct information,” explains Suzan Verberne, a professor at Leiden University who specializes in natural language processing.
The more specific a topic is, the higher the chance of misinformation in a large language model’s output, she says, adding: “It is a problem in the medical domain, but also in education and science.”
According to a Google spokesperson, in many cases when AI Overviews returns incorrect answers it is because there is not much high-quality information available on the web to show for the query, or because the query most closely matches satirical sites or joke posts.
The spokesperson says that the vast majority of AI Overviews provide high-quality information and that many of the examples of bad answers were in response to uncommon queries, adding that AI Overviews containing potentially harmful, obscene, or otherwise unacceptable content appeared in response to fewer than one in every 7 million unique queries. Google continues to remove AI Overviews on certain queries in accordance with its content policies.
It’s not just bad training data
Although the pizza glue error is a good example of AI Overviews pointing to an unreliable source, the system can also generate misinformation from factually correct sources. Melanie Mitchell, an artificial intelligence researcher at the Santa Fe Institute in New Mexico, searched “How many Muslim presidents has the US had?” AI Overviews responded: “The United States has had one Muslim president, Barack Hussein Obama.”
While Barack Obama is not Muslim, making AI Overviews’ answer wrong, it drew its information from a chapter in an academic book titled Barack Hussein Obama: America’s First Muslim President? So not only did the AI system miss the entire point of the essay, it interpreted it in the exact opposite of the intended way, explains Mitchell. “There are a few problems here for the AI; one is finding a good source that’s not a joke, but another is interpreting what the source is saying correctly,” she adds. “This is something that AI systems have trouble doing, and it’s important to note that even when it does get a good source, it can still make errors.”