Last month, when Google introduced its new AI-powered search tool, called AI Overviews, the company seemed confident that it had tested it sufficiently, noting in the announcement that "people have already used AI Overviews billions of times through our experiment in Search Labs." The tool doesn't just return links to webpages, as in a typical Google search; instead it returns an answer that it has generated based on various sources, which it links to below the answer. But immediately after the launch, users began posting examples of extremely wrong answers, including a pizza recipe that called for glue and the interesting fact that a dog has played in the NBA.
While the pizza recipe is unlikely to convince anyone to reach for the Elmer's glue, not all of the AI's extremely wrong answers are so obvious, and some have the potential to be quite harmful. Renée DiResta has been tracking online misinformation for many years as the technical research manager at the Stanford Internet Observatory, and she has a new book out about online propagandists who "turn lies into reality." She has studied the spread of medical misinformation via social media, so IEEE Spectrum spoke with her about whether AI-powered search is likely to bring an onslaught of erroneous medical advice to unwary users.
I know you've been tracking misinformation on the web for many years. Do you expect the introduction of AI-augmented search tools like Google's AI Overviews to make the situation worse or better?
Renée DiResta: It's a really interesting question. There are a couple of policies that Google has had in place for a long time that appear to be in tension with what's coming out of AI-generated search. That's made me feel that Google is trying to keep up with where the market has gone. There has been an incredible acceleration in the release of generative AI tools, and we are seeing the big tech incumbents trying to make sure they stay competitive. I think that's one of the things that's happening here.
We have long known that hallucinations are something that happens with large language models. That's not new. It's the deployment of them in a search capacity that I think has been rushed and ill-considered, because people expect search engines to give them authoritative information. That's the expectation you have of search, whereas you might not have that expectation of social media.
There are plenty of examples of comically bad AI search results, things like how many rocks you should eat per day (an answer that was drawn from an Onion article). But I'm wondering whether we should be worried about more serious medical misinformation. I came across a blog post about Google's AI Overviews responses on stem-cell treatments. The problem seemed to be that the AI search tool was sourcing its answers from disreputable clinics that were offering unproven treatments. Have you seen other examples of that kind of thing?
DiResta: I have. It's returning information synthesized from the data it was trained on. The problem is that it does not seem to be adhering to the same standards that have long gone into how Google thinks about returning search results for health information. What I mean by that is that Google has, for upwards of 10 years at this point, had a search policy called Your Money or Your Life. Are you familiar with that?
I don't think so.
DiResta: Your Money or Your Life acknowledges that for queries related to finance and health, Google has a responsibility to hold search results to a very high standard of care, and it's paramount to get the information right. People come to Google with sensitive questions, and they're looking for information to make materially impactful decisions about their lives. They're not there for entertainment when they're asking how to respond to a new cancer diagnosis, for example, or what sort of retirement plan they should subscribe to. So you don't want content farms and random posts and garbage to be the results that are returned. You want reputable search results.
The Your Money or Your Life framework has informed Google's work on these high-stakes topics for quite some time. And that's why I find it disturbing that people are seeing AI-generated search results regurgitating clearly wrong health information from low-quality sites that may have ended up in the training data.
So it seems that AI Overviews isn't following that same policy, or at least that's how it appears from the outside?
DiResta: That's how it appears from the outside. I don't know how they're thinking about it internally. But those screenshots you're seeing: a lot of these instances are being traced back to an isolated social media post, or to a clinic that's disreputable but exists. The material is out there on the Internet. It's not simply making things up. But it's also not returning what we would consider a high-quality result when formulating its response.
I saw that Google has responded to some of the problems with a blog post saying that it's aware of these poor results and is trying to make improvements. And I can read you the one bullet point that addressed health. It said, "For topics like news and health, we already have strong guardrails in place. In the case of health, we launched additional triggering refinements to enhance our quality protections." Do you know what that means?
DiResta: That blog post is an explanation that [AI Overviews] isn't simply hallucinating; the fact that it points to URLs is supposed to be a guardrail, because it enables the user to go and follow the result back to its source. That's a good thing. They should include those sources for transparency and so that outsiders can review them. However, it also puts a lot of onus on the public, given the trust that Google has built up over time by returning high-quality results in its health information search rankings.
I know one topic you've tracked over the years has been misinformation about vaccine safety. Have you seen any evidence of that kind of misinformation making its way into AI search?
DiResta: I haven't, though I imagine outside research teams are now testing results to see what turns up. Vaccines have been such a focus of the conversation around health misinformation for quite some time that I imagine Google has had people looking at that topic specifically in internal reviews, whereas some of these other topics might be less front of mind for the quality teams tasked with checking whether bad results are being returned.
What do you think Google's next moves should be to prevent medical misinformation in AI search?
DiResta: Google has a perfectly good policy to follow. Your Money or Your Life is a solid ethical guideline to incorporate into this manifestation of the future of search. So it's not that I think new ethical ground needs to be broken. I think it's more about making sure that the ethical framework that already exists remains foundational to the new AI search tools.