Google recently unveiled plans to integrate its search engine with artificial intelligence (AI).
The company is rolling out a new search feature called AI Overviews, which generates a summary of the topic a user searches for and displays links to learn more. Traditional search results still appear below, but the AI Overviews, as Google calls them, will parse a range of information to give users a faster answer. The new feature has raised concerns among some web publishers, who worry it will deal a serious blow to their sites' traffic.
Currently, AI Overviews do not appear for every topic, but users based in the United States will begin to see them this week. Google expects the feature to be available to more than a billion people by the end of the year.
Google’s idea is great but needs further vetting, according to Kristian Hammond, professor of computer science at the McCormick School of Engineering and director of both the Center for Advancing Safety of Machine Intelligence and the Master of Science in Artificial Intelligence program. An AI pioneer, he also co-founded the tech startup Narrative Science, a platform that used AI to turn big data into prose. Narrative Science was acquired by Salesforce in late 2021.
Hammond recently shared his key takeaways from Google’s announcement with Northwestern Now.
Integrating AI into search is a great idea, but releasing it before it is truly ready could have consequences
“Integrating AI into search is an incredibly good idea, but it is not ready. Because it is not ready, Google is essentially turning the entire world into beta testers for its product. Search is at the heart of how we use the internet daily, and now this new integrated search is being imposed on the world. Moving too fast could be bad for the product, bad for users and bad for people in general.
“In terms of the technology at the heart of the model, it has not yet reached a point where we can say with certainty that there are enough guardrails on language models to keep them from stating falsehoods. It still has not been tested or vetted enough. Search will either block users from content or serve content to users without letting them decide which source is more or less authoritative.”
We will not know what is being blocked
“With language models like Gemini and ChatGPT, developers have put a great deal of work into excluding or limiting dangerous, offensive or inappropriate content. They block content they believe could be objectionable. Without knowing the decision-making process behind labeling content as appropriate or inappropriate, we will not know what is blocked or allowed. That, in itself, is dangerous.”
Consequences for content creators
“The new search will provide information from other websites without driving users to those sites. Users will not visit the source sites that supply the information and allow their content to be used. Without traffic, those sites will be threatened. The people who provide the content that trains the models will gain nothing.”
The feature war is moving too fast
“We are in the midst of a war. Tech companies like Google are rolling out new features that are not massive innovations. It is not that the technology is evolving too quickly; it is the features hung on that technology that are moving fast. When a new feature arrives, we are distracted by it until the next one is released. It is a bunch of different companies slamming their features against one another. It ends up being a battle between tech companies, and we are the ones caught in the middle. There is never a moment when we can pause and assess these products.”