Technology giant Google has announced upgrades to its artificial intelligence technologies, just a day after rival OpenAI announced similar changes to its own offerings, as the two companies race to dominate a fast-emerging market in which humans can put questions to computer systems and get answers phrased like a human response.
It's part of a push to make AI systems such as ChatGPT not only faster, but more complete in their answers, without users having to ask several follow-up questions.
On Tuesday, Google demonstrated how AI-generated answers will be merged with some results from its influential search engine. At its annual developer conference, Google promised it would start using AI to provide summaries in response to questions and searches, with at least some of them labelled as AI at the top of the page.
For now, the Google-generated summaries are only available in the United States, and they will be written in conversational language.
Meanwhile, OpenAI's recently announced GPT-4o system will be capable of conversational responses in a more human-sounding voice.
It drew attention on Monday for being able to interact with users in natural conversation with very little delay, at least in demonstration mode. OpenAI researchers showed off new ChatGPT voice assistant capabilities, in particular using its new vision and voice features to talk a researcher through solving a math equation on a sheet of paper.
At one point, an OpenAI researcher told the chatbot he was in a good mood because he was demonstrating "how useful and incredible" it was.
ChatGPT replied: "Oh, stop it! You're making me blush!"
"It feels like something out of the movies," Sam Altman, OpenAI's CEO, wrote in a blog post. "Talking to a computer has never felt really natural for me; now it does."
From giving advice and analyzing charts to walking someone through a math equation and even cracking a joke, the new ChatGPT model, called GPT-4o, is presented as responding in real time with a natural, human tone.
AI answers aren't always accurate
But researchers in the technology and artificial intelligence sector warn that as people get information from more user-friendly AI systems, they must also be careful to watch for inaccurate or misleading responses to their queries.
And because AI systems often don't disclose how they reached a conclusion (companies want to protect the trade secrets behind how they work), they aren't designed to show as many raw results or as much source data as traditional search engines.
That means, according to Richard Lachman, they can be more inclined to provide answers that sound confident, even when they are incorrect.
The associate professor of digital media at Toronto Metropolitan University's RTA School of Media says these changes are a response to what consumers want from a search engine: a quick, definitive answer when they need information.
"We're not necessarily looking for 10 websites; we want an answer to a question. And this can do that," said Lachman.
However, he points out that when AI gives an answer to a question, that answer can be wrong.
Unlike more traditional search results, where multiple links and sources are displayed in a long list, it is very difficult to analyze the source of an answer given by an AI system such as ChatGPT.
In Lachman's view, it might seem easier for people to trust an answer from an AI chatbot if it convincingly comes across as human, making jokes or simulating emotions that create a feeling of comfort.
"That makes you more comfortable than you should be with the quality of the answers you're getting," he said.
Business sees momentum in AI
Here in Canada, at least one company working in artificial intelligence is excited about a more human-like interface for AI systems such as Google's or OpenAI's.
"Make no mistake, we're in a competitive arms race here when it comes to generative AI, and there's a huge amount of capital and innovation," said Duncan Mundell, with an artificial intelligence company based in Alberta.
"This simply opens the door to additional capabilities that we can exploit," he said of artificial intelligence in a general sense, mentioning products his company builds with AI, such as software that can predict the movement of wildfires.
He stressed that even if the technological upgrades aren't revolutionary in his opinion, they move artificial intelligence in a direction he welcomes.
"What OpenAI has done with this release brings us a step closer to human cognition, right?" said Mundell.
Researcher calls sentience claims "nonsense"
Upgrades to Google's or OpenAI's AI systems might remind science fiction fans of the highly conversational computer on Star Trek: The Next Generation, but a researcher at Western University says he considers the new upgrades decorative, rather than truly changing the way information is processed.
"A lot of the notable features of these new versions are, I suppose you could say, bells and whistles," said Luke Stark, assistant professor in the Faculty of Information and Media Studies at Western University.
"In terms of the capabilities of these systems to go beyond what they've been able to do so far … it's not a big jump," said Stark, who called the idea that a sentient artificial intelligence could exist with today's technology "kind of nonsense."
Companies pushing artificial intelligence innovations make it difficult to get clarity on "what these systems are good at and not so good at," he said.
It's a position echoed by Lachman, who says the lack of clarity will force users to be wary of what they read online in a new way.
"Right now, when you and I talk, I'm used to assuming that everything that sounds like a person is a person," he said, noting that human users tend to assume anyone who seems like another human will share the same basic understanding of how the world works.
But even if a computer sounds like a human, it won't have that knowledge, he says.
"It doesn't have that sense of a shared understanding of the basic rules of society. But it sounds like it does."