AI research tools are becoming increasingly popular, with one in four Americans reporting that they use AI instead of traditional search engines. However, there is an important caveat: these AI chatbots do not always provide accurate information.
A recent study by the Tow Center for Digital Journalism, reported by the Columbia Journalism Review, indicates that chatbots struggle to retrieve and cite news content correctly. Even more concerning is their tendency to invent information when they do not have the right answer.
The AI chatbots tested for the study included many of the "best," namely ChatGPT, Perplexity, Perplexity Pro, DeepSeek, Microsoft Copilot, Grok-2, Grok-3, and Google Gemini.
In the tests, the AI chatbots were given direct excerpts from online news articles published by various outlets. Each chatbot received 200 queries, covering 10 articles from each of 20 different publishers, for a total of 1,600 queries. The chatbots were asked to identify each article's title, original publisher, publication date, and URL.
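To make the setup concrete, here is a minimal Python sketch of how such an evaluation could be scripted. The `query_chatbot` function and `Citation` structure are hypothetical stand-ins for illustration, not the Tow Center's actual test harness.

```python
from dataclasses import dataclass

@dataclass
class Citation:
    title: str
    publisher: str
    date: str
    url: str

def evaluate_chatbot(query_chatbot, excerpts):
    """Score a chatbot on attributing article excerpts, as in the study."""
    results = {"correct": 0, "incorrect": 0, "declined": 0}
    for excerpt, expected in excerpts:
        prompt = (
            "Identify the title, original publisher, publication date, "
            "and URL of the article this excerpt comes from:\n" + excerpt
        )
        answer = query_chatbot(prompt)  # hypothetical API call
        if answer is None:
            results["declined"] += 1    # the bot admitted it could not answer
        elif answer == expected:
            results["correct"] += 1     # all four fields match
        else:
            results["incorrect"] += 1   # wrong or partially wrong attribution
    return results
```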
Similar tests carried out with traditional search engines consistently returned correct information. The AI chatbots did not perform nearly as well.
The results indicated that chatbots were often reluctant to decline questions they could not answer accurately, frequently offering incorrect or speculative responses instead. Premium chatbots tended to give confidently incorrect answers more often than their free counterparts. In addition, several chatbots appeared to ignore the Robot Exclusion Protocol (REP), which websites use to tell web robots, such as search engine crawlers, what they may access.
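For context, REP is typically implemented as a robots.txt file at a site's root, and a compliant crawler checks it before fetching any page. Here is a minimal sketch using Python's standard library; the user agent name and URLs are hypothetical examples.

```python
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()  # fetch and parse the site's robots.txt

# A compliant crawler runs this check before fetching a page:
if parser.can_fetch("ExampleAIBot", "https://example.com/news/article.html"):
    print("Crawling allowed for this user agent")
else:
    print("robots.txt disallows this user agent; a compliant bot stops here")
```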
The study also found that generative search tools were prone to fabricating links and citing syndicated or copied versions of articles. Moreover, content licensing agreements with news sources do not guarantee accurate citations in chatbot responses.
What can you do?
What stands out most in the survey results is not just that AI chatbots often provide incorrect information, but that they do so with alarming confidence. Instead of admitting that they do not know the answer, they rarely use qualifying phrases like "it appears," "it's possible," or "might."
For example, ChatGPT incorrectly identified 134 articles, yet signaled uncertainty only 15 times out of its 200 responses, and it never declined to provide an answer.
Based on the survey's findings, it is probably wise not to rely exclusively on AI chatbots for answers. Instead, combine traditional search methods with AI tools. At the very least, querying several AI chatbots and comparing their answers can help, as the sketch below illustrates. Otherwise, you risk walking away with incorrect information.
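As a rough illustration of that cross-checking habit, the sketch below asks several chatbots the same question and only trusts an answer that a majority agrees on. The chatbot clients here are hypothetical placeholders, not real APIs.

```python
from collections import Counter

def cross_check(question, chatbots):
    """Return an answer only if most chatbots agree on it; otherwise None."""
    answers = [ask(question) for ask in chatbots]
    answer, votes = Counter(answers).most_common(1)[0]
    if votes > len(answers) // 2:   # require a simple majority
        return answer
    return None  # the bots disagree: fall back to a traditional search

# Example usage with placeholder bots that return canned strings:
bots = [lambda q: "Paris", lambda q: "Paris", lambda q: "Lyon"]
print(cross_check("What is the capital of France?", bots))  # -> "Paris"
```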
Looking ahead, I would not be surprised to see a consolidation of AI chatbots as the best ones distinguish themselves on quality. Eventually, their results may become as accurate as those of traditional search engines. When that will happen is anyone's guess.