In early February OpenAI, the most famous artificial-intelligence company in the world, released Deep Research, which is “designed to conduct in-depth, multi-step research”. With a few keystrokes the tool can produce a paper on any subject in a few minutes. Some economists go further. “I am sure that, for B-level journals, you can publish articles that you have ‘written’ in one day,” said Kevin Bryan of the University of Toronto. “I think of the quality as comparable to having a good PhD-level research assistant, and sending that person away with a task for a week or two,” said Tyler Cowen of George Mason University, an economist with cult status.
Should you pay $200 a month for Deep Research? Mr. Cowen has fallen for fads before, as he did with Web3 and Clubhouse, a once-fashionable social-media app. On the other hand, if Deep Research is close to a form of artificial superintelligence, as many believe, then $2,400 a year is the greatest bargain in the history of the world. To help you decide, your columnist has kicked the tyres of the new model. How good a research assistant is Deep Research for economists and others?
The obvious conclusions first. Deep Research cannot conduct primary research, from organising surveys in Peru to getting a read on the body language of a chief executive whose firm you might want to short. Nor can it make coffee, which makes it a poor substitute for a human assistant. Another complaint is that Deep Research’s output is almost always mind-numbing prose, even if you ask it to liven things up. Then again, most people were never good writers anyway, so few will care that their AI assistant is a little dull.
Use Deep Research as an assistant for a while, however, and three more serious problems emerge: “data creativity”, the “tyranny of the majority” and “intellectual shortcuts”. Start with data creativity. The OpenAI model can handle simple questions – “What was France’s unemployment rate in 2023?” – without breaking a sweat. It can handle marginally more complex questions – “Tell me the average unemployment rate in 2023 for France, Germany and Italy, weighted by population” – with ease.
When it comes to data questions requiring more creativity, however, the model struggles. It wrongly estimates the average amount of money that an American household headed by a man aged 25 to 34 spent on whiskey in 2021, even though anyone familiar with the Bureau of Labor Statistics data can find the exact answer ($20) in a few seconds. It cannot tell you precisely what share of British companies currently use AI, even though the statistics office produces a regular estimate. The model struggles even more with more involved questions, including those that require analysing the source data produced by statistical agencies. For such questions, human assistants retain an advantage.
The second issue is the tyranny of the majority. Deep Research is trained on a vast range of public data. For many tasks, that is a plus. It is surprisingly good at producing detailed, original summaries. Mr. Cowen asked it to produce a ten-page paper explaining David Ricardo’s theory of rent. The output would be a respectable addition to any textbook.
However, the volume of content used to train the model creates an intellectual problem. Deep Research tends to draw on frequently discussed or published ideas, rather than the best ones. The volume of information tyrannises the quality of information. This happens with statistics: Deep Research leans on easily accessible sources (such as newspapers) rather than better data that may sit behind a paywall or be harder to find.
Something similar happens with ideas. Consider the question – much discussed by economists – of whether American income inequality is rising. Unless prompted otherwise, the model happily assumes that inequality has soared since the 1960s (the conventional wisdom) rather than stayed flat or risen only a little (the view of many experts). Or consider the true meaning of Adam Smith’s “invisible hand”, the foundational idea in economics. In a paper published in 1994, Emma Rothschild of Harvard University demolished the notion that Smith used the term to refer to the benefits of free markets. Deep Research is aware of Ms. Rothschild’s work, but it leans on the popular consensus rather than the view of the cognoscenti.
The stupidity trap
A third problem with using Deep Research as an assistant is the most serious. It is not a problem with the model itself, but with how it is used. Lean on it too much and you find yourself taking intellectual shortcuts. Paul Graham, a Silicon Valley investor, has noted that AI models, by offering to do people’s writing for them, risk making them stupid. “Writing is thinking,” he said. “In fact there’s a kind of thinking that can only be done by writing.” The same goes for research. For many jobs, research is thinking: noticing contradictions and gaps in the conventional wisdom. The risk of outsourcing all your research to a supergenius assistant is that you reduce the number of opportunities to have your best ideas.
Over time, OpenAI may solve its technical problems. At some point, Deep Research may even be able to offer brilliant ideas of its own, transforming it from an assistant into the lead researcher. Until then, use Deep Research, even at $200 a month. Just do not expect it to replace research assistants any time soon. And make sure it doesn’t make you stupid.
© 2025, The Economist Newspaper Limited. All rights reserved. From The Economist, published under license. Original content can be found on www.economist.com