“It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I am confident we will get there.”
This is what Sam Altman, CEO of OpenAI, a technology company based in San Francisco, California, wrote on September 23, less than two weeks after the company behind ChatGPT released o1, its most advanced large language model (LLM) to date. The rise of LLMs in recent years has given renewed relevance to the question of when we might create artificial general intelligence (AGI), a prospect once confined to the realm of science fiction. Although it lacks a precise definition, AGI broadly refers to an AI system capable of human-level reasoning, generalization, planning and autonomy.
Policymakers around the world are asking questions about AGI, including about its benefits and risks. These questions are not easy to answer, not least because much of the relevant work takes place in the private sector, where studies are not always published openly. What is clear is that AI companies are working to equip their systems with the full range of cognitive abilities that humans enjoy. Companies developing AI models also have a strong incentive to maintain the impression that AGI is near, in order to attract interest and, therefore, investment.
There was consensus among the researchers who spoke to Nature for a news article published this week (see Nature 636, 22–25; 2024) that large language models such as o1, Google’s Gemini and Claude, made by San Francisco-based Anthropic, have not yet achieved AGI. And, drawing on lessons from neuroscience, many argue that there are good reasons to think that LLMs never will, and that a different technology will be needed for AI to reach human-level intelligence.
Despite the breadth of their capabilities – from generating computer code to summarizing academic papers and answering mathematical questions – the most powerful LLMs have fundamental limitations in how they operate. In essence, they devour vast amounts of data and use them to predict the next “token” in a sequence. This produces plausible-sounding answers to a problem, rather than actually solving it.
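To make that mechanism concrete, here is a minimal, hypothetical sketch (not taken from any production system) of next-token prediction: a toy model tallies which word follows which in a tiny training text, then greedily emits the most probable continuation. Real LLMs learn these probabilities with billions of parameters, but the underlying principle is the same.

```python
from collections import Counter, defaultdict

# Toy next-token predictor: count bigrams in a tiny "training corpus"
# and always emit the most frequent continuation. Purely illustrative.
training_text = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    follows[current][nxt] += 1  # tally which token follows which

def generate(prompt: str, length: int = 5) -> str:
    tokens = prompt.split()
    for _ in range(length):
        options = follows.get(tokens[-1])
        if not options:
            break  # no continuation ever seen for this token
        tokens.append(options.most_common(1)[0][0])  # pick the likeliest next token
    return " ".join(tokens)

print(generate("the"))  # a plausible continuation, not a reasoned answer
```

The sketch also illustrates why such output is plausible rather than reasoned: the model holds no representation of the problem it is asked about, only statistics on which token tends to come next.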
François Chollet, a former software engineer at Google based in Mountain View, Calif., and Subbarao Kambhampati, a computer scientist at Arizona State University in Tempe, have tested o1’s performance on tasks that require abstract reasoning and planning, and found that it falls short of AGI. For AGI to be achieved, some researchers argue, AI systems would need consistent “world models” – representations of their environment – that they could use to test hypotheses, reason, plan and generalize knowledge acquired in one domain to a potentially unlimited range of other situations.
This is where ideas from neuroscience and cognitive science could power the next advances. Yoshua Bengio’s team at the University of Montreal in Canada, for example, is exploring alternative AI architectures that would better support the construction of coherent world models and the ability to reason with them.
Some researchers say the next advances in AI could come not from the biggest systems, but from smaller, more energy-efficient AI. Smarter systems of the future might also require less data to train if they had the ability to decide which aspects of their environment to sample, rather than simply ingesting whatever they’re fed, says Karl Friston, a theoretical neuroscientist at University College London.
Such work demonstrates that researchers from a range of fields need to be involved in the development of AI. They will be needed to verify what the systems are actually capable of, to check whether they live up to technology companies’ claims and to identify the advances still required. At present, however, access to leading AI systems can be difficult for researchers who do not work at companies that can afford the large numbers of graphics processing units (GPUs) needed to train them (A. Khandelwal et al. Preprint at arXiv; 2024).
To give an idea of the scale of activity: in 2021, US government agencies (excluding the Department of Defense) allocated $1.5 billion to AI research and development, and the European Commission spends about €1 billion ($1.05 billion) per year. By contrast, companies around the world spent more than $340 billion on AI research in 2021 (N. Ahmed et al. Science 379, 884–886; 2023). There are ways in which governments could fund AI research on a larger scale, such as by pooling resources. The Confederation of Laboratories for Artificial Intelligence Research in Europe, a non-profit organization based in The Hague, the Netherlands, has suggested creating a “CERN for AI” capable of attracting the same level of talent as AI companies, thereby creating a cutting-edge research environment.
It is difficult to predict when AGI might arrive – estimates range from a few years to a decade or more. But further big advances in AI are sure to come, and many of them will probably emerge from industry, given the scale of its investment. To ensure that these advances are beneficial, claims from technology companies must be tested against the best current understanding of what constitutes human intelligence, drawing on neuroscience, cognitive science, the social sciences and other relevant fields. Publicly funded research must play a key role here, including in the development of AGI.
Humanity must harness all its knowledge to ensure that applications of AI research are robust and that their risks are mitigated as far as possible. Governments, businesses, research funders and researchers must recognize their complementary strengths. If they don’t, insights that could help to improve AI will be missed – and the resulting systems risk being unpredictable and therefore dangerous.