At a recent dinner with business leaders in San Francisco, a comment I made cast a chill over the room. I hadn't asked my dinner companions anything I considered a major faux pas: simply whether they thought today's AI could someday achieve human-level intelligence (i.e., AGI) or beyond.
It's a more controversial topic than you might think.
In 2025, there's no shortage of tech CEOs offering the bull case for how large language models (LLMs), the technology that powers chatbots like ChatGPT and Gemini, could achieve human-level or even superhuman intelligence in the near term. These executives argue that highly capable AI will bring about widespread, and widely distributed, societal benefits.
For example, Anthropic CEO Dario Amodei wrote in an essay that exceptionally powerful AI could arrive as soon as 2026 and be "smarter than a Nobel Prize winner across most relevant fields." Meanwhile, OpenAI CEO Sam Altman claimed his company knows how to build "superintelligent" AI, and predicted it could "massively accelerate scientific discovery."
However, not everyone finds these optimistic claims convincing.
Other AI leaders are skeptical that today's LLMs can reach AGI, much less superintelligence, without new innovations. These leaders have historically kept a low profile, but more of them have begun to speak up recently.
In a piece published this month, Thomas Wolf, Hugging Face's co-founder and chief science officer, called some parts of Amodei's vision "wishful thinking at best." Informed by his PhD research in statistical and quantum physics, Wolf thinks Nobel Prize-level breakthroughs don't come from answering known questions, something AI excels at, but rather from asking questions no one has thought to ask.
In Wolf's opinion, today's LLMs aren't up to the task.
"I would love to see this 'Einstein model' out there, but we need to dive into the details of how to get there," Wolf told TechCrunch in an interview. "That's where it starts to get interesting."
Wolf said he wrote the piece because he felt there was too much hype around AGI, and not enough serious evaluation of how to actually get there. He thinks that, as things stand, there's a real possibility AI transforms the world in the near future without ever reaching human-level intelligence or superintelligence.
Much of the AI world has become enamored with the promise of AGI. Those who don't believe it's possible are often labeled "anti-technology," or otherwise bitter and misinformed.
Some might peg Wolf as a pessimist for this view, but he considers himself an "informed optimist": someone who wants to push AI forward without losing his grip on reality. He's certainly not the only AI leader with conservative predictions about the technology.
Google DeepMind CEO Demis Hassabis has reportedly told staff that, in his view, the industry could be as much as a decade away from developing AGI, noting that there are many tasks AI simply can't do today. Meta Chief AI Scientist Yann LeCun has also expressed doubts about the potential of LLMs. Speaking at Nvidia GTC on Tuesday, LeCun said the idea that LLMs could achieve AGI was "nonsense," and called for entirely new architectures to serve as the foundation for superintelligence.
Kenneth Stanley, a former OpenAI lead researcher, is one of the people digging into the details of how to build advanced AI with today's models. He's now an executive at Lila Sciences, a new startup that raised $200 million in venture capital to unlock scientific innovation via automated labs.
Stanley spends his days trying to extract original, creative ideas from AI models, a subfield of AI research called open-endedness. Lila Sciences aims to create AI models that can automate the entire scientific process, including the very first step: arriving at really good questions and hypotheses that would ultimately lead to breakthroughs.
"I kind of wish I had written (Wolf's) essay, because it really reflects my feelings," Stanley said in an interview with TechCrunch. "What (he) noticed was that being extremely knowledgeable and skilled doesn't necessarily lead to having really original ideas."
Stanley believes that creativity is a key step on the path to AGI, but notes that building a "creative" AI model is easier said than done.
Optimists like Amodei point to AI "reasoning" models, which use more computing power to check their work and answer certain questions more reliably, as evidence that AGI isn't terribly far away. However, coming up with original ideas and questions may require a different kind of intelligence, Stanley says.
"If you think about it, reasoning is almost antithetical to (creativity)," he added. "Reasoning models say: 'Here's the goal of the problem, let's go directly toward that goal,' which basically prevents you from being opportunistic and seeing things outside of that goal, so that you can then diverge and have lots of creative ideas."
To design truly intelligent AI models, Stanley suggests we need to algorithmically replicate a human's subjective taste for promising new ideas. Today's AI models perform quite well in academic domains with clear answers, such as math and programming. However, Stanley points out that it's much harder to design an AI model for more subjective tasks that require creativity and don't necessarily have a "correct" answer.
"People shy away from (subjectivity) in science; the word is almost toxic," Stanley said. "But there's nothing preventing us from dealing with subjectivity (algorithmically). It's just part of the data stream."
Stanley says he's glad the field of open-endedness is getting more attention now, with dedicated research efforts at Lila Sciences, Google DeepMind, and the AI startup Sakana all working on the problem. He's starting to see more people talk about creativity in AI, he says, but he thinks there's a lot more work to be done.
Wolf and LeCun would probably agree. Call them the AI realists, if you will: AI leaders who approach AGI and superintelligence with serious, grounded questions about their feasibility. Their goal isn't to pooh-pooh advances in the AI field. Rather, it's to kick-start a big-picture conversation about what stands between today's AI models and AGI (and superintelligence), and then to go after those blockers.