An arms race for artificial intelligence (AI) supremacy, triggered by recent panic over Chinese chatbot DeepSeek, risks amplifying the existential dangers of superintelligence, according to one of the “godfathers” of AI.
Canadian machine learning pioneer Yoshua Bengio, author of the first International AI Safety Report, to be presented at an international AI summit in Paris next week, warns that investment in computational power for AI without oversight is dangerous.
“The effort is going into who’s going to win the race, rather than how do we make sure we are not going to build something that blows up in our face,” says Mr Bengio.
Military and economic races, he warns, “result in cutting corners on ethics, cutting corners on responsibility and on safety. It’s unavoidable”.
Bengio's work on neural networks and machine learning laid the foundations that underpin modern AI models.
He is in London, along with other AI pioneers, to receive the Queen Elizabeth Prize, UK engineering's most prestigious award, in recognition of AI and its potential.
He's enthusiastic about AI's benefits for society, but he sees the pivot away from AI regulation by Donald Trump's White House, and the frantic competition among big tech companies for more powerful AI models, as a worrying shift.
“We are building systems that are more and more powerful; becoming superhuman in some dimensions,” he says.
“As these systems become more powerful, they also become extraordinarily more valuable, economically speaking.
“So the magnitude of, ‘wow, this is going to make me a lot of money’ is motivating a lot of people. And of course, when you want to sell products, you don’t want to talk about the risks.”
But not all the “godfathers” of AI are so concerned.
Take Yann LeCun, Meta’s chief AI scientist, also in London to share in the QE prize.
“We have been deluded into thinking that large language models are intelligent, but really, they’re not,” he says.
“We don’t have machines that are nearly as smart as a house cat, in terms of understanding the physical world.”
Within three to five years, LeCun predicts, AI will have some aspects of human-level intelligence: robots, for example, that can perform tasks they've not been programmed or trained to do.
But, he argues, rather than making the world less safe, the DeepSeek drama – where a Chinese company developed an AI to rival the best of America's big tech with a tenth of the computing power – demonstrates that no one will dominate for long.
"If the US decides to clam up when it comes to AI for geopolitical reasons, or commercial reasons, then you'll have innovation someplace else in the world. DeepSeek showed that," he says.
The Royal Academy of Engineering prize is awarded each year to engineers whose discoveries have, or promise to have, the greatest impact on the world.
Previous recipients include the pioneers of the photovoltaic cells used in solar panels, of wind turbine technology, and of the neodymium magnets found in hard drives and electric motors.
Science minister Lord Vallance, who chairs the QE prize foundation, says he is alert to the potential risks of AI. Organisations like the UK's new AI Safety Institute are designed to foresee and prevent the potential harms that "human-like" AI intelligence might bring.
But he is less concerned about one nation or company having a monopoly on AI.
“I think what we’ve seen in the last few weeks is it’s much more likely that we’re going to have many companies in this space, and the idea of single-point dominance is rather unlikely,” he says.