Since the release of DeepSeek R1, there has been an avalanche of opinions on subjects ranging from Nvidia to the supremacy of Chinese AI and, my favorite, the Jevons paradox. Unfortunately, the sheer volume of opinions made it difficult to separate the signal from the noise.
In this article, I will discuss DeepSeek's real impact on the AI ecosystem, as opposed to the media hype and speculation. In last week's post, I predicted that DeepSeek would go viral and break into the American consumer and enterprise markets. Well, it happened (it is now available on AWS and Azure). So now what?
Internet sentiment is fickle: overnight, the pendulum swung from "OpenAI will be the undisputed winner" to "OpenAI has no moat, because China and Meta will commoditize AI models." The truth lies somewhere in between.
We will therefore think more deeply about how DeepSeek might affect the "current order" led by OpenAI and Nvidia.
One of DeepSeek's biggest contributions has been to show that a simple reinforcement learning approach, rather than complicated tree search and the like, can produce reasoning models. DeepSeek also independently confirmed that spending more compute at inference time yields better results. Finally, DeepSeek charted a path toward 2-5x cost optimizations in LLM training.
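To make the first two points concrete, here is a minimal Python sketch (my own illustration, not DeepSeek's code). It shows a rule-based, verifiable reward plus GRPO-style group-relative advantages, the "simple RL" recipe the R1 report describes, and a self-consistency majority vote as one well-known way that extra inference-time sampling buys accuracy. The answer-parsing helper is a toy stand-in for a real parser.

```python
import re
import statistics
from collections import Counter

def extract_final_answer(completion: str) -> str:
    """Toy parser: take the last number in a completion as its final answer.
    (Real pipelines parse a structured tag; this helper is illustrative.)"""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", completion)
    return numbers[-1] if numbers else ""

# --- Simple RL with verifiable rewards -------------------------------------

def exact_match_reward(completion: str, reference: str) -> float:
    """Rule-based outcome reward: 1.0 if the final answer is right, else 0.0.
    No learned reward model required."""
    return 1.0 if extract_final_answer(completion) == reference else 0.0

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """GRPO-style scoring: normalize each sampled completion's reward against
    its own group, which removes the need for a separate critic network."""
    mean, std = statistics.mean(rewards), statistics.pstdev(rewards)
    return [0.0] * len(rewards) if std == 0 else [(r - mean) / std for r in rewards]

# --- More inference-time compute -> better results --------------------------

def majority_vote(completions: list[str]) -> str:
    """Self-consistency: sample several solutions and return the most common
    final answer. Accuracy typically rises as the sample count grows."""
    answers = [a for a in map(extract_final_answer, completions) if a]
    return Counter(answers).most_common(1)[0][0] if answers else ""

# Example: four sampled solutions to "What is 12 * 7?" (reference: "84").
samples = ["12 * 7 = 84", "I think it's 74", "Twelve sevens make 84", "84"]
print([exact_match_reward(s, "84") for s in samples])   # [1.0, 0.0, 1.0, 1.0]
print(group_relative_advantages([1.0, 0.0, 1.0, 1.0]))  # correct samples > 0
print(majority_vote(samples))                           # "84"
```

The appeal of this recipe is exactly its simplicity: the reward is a cheap exact-match check rather than a learned model, and the group normalization replaces the critic that classic PPO-style setups need.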
But if these things are true, then DeepSeek also proved that OpenAI, which has been sitting on these reasoning-model ideas for at least a full year, was quietly climbing the right scaling curves all along. Of course, OpenAI had to bear the extra cost of innovating as the pioneer, but many people do not fully appreciate the importance of OpenAI's example.