Former Intel (INTC, Financial) chief executive Pat Gelsinger said that Nvidia's (NVDA, Financial) pricing strategy for artificial intelligence chips is excessively high and not viable for large-scale AI inference.
Speaking on the Acquired podcast, recorded at Nvidia's 2025 GPU Technology Conference, Gelsinger said that inference, the step in which deployed AI models actually serve requests, is where the industry is heading, and that current Nvidia technology, built around training, is poorly suited to serving it cost-effectively. Gelsinger said the processors Nvidia sells for AI cost 10,000 times more than what is necessary for inference.

Although he acknowledged that most of the early gains in generative AI were driven by Nvidia graphics processing units, he argued that the company's strong hold through its CUDA software platform will not persist once inference takes center stage. He noted that while Jensen Huang's early bet on general-purpose GPUs for AI workloads paid off, the success also owed something to good timing. "Jensen got lucky," Gelsinger said in the interview.

Intel's own Gaudi accelerator chips have lagged behind Nvidia's Hopper and Advanced Micro Devices' (AMD, Financial) Instinct products. Intel has since shelved its Falcon Shores artificial intelligence platform and is now focusing on a next-generation project called Jaguar Shores.

Gelsinger also pointed to a possible shift in computing architecture, as quantum computing could become commercially viable by the end of the decade. He gave no indication of how Intel might position itself for that change. Reflecting Intel's broader difficulty in capitalizing on the explosion of demand for machine-learning infrastructure, its AI revenues still trail considerably behind those of its rivals.
This article first appeared on GuruFocus.