eWEEK content and product recommendations are editorially independent. We may earn money when you click on links to our partners. Learn more.
Silicon Valley startup d-Matrix has received $160 million in funding, including an investment from Microsoft’s venture capital arm, for a new AI chip it plans to deliver next year. Headquartered in Santa Clara, California, d-Matrix has invited customers to test its new AI inference hardware, Corsair, which focuses on the interactions between generative models and end users. The chip is designed to meet the requirements of serving AI applications to end users, such as chatbots and video generators.
Nvidia has dominated the AI chip market in recent years. Rather than compete head-on, d-Matrix plans to complement the tech giant by focusing on the requirements of end users. Nvidia’s chips are largely used to train AI systems on huge amounts of data; d-Matrix designed its chips to serve requests from users asking for AI-generated responses. Built to handle a high volume of requests, Corsair is aimed at AI applications in which many users query the system simultaneously, each expecting customized output.
The future is video
d-Matrix raised $110 million last year, at a time when chipmakers were struggling to attract new investment. This year, co-founders Sid Sheth and Sudeep Bhoja have led the company in promoting its latest chip, which it says offers high performance, efficiency, and scalability.
“We’re seeing a lot of interest in video use cases, where customers come to us and say, ‘Hey, look, we want to generate videos, and we want a set of users all interacting with their own respective video,’” said CEO Sid Sheth.
Corsair promises to address the challenges of energy efficiency, token-generation speed, cost, and scalability in AI inference. It offers the potential to change how generative AI handles data in complex, demanding workloads. d-Matrix was among the first companies to use a chiplet-based architecture in its chip, emphasizing flexibility and scalability while reducing costs, and the hardware can be installed and integrated into existing data center servers, which helps make generative AI commercially viable.