If your organization is considering building the server infrastructure needed to leverage the revolutionary power of generative AI, you’ll find many vendors willing to sell you the necessary hardware. Yet hardware is only one piece of the puzzle. Operating with a low power usage effectiveness (PUE), even in harsh environments, is another, as is fine-tuning a large language model for your needs. And all of this comes before the actual work of deploying your AI infrastructure effectively.
This is where ASUS offers a key advantage over any other server vendor on the market. Not only do we offer a top-to-bottom hardware solution for AI that extends from server systems to end-user devices, but we also have direct experience with every step of designing and operating a data center for AI applications. We are also innovating in large language model (LLM) development, especially for businesses and governments building LLMs for languages other than English.
When it comes to AI supercomputing, ASUS is your expert.
Proven Expertise in Professional Server Tuning
ASUS has over 25 years of expertise in the server industry. In the years since we joined SPEC.org, a leading performance standards organization, our servers have set more than 1,959 world records.
One of our biggest achievements in this field is the Taiwania 2 supercomputer. In 2018, together with Taiwan’s National Center for High Performance Computing (NCHC) and other industry partners, we built this supercomputer with public cloud services capable of scaling resources on demand, according to user requests, to run AI workloads efficiently. Taiwania 2 provides efficient and intuitive AI and big-data cloud services and tools that let AI developers and data scientists quickly and easily configure, build, and manage development and production environments.
Too often, performance and energy efficiency are treated as opposing objectives. ASUS excels at both. Taiwania 2 debuted in 10th place on the Green500 list for power efficiency and 20th on the TOP500 list for raw performance.
One of our latest projects recently captured 31st place on the TOP500 and, thanks to its highly efficient design, 44th place on the Green500. We worked with Ubilink to build Taiwan’s largest supercomputing center. Completed in just three months, this ambitious project is equipped with 128 NVIDIA HGX H100 servers and 1,024 GPUs, achieving an impressive 45.82 PFLOPS.
With this solution, we achieved an efficiency (Rmax/Rpeak) of 66.08%, as reported to TOP500.org. Under the same GPU and high-speed network card conditions, the ASUS solution is more efficient, delivering 1.23 times the overall performance of its competitors.*
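For context, that efficiency figure is simply the ratio of sustained performance (Rmax) to theoretical peak performance (Rpeak). The peak value shown below is back-calculated from the two figures above rather than quoted from the list:

```latex
\text{efficiency} = \frac{R_{\max}}{R_{\text{peak}}}
                  = \frac{45.82\ \text{PFLOPS}}{69.34\ \text{PFLOPS}}
                  \approx 66.08\%
```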
1.17 PUE: ASUS Leads the Way in Data Center Energy Efficiency
Today, work is underway on the Forerunner 1 supercomputer, and once again, ASUS technologies are key to its development. The data center we built for Forerunner 1 is designed from the ground up to address a major technical challenge for any data center in Taiwan: local environmental conditions. High temperatures and high humidity levels are quite common on the island of Taiwan. Facilities in regions with comparable environments often aim for a PUE of 1.5 to accommodate increasing cooling costs. For Forerunner 1, we were able to exceed these expectations with an incredibly low PUE of 1.17.
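For readers less familiar with the metric, PUE (power usage effectiveness) is the ratio of the energy the whole facility draws to the energy that actually reaches the IT equipment:

```latex
\mathrm{PUE} = \frac{E_{\text{total facility}}}{E_{\text{IT equipment}}}
```

A PUE of 1.17 means that for every kilowatt consumed by the servers themselves, only about 0.17 kW goes to cooling, power conversion, and other overhead, compared with 0.5 kW at the 1.5 benchmark.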
Our proven expertise in building highly efficient supercomputers and server systems, troubleshooting software stacks, and creating middleware makes ASUS an invaluable partner for organizations that need a complete AI solution. We have already served more than 100 customers in Taiwan, including government research centers and companies across a range of industries.
Innovative LLM Approaches
Large language models have taken the world by storm, but the first generation of LLMs was trained primarily, if not entirely, on English-language text. To empower businesses around the world and democratize the potential of AI, ASUS subsidiary TWSC launched the Formosa Foundation Model, or FFM-Llama2.
Built on Llama 2, the open-source large language model created by Meta, FFM-Llama2 leverages AIHPC supercomputing, parallel computing, and local language data to improve its proficiency in Traditional Chinese. We designed this LLM to support fine-tuning and customization to broaden its impact, and we have already created a version that supports Hakka, a language spoken in Taiwan.
What does this mean for your organization? If you are looking to build your own LLM in a different language, you will not need to start from scratch with a new hardware installation and the long processing time required for training. Instead, we can help you fine-tune our foundation model for your project, as the sketch below illustrates. We already have this experience, and we are ready to put it at your service.
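As a rough illustration of what such a fine-tuning workflow can look like, here is a minimal sketch using the open-source Hugging Face transformers, peft, and datasets libraries with LoRA adapters. The base model name and corpus file are placeholders, and this is not the FFM-Llama2 toolchain itself, which we tailor to each engagement:

```python
# Minimal LoRA fine-tuning sketch (illustrative only, not the FFM-Llama2 pipeline).
# Assumes: pip install transformers peft datasets
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "meta-llama/Llama-2-7b-hf"  # placeholder base model

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Attach small trainable LoRA adapters instead of updating all weights,
# which keeps the hardware and training-time budget modest.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM"))

# Placeholder corpus in the target language (e.g. Traditional Chinese or Hakka).
dataset = load_dataset("text", data_files={"train": "corpus.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ffm-finetune", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    # mlm=False yields standard causal-LM labels (predict the next token).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("ffm-finetune/adapter")  # saves only the small adapter weights
```

Because only the adapter weights are trained and saved, this approach needs far less compute and storage than pretraining a new model, which is the point of starting from an existing foundation model.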
Unbeatable TCO for Total AI Infrastructure Solutions
Considering the total cost of ownership (TCO) of operating a data center, not just the upfront hardware costs, demonstrates the value of ASUS solutions. ASUS stands out from every other manufacturer on the market with the total AI infrastructure solution we offer.
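To see how a metric like PUE feeds into TCO, consider a back-of-the-envelope comparison. The IT load and electricity price below are illustrative assumptions, not figures from any ASUS deployment; only the two PUE values come from the discussion above:

```python
# Back-of-the-envelope energy cost comparison; all inputs are illustrative.
IT_LOAD_KW = 1000       # assumed average IT equipment load
PRICE_PER_KWH = 0.10    # assumed electricity price in USD
HOURS_PER_YEAR = 8760

def annual_energy_cost(pue: float) -> float:
    """Yearly facility-wide energy cost for a given PUE."""
    return IT_LOAD_KW * pue * HOURS_PER_YEAR * PRICE_PER_KWH

baseline = annual_energy_cost(1.50)    # common target in hot, humid regions
forerunner = annual_energy_cost(1.17)  # the PUE achieved for Forerunner 1

print(f"PUE 1.50: ${baseline:,.0f} per year")               # $1,314,000
print(f"PUE 1.17: ${forerunner:,.0f} per year")             # $1,024,920
print(f"Savings:  ${baseline - forerunner:,.0f} per year")  # $289,080
```

Under these assumptions, the lower PUE alone saves roughly $289,000 a year in energy, before any differences in hardware, staffing, or downtime are counted.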
Not only do we manufacture servers, but we also have proven experience designing supercomputers and data centers for optimal efficiency and robust integration with your existing IT infrastructure. Our customizable LLM is ready to be tailored to your needs. And our teams are ready to help you with advice, personalized support, server installation and validation, and much more.
All of this translates into a lower total cost of ownership for you. Other suppliers may sell you equipment, but only ASUS has the proven experience to guide you through the complete deployment of an AI infrastructure solution. To learn more about how we can help you stay ahead in the AI race, contact our server team.
*Based on data from TOP500.org. All data used in this analysis comes from published information in the TOP500 list, June 2024. We evaluate the performance of each data center by comparing IT efficiency, represented by the ratio of Rmax to Rpeak.