Microsoft’s CEO Satya Nadella has praised AMD’s Instinct MI300X AI accelerators, stating they offer the best “price-to-performance” on the market.
Microsoft’s CEO Says AMD’s MI300X Delivers The Best Price-To-Performance For GPT-4 Inference
At the Build 2024 keynote, Satya Nadella gave a rundown of the developments the firm has been making in the AI sector, claiming that rapid scaling has occurred over the past few months, which is why he described this era as the “golden age of systems” (via CRN). Expanding on this idea, Satya said that while Moore’s Law runs on a roughly two-year doubling cycle, neural scaling laws are operating on a six-month timeline, with networks seeing a doubling of capabilities over that period.
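To put those two cadences side by side, here is a minimal illustrative sketch (not from the keynote; the `growth` helper and the two-year horizon are assumptions made purely for illustration) comparing how much capability compounds under a two-year doubling cycle versus a six-month one.

```python
# Illustrative sketch: compare a ~2-year doubling cadence (Moore's Law) with
# a ~6-month doubling cadence (the neural scaling pace Nadella referenced).
# All figures are for illustration only.

def growth(years: float, doubling_period_years: float) -> float:
    """Return the multiplicative growth after `years`, given a doubling period."""
    return 2 ** (years / doubling_period_years)

horizon = 2  # years
moore = growth(horizon, 2.0)   # doubling every two years -> 2x
neural = growth(horizon, 0.5)  # doubling every six months -> 16x

print(f"Moore's Law cadence over {horizon} years:   {moore:.0f}x")
print(f"Six-month doubling over {horizon} years: {neural:.0f}x")
```

Over the same two-year window, a six-month doubling cadence compounds to roughly 16x versus 2x for the Moore’s Law cadence, which is the gap Nadella was pointing at.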
Microsoft’s CEO believes that such rapid progress is possible because of the efforts of AMD and NVIDIA, both of which have contributed significantly to the market through their AI accelerators and related platforms. Satya says that Microsoft is deeply invested in its partnership with AMD and sees Team Red’s Instinct MI300X AI GPU as the best option in terms of the value it offers, stating that the platform can deliver the “best price/performance on GPT-4 inference.”
Best price to performance on GPT-4? @satyanadella says it’s on @AMD and the MI300X. pic.twitter.com/cffGMEoFh7
— Ryan Shrout (@ryanshrout) May 21, 2024
The statement by Microsoft’s CEO is certainly a win for AMD and its push to make a mark in the AI market. Several industry professionals have likewise characterized the Instinct MI300X as a superior option to NVIDIA’s alternatives, suggesting the market is leaning slightly towards Team Red, although we have yet to see the kind of momentum NVIDIA’s Hopper generation enjoyed last year.
Apart from AMD, Satya Nadella revealed that Microsoft is one of the first cloud providers to offer the power of NVIDIA’s Blackwell B100 GPUs and GB200 Superchips to its clients, attributing this to “this very deep, deep partnership with NVIDIA.” This shows that the AI computing race will be more aggressive than ever.
Meanwhile, AMD’s CEO Dr. Lisa Su recently attended ITF World in Antwerp, where she received the imec Innovation Award and laid out her vision for efficient AI computing in the coming years. The main highlights include AMD being on track to achieve a 100x perf/watt improvement by 2027, along with the following:
- AI is driving exponential growth of compute demand and associated power consumption. As model sizes grow, energy requirements to train them grow too.
- Looking to the future, we will eventually reach the practical limits imposed by the power grid and power plant generation capability.
- Meeting this growing demand for AI compute will require holistic innovation and deep industry collaboration that spans new architectures, advanced packaging, system-level tuning, software and hardware co-design, and much more.
- AMD recognized improving efficiency would require optimizations beyond the device and processor level, moving to system-level improvements — a holistic approach to design.
- Through a holistic, system-level approach, AMD continues to see opportunities to accelerate node-level performance per watt, with targets that exceed 100x the 2020 baseline by the 2027 time frame (a rough calculation of what that implies annually follows below).
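For context, here is a small back-of-the-envelope sketch (my own arithmetic, not AMD’s; the variable names are illustrative) of the compound annual perf/watt improvement implied by a 100x gain between a 2020 baseline and 2027.

```python
# Back-of-the-envelope: what does a 100x perf/watt gain from 2020 to 2027
# imply as a compound annual improvement rate?

baseline_year, target_year = 2020, 2027
target_gain = 100.0  # 100x perf/watt over the 2020 baseline

years = target_year - baseline_year         # 7 years
annual_factor = target_gain ** (1 / years)  # ~1.93x per year

print(f"Implied improvement: ~{annual_factor:.2f}x per year "
      f"(~{(annual_factor - 1) * 100:.0f}% annually) sustained for {years} years")
```

That works out to roughly 1.93x per year sustained for seven years, which underlines why AMD points to system-level co-design and holistic innovation rather than device-level gains alone.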
As reported previously, Team Green expects to scale up its Blackwell production significantly in the upcoming quarters, which underlines the huge demand for these products. And with the debut of AMD’s rumored MI400 lineup on the horizon, the industry is in for a treat.
News Source: Tom's Hardware