Supermicro Unveils Next-Gen AI Systems Powered by NVIDIA's Blackwell Architecture


Supermicro has announced a new lineup of systems designed specifically for large-scale generative AI. These systems are built around NVIDIA's next-generation data centre products, including the NVIDIA GB200 Grace Blackwell Superchip and the B200 and B100 Tensor Core GPUs.

A wide range of GPU-optimized Supermicro systems will be ready for the NVIDIA Blackwell B200 and B100 Tensor Core GPUs and validated for the latest NVIDIA AI Enterprise software, which adds support for NVIDIA NIM inference microservices. The Supermicro systems include:

  • NVIDIA HGX B100 8-GPU and HGX B200 8-GPU systems
  • 5U/4U PCIe GPU system with up to 10 GPUs
  • SuperBlade® with up to 20 B100 GPUs in 8U enclosures and up to 10 B100 GPUs in 6U enclosures
  • 2U Hyper with up to 3 B100 GPUs
  • Supermicro 2U x86 MGX systems with up to 4 B100 GPUs

Faster Deployment and Upgraded Performance:

Supermicro isn't just introducing new systems; it's also streamlining existing ones. Its current NVIDIA HGX H100/H200 8-GPU systems are getting an upgrade, making them drop-in compatible with the upcoming NVIDIA HGX B100 8-GPU. This means faster deployment times for customers eager to leverage the B100's capabilities.


Supercharged Training and Inference:

The new lineup focuses on two key areas of AI: training and inference.

  • Training Champions: Supermicro's HGX B200 8-GPU systems are built for blazing-fast training of massive AI models. These liquid-cooled systems pack 8x NVIDIA Blackwell GPUs connected by high-speed fifth-generation NVLink, delivering double the performance of the previous generation.
  • Inference Powerhouses: For demanding inference workloads, Supermicro introduces new MGX systems built around the mighty NVIDIA GB200 Grace Blackwell Superchip. This innovative chip combines an NVIDIA Grace CPU with two NVIDIA Blackwell GPUs, resulting in a staggering 30x performance leap compared to the NVIDIA HGX H100.


Scalability:

Supermicro is also offering rack-level solutions like the NVIDIA GB200 NVL72, which seamlessly integrates 36x Grace CPUs and 72x Blackwell GPUs within a single rack. All these GPUs are interconnected with high-speed NVLink, maximising communication and performance.
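The rack-level totals follow directly from the Superchip layout described earlier: each GB200 Grace Blackwell Superchip pairs one Grace CPU with two Blackwell GPUs, and an NVL72 rack holds 36 of them. A minimal arithmetic sketch (the function and constant names here are illustrative, not part of any NVIDIA or Supermicro API):

```python
# Composition of a GB200 NVL72 rack, per the figures in this article.
# One GB200 Superchip = 1 Grace CPU + 2 Blackwell GPUs.
GRACE_CPUS_PER_SUPERCHIP = 1
BLACKWELL_GPUS_PER_SUPERCHIP = 2


def rack_totals(superchips: int) -> dict:
    """Return CPU/GPU totals for a rack built from GB200 Superchips."""
    return {
        "grace_cpus": superchips * GRACE_CPUS_PER_SUPERCHIP,
        "blackwell_gpus": superchips * BLACKWELL_GPUS_PER_SUPERCHIP,
    }


# An NVL72 rack carries 36 Superchips:
print(rack_totals(36))  # {'grace_cpus': 36, 'blackwell_gpus': 72}
```

This is why the article's two numbers (36 CPUs, 72 GPUs) describe one rack: the GPU count is simply twice the Superchip count.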

The Future of AI Infrastructure:

Supermicro's announcement marks a significant leap forward in AI infrastructure. By providing powerful, scalable, and flexible solutions, the company is empowering researchers and developers to tackle more complex problems and unlock the full potential of AI across various industries.


Source: https://www.supermicro.com/en/pressreleases/supermicro-grows-ai-optimized-product-portfolio-new-generation-systems-and-rack