Supermicro have announced a brand-new lineup of systems designed specifically for large-scale generative AI. These systems leverage the cutting-edge power of NVIDIA's next-generation data centre products, including the powerhouse NVIDIA GB200 Grace Blackwell Superchip and the B200 and B100 Tensor Core GPUs.
A wide range of GPU-optimised Supermicro systems will be ready for the NVIDIA Blackwell B200 and B100 Tensor Core GPUs and validated for the latest NVIDIA AI Enterprise software, which adds support for NVIDIA NIM inference microservices.
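NVIDIA NIM microservices expose an OpenAI-compatible REST API once deployed. As a rough illustration of what talking to one looks like, the sketch below builds a chat-completion request payload; the endpoint URL and model identifier are placeholders for this example, not details from the announcement.

```python
import json

# Assumed local NIM deployment; the URL and model name below are
# illustrative placeholders, not values from Supermicro's announcement.
NIM_URL = "http://localhost:8000/v1/chat/completions"

payload = {
    "model": "meta/llama3-8b-instruct",  # hypothetical model identifier
    "messages": [
        {"role": "user", "content": "Summarise NVLink in one sentence."}
    ],
    "max_tokens": 64,
}

# Serialise the request body; an actual client would POST this to NIM_URL
# with a `Content-Type: application/json` header.
body = json.dumps(payload)
print(body)
```

In practice the same payload works against any OpenAI-compatible endpoint, which is what makes NIM containers straightforward to drop into existing inference pipelines.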
Faster Deployment and Upgraded Performance:
Supermicro isn't just introducing new systems; they're also streamlining existing ones. Their current NVIDIA HGX H100/H200 8-GPU systems are getting an upgrade, making them drop-in compatible with the upcoming NVIDIA HGX B100 8-GPU. This means faster deployment times for customers eager to leverage Blackwell's capabilities.
Supercharged Training and Inference:
The new lineup focuses on two key areas of AI: training and inference.
Scalability:
Supermicro are also offering rack-level solutions like the NVIDIA GB200 NVL72, which seamlessly integrates 36 Grace CPUs and 72 Blackwell GPUs within a single rack. All these GPUs are interconnected with high-speed NVLink, maximising communication and performance.
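The CPU-to-GPU ratio above follows directly from the superchip design: each GB200 pairs one Grace CPU with two Blackwell GPUs. A quick back-of-envelope sketch, using only the figures quoted in this article:

```python
# Per-superchip composition: one GB200 pairs 1 Grace CPU with 2 Blackwell GPUs.
GRACE_CPUS_PER_SUPERCHIP = 1
BLACKWELL_GPUS_PER_SUPERCHIP = 2

# An NVL72 rack carries 36 GB200 superchips.
SUPERCHIPS_PER_NVL72 = 36

cpus = SUPERCHIPS_PER_NVL72 * GRACE_CPUS_PER_SUPERCHIP      # 36 Grace CPUs
gpus = SUPERCHIPS_PER_NVL72 * BLACKWELL_GPUS_PER_SUPERCHIP  # 72 Blackwell GPUs

print(f"{cpus} CPUs, {gpus} GPUs per NVL72 rack")
```

This recovers the 36-CPU / 72-GPU figure quoted above, with all 72 GPUs sharing one NVLink domain inside the rack.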
The Future of AI Infrastructure:
Supermicro's announcement marks a significant leap forward in AI infrastructure. By providing powerful, scalable, and flexible solutions, they're empowering researchers and developers to tackle more complex problems and unlock the full potential of AI across various industries.