GPU Solutions
Supermicro offers a wide variety of high-performance GPU server solutions with massive parallel processing power and networking flexibility. With support for NVIDIA Tesla V100 GPUs, these solutions provide the parallelism needed for today's high-performance server applications such as AI. Supermicro GPU solutions are available in 1U, 2U, 4U, and Tower form factors, with optimization for HPC workloads, computational finance, and oil and gas simulation. NVIDIA NVLink greatly improves the efficiency of parallel processing tasks by removing the bandwidth bottleneck associated with PCIe 3.0, which translates to improved performance in applications such as AI. Supermicro GPU servers can support up to 6TB of DDR4-2933MHz memory across 24 DIMM slots, with support for 2nd Gen Intel Xeon Scalable CPUs and Optane DC Persistent Memory.
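To illustrate the NVLink point above, the sketch below (a hypothetical example, not part of Supermicro or NVIDIA documentation) uses the standard CUDA runtime API to report whether the GPUs in a multi-GPU server can access each other's memory directly; on NVLink-connected GPUs this peer-to-peer path is what avoids the PCIe bottleneck described here. It assumes the CUDA toolkit is installed and is compiled with nvcc.

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    printf("Detected %d GPU(s)\n", deviceCount);

    // For every ordered pair of GPUs, ask the CUDA runtime whether one can
    // access the other's memory directly (peer-to-peer). On NVLink-connected
    // GPUs this direct path bypasses the PCIe bandwidth bottleneck.
    for (int src = 0; src < deviceCount; ++src) {
        for (int dst = 0; dst < deviceCount; ++dst) {
            if (src == dst) continue;
            int canAccess = 0;
            cudaDeviceCanAccessPeer(&canAccess, src, dst);
            printf("GPU %d -> GPU %d peer access: %s\n",
                   src, dst, canAccess ? "yes" : "no");
        }
    }
    return 0;
}

Note that peer access can also be reported over plain PCIe; running nvidia-smi topo -m on the server shows whether a given GPU pair is actually connected by NVLink.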
Key Features/Applications:
High Performance Computing, AI/Deep Learning Training and Inference, Large Language Model (LLM) and Generative AI
CPU:
NVIDIA 72-core NVIDIA Grace CPU on GH200 Grace Hopper™ Superchip
Chassis:
1U Rackmount Liquid Cooling
Drive:
4 front hot-swap E1.S NVMe drive bay(s)
RAM:
Onboard memory (no DIMM slots): up to 480GB ECC LPDDR5X; additional GPU memory: up to 96GB ECC HBM3
Network Ports:
1 RJ45 1 GbE Dedicated IPMI LAN port(s)
Key Features/Applications:
High Performance Computing, AI/Deep Learning Training and Inference, Large Language Model (LLM) and Generative AI
CPU:
NVIDIA 72-core NVIDIA Grace CPU on GH200 Grace Hopper™ Superchip
Chassis:
1U Rackmount Liquid Cooling
Drive:
8 front hot-swap E1.S NVMe drive bay(s)
RAM:
Onboard memory (no DIMM slots): up to 480GB ECC LPDDR5X; additional GPU memory: up to 96GB ECC HBM3
Network Ports:
1 RJ45 1 GbE Dedicated IPMI LAN port(s)
Key Features/Applications:
Artificial Intelligence (AI), HPC, Deep Learning/Machine Learning Development
CPU:
Dual-Socket, AMD EPYC™ 9004 Series Processors
Chassis:
4U Rackmount Liquid Cooling
Drive:
Default: Total 8 bay(s) / 8 front hot-swap 2.5" NVMe drive bay(s)
RAM:
24 DIMM slots, up to 6TB ECC DDR5-4800
Network Ports:
1 RJ45 1 GbE Dedicated IPMI LAN port(s)
Key Features/Applications:
Virtualization, Cloud Computing, High End Enterprise Server
CPU:
1 x Socket SP3 (LGA 4094), 2nd/3rd Gen AMD EPYC™ 7002/7003 Series processors
Chassis:
2U Rackmountable
Drive:
24 x 2.5” hot-swap drive bays (24x U.2/U.3 NVMe, or 16x U.2/U.3 NVMe + 8x SATA/SAS, or 12x U.2/U.3 NVMe + 12x SATA/SAS); 2 x M.2 connectors (NGFF 22110/2280/2260/2242; 2x SATA 6Gb/s, or 1x PCIe Gen4 x4 link, or 2x PCIe Gen4 x2 links). SAS supported only via SAS HBA/RAID card.
RAM:
16 x DIMM slots: DDR4 3200/2933 RDIMM, LRDIMM, or 3DS LRDIMM
Network Ports:
1 x Dual Port Intel I350-AM2 1GbE LAN controller + 1 x Mgmt LAN
Key Features/Applications:
Virtualization, Cloud Computing, High End Enterprise Server
CPU:
1 x Socket SP5 (LGA 6096) AMD EPYC™ 9004 Series Processors
Chassis:
2U Rackmountable
Drive:
Front bays (24 x 2.5” hot-swap drive bays, per SKU): 16-NVMe SKU supports 16x NVMe + 8x SAS/SATA; 12-NVMe SKU supports 12x NVMe + 12x SAS/SATA; 24-NVMe SKU supports 24x NVMe. Rear bays: 2 x 2.5” SATA hot-swap drive bays. 2 x M.2 connectors.
RAM:
24 x DIMM slots: DDR5 4800/4400/4000/3600 RDIMM / 3DS RDIMM, maximum 6144GB
Network Ports:
1 x Dual Port Intel I350-AM2 GbE LAN controller + 1 x management port