Key Features/Applications:
High Performance Computing, AI/Deep Learning Training and Inference, Large Language Model (LLM) and Generative AI
CPU:
72-core NVIDIA Grace CPU on the NVIDIA GH200 Grace Hopper™ Superchip
Chassis:
1U Rackmount, Liquid Cooling
Drive:
4 front hot-swap E1.S NVMe drive bays
RAM:
Slot Count: Onboard memory (no DIMM slots); Max Memory: up to 480GB ECC LPDDR5X; Additional GPU Memory: up to 96GB ECC HBM3
Network Ports:
1 RJ45 1 GbE dedicated IPMI LAN port
Key Features/Applications:
High Performance Computing, AI/Deep Learning Training and Inference, Large Language Model (LLM) and Generative AI
CPU:
72-core NVIDIA Grace CPU on the NVIDIA GH200 Grace Hopper™ Superchip
Chassis:
1U Rackmount, Liquid Cooling
Drive:
8 front hot-swap E1.S NVMe drive bays
RAM:
Slot Count: Onboard memory (no DIMM slots); Max Memory: up to 480GB ECC LPDDR5X; Additional GPU Memory: up to 96GB ECC HBM3 (see the verification sketch below)
Network Ports:
1 RJ45 1 GbE dedicated IPMI LAN port
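For acceptance checks on a delivered GH200 node, the short Python sketch below reads back the GPU name and total HBM3 capacity so they can be compared against the memory figures listed above. It assumes the NVIDIA driver is installed (so the standard nvidia-smi utility is on PATH); it is an illustrative sketch, not vendor tooling for this SKU.

    # Minimal sketch: cross-check the listed HBM3 capacity on a delivered node.
    # Assumes the NVIDIA driver is installed so that nvidia-smi is on PATH.
    import subprocess

    def gpu_memory_report() -> str:
        """Return one 'name, memory.total' line per GPU, as reported by nvidia-smi."""
        result = subprocess.run(
            ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.strip()

    if __name__ == "__main__":
        # Prints one "<GPU name>, <total MiB>" line per GPU; the value reflects
        # the configured HBM3 capacity (up to 96GB on this platform).
        print(gpu_memory_report())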
Key Features/Applications:
All-Flash NVMe Hyperconverged Infrastructure, Container-as-a-Service, Application Accelerator, High-Performance File System, Diskless HPC Clusters
CPU:
Dual Socket E (LGA 4677) supporting 5th/4th Gen Intel® Xeon® Scalable processors per node
Chassis:
2U Rackmount
Drive:
Default: 6 bays total per node: 2 front hot-swap 2.5" PCIe 5.0 NVMe/SATA drive bays and 4 front hot-swap 2.5" PCIe 4.0 NVMe/SATA drive bays
RAM:
16 DIMM slots per node, supporting up to 4TB of memory; ECC RDIMMs up to DDR5-5600 (see the memory-check sketch below)
Network Ports:
Networking via AIOM (Advanced I/O Module)
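As a companion per-node check, the sketch below reports installed system memory for comparison against the ordered DIMM configuration (up to 4TB per node). It uses only the Python standard library and assumes a Linux host; it is illustrative, not vendor tooling.

    # Minimal sketch: report installed system memory on one node so it can be
    # compared with the configured RDIMM population. Linux-only (os.sysconf keys).
    import os

    def total_memory_gib() -> float:
        """Total physical memory in GiB: sysconf page count multiplied by page size."""
        pages = os.sysconf("SC_PHYS_PAGES")
        page_size = os.sysconf("SC_PAGE_SIZE")
        return pages * page_size / (1024 ** 3)

    if __name__ == "__main__":
        print(f"Installed memory: {total_memory_gib():.1f} GiB")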
Requesting Quote:
Please provide the full specification, including Processor, Memory, Storage, AIOM, AOC, TPM, and any other requirements such as RAID.