

NVIDIA DGX™ H100

A milestone in AI infrastructure


The NVIDIA DGX™ H100 helps you innovate and optimize your business. The latest addition to NVIDIA's legendary DGX line and the foundation of the NVIDIA DGX SuperPOD™, the DGX H100 is powered by the groundbreaking NVIDIA H100 Tensor Core GPU and accelerates the adoption of AI. Designed to maximize AI throughput, it provides enterprises with a highly refined, systemized, and scalable platform for achieving breakthroughs in natural language processing, recommender systems, data analytics, and more. Available on-premises and through a variety of access and deployment options, the NVIDIA DGX H100 delivers the performance enterprises need to solve their biggest challenges with AI.

Product spec

NVIDIA DGX™ H100

GPUs: 8x NVIDIA H100 GPUs
FP8 performance: 32 petaFLOPS
GPU memory: 80GB per GPU / 640GB per DGX H100 node
System memory: 2TB
Storage: Data cache drives: 30TB (8x 3.84TB); OS drives: 2x 1.92TB NVMe SSDs
Networking: 4x OSFP ports serving 8x single-port NVIDIA ConnectX-7 (400Gb/s InfiniBand/Ethernet); 2x dual-port NVIDIA ConnectX-7 VPI (400Gb/s InfiniBand/Ethernet)
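As a rough sanity check on the aggregate figures above, here is a minimal sketch that multiplies out the per-GPU and per-port numbers. The per-GPU FP8 peak (~3,958 TFLOPS with sparsity) is taken from NVIDIA's public H100 SXM specifications rather than from this page, so treat it as an assumption; the memory and networking figures come from the spec list itself.

```python
# Rough sanity check of the DGX H100 aggregate figures listed above.
# Assumption: a per-GPU FP8 Tensor Core peak of ~3,958 TFLOPS (with sparsity),
# taken from NVIDIA's public H100 SXM specifications, not from this page.

NUM_GPUS = 8
FP8_TFLOPS_PER_GPU = 3_958       # assumed per-GPU FP8 peak (with sparsity)
HBM_GB_PER_GPU = 80              # from the spec list above
IB_GBPS_PER_PORT = 400           # ConnectX-7 compute-fabric ports, from the spec list
COMPUTE_FABRIC_PORTS = 8

total_fp8_pflops = NUM_GPUS * FP8_TFLOPS_PER_GPU / 1_000
total_hbm_gb = NUM_GPUS * HBM_GB_PER_GPU
fabric_tbps = COMPUTE_FABRIC_PORTS * IB_GBPS_PER_PORT / 1_000

print(f"FP8 peak:       ~{total_fp8_pflops:.0f} petaFLOPS")  # ~32 petaFLOPS
print(f"GPU memory:     {total_hbm_gb} GB")                  # 640 GB
print(f"Compute fabric: {fabric_tbps:.1f} Tb/s aggregate")   # 3.2 Tb/s
```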

Detailed information on the NVIDIA DGX H100 is available.

As reference material explaining the NVIDIA DGX H100 from a network perspective, we have prepared a report entitled "A detailed analysis of the NVIDIA DGX™ H100 from a network perspective - The role of NVIDIA ConnectX®-7 in making the most of the GPU." Please make use of it.

You can view the full text without entering any personal information.

Contents of the materials (excerpts)

Excerpted pages: the report's table of contents and the NVIDIA DGX H100 overview page.

Case study

Other GPU Servers

We also carry Supermicro GPU servers. Please contact us for more details.

NVIDIA HGX H100/H200 Server
Model: SYS-821GE-TNHR
CPU: Dual Socket E (LGA-4677), 5th Gen / 4th Gen Intel® Xeon® Scalable processors
Memory: 32 DIMM slots; up to 4TB (1DPC, 5600MT/s ECC DDR5 RDIMM) or 8TB (2DPC, 4400MT/s ECC DDR5 RDIMM)
Form factor: 8U rackmount
GPU: NVIDIA SXM: HGX H100 8-GPU (80GB), HGX H200 8-GPU (141GB)
Cooling: Air cooling

NVIDIA H100 Server (Scalable ①)
Model: SYS-741GE-TNRT
CPU: Dual Socket E (LGA-4677), 5th Gen / 4th Gen Intel® Xeon® Scalable processors
Memory: 16 DIMM slots; up to 2TB (1DPC, 5600MT/s ECC DDR5 RDIMM) or 4TB (2DPC, 4400MT/s ECC DDR5 RDIMM)
Form factor: Tower / rackmount
GPU: NVIDIA PCIe: H100 NVL, H100
Cooling: Air cooling

NVIDIA H100 Server (Scalable ②)
Model: SYS-521GE-TNRT
CPU: Dual Socket E (LGA-4677), 5th Gen / 4th Gen Intel® Xeon® Scalable processors
Memory: 32 DIMM slots; up to 4TB (1DPC, 5600MT/s ECC DDR5 RDIMM) or 8TB (2DPC, 4400MT/s ECC DDR5 RDIMM)
Form factor: 5U rackmount
GPU: NVIDIA PCIe: H100 NVL, H100
Cooling: Air cooling

Water-cooled Server
Model: SYS-421GE-TNHR2-LCC
CPU: Dual Socket E (LGA-4677), 5th Gen / 4th Gen Intel® Xeon® Scalable processors; supports Intel® Xeon® CPU Max Series with high bandwidth memory (HBM)
Memory: 32 DIMM slots; up to 4TB (1DPC, 5600MT/s ECC DDR5 RDIMM) or 8TB (2DPC, 4400MT/s ECC DDR5 RDIMM)
Form factor: 4U rackmount
GPU: NVIDIA SXM: HGX H100 8-GPU (80GB), HGX H200 8-GPU (141GB)
Cooling: Water cooling
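For the 32-DIMM-slot models above, the listed memory maximums can be reproduced with simple arithmetic. The sketch below assumes 256GB DDR5 RDIMM modules and a 1DPC (one DIMM per channel) configuration that populates half of the slots; neither assumption is stated on this page, so treat this as an illustration only.

```python
# Rough check of the memory maximums listed for the 32-DIMM-slot models above.
# Assumptions (not stated on this page): 256GB DDR5 RDIMM modules, and a 1DPC
# configuration that populates half of the 32 slots.

DIMM_GB = 256          # assumed largest supported RDIMM capacity
TOTAL_SLOTS = 32

one_dpc_tb = (TOTAL_SLOTS // 2) * DIMM_GB / 1024   # 16 DIMMs -> 4 TB
two_dpc_tb = TOTAL_SLOTS * DIMM_GB / 1024          # 32 DIMMs -> 8 TB

print(f"1DPC: {one_dpc_tb:.0f} TB, 2DPC: {two_dpc_tb:.0f} TB")
```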

Quote / Inquiry

AI TRY NOW PROGRAM

This is a support program that lets you evaluate the latest AI solutions in an NVIDIA environment before introducing them into your company.

Before purchasing, you can verify the benefits of NVIDIA software products such as NVIDIA AI Enterprise and NVIDIA Omniverse, as well as the AI training environments and tools we provide, and confirm the feasibility of your applications.

Related product page