NVIDIA DGX H100
The gold standard of AI infrastructure
NVIDIA DGX™ H100 helps you innovate and optimize your business. The latest in NVIDIA's legendary line of DGX systems and the foundation of NVIDIA DGX SuperPOD™, the DGX H100 is powered by the breakthrough NVIDIA H100 Tensor Core GPU to push the boundaries of AI. Designed to maximize AI throughput, it provides enterprises with a highly refined, systemized, and extensible platform that enables breakthroughs in natural language processing, recommender systems, data analytics, and more. NVIDIA DGX H100 can be deployed on premises or through a variety of access and deployment options to deliver the performance your enterprise needs to solve AI-at-scale challenges.
Product spec
|  | NVIDIA DGX™ H100 |
| --- | --- |
| GPUs | 8x NVIDIA H100 GPUs |
| Performance | 32 petaFLOPS FP8 |
| GPU memory | 80GB per GPU / 640GB per DGX H100 node |
| System memory | 2TB |
| Storage | Data cache drives: 30TB (8x 3.84TB); OS drives: 2x 1.92TB NVMe SSDs |
| Network | 4x OSFP ports serving 8x single-port NVIDIA ConnectX-7 400Gb/s InfiniBand/Ethernet; 2x dual-port NVIDIA BlueField-3 DPUs (VPI): 1x 400Gb/s InfiniBand/Ethernet and 1x 200Gb/s InfiniBand/Ethernet |
Reference material explaining the NVIDIA DGX H100 from a network perspective,
"NVIDIA DGX™ H100 Anatomy from a Network Perspective - The Role of NVIDIA ConnectX®-7 in Maximizing GPU Usage -",
is available. Please make use of it.
NVIDIA DGX A100
The world's first AI system built on NVIDIA A100
NVIDIA DGX™ A100 is a universal system for all AI workloads (training, inference, and analytics), delivering unprecedented compute density, performance, and flexibility in the world's first 5-petaFLOPS AI system. NVIDIA DGX A100 features the world's most advanced accelerator, the NVIDIA A100 Tensor Core GPU, delivering 20x the performance of Volta-generation GPU systems.
Product spec
|  | NVIDIA DGX A100 (640GB) | NVIDIA DGX A100 (320GB) |
| --- | --- | --- |
| GPUs | 8x NVIDIA A100 80GB GPUs | 8x NVIDIA A100 40GB GPUs |
| GPU memory | 640GB total | 320GB total |
| Performance | 5 petaFLOPS AI; 10 petaOPS INT8 | 5 petaFLOPS AI; 10 petaOPS INT8 |
| NVSwitches | 6 | 6 |
| CPU | Dual AMD Rome 7742, 128 cores total, 2.25GHz (base), 3.4GHz (max boost) | Dual AMD Rome 7742, 128 cores total, 2.25GHz (base), 3.4GHz (max boost) |
| System memory | 2TB | 1TB |
| Network | 8x single-port Mellanox ConnectX-6 VPI 200Gb/s HDR InfiniBand; 2x dual-port Mellanox ConnectX-6 VPI 10/25/50/100/200Gb/s Ethernet | 8x single-port Mellanox ConnectX-6 VPI 200Gb/s HDR InfiniBand; 1x dual-port Mellanox ConnectX-6 VPI 10/25/50/100/200Gb/s Ethernet |
| Storage | OS: 2x 1.92TB M.2 NVMe drives; Internal storage: 30TB (8x 3.84TB) U.2 NVMe drives | OS: 2x 1.92TB M.2 NVMe drives; Internal storage: 15TB (4x 3.84TB) U.2 NVMe drives |
| Software | Ubuntu Linux OS; also supported: Red Hat Enterprise Linux, CentOS | Ubuntu Linux OS; also supported: Red Hat Enterprise Linux, CentOS |
| Weight | 123.16kg (maximum) | 123.16kg (maximum) |
| Packaged weight | 163.16kg (maximum) | 163.16kg (maximum) |
| Size | Height: 264.0mm; Width: 482.3mm; Depth: 897.1mm | Height: 264.0mm; Width: 482.3mm; Depth: 897.1mm |
| Operating temperature range | 5ºC to 30ºC | 5ºC to 30ºC |
NVIDIA DGX STATION A100
A workgroup appliance for the age of AI
The NVIDIA DGX Station A100 brings AI supercomputing to data science teams, delivering data center technology without the need to build out a data center or additional IT infrastructure. Designed for multiple simultaneous users, DGX Station A100 uses server-grade components in an office-friendly form factor. It is the only system with four fully interconnected, Multi-Instance GPU (MIG)-enabled NVIDIA A100 Tensor Core GPUs and up to 320GB of total GPU memory, and it plugs into a standard power outlet, making it a powerful AI appliance that can be placed anywhere.
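As a brief illustration of the MIG capability mentioned above: on an A100-class system, MIG partitioning is managed through NVIDIA's standard `nvidia-smi` tool. The sketch below shows the typical workflow; the GPU index and profile IDs are examples and depend on your system (these commands require administrator privileges and a supported NVIDIA driver).

```shell
# Enable MIG mode on GPU 0 (a GPU reset may be required afterwards)
sudo nvidia-smi -i 0 -mig 1

# List the GPU-instance profiles this GPU supports
sudo nvidia-smi mig -lgip

# Example: create two GPU instances with profile ID 9 (3g.40gb on an
# 80GB A100) and their compute instances in one step with -C
sudo nvidia-smi mig -i 0 -cgi 9,9 -C

# Verify the resulting MIG devices visible to CUDA applications
nvidia-smi -L
```

Each MIG instance then appears as an isolated GPU with its own memory and compute slice, which is how multiple users can share a single DGX Station A100 concurrently.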
Product spec
|  | NVIDIA DGX Station A100 320GB | NVIDIA DGX Station A100 160GB |
| --- | --- | --- |
| GPUs | 4x NVIDIA A100 80GB GPUs | 4x NVIDIA A100 40GB GPUs |
| GPU memory | 320GB total | 160GB total |
| Performance | 2.5 petaFLOPS AI; 5 petaOPS INT8 | 2.5 petaFLOPS AI; 5 petaOPS INT8 |
| System power consumption | 1.5kW (at 100-120Vac) | 1.5kW (at 100-120Vac) |
| CPU | Single AMD 7742, 64 cores, 2.25GHz (base) to 3.4GHz (max boost) | Single AMD 7742, 64 cores, 2.25GHz (base) to 3.4GHz (max boost) |
| System memory | 512GB DDR4 | 512GB DDR4 |
| Network | Dual-port 10Gbase-T Ethernet LAN; single-port 1Gbase-T Ethernet BMC management port | Dual-port 10Gbase-T Ethernet LAN; single-port 1Gbase-T Ethernet BMC management port |
| Display adapter | 4GB GPU memory; 4x Mini DisplayPort | 4GB GPU memory; 4x Mini DisplayPort |
| System acoustics | Less than 37dB | Less than 37dB |
| Software | Ubuntu Linux OS | Ubuntu Linux OS |
| System weight | 43.1kg (91.0lbs) | 43.1kg (91.0lbs) |
| Packaged system weight | 57.93kg (127.7lbs) | 57.93kg (127.7lbs) |
| System size | Height: 639mm (25.1in); Width: 256mm (10.1in); Length: 518mm (20.4in) | Height: 639mm (25.1in); Width: 256mm (10.1in); Length: 518mm (20.4in) |
| Operating temperature range | 5ºC to 35ºC (41ºF to 95ºF) | 5ºC to 35ºC (41ºF to 95ºF) |
AI TRY NOW PROGRAM
This support program lets you test the latest AI solutions in an NVIDIA development environment before introducing them into your company.
You can deepen your understanding of software products such as NVIDIA AI Enterprise and NVIDIA Omniverse, and assess the feasibility of your implementation objectives in advance.