NVIDIA DGX


NVIDIA DGX STATION A100

Workgroup Appliance for the Age of AI


The NVIDIA DGX Station A100 brings AI supercomputing to data science teams, delivering data center technology without the need to build out a data center or IT infrastructure. Designed for multiple users to connect simultaneously, DGX Station A100 uses server-grade components in an office-friendly form factor. It is the only system with four fully interconnected, Multi-Instance GPU (MIG)-enabled NVIDIA A100 Tensor Core GPUs and up to 320GB of total GPU memory that plugs into a standard power outlet: a powerful AI appliance that can be placed anywhere.
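As a quick illustration of what the four-GPU configuration looks like to software, the sketch below enumerates the installed A100s and sums their memory, which should report roughly 320GB on the 80GB-per-GPU model. It assumes a CUDA-enabled PyTorch environment on the workstation; that environment is an assumption for illustration only, not part of the quoted specification.

    # Illustrative sketch only: assumes a CUDA-enabled PyTorch install on the
    # DGX Station A100; not part of the product specification below.
    import torch

    total_bytes = 0
    for i in range(torch.cuda.device_count()):           # expect 4 GPUs on this system
        props = torch.cuda.get_device_properties(i)
        total_bytes += props.total_memory
        print(f"GPU {i}: {props.name}, {props.total_memory / 1e9:.0f} GB")

    print(f"Total GPU memory: {total_bytes / 1e9:.0f} GB")  # roughly 320 GB on the 80GB-GPU model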

Product spec

 

Models: NVIDIA DGX Station A100 320GB / NVIDIA DGX Station A100 160GB

GPUs: 4x NVIDIA A100 80GB GPUs (320GB model) / 4x NVIDIA A100 40GB GPUs (160GB model)
GPU memory: 320GB total / 160GB total
Performance: 2.5 petaFLOPS AI, 5 petaOPS INT8
System power consumption: 1.5kW (at 100-120Vac)
CPU: Single AMD 7742, 64 cores, 2.25GHz (base), up to 3.4GHz (max boost)
System memory: 512GB DDR4
Network: Dual-port 10GBASE-T Ethernet LAN; single-port 1GBASE-T Ethernet BMC management port
Display: 4GB GPU memory, 4x Mini DisplayPort
System acoustics: Less than 37dB
Software: Ubuntu Linux OS
System weight: 43.1kg (91.0lbs)
Packaged system weight: 57.93kg (127.7lbs)
System dimensions: Height 639mm (25.1in), Width 256mm (10.1in), Length 518mm (20.4in)
Operating temperature range: 5-35°C (41-95°F)

NVIDIA DGX A100

The world's first AI system built on NVIDIA A100


NVIDIA DGX™ A100 is a universal system for all AI workloads, from training to inference to analytics, delivering unprecedented compute density, performance, and flexibility in the world's first 5 petaFLOPS AI system. NVIDIA DGX A100 features the world's most advanced accelerator, the NVIDIA A100 Tensor Core GPU, delivering 20x the performance of Volta-generation GPU systems.
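The performance figures quoted above are exercised through the A100's Tensor Cores via mixed-precision math. The snippet below is a hedged sketch of a single mixed-precision training step using PyTorch automatic mixed precision; the model, data, and optimizer are placeholders for illustration and are not part of the DGX software stack.

    # Hedged example: assumes a CUDA-enabled PyTorch install; model and data are placeholders.
    import torch

    model = torch.nn.Linear(1024, 1024).cuda()            # toy model standing in for a real network
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    scaler = torch.cuda.amp.GradScaler()                   # keeps FP16 gradients numerically stable

    x = torch.randn(64, 1024, device="cuda")               # placeholder batch
    target = torch.randn(64, 1024, device="cuda")

    with torch.cuda.amp.autocast():                        # mixed-precision region runs on Tensor Cores
        loss = torch.nn.functional.mse_loss(model(x), target)

    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()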

Product spec

 

Models: NVIDIA DGX A100 (640GB) / NVIDIA DGX A100 (320GB)

GPUs: 8x NVIDIA A100 80GB GPUs (640GB model) / 8x NVIDIA A100 40GB GPUs (320GB model)
GPU memory: 640GB total / 320GB total
Performance: 5 petaFLOPS AI, 10 petaOPS INT8
NVSwitch: 6
CPU: Dual AMD Rome 7742, 128 cores total, 2.25GHz (base), 3.4GHz (max boost)
System memory: 2TB (640GB model) / 1TB (320GB model)
Network (640GB model): 8x single-port Mellanox ConnectX-6 VPI, 200Gb/s HDR InfiniBand; 2x dual-port Mellanox ConnectX-6 VPI, 10/25/50/100/200Gb/s Ethernet
Network (320GB model): 8x single-port Mellanox ConnectX-6 VPI, 200Gb/s HDR InfiniBand; 1x dual-port Mellanox ConnectX-6 VPI, 10/25/50/100/200Gb/s Ethernet
Storage (640GB model): OS: 2x 1.92TB M.2 NVMe drives; internal storage: 30TB (8x 3.84TB) U.2 NVMe drives
Storage (320GB model): OS: 2x 1.92TB M.2 NVMe drives; internal storage: 15TB (4x 3.84TB) U.2 NVMe drives
Software: Ubuntu Linux OS; also available: Red Hat Enterprise Linux, CentOS
System weight: 123.16kg (maximum)
Packaged system weight: 163.16kg (maximum)
System dimensions: Height 264.0mm, Width 482.3mm, Depth 897.1mm
Operating temperature range: 5-30°C

NVIDIA DGX-2

The world's most powerful AI system to tackle complex AI challenges


The world's first 2-petaFLOPS system, NVIDIA® DGX-2™, delivers 10x the deep learning performance with 16 fully interconnected GPUs, breaking through the speed and scale barriers of AI.

Powered by NVIDIA® DGX™ software and a scalable architecture based on NVIDIA NVSwitch technology, it enables you to tackle the world's most complex AI challenges.
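As an illustration of the fully interconnected GPU topology, the sketch below copies a tensor from GPU 0 to every other GPU and accumulates the copies back on GPU 0; on DGX-2 this device-to-device traffic is carried by the NVSwitch fabric. It assumes a CUDA-enabled PyTorch environment, which is an assumption for illustration and not part of the quoted specification.

    # Illustrative sketch: assumes CUDA-enabled PyTorch; device count is 16 on a DGX-2.
    import torch

    n_gpus = torch.cuda.device_count()
    src = torch.ones(1024, 1024, device="cuda:0")

    total = torch.zeros_like(src)
    for i in range(n_gpus):
        replica = src.to(f"cuda:{i}")       # device-to-device copy over the NVLink/NVSwitch fabric
        total += replica.to("cuda:0")       # bring each copy back to GPU 0 and accumulate

    print(f"Accumulated copies from {n_gpus} GPUs:", total[0, 0].item())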

Product spec

 

Model: NVIDIA DGX-2

GPUs: 16x NVIDIA Tesla V100
GPU memory: 512GB total (16 GPUs)
Performance: 2 petaFLOPS
NVIDIA CUDA® Cores: 81,920
NVIDIA Tensor Cores: 10,240
NVSwitch: 12
CPU: Dual Intel Xeon Platinum 8168, 2.7GHz, 24 cores
System memory: 1.5TB
Network: 8x 100Gb/s InfiniBand/100GigE; dual 10/25Gb/s Ethernet
Storage: OS: 2x 960GB NVMe SSDs; internal storage: 30TB (8x 3.84TB) NVMe SSDs
Software: Ubuntu Linux OS
Maximum power consumption: 10kW
System weight: 154.2kg
System dimensions: Height 440mm, Width 482mm, Depth 795mm (without front bezel) / 834mm (with front bezel)
Operating temperature range: 5-35°C
