The NVIDIA DGX-2 was installed in spring 2019 and delivers 2 PFlop/s of AI performance. It is equipped with 16 powerful data center accelerators – NVIDIA Tesla V100 GPUs – interconnected by NVSwitch technology with a total bandwidth of 2.4 TB/s. In total, the system includes 512 GB of HBM2 memory (16 × 32 GB). The NVIDIA DGX-2 also offers 30 TB of internal storage on fast NVMe SSDs. Connection to the surrounding infrastructure is provided by eight 100 Gb/s InfiniBand/Ethernet adapters.
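As a quick sanity check, the headline memory figure follows directly from the per-GPU numbers; a minimal sketch (the per-GPU HBM2 size is the published Tesla V100 32 GB spec, restated here, not measured on the system):

```python
# Aggregate HBM2 capacity of the DGX-2 from the per-GPU figures.
num_gpus = 16
hbm2_per_gpu_gb = 32  # GB of HBM2 on each Tesla V100 (32 GB model)

total_hbm2_gb = num_gpus * hbm2_per_gpu_gb
print(f"Total HBM2: {total_hbm2_gb} GB")  # 16 x 32 GB = 512 GB
```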

For deep neural network training (ResNet-50), one NVIDIA DGX-2 can replace 300 dual-socket servers with Intel Xeon Gold processors. The NVIDIA DGX-2 is powered by the DGX software stack: NVIDIA-optimized and tuned AI software that runs the most popular machine learning and deep learning frameworks at maximum performance. The NVIDIA DGX-2 can also be used for traditional HPC workloads, delivering a theoretical peak performance of 130 TFlop/s.
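Both headline performance numbers can be roughly reproduced from commonly quoted per-GPU Tesla V100 figures; the per-GPU peaks below are an assumption on my part (about 125 TFlop/s tensor-core and about 7.8 TFlop/s FP64 per SXM GPU), not taken from this page:

```python
# Back-of-the-envelope check of the DGX-2 headline performance figures,
# using commonly published per-GPU Tesla V100 peaks (assumed, not measured).
num_gpus = 16
tensor_tflops_per_gpu = 125   # mixed-precision tensor-core peak per V100
fp64_tflops_per_gpu = 7.8     # double-precision peak per V100 (SXM)

ai_pflops = num_gpus * tensor_tflops_per_gpu / 1000
fp64_tflops = num_gpus * fp64_tflops_per_gpu

print(f"AI peak:   {ai_pflops:.1f} PFlop/s")    # 2.0 PFlop/s
print(f"FP64 peak: {fp64_tflops:.1f} TFlop/s")  # ~125 TFlop/s from the GPUs
                                                # alone; the quoted 130 TFlop/s
                                                # presumably also counts the
                                                # two host CPUs
```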

Key figures: 130 TFlop/s theoretical peak · 2 PFlop/s in AI · 100 Gb/s interconnect

Technical information of the NVIDIA DGX-2 system

Put into operation: spring 2019
Theoretical peak performance: 130 TFlop/s
Operating system: CentOS 7.x, 64-bit
Compute nodes: 1
CPU: 2× Intel Xeon Platinum, 24 cores each (48 cores in total)
RAM per compute node: 1.5 TB DDR4; 512 GB HBM2 (16 × 32 GB)
Accelerators: 16× NVIDIA Tesla V100, 32 GB HBM2 each
Storage: 30 TB NVMe
Interconnect: 8× InfiniBand or 8× 100 GbE

Learn more at docs.it4i.cz.
