Definition: DGX


A line of workstations and servers from NVIDIA designed for AI and high-performance computing. All the following products use specialized GPUs (see GPGPU). See HGX.

DGX Servers Use 8x GPU, SXM and NVLink
DGX servers have eight NVIDIA GPUs (8x GPU) that plug into NVIDIA's SXM sockets and link together via the NVLink protocol. Other NVIDIA systems use four GPUs. See NVLink.
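For a rough, vendor-neutral illustration of how such a multi-GPU topology appears to software, the following minimal CUDA sketch enumerates the GPUs in a machine and checks which pairs report peer-to-peer access, the capability that NVLink (or PCIe) provides between GPUs. It uses only standard CUDA runtime calls and is a generic example, not DGX-specific code.

// Minimal sketch (generic CUDA, not DGX-specific): list the GPUs in a system
// and report which pairs can access each other directly, as NVLink-connected
// GPUs in an 8x GPU server would. Compile with nvcc.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);          // e.g., 8 on an 8x GPU server
    printf("GPUs found: %d\n", count);

    for (int i = 0; i < count; ++i) {
        for (int j = 0; j < count; ++j) {
            if (i == j) continue;
            int canAccess = 0;
            cudaDeviceCanAccessPeer(&canAccess, i, j);
            if (canAccess) {
                printf("GPU %d can access GPU %d directly (NVLink/PCIe peer)\n", i, j);
            }
        }
    }
    return 0;
}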

DGX-1, DGX-2
A rack-mounted server with an Intel Xeon CPU, eight GPUs, 512GB RAM and 2TB storage. Launched in 2016 and using NVIDIA's Pascal or Volta microarchitecture, the DGX-1 was designed to accelerate AI deep learning. The dual-Xeon DGX-2 came out in 2018 with 16 Volta-based GPUs.

DGX Station
A water-cooled Xeon-based tower computer with 256GB RAM and 8TB storage. Its four NVIDIA Tesla accelerators use the Volta microarchitecture.

DGX A100 or H100 Server
A server with eight A100 GPUs or eight H100 GPUs. See A100 and H100.

DGX SuperPOD
A multi-million-dollar datacenter platform comprising eight NVIDIA modules, each containing 20 DGX A100 or DGX H100 systems. See A100 and H100.
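Using the figures above as a rough guide, a full SuperPOD would span 8 x 20 = 160 DGX systems and, at eight GPUs per system, roughly 160 x 8 = 1,280 GPUs.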

DGX Cloud Platform
Co-engineered with major cloud providers, DGX Cloud is a "full-stack" AI platform that runs in cloud datacenters to support advanced generative AI models (see generative AI). See Blackwell.