Definition: H100


(Hopper 100) A Tensor core GPU from NVIDIA. Introduced in 2022 and based on NVIDIA's Hopper architecture, the H100 was designed for AI training, AI inference and other high-performance computing (HPC) workloads in datacenters. The H100 supports new instructions that speed up these operations considerably. The chip comprises 80 billion transistors and superseded NVIDIA's A100 GPU, and up to 256 H100s can be connected via NVIDIA's NVLink. See A100, Grace Hopper Superchip, Blackwell and Tensor core.

SXM, PCIe and NVL Interfaces
H100 chips connect to each other in three ways: NVIDIA's SXM socket, NVIDIA's multi-channel NVLink and PCIe. The fastest is NVL (see NVLink).
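The practical difference between the interfaces is bandwidth. As a rough sketch, using assumed peak figures from published spec sheets (about 900 GB/s aggregate for H100 NVLink versus about 128 GB/s bidirectional for a PCIe Gen 5 x16 link), the idealized time to move one H100's worth of data looks like this:

```python
# Rough transfer-time comparison for H100 interconnect options.
# The bandwidth figures below are assumptions based on published
# peak numbers, not measured values.
BANDWIDTH_GBPS = {
    "NVLink (SXM/NVL)": 900.0,  # assumed aggregate NVLink bandwidth per H100
    "PCIe Gen 5 x16": 128.0,    # assumed bidirectional peak
}

def transfer_seconds(gigabytes, interface):
    """Idealized time to move `gigabytes` over `interface` at peak rate."""
    return gigabytes / BANDWIDTH_GBPS[interface]

payload_gb = 80  # roughly one full H100 memory's worth of data
for name in BANDWIDTH_GBPS:
    print(f"{name}: {transfer_seconds(payload_gb, name):.2f} s")
```

At these assumed rates, NVLink moves the same payload roughly seven times faster than PCIe, which is why multi-GPU training clusters favor NVLink-connected SXM systems.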

H200
Announced in late 2023, the H200 is the H100's successor and the first GPU to use HBM3e memory, providing nearly double the capacity of the A100 GPU (see high bandwidth memory).
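The capacity comparison can be sketched with published figures; the 141 GB (H200, HBM3e) and 80 GB (A100) numbers below are assumptions taken from spec sheets, not from this article:

```python
# Hypothetical memory-capacity comparison; both figures are
# assumptions based on published spec sheets.
H200_MEMORY_GB = 141  # assumed H200 HBM3e capacity
A100_MEMORY_GB = 80   # assumed top A100 capacity

ratio = H200_MEMORY_GB / A100_MEMORY_GB
print(f"H200 holds {ratio:.2f}x the memory of an A100")
```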




DGX H100 Module
A DGX H100 module comprises eight H100s, and thousands of modules are combined to create an AI supercomputer that trains large language models with a trillion or more parameters (model weights). Considering each H100 costs roughly $25,000, it is no wonder NVIDIA became one of the most valuable companies in the world. See large language model. (Image courtesy of NVIDIA Corporation.)
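The scale described above can be checked with back-of-envelope arithmetic. The $25,000 per-GPU price is the article's rough figure; the 80 GB per-GPU memory and FP16 (2 bytes per parameter) weight format are assumptions for the sketch:

```python
# Back-of-envelope sizing for DGX H100 clusters.
# $25,000/GPU is the article's rough figure; 80 GB/GPU and
# 2 bytes/parameter (FP16 weights) are assumptions.
GPUS_PER_DGX = 8
H100_PRICE_USD = 25_000
H100_MEMORY_GB = 80            # assumed per-GPU memory
PARAMS = 1_000_000_000_000     # one trillion parameters
BYTES_PER_PARAM = 2            # FP16 weights (assumption)

module_cost = GPUS_PER_DGX * H100_PRICE_USD
weight_gb = PARAMS * BYTES_PER_PARAM / 1e9
gpus_for_weights = weight_gb / H100_MEMORY_GB

print(f"One DGX H100 module: ~${module_cost:,}")
print(f"1T FP16 weights: {weight_gb:.0f} GB "
      f"(~{gpus_for_weights:.0f} H100s just to hold them)")
```

Even before counting optimizer state and activations, a trillion-parameter model's weights alone span dozens of GPUs, which is why training runs combine thousands of modules.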