Definition: H100


A Tensor core GPU from NVIDIA. Introduced in 2022 and based on NVIDIA's Hopper architecture, the H100 was designed for AI training, AI inference and other high-performance computing (HPC) functions in datacenters. The H100 supports new instructions that considerably speed up common AI and HPC operations. The chip comprises 80 billion transistors and supersedes NVIDIA's A100 GPU; up to 256 H100 chips can be connected via NVIDIA's fourth-generation NVLink. See A100, Grace Hopper Superchip, Blackwell and Tensor core.

SXM, PCIe and NVL Interfaces
The H100 comes in three interface variants: NVIDIA's SXM socket, standard PCIe and the NVL variant, which pairs GPUs over NVIDIA's multi-channel NVLink. NVL is the fastest (see NVLink).

H200
Announced in late 2023, the H200 is the H100's successor and the first GPU to use HBM3e memory, providing nearly double the memory capacity of the H100 (see high bandwidth memory).




DGX H100 Module
One DGX H100 module comprises eight H100 GPUs, and thousands of modules can be combined to create an AI supercomputer for training large language models with a trillion or more parameters (model weights). Considering each H100 costs in the neighborhood of $25,000, it is no wonder NVIDIA became one of the most valuable companies in the world. See DGX, Tensor core and H100. (Image courtesy of NVIDIA Corporation.)
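To give a sense of scale, the figures above can be turned into a back-of-the-envelope calculation. The sketch below is illustrative only: it assumes 80 GB of HBM per H100 (the SXM variant), 2-byte (FP16) weights and the rough $25,000-per-GPU price mentioned above, and it counts only the memory needed to hold the weights themselves, ignoring the optimizer states and activations that make real training runs require far more hardware.

```python
# Rough sizing of a trillion-parameter model on H100 GPUs.
# All figures are illustrative assumptions, not official NVIDIA specs.

PARAMS = 1_000_000_000_000      # one trillion parameters
BYTES_PER_PARAM = 2             # FP16 weights
HBM_PER_GPU = 80 * 10**9        # 80 GB of HBM per H100 (SXM variant)
GPUS_PER_MODULE = 8             # one DGX H100 module holds eight GPUs
PRICE_PER_GPU = 25_000          # rough price in USD

weight_bytes = PARAMS * BYTES_PER_PARAM                 # 2 TB of raw weights
gpus_for_weights = -(-weight_bytes // HBM_PER_GPU)      # ceiling division
modules = -(-gpus_for_weights // GPUS_PER_MODULE)       # DGX modules needed
cost = gpus_for_weights * PRICE_PER_GPU

print(f"{weight_bytes / 10**12:.0f} TB of weights -> "
      f"{gpus_for_weights} GPUs ({modules} DGX modules), ~${cost:,}")
# -> 2 TB of weights -> 25 GPUs (4 DGX modules), ~$625,000
```

Even under these generous assumptions, holding just the weights of a trillion-parameter model fills four DGX modules; the thousands of modules cited above reflect the much larger cost of actually training such a model.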