Definition: H100


A Tensor core GPU from NVIDIA. Introduced in 2022 and based on NVIDIA's Hopper architecture, the H100 was designed for AI training, AI inference and other high-performance computing (HPC) functions in datacenters. The H100 supports new instructions that considerably speed up these operations. Comprising 80 billion transistors, the H100 superseded NVIDIA's A100 GPU, and up to 256 H100 chips can be connected via NVIDIA's 4th-generation NVLink. See A100, NVIDIA Grace Hopper Superchip and Tensor core.
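Since the H100 is aimed at AI training and inference, a common sizing question is whether a model's weights fit in one GPU's memory. Below is a minimal back-of-the-envelope sketch; the 80 GB memory figure and the bytes-per-parameter values are assumptions for illustration, not figures taken from this definition.

```python
# Rough check of whether a model's raw weights fit in one GPU's memory.
# H100_MEMORY_GB and BYTES_PER_PARAM are illustrative assumptions.

H100_MEMORY_GB = 80  # assumed HBM capacity of one H100

BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "fp8": 1}

def weights_fit(params_billions, precision, memory_gb=H100_MEMORY_GB):
    """Return True if the raw weights fit in GPU memory.

    Ignores activations, optimizer state and framework overhead,
    so this is a lower bound on what is actually needed."""
    needed_gb = params_billions * BYTES_PER_PARAM[precision]
    return needed_gb <= memory_gb

print(weights_fit(70, "fp16"))  # 140 GB of weights -> False
print(weights_fit(70, "fp8"))   # 70 GB of weights  -> True
```

This kind of estimate is why large models are sharded across many GPUs connected by NVLink rather than run on a single chip.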

H200
Announced in late 2023, the H100's successor, the H200, is the first GPU to use HBM3e memory, providing double the memory capacity of the A100 GPU (see high bandwidth memory).

SXM, PCIe and NVL Interfaces
H100 chips connect to each other in three ways: NVIDIA's SXM socket, NVIDIA's multi-channel NVLink and PCIe. The fastest is NVL (see NVLink).
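The practical difference between the interfaces is bandwidth. A minimal sketch of comparing transfer times follows; the bandwidth numbers are rough illustrative assumptions (approximate per-direction PCIe Gen5 x16 throughput and approximate aggregate NVLink throughput), not official specifications from this definition.

```python
# Compare how long moving a fixed amount of data takes over each
# interface. The bandwidth figures are illustrative assumptions.

BANDWIDTH_GB_S = {
    "PCIe Gen5 x16": 64,      # assumed one-direction bandwidth
    "NVLink (4th gen)": 900,  # assumed aggregate per-GPU bandwidth
}

def transfer_seconds(gigabytes, interface):
    """Idealized transfer time, ignoring protocol overhead."""
    return gigabytes / BANDWIDTH_GB_S[interface]

for name in BANDWIDTH_GB_S:
    print(f"{name}: {transfer_seconds(80, name):.3f} s")
```

Even with rough numbers, the gap of more than an order of magnitude shows why GPU-to-GPU traffic in multi-GPU training goes over NVLink rather than PCIe.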




DGX H100 Module
One DGX H100 module comprises eight H100 GPUs, and thousands of modules can be combined to create an AI supercomputer. In the 2020s, NVIDIA became one of the world's most valuable companies due to its AI products. (Image courtesy of NVIDIA Corporation.)
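Given eight GPUs per module, sizing a cluster reduces to simple arithmetic. A minimal sketch, where the target GPU counts are hypothetical inputs:

```python
import math

# The definition above states one DGX H100 module contains eight GPUs.
GPUS_PER_MODULE = 8

def modules_needed(total_gpus):
    """Smallest number of DGX modules covering a target GPU count."""
    return math.ceil(total_gpus / GPUS_PER_MODULE)

print(modules_needed(256))    # 32 modules for a 256-GPU NVLink group
print(modules_needed(10000))  # 1250 modules for a 10,000-GPU cluster
```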