(NVIDIA LINK) A high-speed interface between CPU and GPU chips from NVIDIA. NVLink offers much faster data transfer than the commonly used PCIe bus (PCI Express). Introduced in 2014, each NVLink path is eight bidirectional channels, and up to 18 NVLinks between chips provide 144 lanes of simultaneous traffic. In 2024, data rates between chips using the maximum NVLink configuration reached 900 gigabytes per second. See NVIDIA Grace Hopper Superchip.
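The per-link figures implied by those totals can be checked with simple arithmetic. The following is a minimal sketch using only the numbers quoted above; the per-link bandwidth is derived from the totals, not taken from an NVIDIA specification.

```cpp
// Back-of-the-envelope NVLink figures based on the entry above.
// The per-link bandwidth is derived from the quoted totals, not
// taken from an NVIDIA specification.
#include <cstdio>

int main() {
    const int links_per_gpu     = 18;    // maximum NVLinks between chips (quoted)
    const int channels_per_link = 8;     // bidirectional channels per NVLink (quoted)
    const double total_gb_s     = 900.0; // aggregate GB/s in the 2024 configuration (quoted)

    const int total_lanes      = links_per_gpu * channels_per_link; // 144 lanes
    const double gb_s_per_link = total_gb_s / links_per_gpu;        // ~50 GB/s per link

    std::printf("lanes of simultaneous traffic: %d\n", total_lanes);
    std::printf("approx. bandwidth per NVLink:  %.1f GB/s\n", gb_s_per_link);
    return 0;
}
```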
NVSwitch
Introduced in 2018, NVSwitch provides the switching fabric so that data can be transferred from any GPU to any other. NVSwitch can support up to 256 H100 GPUs communicating via NVLink. See H100.
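On a multi-GPU node, the any-to-any connectivity that NVSwitch provides appears to software as peer-to-peer access between devices. The following is a minimal sketch using the CUDA runtime API; it reports which GPU pairs can address each other's memory directly, although the query itself does not say whether the path is NVLink or PCIe.

```cpp
// Report which GPU pairs can access each other's memory directly.
// On an NVLink/NVSwitch system this typically shows full any-to-any
// access; the query does not distinguish NVLink from PCIe paths.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int n = 0;
    cudaGetDeviceCount(&n);
    for (int src = 0; src < n; ++src) {
        for (int dst = 0; dst < n; ++dst) {
            if (src == dst) continue;
            int canAccess = 0;
            cudaDeviceCanAccessPeer(&canAccess, src, dst);
            std::printf("GPU %d -> GPU %d : peer access %s\n",
                        src, dst, canAccess ? "yes" : "no");
        }
    }
    return 0;
}
```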
NVL72 and NVL144 Server Racks
Introduced in 2024, NVL72 is an AI server platform comprising 36 Grace CPUs and 72 Blackwell GPUs connected via NVLink. Expected in 2026, NVL144 uses 36 Vera CPUs and 72 Rubin GPUs. Each Rubin GPU has two dies per package, hence the "144." See Rubin and Blackwell.
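A small sketch showing how the platform names follow from the component counts given above; the two-GPUs-per-CPU ratio is implied by the 36/72 figures rather than stated explicitly.

```cpp
// How the NVL72 and NVL144 names follow from the counts in the entry.
#include <cstdio>

int main() {
    const int grace_cpus       = 36;  // per NVL72 rack (quoted)
    const int gpus_per_cpu     = 2;   // implied by 72 GPUs across 36 CPUs
    const int blackwell_gpus   = grace_cpus * gpus_per_cpu;         // 72  -> "NVL72"

    const int rubin_packages   = 72;  // per NVL144 rack (quoted)
    const int dies_per_package = 2;   // Rubin: two dies per package (quoted)
    const int rubin_dies       = rubin_packages * dies_per_package; // 144 -> "NVL144"

    std::printf("NVL72:  %d Blackwell GPUs\n", blackwell_gpus);
    std::printf("NVL144: %d Rubin GPU dies\n", rubin_dies);
    return 0;
}
```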
NVLink Fusion
Introduced in 2025, NVLink Fusion allows non-NVIDIA processors such as Intel x86 CPUs and Google TPUs to work with NVLink. It enables NVIDIA products to integrate into more of the existing datacenter infrastructure, which is dominated by x86 CPUs. See NVLink, Tensor Processing Unit and x86.
NVLink Connections
PCI Express can be eliminated entirely when NVLink is used from CPU to GPU. Although multiple NVLinks are used between chips, the eight channels of one NVLink are diagrammed here. PCIe can also be configured as a multi-lane bus, but it is considerably slower than NVLink deployments. See PCI Express.
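When GPUs are connected this way, data can also be copied directly between them without staging in host memory. The following is a minimal, illustrative sketch using the CUDA runtime API; cudaMemcpyPeer works over PCIe as well, it is simply faster when the devices are linked by NVLink.

```cpp
// Direct GPU-to-GPU copy. When the two devices are linked by NVLink,
// the transfer travels over that fabric instead of being staged through
// host memory; the same call also works, more slowly, over PCIe.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int n = 0;
    cudaGetDeviceCount(&n);
    if (n < 2) {
        std::printf("needs at least two GPUs\n");
        return 0;
    }

    const size_t bytes = 64 << 20;   // 64 MiB test buffer (arbitrary size)
    void *src = nullptr, *dst = nullptr;

    cudaSetDevice(0);
    cudaMalloc(&src, bytes);
    cudaDeviceEnablePeerAccess(1, 0);  // let device 0 map device 1's memory

    cudaSetDevice(1);
    cudaMalloc(&dst, bytes);
    cudaDeviceEnablePeerAccess(0, 0);  // let device 1 map device 0's memory

    cudaError_t err = cudaMemcpyPeer(dst, 1, src, 0, bytes);  // device 0 -> device 1
    std::printf("peer copy: %s\n", cudaGetErrorString(err));

    cudaFree(dst);
    cudaSetDevice(0);
    cudaFree(src);
    return 0;
}
```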