A GPU-based chip used to train and execute AI systems. The training side (deep learning) requires the most processing: large language models require quadrillions of calculations per second, executed for days, weeks or even months.
The execution side, called "inference," also requires high-performance chips. When people type a prompt into a chatbot, they expect results within seconds, and GPUs handle this inference processing as well.
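The scale of that arithmetic can be sketched with a back-of-envelope estimate. The model size, token count and chip throughput below are illustrative assumptions, and the 6 × parameters × tokens training rule and 2 × parameters per-token inference rule are common rough approximations, not figures from this entry:

```python
# Rough compute estimates for LLM training and inference.
# All numbers are hypothetical; the 6*N*D and 2*N rules are
# common approximations, not exact costs.
params = 70e9    # assumed 70-billion-parameter model
tokens = 2e12    # assumed 2 trillion training tokens

# Training: ~6 FLOPs per parameter per token
train_flops = 6 * params * tokens          # ~8.4e23 FLOPs total

# Suppose one AI chip sustains ~1 petaFLOP/s (1e15 FLOPs/s, assumed)
gpu_flops_per_sec = 1e15
days_on_one_chip = train_flops / gpu_flops_per_sec / 86400
# roughly 9,700 days on a single chip -- which is why training
# runs on clusters of thousands of chips for weeks or months

# Inference: ~2 FLOPs per parameter per generated token
infer_flops_per_token = 2 * params         # ~1.4e11 FLOPs per token

print(f"{train_flops:.2e} training FLOPs, "
      f"{days_on_one_chip:.0f} single-chip days, "
      f"{infer_flops_per_token:.2e} FLOPs/token")
```

Even though a single inference token is cheap relative to training, serving millions of prompts concurrently is why inference also demands high-performance parallel hardware.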
Capitalizing on its vast experience with graphics processors (GPUs), which perform massively parallel operations, NVIDIA is the world leader in AI chips, each of which can cost tens of thousands of dollars (see A100, H100, Blackwell and GPU). See Tensor core, neural processing unit, deep learning and Cerebras AI computer.
The Xilinx Versal System-on-Chip
Today's chips often include AI processing. This Versal SoC contains more than 30 billion transistors and provides the parallel processing required for AI (green). It also contains programmable hardware, a rarity on any SoC (red) (see SoC and FPGA). See Versal.
(Image courtesy of Xilinx.)