Definition: CUDA


(Compute Unified Device Architecture) A platform from NVIDIA that enables the GPUs on its graphics cards to be used for general-purpose processing, including cryptography, physics simulation, computer vision, 3D rendering and AI. The CUDA application programming interface (API) exposes the GPU's parallel processing capabilities to the developer.

CUDA was introduced in 2007, when NVIDIA's flagship GPU was the GeForce 8 and AI was not on the tip of everyone's tongue. Unlike a CPU, which may contain a dozen or more cores for general data processing, a GPU can contain thousands of CUDA cores, each performing multiply-and-accumulate calculations in parallel for graphics rendering and high-performance computing. See GeForce and CUDA core.

CUDA and Tensor Cores
NVIDIA GPUs also include Tensor cores, which were designed for AI and operate in parallel on matrices (see Tensor core). CUDA, however, was created for general-purpose parallel computing, and CUDA programming is commonly used to control the Tensor cores efficiently. See AI programming and PhysX.

CUDA C/C++ and CUDA Fortran
CUDA programs are typically written in C or C++ and compiled with NVIDIA's nvcc compiler. A CUDA Fortran compiler was developed by the Portland Group (PGI), which was acquired by NVIDIA. See GPU.
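
As a minimal sketch of what CUDA C++ looks like (assuming a machine with an NVIDIA GPU and the nvcc compiler installed), the following program adds two arrays on the GPU. Each GPU thread computes one element, which is the core pattern behind CUDA's parallelism; the array names and sizes here are illustrative:

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Kernel: each thread computes one element of c = a + b.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                        // guard threads past the end of the array
        c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;            // one million elements (illustrative size)
    size_t bytes = n * sizeof(float);

    // Allocate and fill host (CPU) arrays.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Allocate device (GPU) memory and copy the inputs over.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);

    // Copy the result back to the host and spot-check it.
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %.1f\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

Compiled with a command such as `nvcc vecadd.cu -o vecadd`, the `<<<blocks, threads>>>` launch syntax is the CUDA-specific extension to C++: it tells the runtime how many parallel threads to spawn on the GPU's CUDA cores.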