To store data locally in order to speed up subsequent retrievals. Pronounced "cash." See Web cache and browser cache.
Reserved areas of memory (RAM) in every computer that are used to speed up processing. Pronounced "cash," they serve as high-speed staging areas that are constantly filled with the next set of instructions or data. Caches have faster input/output than the areas that feed them. For example, memory caches are built from memory chips that are faster than main memory, and disk caches are set aside in main memory, which is faster than disk.
A memory cache, also called a "CPU cache," is a memory bank that bridges main memory and the processor. Comprising faster static RAM (SRAM) chips than the dynamic RAM (DRAM) used for main memory, the cache allows instructions to be executed and data to be read and written at higher speed. Instructions and data are transferred from main memory to the cache in fixed blocks, known as cache "lines," using a look-ahead algorithm. See cache line, static RAM and dynamic RAM.
Temporal and Spatial (Time and Space)
Caches take advantage of "temporal locality," whereby unchanging data constants such as high-low limits, messages and column headers are used over and over again. Caches also benefit from "spatial locality," because the next instruction to be executed or the next set of data to be processed is often next in line. The more sequential they are, the greater the chance for a "cache hit." If the next item is not in the cache, a "cache miss" occurs, and it must be retrieved from slower main memory.
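To make spatial locality concrete, the following C sketch (illustrative only; the matrix size is an assumption, and timings vary by machine) reads the same matrix twice. The row-by-row pass touches consecutive addresses, so most accesses hit data in an already-fetched cache line; the column-by-column pass jumps thousands of bytes between accesses and misses far more often.

    #include <stdio.h>
    #include <time.h>

    #define N 2048
    static double m[N][N];   /* 32 MB, larger than typical CPU caches */

    int main(void) {
        double sum = 0.0;
        clock_t t0;

        /* Row-major pass: consecutive elements share cache lines (spatial locality). */
        t0 = clock();
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                sum += m[i][j];
        printf("row-major:    %.2f s\n", (double)(clock() - t0) / CLOCKS_PER_SEC);

        /* Column-major pass: each access jumps N*8 bytes to a different line. */
        t0 = clock();
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                sum += m[i][j];
        printf("column-major: %.2f s\n", (double)(clock() - t0) / CLOCKS_PER_SEC);

        return (int)sum;   /* use the result so the loops are not optimized away */
    }

Temporal locality shows up the same way whenever a loop revisits values that are still resident in the cache.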
Levels 1, 2 and 3 (L1, L2, L3)
Today's CPU chips contain two or three caches, with L1 being the fastest. Each subsequent cache is slower and larger than L1, and instructions and data are staged from main memory to L3 to L2 to L1 to the processor. On multicore chips, the L3 cache is generally shared among all the processing cores. See write-back cache and write-through cache.
Memory Cache Hierarchy
The whole idea is to keep staging instructions and data in memory that is ever closer to the speed of the processor. The caches are generally built into the CPU chip. See L2 cache.
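One rough way to observe the hierarchy (a sketch under assumed sizes; real L1/L2/L3 capacities differ from chip to chip) is to walk working sets of increasing size and time each access. The time per access typically jumps each time the working set outgrows another cache level and spills into the next, slower one.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    /* Hop through `size` bytes one 64-byte cache line at a time and
       return the average nanoseconds per access. */
    static double walk(char *buf, size_t size, size_t accesses) {
        size_t mask = size - 1;            /* size must be a power of two */
        volatile char sink = 0;
        clock_t t0 = clock();
        for (size_t i = 0, p = 0; i < accesses; i++) {
            sink ^= buf[p];
            p = (p + 64) & mask;
        }
        (void)sink;
        return (double)(clock() - t0) / CLOCKS_PER_SEC * 1e9 / accesses;
    }

    int main(void) {
        /* 16 KB fits in L1 on most CPUs; 64 MB is well past most L3 caches. */
        for (size_t size = 16 * 1024; size <= 64 * 1024 * 1024; size *= 2) {
            char *buf = malloc(size);
            if (!buf) return 1;
            memset(buf, 1, size);   /* touch every page so it is really resident */
            printf("%8zu KB: %5.2f ns/access\n", size / 1024,
                   walk(buf, size, 20 * 1000 * 1000));
            free(buf);
        }
        return 0;
    }

Because the stride here is predictable, the hardware prefetcher hides part of the miss penalty; pointer-chasing benchmarks expose the level boundaries more sharply.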
A disk cache is a dedicated block of memory (RAM) in the computer or in the drive controller that bridges storage and CPU. When the disk or SSD is read, a larger block of data is copied into the cache than is immediately required. If subsequent reads find the data already stored in the cache, there is no need to retrieve it from storage, which is slower to access.
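A minimal sketch of that read path in C (the 4 KB block size, eight slots and direct mapping are assumptions for illustration, not how any particular controller works): even when the caller wants only a few bytes, a whole block is copied into the cache, and a later read of the same block is served from RAM without touching the disk.

    #include <stdio.h>
    #include <string.h>

    #define BLOCK 4096               /* cache the "disk" in 4 KB blocks */
    #define SLOTS 8                  /* a tiny, direct-mapped cache */

    static char slot_data[SLOTS][BLOCK];
    static long slot_block[SLOTS];   /* which disk block each slot holds */
    static int  slot_valid[SLOTS];   /* zero-initialized: all slots start empty */

    /* Read one block through the cache. */
    static void cached_read(FILE *disk, long block, char out[BLOCK]) {
        int s = (int)(block % SLOTS);                    /* direct mapping */
        if (!slot_valid[s] || slot_block[s] != block) {
            /* Cache miss: fetch the whole block from the slow device. */
            fseek(disk, block * (long)BLOCK, SEEK_SET);
            size_t got = fread(slot_data[s], 1, BLOCK, disk);
            memset(slot_data[s] + got, 0, BLOCK - got);  /* zero-fill a short read */
            slot_block[s] = block;
            slot_valid[s] = 1;
        }
        /* Cache hit (or freshly filled slot): serve from fast RAM. */
        memcpy(out, slot_data[s], BLOCK);
    }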
If the cache is used for writing, data are queued up at high speed and then written to storage during idle machine cycles by the caching program or the drive controller. See cache coherency, write-back cache, write-through cache, pipeline burst cache, lookaside cache, inline cache, backside cache and NV cache.
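Continuing the toy cache sketched above, write caching amounts to marking a slot "dirty" and deferring the disk write, which is the essence of a write-back cache. (A complete version would also flush dirty slots evicted on the read path; this sketch keeps only the write side.)

    static int slot_dirty[SLOTS];    /* zero-initialized: nothing to flush yet */

    /* Write one block into the cache at RAM speed; the disk write is deferred. */
    static void cached_write(FILE *disk, long block, const char in[BLOCK]) {
        int s = (int)(block % SLOTS);
        if (slot_valid[s] && slot_block[s] != block && slot_dirty[s]) {
            /* Evicting a different dirty block: flush it to disk first. */
            fseek(disk, slot_block[s] * (long)BLOCK, SEEK_SET);
            fwrite(slot_data[s], 1, BLOCK, disk);
        }
        memcpy(slot_data[s], in, BLOCK);
        slot_block[s] = block;
        slot_valid[s] = 1;
        slot_dirty[s] = 1;           /* the copy on disk is now stale */
    }

    /* Run during idle machine cycles (and at shutdown) to drain dirty blocks. */
    static void flush_cache(FILE *disk) {
        for (int s = 0; s < SLOTS; s++)
            if (slot_valid[s] && slot_dirty[s]) {
                fseek(disk, slot_block[s] * (long)BLOCK, SEEK_SET);
                fwrite(slot_data[s], 1, BLOCK, disk);
                slot_dirty[s] = 0;
            }
    }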
Disk caches are usually a part of main memory comprising common dynamic RAM (DRAM) chips, whereas memory caches (CPU caches) use higher-speed static RAM (SRAM) chips. See dynamic RAM and static RAM.
Disk caches are a reserved area in main memory, whereas memory caches are hardware components in the CPU.