Definition: floating point


A way to represent very large and very small numbers using the same number of digit positions. Floating point also enables fast calculation across a wide range of magnitudes.

Although floating point dates back thousands of years, the concept is the same today. In a computer, the same quantity of floating point bits (8, 16, 32 or 64) can represent an extremely large number, an extremely small number or a vast range of values in between, depending on the power to which the number's base is raised.
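
For example, a small Python sketch (using the standard struct module, which packs values into the IEEE 754 32-bit layout) shows the same 32 bits holding a very large value, a very small value and an ordinary one; the sample values are illustrative only:

import struct

# Pack each value into 4 bytes (32 bits) and read the same bits back.
for value in (3.4e38, 1.2e-38, 7100.0):
    packed = struct.pack(">f", value)            # IEEE 754 single precision
    restored = struct.unpack(">f", packed)[0]
    print(f"{value:>12g} -> bits {packed.hex()} -> {restored:g}")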

Mantissa and Exponent
In ordinary decimal arithmetic, the decimal point is implied in a fixed location, between the units and tenths columns. In floating point arithmetic, however, the "radix point" is said to float because its location is determined by the power of 10 by which the numeric value is scaled. That power is called the "exponent."

In the computer, the bits devoted to a floating point number are divided between the numeric value, known as the "mantissa" (or "significand"), and the exponent, along with a sign bit. Every floating point number has a mantissa and an exponent (see below).
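
A quick way to see the mantissa/exponent split is Python's math.frexp, which breaks a float into a base-2 mantissa and exponent such that the value equals mantissa * 2**exponent (a minimal sketch; the sample values are arbitrary):

import math

for x in (7100.0, 0.15625, -71.0):
    mantissa, exponent = math.frexp(x)   # mantissa magnitude in [0.5, 1), base-2 exponent
    print(f"{x:>10} = {mantissa} * 2**{exponent}")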

Mostly Hardware
Although floating point calculations can be performed in software, most floating point processing is done in a floating point unit (FPU), which today is a circuit built into the CPU but was once a separate "math coprocessor" chip. See math coprocessor, binary numbers, Bfloat16 and NaN.


       FLOATING POINT EXAMPLES
       Mantissa  Exponent  Value

         71        0         71
         71        1        710
         71        2       7100
         71       -1          7.1
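
The table can be checked directly: in each row, the value is simply the mantissa multiplied by 10 raised to the exponent. A brief Python sketch of that check:

for mantissa, exponent in ((71, 0), (71, 1), (71, 2), (71, -1)):
    print(f"{mantissa} x 10^{exponent:>2} = {mantissa * 10.0 ** exponent:g}")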


Numbers Are Stored Four Ways
These examples show how the value 7100 can be stored in the computer.
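
As a rough Python sketch of common storage formats (the four formats chosen here are an assumption: character text, packed decimal, binary integer and 32-bit floating point), the value 7100 could be laid out as follows:

import struct

text    = b"7100"                       # one ASCII byte per digit
bcd     = bytes.fromhex("7100")         # packed decimal: two digits per byte
integer = (7100).to_bytes(2, "big")     # 16-bit binary integer
fp      = struct.pack(">f", 7100.0)     # IEEE 754 single-precision float

for label, raw in (("character text", text), ("packed decimal", bcd),
                   ("binary integer", integer), ("floating point", fp)):
    print(f"{label:>15}: {raw.hex()}")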