Binary Floating Point Converter
IEEE 754 Format
Understand how computers represent decimal numbers in binary floating point (IEEE 754). Convert between decimal and binary floating point — with sign bit, exponent, and mantissa breakdowns.
Interactive Tool Module
Interactive IEEE 754 Bit Layout
Drag the slider below to change the decimal value and watch its 32-bit IEEE 754 encoding update instantly: the sign bit, the biased exponent, and the mantissa bits each change as the value does.
Other Number System Conversions
Related numeral system converters for converting between binary, decimal, hexadecimal, octal, and ASCII text.
Binary to Decimal Converter · Binary Fraction to Decimal · Signed Binary to Decimal · Binary to Hexadecimal · Binary to Octal · Binary Base Converter · Binary Calculator · Binary Addition · Binary Subtraction Step-by-Step Solver · Binary Table Generator · Binary to Text · Text to Binary · Bit Size Converter · Bitwise Operations · Binary Decoder
FAQ
Frequently Asked Questions
What is IEEE 754 floating point?
IEEE 754 is the standard for representing floating-point numbers in binary. A floating-point number has three parts: a sign bit (0=positive, 1=negative), an exponent (biased), and a mantissa (significand). Single precision uses 32 bits (1+8+23), double precision uses 64 bits (1+11+52). This format allows computers to represent very large and very small numbers.
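The 1+8+23 split described above can be inspected directly. A minimal Python sketch using the standard `struct` module to pull the three fields out of a single-precision encoding (the function name `float32_fields` is our own, not part of any library):

```python
import struct

def float32_fields(x: float) -> tuple[int, int, int]:
    """Split a number into the sign, exponent, and mantissa bits
    of its IEEE 754 single-precision (32-bit) encoding."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31                # 1 bit: 0 = positive, 1 = negative
    exponent = (bits >> 23) & 0xFF   # 8 bits, biased by 127
    mantissa = bits & 0x7FFFFF       # 23 bits of the significand fraction
    return sign, exponent, mantissa

# 1.25 = 1.01 x 2^0, so the exponent is 0 + 127 and the
# mantissa stores '01' followed by 21 zero bits.
print(float32_fields(1.25))   # (0, 127, 2097152)
```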
Why can't 0.1 be represented exactly in binary floating point?
0.1 in decimal is a repeating fraction in binary: 0.0001100110011…₂ (repeating forever). Since floating point has finite bits for the mantissa, the value must be rounded, introducing a tiny error. This is why 0.1 + 0.2 ≠ 0.3 in many programming languages — the result is actually 0.30000000000000004.
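You can see both the rounding and the exact stored value from Python, where `Decimal` can print the double that actually represents 0.1:

```python
from decimal import Decimal

print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# The exact binary64 value stored for the literal 0.1:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```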
What is the difference between single and double precision?
Single precision (float) uses 32 bits: 1 sign + 8 exponent + 23 mantissa, giving about 7 decimal digits of precision and a range of ±3.4×10³⁸. Double precision (double) uses 64 bits: 1 sign + 11 exponent + 52 mantissa, giving about 15-16 decimal digits of precision and a range of ±1.8×10³⁰⁸.
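The precision gap is easy to demonstrate by rounding a double down to single precision. A sketch using a `struct` pack/unpack round-trip (the helper `to_float32` is our own name for this idiom):

```python
import struct

def to_float32(x: float) -> float:
    """Round a Python float (double precision) to the nearest
    single-precision value by packing it into 32 bits and back."""
    return struct.unpack(">f", struct.pack(">f", x))[0]

pi = 3.141592653589793
print(pi)              # double keeps ~16 significant digits
print(to_float32(pi))  # single keeps only ~7; digits beyond that change
```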
What are special floating point values?
IEEE 754 defines special values: Positive/negative zero (sign differs, all other bits 0), Positive/negative infinity (exponent all 1s, mantissa all 0s), NaN (Not a Number — exponent all 1s, mantissa non-zero). NaN results from undefined operations like 0/0 or √(-1). Infinity results from overflow or division by zero.
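These special values behave as described in any IEEE 754 implementation; a quick Python sketch, also checking the "exponent all 1s" bit pattern of single-precision infinity:

```python
import math
import struct

inf = float("inf")

print(1e308 * 10)             # inf: the product overflows double precision
print(math.isnan(inf - inf))  # True: inf - inf is undefined, so it yields NaN

nan = float("nan")
print(nan == nan)             # False: NaN compares unequal even to itself

# +inf in 32 bits: sign 0, exponent all 1s, mantissa all 0s.
print(hex(struct.unpack(">I", struct.pack(">f", inf))[0]))  # 0x7f800000
```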
What is the mantissa (significand) in floating point?
The mantissa stores the significant digits of the number. In IEEE 754, it uses an implicit leading 1 (normalized form), so the stored bits represent the fractional part after the leading 1. For example, the value 1.101₂ stores only '101' in the mantissa. This implicit bit effectively gives one extra bit of precision.
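The implicit leading 1 can be made explicit by rebuilding a value from its fields. A sketch for normalized single-precision numbers (subnormals, zeros, and specials are deliberately out of scope; `decode_float32` is our own name):

```python
import struct

def decode_float32(x: float) -> float:
    """Rebuild a normalized float from its IEEE 754 fields,
    restoring the implicit leading 1 of the significand."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF
    mantissa = bits & 0x7FFFFF
    # Normalized value: (-1)^sign * (1 + mantissa/2^23) * 2^(exponent - 127)
    return (-1) ** sign * (1 + mantissa / 2**23) * 2.0 ** (exponent - 127)

# 1.625 = 1.101 in binary: only '101' is stored in the mantissa,
# yet the decoded value includes the implicit leading 1.
print(decode_float32(1.625))  # 1.625
```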
What is floating point precision loss?
Precision loss occurs because floating point can only represent a finite number of values. Numbers that require more mantissa bits than available get rounded to the nearest representable value. This is why very large numbers lose integer precision: in 32-bit float, 16777216 + 1 = 16777216 because the mantissa can't represent that many significant digits.
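The 16777216 example above follows from the mantissa width: 23 stored bits plus the implicit 1 give 24 bits, so integers are exact only up to 2²⁴ = 16777216, after which consecutive representable values are 2 apart. A sketch using the same pack/unpack rounding trick (`to_float32` is our own helper name):

```python
import struct

def to_float32(x: float) -> float:
    """Round a double to the nearest 32-bit float."""
    return struct.unpack(">f", struct.pack(">f", x))[0]

n = 16777216.0                 # 2^24, the float32 exact-integer limit
print(to_float32(n + 1) == n)  # True: 16777217 rounds back to 16777216
print(to_float32(n + 2))       # 16777218.0: even integers are still exact
```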