Binary Floating Point Converter


IEEE 754 Binary Representation

Understand how computers represent decimal numbers in binary floating point (IEEE 754). Convert between decimal and binary floating point — with sign bit, exponent, and mantissa breakdowns.


What is the Binary Floating Point Converter?

The Binary Floating Point Converter is a tool that converts decimal numbers into their IEEE 754 floating-point binary representation and vice versa. It breaks down the binary encoding into its three components — sign bit, exponent, and mantissa (significand) — showing exactly how computers store fractional and very large or very small numbers.

This tool exists because IEEE 754 is the universal standard for floating-point arithmetic in computing, used by every modern CPU, GPU, and programming language. Understanding how decimal numbers like 0.1 or 3.14 are encoded in binary — and why precision loss occurs — is critical for developers working with scientific computing, financial calculations, graphics, and machine learning.

Whether you're debugging why 0.1 + 0.2 ≠ 0.3 in your code, studying computer architecture, or working with low-level data formats, this converter reveals the exact binary representation that the CPU sees.

Interactive Demo

Binary Floating Point Converter Formula

Enter a decimal number to see how it's decomposed into the three fields of a 32-bit IEEE 754 single-precision float: Sign, Exponent, and Mantissa.

32-Bit Float Analyzer — example input: −6.75

Sign (1 bit): 1 (negative)
Exponent (8 bits): 10000001₂ = 129, so the unbiased exponent is 129 − 127 = 2
Mantissa (23 bits): 1011000…, read with the implicit leading 1 as 1.1011₂
Formula: (−1)¹ × 2² × 1.1011₂ = −6.75
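The breakdown above can be reproduced in code. A minimal sketch in JavaScript (the function and field names are ours, not part of the tool): write the number as a float32 into a buffer, read the raw bits back as an unsigned 32-bit integer, and mask out the three fields.

```javascript
// Sketch: split a number into its IEEE 754 single-precision fields.
// A DataView lets us write a float32 and read back its raw bit pattern.
function decomposeFloat32(x) {
  const view = new DataView(new ArrayBuffer(4));
  view.setFloat32(0, x);
  const bits = view.getUint32(0);
  return {
    sign: bits >>> 31,              // bit 31
    exponent: (bits >>> 23) & 0xff, // bits 30–23, biased by 127
    mantissa: bits & 0x7fffff,      // bits 22–0; implicit leading 1 not stored
  };
}

console.log(decomposeFloat32(-6.75));
// { sign: 1, exponent: 129, mantissa: 5767168 }   (5767168 = 0b1011 << 19)
```

Note that writing the value with `setFloat32` already rounds it to single precision, so this shows exactly what a 32-bit float stores.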
Concept Guide

Why 0.1 + 0.2 ≠ 0.3

This is the most famous floating-point "bug" — but it's actually by design. The decimal fraction 0.1 is a repeating fraction in binary: 0.000110011001100… (forever).

Since IEEE 754 has finite bits (23 for the mantissa in single precision), the value gets rounded. When you add two rounded values, the rounding errors accumulate, producing 0.30000000000000004 instead of exactly 0.3.

In Code:

```javascript
console.log(0.1 + 0.2); // → 0.30000000000000004
```

This happens in every language that uses IEEE 754: JavaScript, Python, Java, C++, Rust, Go — all of them.

Precision Comparison

0.1 in binary (truncated): 0.00011001100110011001100…
0.2 in binary (truncated): 0.00110011001100110011001…
0.3 in binary (truncated): 0.01001100110011001100110…

All three are repeating fractions in binary, so all three are rounded when stored — and the rounded 0.1 plus the rounded 0.2 does not land exactly on the rounded 0.3.
Quick Reference

Special IEEE 754 Values

| Value | Sign | Exponent | Mantissa | Notes |
|---|---|---|---|---|
| +0 | 0 | 00000000 | 000…000 | Positive zero |
| −0 | 1 | 00000000 | 000…000 | Negative zero (equals +0) |
| +∞ | 0 | 11111111 | 000…000 | 1.0 / 0.0 |
| −∞ | 1 | 11111111 | 000…000 | −1.0 / 0.0 |
| NaN | 0/1 | 11111111 | non-zero | 0/0, √(−1) |
| Smallest normal | 0 | 00000001 | 000…000 | ≈ 1.18 × 10⁻³⁸ |
| Largest normal | 0 | 11111110 | 111…111 | ≈ 3.40 × 10³⁸ |
| Smallest subnormal | 0 | 00000000 | 000…001 | ≈ 1.40 × 10⁻⁴⁵ |
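The special values behave just as the Notes column suggests. JavaScript numbers are IEEE 754 doubles, but the same rules apply:

```javascript
console.log(1 / 0);            // Infinity
console.log(-1 / 0);           // -Infinity
console.log(0 / 0);            // NaN
console.log(Math.sqrt(-1));    // NaN
console.log(0 === -0);         // true  — −0 compares equal to +0 under ===
console.log(Object.is(0, -0)); // false — yet they are distinct bit patterns
```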
FAQ

Frequently Asked Questions

What is IEEE 754 floating point?
IEEE 754 is the standard for representing floating-point numbers in binary. A floating-point number has three parts: a sign bit (0=positive, 1=negative), an exponent (biased), and a mantissa (significand). Single precision uses 32 bits (1+8+23), double precision uses 64 bits (1+11+52). This format allows computers to represent very large and very small numbers.
Why can't 0.1 be represented exactly in binary floating point?
0.1 in decimal is a repeating fraction in binary: 0.0001100110011…₂ (repeating forever). Since floating point has finite bits for the mantissa, the value must be rounded, introducing a tiny error. This is why 0.1 + 0.2 ≠ 0.3 in many programming languages — the result is actually 0.30000000000000004.
What is the difference between single and double precision?
Single precision (float) uses 32 bits: 1 sign + 8 exponent + 23 mantissa, giving about 7 decimal digits of precision and a range of ±3.4×10³⁸. Double precision (double) uses 64 bits: 1 sign + 11 exponent + 52 mantissa, giving about 15-16 decimal digits of precision and a range of ±1.8×10³⁰⁸.
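The difference is easy to see in JavaScript, where all numbers are doubles but `Math.fround` rounds to the nearest single-precision value:

```javascript
// Double keeps 0.1 to ~16 significant digits; float32 only to ~7.
console.log(0.1);              // 0.1
console.log(Math.fround(0.1)); // 0.10000000149011612
```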
What are special floating point values?
IEEE 754 defines special values: Positive/negative zero (sign differs, all other bits 0), Positive/negative infinity (exponent all 1s, mantissa all 0s), NaN (Not a Number — exponent all 1s, mantissa non-zero). NaN results from undefined operations like 0/0 or √(-1). Infinity results from overflow or division by zero.
What is the mantissa (significand) in floating point?
The mantissa stores the significant digits of the number. In IEEE 754, it uses an implicit leading 1 (normalized form), so the stored bits represent the fractional part after the leading 1. For example, the value 1.101₂ stores only '101' in the mantissa. This implicit bit effectively gives one extra bit of precision.
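Normalization can be seen directly with `Number.prototype.toString(2)`:

```javascript
// 6.75 in binary is 110.11, i.e. 1.1011 × 2² after normalization;
// only the "1011" after the implicit leading 1 goes into the mantissa.
console.log((6.75).toString(2)); // 110.11
```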
What is floating point precision loss?
Precision loss occurs because floating point can only represent a finite number of values. Numbers that require more mantissa bits than available get rounded to the nearest representable value. This is why very large numbers lose integer precision: in 32-bit float, 16777216 + 1 = 16777216 because the mantissa can't represent that many significant digits.
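This is also observable from JavaScript: `Math.fround` simulates a 32-bit float, and doubles hit the same wall at 2⁵³:

```javascript
// 2^24 + 1 needs 25 significand bits; float32 has 24 (23 stored + implicit 1).
console.log(Math.fround(16777217)); // 16777216

// Doubles hit the equivalent limit at 2^53 (53 significand bits).
console.log(9007199254740992 + 1 === 9007199254740992); // true
```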