Binary Bit Size Converter
Data Size Reference
Understand binary bit sizes and their decimal value ranges. Convert between bits, nibbles, bytes, and larger units — with clear explanations of unsigned and signed ranges for each bit width.
Hardware Physical Scale Map
When you download a 1 MB file, your drive doesn't store "1 Megabyte" as an abstract label. It physically stores exactly 8,388,608 tiny on/off states: magnetic domains on a hard disk or electrical charges in flash memory. (That figure uses the binary megabyte: 1,048,576 bytes × 8 bits.) The figures below show the immense physical scale of modern data storage.
1 Megabyte requires exactly 8,388,608 physical bits.
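As a sketch in Python (function names here are illustrative, not part of the tool), the figure above falls out of the binary interpretation of "megabyte", while the decimal (SI) interpretation gives a slightly smaller bit count:

```python
def bits_in_mebibyte() -> int:
    """Binary megabyte (1 MiB): 1024 * 1024 bytes, 8 bits each."""
    return 1024 * 1024 * 8

def bits_in_megabyte_si() -> int:
    """Decimal (SI) megabyte (1 MB): 1,000,000 bytes, 8 bits each."""
    return 1_000_000 * 8

print(bits_in_mebibyte())     # 8388608, the figure above
print(bits_in_megabyte_si())  # 8000000
```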
Other Number System Conversions
Related converters for translating numbers between binary, decimal, hexadecimal, octal, and ASCII text.
Binary to Decimal Converter
Binary Fraction to Decimal
Signed Binary to Decimal
Binary to Hexadecimal
Binary to Octal
Binary Base Converter
Binary Calculator
Binary Addition
Binary Subtraction Step-by-Step Solver
Binary Table Generator
Binary to Text
Text to Binary
Floating Point Converter
Bitwise Operations
Binary Decoder
FAQ
Frequently Asked Questions
What is a bit?
A bit (short for "binary digit") is the smallest unit of data in computing. It has exactly two possible values: 0 or 1. All digital data — text, images, video, software — is ultimately stored and processed as sequences of bits.
How many values can 8 bits (1 byte) represent?
8 bits can represent 2⁸ = 256 different values. For unsigned integers, the range is 0 to 255. For signed integers (2's complement), the range is -128 to 127. One byte is enough to store one ASCII character, one color channel value (0-255), or one small integer.
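A minimal Python sketch of the two's-complement reading of a byte (the helper name `to_signed8` is made up for illustration): the same 8 bits mean different numbers depending on whether the top bit is treated as a sign.

```python
def to_signed8(value: int) -> int:
    """Interpret an unsigned byte (0-255) as a two's-complement int8."""
    assert 0 <= value <= 255
    # Values 128..255 wrap around to the negative range -128..-1.
    return value - 256 if value >= 128 else value

print(to_signed8(127))  # 127  (top bit clear: same as unsigned)
print(to_signed8(128))  # -128 (top bit set: most negative value)
print(to_signed8(255))  # -1   (all bits set)
```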
What is the difference between a kilobyte and a kibibyte?
A kilobyte (KB) is 1,000 bytes (decimal, SI standard). A kibibyte (KiB) is 1,024 bytes (binary). The difference matters: a 1 TB hard drive is 1,000,000,000,000 bytes (SI), but the OS reports it as ~931 GiB (binary). Storage manufacturers use decimal (larger numbers), while operating systems often use binary.
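The "1 TB advertised, ~931 GiB reported" gap can be checked with a one-line conversion; this is a sketch (the function name is illustrative), dividing the SI byte count by the binary gibibyte:

```python
def si_bytes_to_gib(total_bytes: int) -> float:
    """Convert a raw byte count to binary gibibytes (1 GiB = 1024**3 bytes)."""
    return total_bytes / 1024**3

# A "1 TB" drive as advertised (SI: 10**12 bytes)...
advertised = 1_000_000_000_000
# ...as an operating system reports it in binary units:
print(round(si_bytes_to_gib(advertised), 1))  # 931.3
```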
What are common bit sizes in computing?
Common bit sizes: 1 bit (boolean), 4 bits/nibble (hex digit), 8 bits/byte (character), 16 bits/word (short integer), 32 bits/dword (integer, IPv4 address), 64 bits/qword (long integer, modern pointers), 128 bits (UUID, IPv6 address), 256 bits (cryptographic hash).
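These widths map directly onto fixed-size machine types; Python's standard `struct` module, for example, exposes them via format codes (a small sketch, not an exhaustive table):

```python
import struct

# struct format codes for common unsigned widths:
# B = 8-bit, H = 16-bit, I = 32-bit, Q = 64-bit
for code, bits in (("B", 8), ("H", 16), ("I", 32), ("Q", 64)):
    print(f"{bits:>2}-bit -> {struct.calcsize(code)} byte(s)")
```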
How do you calculate the range of an n-bit unsigned integer?
For an n-bit unsigned integer: minimum value = 0, maximum value = 2ⁿ - 1, total values = 2ⁿ. Examples: 8-bit: 0 to 255 (256 values), 16-bit: 0 to 65,535, 32-bit: 0 to 4,294,967,295, 64-bit: 0 to 18,446,744,073,709,551,615.
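The formula generalizes to any width; a short Python sketch (helper name is illustrative) reproduces the examples above for the common sizes:

```python
def unsigned_max(bits: int) -> int:
    """Largest value an n-bit unsigned integer can hold: 2**n - 1."""
    return 2**bits - 1

for n in (8, 16, 32, 64):
    print(f"{n}-bit: 0 to {unsigned_max(n):,} ({2**n:,} values)")
```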
Why do computers use powers of 2 for sizes?
Computers use powers of 2 because binary circuits naturally subdivide into halves. Memory chips are organized in rows and columns that double: 256, 512, 1024, 2048. Address buses with n lines can access 2ⁿ locations. This makes powers of 2 the natural sizing unit for all digital hardware.
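The address-bus claim is just the same power-of-2 arithmetic; a tiny sketch (the function name is made up for illustration):

```python
def addressable_locations(address_lines: int) -> int:
    """An address bus with n lines can select 2**n distinct locations."""
    return 2 ** address_lines

print(addressable_locations(16))  # 65536 (a 64 KiB address space)
print(addressable_locations(32))  # 4294967296 (4 GiB)
```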