If you progress far enough in computer science you might learn some weird things. Weird things like the fact that, in most programming languages, '9.4 - 9.0 - 0.4 != 0' evaluates to true. This weird thing is an example of a rounding error. The purpose of this article is not to show why these happen; it is to explain, at an introductory level, how data is represented inside a computer.
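
You can try this yourself. Here is a quick check in Python (any language using standard floating point numbers behaves the same way):

```python
# 9.4 and 0.4 cannot be represented exactly in binary floating point,
# so the subtraction leaves a tiny leftover instead of exactly 0.
result = 9.4 - 9.0 - 0.4
print(result == 0)   # False
print(result)        # a tiny number very close to, but not, zero
```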


Binary

Binary is a number scheme that has only two valid characters to represent numbers: 0 and 1. In contrast, the number scheme most humans use to communicate numbers has 10 possible characters: 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. Our number scheme is called decimal.

If we look at the places of a number, for example the number 22, we have a 2 in the 10's place and a 2 in the 1's place. This is the same as having 2 tens ('10 + 10') and 2 ones ('1 + 1').

We can also represent decimal numbers in the following way.

1000's | 100's | 10's | 1's
 10^3  | 10^2  | 10^1 | 10^0

If you are not familiar with this notation, the '^' symbol just means 'raise the number on the left to the power of the number on the right', so 10^2 is '10 times 10'. Also, anything to the power of zero is one.
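
If you want to check these values, Python writes exponentiation as '**' rather than '^':

```python
# the place values of the decimal chart above
print(10 ** 3)  # 1000
print(10 ** 2)  # 100
print(10 ** 1)  # 10
print(10 ** 0)  # 1  (anything to the power of zero is one)
```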

The following is a chart of the places for binary numbers.

 8's | 4's | 2's | 1's
 2^3 | 2^2 | 2^1 | 2^0

Binary follows the same pattern as decimal; only, because we have 2 characters instead of 10, each place is worth 2^n instead of 10^n, where n is the number indicating the position.
So if we have the number 0101 in binary, that is the same as '4 + 1', which is '5'.
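
A quick way to check binary arithmetic like this is Python's built-in int() function with an explicit base:

```python
# '0101' in base 2: 0*8 + 1*4 + 0*2 + 1*1
print(int('0101', 2))  # 5

# the same value, built up place by place
print(0 * 2**3 + 1 * 2**2 + 0 * 2**1 + 1 * 2**0)  # 5
```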

Binary follows a pattern. As an exercise, try writing all the numbers from 0 to 15 in binary. You can check your work by looking it up on the Internet.
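
If Python is handy, you can also check your answers with the built-in bin() function:

```python
for n in range(16):
    # bin() returns a string like '0b101'; strip the '0b' prefix and
    # pad to four places to match the charts above
    print(n, bin(n)[2:].zfill(4))
```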


Hexadecimal

'Hex' means six and 'decimal' means ten. Put them together and you get hexadecimal: base 16. Hexadecimal is what computer scientists use as a shorthand for binary in coding applications. Hexadecimal has 16 characters to use: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, and F. The symbols after 9 represent the numbers 10 - 15. Hexadecimal (hex for short) follows the same place pattern as the other number schemes.

 4096's | 256's | 16's | 1's
  16^3  | 16^2  | 16^1 | 16^0

So the number 002F is the equivalent of '16 + 16 + 15' (2 sixteens and 15 ones), or '47'.
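
Again, int() with an explicit base can verify this:

```python
# '002F' in base 16: 0*4096 + 0*256 + 2*16 + 15*1
print(int('002F', 16))  # 47
print(2 * 16**1 + 15 * 16**0)  # 47
```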

We use hex as a shorthand for binary because it's convenient. Let me show you.

In binary we may have a long number such as '0000 0010 1001 1111' (we put spaces between sets of four in binary to improve readability). Every set of four places (or bits, as the places are called in binary) can be represented by a single hex character. So our number from before, in hex, is '029F'.
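
You can watch this conversion happen in Python with the format() built-in:

```python
n = 0b0000001010011111      # the long binary number from above
# each group of 4 bits becomes one hex character
print(format(n, '04X'))     # '029F'
# and back again, 16 bits wide
print(format(n, '016b'))    # '0000001010011111'
```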

Binary and Computer Memory

 You can think of computer memory as a long list of rows. Each row has a certain number of bits (binary number places). Each bit can be either on or off; on is represented with a 1 and off with a 0. At a much lower level than most programmers work, you can actually specify a place in memory by its number and retrieve the value that resides there. For example:

 location | value
 0000 | 0000 0000 0000 0000
 0001 | 0000 0000 0100 0001
 0002 | 0000 0000 0000 0000
 0003 | 0000 0000 0000 0000
  ... | ...
 FFFF | 0000 0000 0000 0000
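
This row-of-cells model can be sketched with a Python bytearray (here each cell holds 8 bits rather than the 16 shown above; the locations and values are just illustrative):

```python
memory = bytearray(4)            # four cells, all initially 0000 0000
memory[1] = 0b01000001           # store the value 65 at location 1
print(memory[1])                 # 65
print(format(memory[1], '08b'))  # '01000001'
```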

In computer memory everything is stored as binary values: images, movies, sound, text... everything. It is up to the programmer to interpret what those values actually mean.


ASCII

A long time ago, near the beginning of modern computers, ASCII was developed. ASCII was (and still is) a set of rules for interpreting a number as a letter or character. In our previous example, at location hex 0001, we have a value that equals the number 65. If the programmer interprets that location as a character, it is the letter 'A'.
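
Python's built-in chr() and ord() functions perform exactly this number-to-character mapping:

```python
print(chr(65))          # 'A'  -- the number 65 interpreted as a character
print(ord('A'))         # 65   -- and back again
print(chr(0b01000001))  # 'A'  -- the same number, written in binary
```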

Floating Point Numbers

Also a long time ago, a standard (IEEE 754) was formed by the IEEE to interpret numbers that have a decimal point. The examples above can only be interpreted as whole numbers (numbers with no decimal point). We call numbers that have decimal places 'floats' or 'floating point' numbers. Almost all computers use this standard.
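
You can peek at the IEEE 754 bit pattern of a float with Python's struct module (shown here for the 32-bit 'single precision' format):

```python
import struct

# pack 1.0 as a big-endian 32-bit IEEE 754 float and show its bytes in hex
raw = struct.pack('>f', 1.0)
print(raw.hex())  # '3f800000': sign bit 0, exponent 01111111, mantissa all zeros
```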

High Level Programming Languages

In high level programming languages, you won't have to worry about the binary representation of variables. In C, C++, Java, or similar strongly typed languages, the compiler knows what binary representation to use when you declare a variable: 'int' declares a (depending on the language) 32-bit integer, a 'char' declares a character value (at least 8 bits long), and so on.
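
Even from Python you can ask how many bytes such C types occupy, via the ctypes module (exact sizes depend on the platform and compiler; the values shown in the comments are typical, not guaranteed):

```python
import ctypes

print(ctypes.sizeof(ctypes.c_char))  # 1 byte, by definition
print(ctypes.sizeof(ctypes.c_int))   # commonly 4 bytes (32 bits)
```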

The point is that most of the time you don't have to deal directly with interpreting bytes and binary notation for different types.


Most of the time you won't need to deal with how your variables are represented. You won't need to deconstruct them or anything like that (unless you are doing lower level work or interfacing directly with hardware).
However, it still affects you as a programmer, and it is happening whether you know about it or not.