So far we've looked at a lot of numbers, but we've seen all of them from a mathematical perspective. That meant we had all the symbols we could possibly want: pluses, minuses, other arithmetic symbols, even a symbol to specify which base we're in. In a computer we don't have that luxury. All we have are zeros and ones to represent anything we want, so to represent a number we'll have to find a way to encode it using just zeros and ones. Sign and magnitude is the simplest way of doing this: we take our regular number in binary, use one bit for the sign, and use the remaining bits to hold the magnitude of the number. So if I've got a 32-bit number, I'll use the first bit to represent whether it's positive or negative, and then use the other 31 bits to encode the value itself. It's a really simple and straightforward scheme, and it's easy to read off what a number is, but it turns out the arithmetic is hard. So we don't actually use this format in hardware for integer numbers anymore, but we will see something like it again when we get to floating-point numbers.
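To make that concrete, here's a minimal sketch in C of how a 32-bit sign-and-magnitude encoder and decoder might look. The function names (encode_sign_magnitude, decode_sign_magnitude) and the choice of uint32_t as the container are illustrative assumptions, not anything from the lecture.

```c
#include <stdint.h>
#include <stdio.h>

/* Pack a value into sign-and-magnitude form: bit 31 holds the sign
 * (1 = negative), bits 30..0 hold the magnitude.
 * Representable range is -(2^31 - 1) to +(2^31 - 1); note that
 * INT32_MIN has no sign-and-magnitude encoding in 32 bits. */
uint32_t encode_sign_magnitude(int32_t value) {
    uint32_t sign = (value < 0) ? 1u : 0u;
    /* Widen before negating so -INT32_MIN doesn't overflow. */
    uint32_t magnitude = (value < 0) ? (uint32_t)(-(int64_t)value)
                                     : (uint32_t)value;
    return (sign << 31) | (magnitude & 0x7FFFFFFFu);
}

/* Unpack: read the sign bit, then apply it to the 31-bit magnitude. */
int32_t decode_sign_magnitude(uint32_t bits) {
    uint32_t magnitude = bits & 0x7FFFFFFFu;
    return (bits >> 31) ? -(int32_t)magnitude : (int32_t)magnitude;
}

int main(void) {
    int32_t x = -42;
    uint32_t bits = encode_sign_magnitude(x);
    printf("%d encodes to 0x%08X, decodes back to %d\n",
           x, bits, decode_sign_magnitude(bits));
    /* Quirk of the format: 0x00000000 and 0x80000000 both decode
     * to zero, i.e. sign-and-magnitude has a +0 and a -0. */
    return 0;
}
```

One consequence you can see straight away in this encoding is that it has two zeros, +0 (all bits clear) and -0 (only the sign bit set), which is one of the reasons the arithmetic hardware gets awkward.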