We've seen how we can represent integers in a computer, but we'd also like to be able to represent numbers from the real number line. We've tried a few different ways of implementing real numbers over the years, but the only one that's really stuck is the IEEE 754 standard. This is a relatively straightforward method, but it has a few different pieces that we'll have to work with, and each of them works a little differently.

First of all, this is a method for encoding normalized scientific notation. So the first thing we'll have to do is take our number, convert it to binary, and then convert the binary number into normalized scientific notation: one point something, something, something, times two to some exponent.

The format then has three parts. We've got a sign, as in the sign-magnitude format, and that takes up our leftmost bit. Then we've got an exponent field, which holds the exponent from our scientific notation. Lastly, we've got the mantissa, and that holds all of the bits after that leading one.

In normalized scientific notation, we need exactly one non-zero digit ahead of the radix point, and in binary that digit has to be a one. Since we always have a leading one, there's really no reason to encode it in our format. We can just accept that it will always be there and not bother writing it down. Instead, we only write down everything after the binary point, and all of that ends up in the mantissa.

Now, there are lots and lots of real numbers, so we end up with two issues to worry about. We have scale: how large or how small a number can I have? And we have precision: how accurate is this representation of my number? We address both by having the mantissa for precision and the exponent for scale. In the 32-bit format, we use 8 bits for our exponent and 23 bits for our mantissa.
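To make those three fields concrete, here's a short Python sketch that reinterprets a value's 32-bit pattern and pulls the sign, exponent, and mantissa back out. One detail worth flagging as an assumption here: the stored exponent isn't the raw exponent from scientific notation; the standard adds a fixed bias of 127 in the 32-bit format, which the sketch subtracts off to recover the true exponent.

```python
import struct

def float32_fields(x):
    """Pack x as a 32-bit IEEE 754 float and extract its three fields."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31                # 1 bit: 0 for positive, 1 for negative
    exponent = (bits >> 23) & 0xFF   # 8 bits, stored with a bias of 127
    mantissa = bits & 0x7FFFFF       # 23 bits: the digits after the hidden leading 1
    return sign, exponent, mantissa

# 6.5 in binary is 110.1, which normalizes to 1.101 x 2^2.
sign, exponent, mantissa = float32_fields(6.5)
print(sign)                # 0 (positive)
print(exponent - 127)      # 2: the true exponent after removing the bias
print(f"{mantissa:023b}")  # 10100000000000000000000: the ".101" after the hidden 1
```

Notice that the leading one of `1.101` appears nowhere in the stored bits, exactly as described above: only the `101` after the point survives, padded out to 23 bits.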
If we move up to a 64-bit format, we use 11 bits for our exponent and 52 bits for our fraction. There are other versions of this format, such as 128 bits or 256 bits, that increase these further. But for most purposes, we find that 64 bits works reasonably well these days. It gives us a good balance between range and precision without requiring specialized hardware or additional computation to handle all the extra bits. For most of the examples, I'll stick to using 32 bits, because that's reasonable to write out in a small space like this. But on a computer, you probably want to use double-precision 64-bit floating-point numbers.

Now, this format does have a couple of odd things to it. The first is that we can represent more than just regular real numbers; we also have three special values. First is positive infinity. We represent positive infinity by setting our sign bit to zero (positive), setting all of our exponent bits to one (because infinity is larger than anything else), and setting all of the fraction bits to zero. For negative infinity, we do pretty much the same thing, except we set the sign bit to one (negative); the exponent bits are all still one, and the mantissa bits are all still zero. The last option is "not a number," or NaN. NaN is what you get when all of the exponent bits are one and there are any non-zero bits in the fraction.

The IEEE 754 standard allows us to do some things that we might not otherwise. We can divide by zero: divide a positive number by zero and we get positive infinity; divide a negative number by zero and we get negative infinity. We can also do some basic arithmetic with our infinities. We can add things to them, multiply by them, and divide by them, as long as we don't try things like zero times infinity or infinity divided by infinity. In those cases, we still don't know what to do, and we end up getting NaN. And once you get a NaN, then you're in trouble.
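Here's a small Python sketch of those special values and their bit patterns, reusing the same `struct` trick to look at the 32-bit encoding. One caveat to note as an assumption about the host language: Python itself raises `ZeroDivisionError` for `1.0 / 0.0` rather than returning infinity, even though returning infinity is the IEEE 754 default, so the sketch builds its infinities from `math.inf` instead.

```python
import math
import struct

def bits32(x):
    """Show a value's raw 32-bit pattern as sign | exponent | mantissa."""
    (b,) = struct.unpack(">I", struct.pack(">f", x))
    return f"{b >> 31} | {(b >> 23) & 0xFF:08b} | {b & 0x7FFFFF:023b}"

# Positive infinity: sign 0, all exponent bits one, all mantissa bits zero.
print(bits32(math.inf))   # 0 | 11111111 | 00000000000000000000000
# Negative infinity: the same, but with the sign bit set.
print(bits32(-math.inf))  # 1 | 11111111 | 00000000000000000000000
# NaN: all exponent bits one, any non-zero mantissa (the exact pattern varies).
print(bits32(math.nan))

# Arithmetic with infinities works as long as the answer is determined:
print(math.inf + 5)         # inf
print(-2 * math.inf)        # -inf
print(1 / math.inf)         # 0.0
# Indeterminate cases produce NaN:
print(math.inf - math.inf)  # nan
print(0 * math.inf)         # nan
```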
We really can't do any sort of arithmetic with a NaN. We have no idea what value we have, no idea what it should be, or how to work with it. So if you have a NaN, any arithmetic you do with it will also produce NaN. And if you try any comparison with NaN, you'll get back false; that includes comparing NaN to itself. (The one exception is "not equal," which always comes back true.) Fortunately, the only way you can get a NaN from arithmetic is by doing indeterminate operations: things like zero times infinity, infinity divided by infinity, or zero divided by zero. Those are cases where we just don't know what the answer should be. There's an infinite number of possible solutions, and the computer has no way to pick one. But for everything else, the IEEE 754 standard allows us to do arithmetic that we might not otherwise be able to do.
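A quick Python sketch of that NaN behavior: arithmetic with NaN propagates NaN, every comparison with it comes back false except "not equal," and the reliable way to detect one is a dedicated test like `math.isnan` rather than `==`.

```python
import math

nan = math.nan

# Any arithmetic involving NaN produces NaN:
print(nan + 1)           # nan
print(nan * 0)           # nan

# Comparisons with NaN are false, even against itself:
print(nan == nan)        # False
print(nan < 1)           # False
print(nan > 1)           # False
# ...except "not equal," which is always true:
print(nan != nan)        # True

# So the reliable way to test for NaN is a dedicated check:
print(math.isnan(nan))   # True
```

This is why `x != x` is sometimes used as a portable NaN test: NaN is the only value for which it's true.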