We saw the IEEE 754 standard could give us some interesting results when we set all of our exponent bits to one. What happens if we set all of those exponent bits to zero? In this case, we expect to have a really, really small number. And traditionally, we would just say, oh, this is our smallest possible number, with that leading one still fixed in place. But sometimes we want to go even smaller than that. Simply two to the minus 126 might not be sufficient for us. We might want something that's actually really, really small.

This might not seem like a big issue, but it turns out it comes up a whole lot in scientific computing. There are lots of cases where the difference between two values is just very, very tiny: greater than zero, but smaller than two to the minus 126.

So the IEEE 754 standard gives us a way to work around this. Once we set all of those exponent bits to zero, we're going to say we have a denormalized number. This means we no longer have that implicit leading one; we're not in normalized scientific notation anymore. For a denormalized number, the bit ahead of the binary point is treated as a zero, so the value is zero point fraction times two to the minus 126, with all 23 fraction bits (or 52 in double precision) sitting after the binary point.

But that means I can have a whole run of leading zeros before I actually get to the first one in my number. I'm going to lose a whole lot of precision if I've added an extra 20 zeros, but I can get a number that's much, much smaller. So instead of bottoming out at two to the minus 126, I can get all the way down to two to the minus 149. That's much closer to zero, and it turns out this alleviates a whole lot of the problems with floating-point underflow. The encoding method looks horrible and ugly and hard to understand, but it increases the range of numbers that we have to work with, and that solves a major problem for us.
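We can poke at this directly. The discussion above is in single-precision terms (normal floor around two to the minus 126, smallest denormal two to the minus 149); Python floats are doubles, so this sketch uses the double-precision analogues, two to the minus 1022 and two to the minus 1074. The bit-pattern trick with `struct` is just one way to build the smallest denormal by hand.

```python
import struct
import sys

# Smallest positive *normalized* double: minimum nonzero exponent field,
# fraction all zeros -> 1.0 * 2**-1022.
smallest_normal = sys.float_info.min
assert smallest_normal == 2.0 ** -1022

# Bit pattern 0x0000000000000001: exponent bits all zero, fraction = 1.
# That's the smallest positive *denormalized* double, 2**-1074.
smallest_denormal = struct.unpack(">d", (1).to_bytes(8, "big"))[0]
assert smallest_denormal == 2.0 ** -1074

# Gradual underflow in action: the difference of two distinct tiny
# normalized values lands below the normal floor, but denormals let it
# survive as a nonzero result instead of collapsing to zero.
a = 1.5 * 2.0 ** -1022
b = 1.0 * 2.0 ** -1022
assert a - b == 2.0 ** -1023  # denormalized, and still not zero
```

Without denormalized numbers, that last subtraction would have to round to zero, which is exactly the scientific-computing problem described above.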