We're going to start by talking about metric prefixes. Chances are you've seen some of these before in a physics or chemistry course, and you may have actually worked with some of them. They're designed to give us a way to talk about a wide range of numbers without having to write out all of the decimal digits that might otherwise exist, and they let us talk about things that are very, very large as well as things that are very, very small.

As an example of something very, very large, the diameter of the observable universe is about 870 yottameters. The observable universe is enormous, but I'm able to say just "870 yottameters." I don't have to say 870 followed by 24 zeros, or even 870 septillion meters; either of those would be cumbersome to write out. And in a lot of cases, we're already used to seeing things at some of these scales.

On the small side, the mass of a proton is about 1.67 yoctograms. A yoctogram, then, is something very, very light. It doesn't weigh very much at all: we need a septillion (10^24) yoctograms just to make up one gram. So there are a whole lot of yoctograms in one gram.

For a couple more examples, the diameter of the Earth is about 12 megameters. Again, very large, but not too far off the scales we're used to. We're used to seeing things like kilometers, so something in megameters isn't too far off; we could say that's 12,000 kilometers. On the small side, the diameter of a hydrogen atom is about 110 picometers, and the size of a flu virus is about 100 nanometers. Both of those are very, very small things.

In computer science, we tend to look at things that are in the medium range. We use things like terabytes and milliseconds to talk about how our software works and how much space it takes up.
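Since every prefix is just a power of ten, these conversions are simple multiplications. As a sketch (the prefix table is standard SI; the `convert` helper is just for illustration):

```python
# SI metric prefixes expressed as powers of ten.
PREFIXES = {
    "yotta": 24, "zetta": 21, "exa": 18, "peta": 15, "tera": 12,
    "giga": 9, "mega": 6, "kilo": 3, "": 0,
    "milli": -3, "micro": -6, "nano": -9, "pico": -12,
    "femto": -15, "atto": -18, "zepto": -21, "yocto": -24,
}

def convert(value, from_prefix, to_prefix):
    """Convert a value between two prefixes of the same base unit."""
    return value * 10 ** (PREFIXES[from_prefix] - PREFIXES[to_prefix])

print(convert(12, "mega", "kilo"))  # 12 megameters -> 12000 kilometers
print(convert(1, "", "yocto"))      # 1 gram -> 1e24 yoctograms
```

This is the same arithmetic as the examples above: 12 megameters is 12,000 kilometers, and one gram is a septillion yoctograms.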
Our 64-bit architectures, though, can provide up to 16 exabytes of addressable memory, which is a whole lot more than we should reasonably need for the foreseeable future. The current computational power of the BOINC computing grid is about 10 petaflops, in contrast to your average desktop computer at about 5 gigaflops. So your average desktop can process about 5 billion floating-point operations every second, whereas the BOINC grid can process about 10 quadrillion floating-point operations every second.

We mostly work with things in this general middle range. We won't be worrying about prefixes like centi- or hecto-; those are used in some areas, but not so much in computer science. We're also not really going to be worried about the very, very large things. Sure, we have 16 exabytes of addressable memory, but we don't really use that. And on the small side, we have some hardware features that are in the picometer range, but mostly we stay in the nano range for things like time and feature size in hardware. We'll start using these more in the next few videos when we start looking at power estimation and computing performance.
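A quick check of the two computing figures above, assuming the stated 5 gigaflops and 10 petaflops (note that "16 exabytes" for a 64-bit address space is the binary sense, 16 × 2^60 bytes):

```python
# Powers of ten for the prefixes used above.
giga, peta = 10 ** 9, 10 ** 15

desktop_flops = 5 * giga   # ~5 gigaflops for an average desktop
grid_flops = 10 * peta     # ~10 petaflops for the grid

# How many desktops' worth of compute the grid represents.
print(grid_flops // desktop_flops)  # 2000000 -- about two million

# A 64-bit address space covers 2^64 bytes, i.e. 16 binary exabytes.
print(2 ** 64 == 16 * 2 ** 60)      # True
```

So the grid is on the order of two million times faster than a single desktop, which is exactly the kind of gap these prefixes let us state in a word or two.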