This time we're going to start looking at memory. So far we've seen how memory works from a programming perspective: how we can write things to memory, how we can read them back in, and how we can use memory to help us write programs. But now we're going to start looking at how we can actually build memory. How do we structure it? How are the pieces related? And what issues arise in working with memory?

We pretty much know that memory holds data and code for us. In the MIPS architecture we put data in one part of memory and our code in a different part. In other architectures they're mixed together, bundled up by the individual processes that are running. We're going to be looking at several different types of memory, and each type will have different properties. We'll look at some that are really, really fast. You can get them to run at the same speed as the processor, which is great for building into the actual architecture: it means we'll be able to pull out one or even more instructions every cycle, and we can read and write data every cycle too. But those are really small, and they turn out to be really expensive. There's only so much room on our processor, so there's only so much room for a really fast bit of memory. Other types of memory are really large. They can hold huge amounts of stuff, more than you might ever want to have in memory at any one time. They're also nice because they're cheap, but they can take a whole lot longer to access: it can take 10 or 20 clock cycles to get a piece of data out of that kind of memory. So what we did was build a hierarchy. This lets us exploit the features of memory that's small but fast, as well as memory that's large but slow, along with some in between. So we actually have some memory that's physically on the CPU, which we call cache. As I mentioned, this type of memory is really, really fast.
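To see why the hierarchy pays off, it helps to work through the standard average memory access time (AMAT) formula. The numbers below are illustrative assumptions, not measurements from any particular machine: a 1-cycle hit in the fast memory (matching the processor-speed memory above) and a 20-cycle penalty for going out to the slow, large memory.

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time in cycles: every access pays the
    hit time, and misses additionally pay the miss penalty."""
    return hit_time + miss_rate * miss_penalty

# Assumed latencies: 1-cycle cache hit, 20 extra cycles to reach slow memory.
print(amat(hit_time=1, miss_rate=0.05, miss_penalty=20))  # 95% hit rate -> 2.0 cycles
print(amat(hit_time=1, miss_rate=1.0, miss_penalty=20))   # no cache at all -> 21.0 cycles
```

Even a modest hit rate pulls the average cost close to the fast memory rather than the slow one, which is the whole point of building the hierarchy.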
You can access some types of this in a single clock cycle. But it's tiny. I have a 2015 desktop at home with an Intel i7 processor in it, and it has three levels of this cache. The lowest level, L1, has two different types of cache: an instruction cache and a data cache, separated out so that you can access both of them every cycle. But these are tiny. Each one of them is a mere 32 kilobytes in size. It turns out that I have a pair of these for each core in my processor, so with four cores I have four times 32 kilobytes of L1 data cache and four times 32 kilobytes of L1 instruction cache: 128 kilobytes of each overall. But that's still tiny. The L2 caches are a little bit better; each core has 256 kilobytes. Even so, all four of them together give my processor less than one ancient floppy disk's worth of L2 space. The L3 cache is starting to look a little better. It's a whopping 8 megabytes in size. This one, on the other hand, is shared between all of the cores in the CPU, which makes the L3 cache a great place for the different cores to share bits of data. If they're all interested in working on a similar problem, they can put the shared data in the L3 cache and pull from it relatively quickly. Alternatively, if one of the cores just isn't using much memory, a different core can take up that slack. This machine happens to have 32 gigabytes of random access memory in it, which is relatively large. I could run almost everything that's installed on this machine concurrently and it would still have trouble reaching that limit, which is great if I ever needed to do that, unlikely but possible. So I don't actually have a whole lot of dedicated virtual memory on this machine. There's only 2 gigabytes of hard drive space actually allocated to being virtual memory. So in this case my virtual memory is really just acting as an extension to my RAM.
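If you want to check the cache layout on your own machine, Linux exposes it through sysfs. This is a Linux-only sketch (the /sys path is a kernel interface, not something from the lecture); it lists each cache level for CPU 0, and returns an empty list on systems where those files don't exist.

```python
import glob
import os

def cpu0_caches():
    """Read (level, type, size) for each of CPU 0's caches from Linux
    sysfs. Returns [] where the interface is absent (macOS, Windows)."""
    caches = []
    for index in sorted(glob.glob("/sys/devices/system/cpu/cpu0/cache/index*")):
        def read(name):
            with open(os.path.join(index, name)) as f:
                return f.read().strip()
        try:
            caches.append((read("level"), read("type"), read("size")))
        except OSError:
            pass  # skip entries we can't read
    return caches

for level, kind, size in cpu0_caches():
    print(f"L{level} {kind}: {size}")  # e.g. "L1 Data: 32K"
```

On the four-core i7 described above you'd expect to see a 32K Data and 32K Instruction entry at L1, a 256K unified entry at L2, and an 8192K unified entry at L3 (each core sees the same shared L3).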
It's just this extra chunk of space sitting over there in case the computer decides it actually needs it. It also means that the machine won't fall over immediately when it runs out of RAM. It will try to degrade its performance gracefully by spilling into the virtual memory instead of just killing off programs when they run out of space.
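"Degrading gracefully" still hurts, though, because a hard drive is many orders of magnitude slower than RAM. Here's a back-of-the-envelope sketch; the latencies are my round-number assumptions (roughly 100 ns for RAM, 10 ms for a spinning disk), not figures from the lecture. It shows how even a tiny page-fault rate comes to dominate the effective access time.

```python
def effective_access_ns(fault_rate, ram_ns=100, disk_ns=10_000_000):
    """Effective memory access time once a fraction of accesses
    must be paged in from disk (assumed latencies, see above)."""
    return (1 - fault_rate) * ram_ns + fault_rate * disk_ns

print(effective_access_ns(0.0))    # everything in RAM: 100.0 ns
print(effective_access_ns(0.001))  # 1 fault per 1000 accesses: ~10100 ns, ~100x slower
```

So the machine stays up, but anything that actually lives in swap runs dramatically slower; the 2 gigabytes here is a safety net, not a substitute for more RAM.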