When I write something back to my memory, well, chances are I've already got that data sitting in my cache and I'm just overwriting it. So I overwrite my cache, but now what? I have a couple of options here.

One is that I could copy that update to all the places where I've got that piece of data: write it through to all of my caches and to main memory. That's a write-through strategy, and it can be rather time consuming and ends up consuming a lot of memory bandwidth, especially as you get to the larger memory structures.

Alternatively, we could write to the higher levels of memory only when we evict a block. Once we're done with a block, we write all of its changes back to the higher level of memory. This, on the other hand, means we need some way of indicating that the data in those higher levels is not up to date: this is not the latest version of this data, we've changed it. So we use what's called a dirty bit. We just mark that piece of data as dirty. Then if another core or another processor tries to read that piece of data, we say: no, no, don't use that. We'll go find the latest version of that block of data and bring it to you. It will just take some time. This write-back strategy consumes less memory bandwidth, but if you have a conflict where you try to read that dirty data, it takes a lot longer to get it.

But I'm not always just overwriting data that I already had in my cache. Sometimes I'm writing to a fresh piece of memory, or at least something I haven't read from recently. In that case, I've got two options. I can read that block out of memory into my cache and then overwrite it the way I've done before; that would be a write-allocate strategy. Or I could skip the cache entirely and just write the data to wherever it currently lives; if that means it goes all the way up to main memory, then so be it. That's a no-write-allocate strategy.

Both of these have interesting effects. The write-allocate method means that you have to load the data first.
That means incurring the penalty of fetching that data from wherever it actually is, then updating it, and perhaps later having to propagate that change back through the levels of memory. The no-write-allocate method means you just go find that piece of data where it lives and change it. That works fine if you're only updating a tiny piece of memory, but if you're writing to an array, walking down it iteratively and updating one byte or one word at a time, it results in a whole lot of requests to high-level memory. Whereas if you pulled that block of memory down into your cache, you would change the block there and do one large write-back later.
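To make the write-back and dirty-bit machinery concrete, here is a minimal sketch in Python. Everything here is illustrative: the class name, the dictionary-based backing store, and the eviction choice are all my own simplifications; a real cache works with tags, sets, and fixed-size blocks, and picks a victim with a policy like LRU. Writes follow the write-allocate approach described above.

```python
class WriteBackCache:
    """Toy write-back, write-allocate cache (hypothetical sketch)."""

    def __init__(self, memory, capacity=4):
        self.memory = memory      # backing store: dict of addr -> value
        self.capacity = capacity  # max number of cached lines
        self.lines = {}           # addr -> {"value": v, "dirty": bool}
        self.writebacks = 0       # how many dirty lines were flushed

    def _evict_if_full(self):
        if len(self.lines) >= self.capacity:
            # Evict an arbitrary line (a real cache would use LRU or similar).
            addr, line = self.lines.popitem()
            if line["dirty"]:
                # Only now does the change propagate up to main memory.
                self.memory[addr] = line["value"]
                self.writebacks += 1

    def read(self, addr):
        if addr not in self.lines:
            # Miss: fetch the data from the higher level of memory.
            self._evict_if_full()
            self.lines[addr] = {"value": self.memory[addr], "dirty": False}
        return self.lines[addr]["value"]

    def write(self, addr, value):
        if addr not in self.lines:
            # Write-allocate: bring the block into the cache first.
            self._evict_if_full()
            self.lines[addr] = {"value": self.memory.get(addr, 0),
                                "dirty": False}
        # Overwrite only the cached copy and mark it dirty;
        # main memory is now stale until eviction.
        self.lines[addr]["value"] = value
        self.lines[addr]["dirty"] = True
```

The key point is visible in `write`: after a store, main memory still holds the old value, and the dirty bit is the only record that the cached copy is the authoritative one.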
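A back-of-the-envelope count shows why the array-walk case favors write-allocate. The block size and array length below are assumptions chosen for illustration, and the count compares memory transactions in the roughest possible way (one store versus one block transfer), not actual bytes or latency.

```python
BLOCK = 64   # bytes per cache block (assumed)
N = 1024     # bytes written sequentially, one byte at a time (assumed)

# No-write-allocate: every store bypasses the cache, so each byte
# written is its own request to high-level memory.
no_alloc_requests = N

# Write-allocate: each block is fetched once on the first miss, the
# remaining 63 stores hit in the cache, and the whole block is
# written back once when it is evicted.
write_alloc_requests = (N // BLOCK) * 2  # one fetch + one write-back per block

print(no_alloc_requests)     # 1024 memory requests
print(write_alloc_requests)  # 32 block transfers
```

Under these assumptions the iterative array update generates 1024 separate memory requests without allocation, versus 32 block transfers with it, which is the "one large write-back later" trade-off in numbers.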