Alrighty, welcome back to Operating Systems. Today's lecture will be very, very relevant on the final, so make sure you can do all these steps. We shouldn't need all the time today, but we have lots of time if we want it; hopefully this lecture will be excruciatingly straightforward. Last lecture we left off at least recently used for page replacement, and we argued that we can't actually implement it, so we have to implement an approximation. This is one of said approximations: the clock page replacement algorithm. Why is it called a clock algorithm? Well, because if you draw it out a certain way, it looks like a clock. We're not creative. So here are the data structures, or at least one way to think about them; this is not how it's actually stored, but it's an easier way to think about it. Assume we keep a circular list of the pages in memory, and we use a reference bit for each page to keep track of it. Before, with least recently used, we argued that we would have to put a counter or a timestamp on each page, which is slow if we have to update it on every single access. So in this case, instead of keeping a whole timestamp, we'll just use one bit to say whether or not we have used the page recently. What's "recently"? Well, we'll figure that out. The other part of it is a hand, which is just an iterator that points at the element we last examined, and we're going to use it for the page replacement part of the algorithm. So here's the algorithm; it's two steps. If we have to replace a page, we check whatever the hand is currently pointing to. If that reference bit is currently zero, it hopefully means we haven't used that page in a while, so we replace it: we kick that page out of memory, swap it to disk, read the new page into memory, and set its reference bit to one.
Because, well, we just read it or just wrote it, so we had to load it into memory; set its reference bit to one, and then advance the hand to the next element so we don't immediately get rid of it. The other option is if the reference bit is currently one, which hopefully means we have used this page somewhat recently. In that case we just change the one to a zero; we don't get rid of it yet, we give it a second shot at life. Then we advance the hand and repeat this process until we find a reference bit that is zero, and we get rid of that page. And the meat of this algorithm is that for page accesses, which we want to be really, really fast, all we do is set the reference bit to one. That's it. We don't touch the hand, we don't do anything else. So the reference bit is what we use to keep track of whether we've used this page somewhat recently, and we'll see how it works as we go through an example. Yes, it will be physically stored somewhere on most hardware: if you look at the page table entry, with all the permission bits and everything, one of those bits is going to be a reference bit, and the MMU will update it as you use memory. So it's done for you, in hardware, essentially for free. All right, any questions before we get into excruciating examples? And "excruciating," hopefully, means not having to listen to me. All right, so here's what it looks like if we draw it out. We'll assume we have space for four pages in physical memory. To draw our clock, we just draw a bunch of boxes. Each box on the top here that I currently have highlighted holds the page that is currently in that frame. Initially we'll assume there's nothing in memory, so we'll just use the value zero. And in the shaded box, the value is the reference bit, so it will always be zero or one.
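As a side note, the two steps plus the fast page-access path can be sketched in a few lines of C. This is just a simulation of the algorithm for counting page faults on paper-style examples; the function and variable names are made up for illustration, and a real kernel keeps the reference bit in the page table entry rather than in an array like this:

```c
#include <stdlib.h>

/* A sketch of the clock algorithm: a circular list of frames, one
 * reference bit per frame, and a hand. Returns the number of page
 * faults for a sequence of page accesses. */
int clock_faults(const int *accesses, int n, int nframes) {
    int *page = malloc(nframes * sizeof(int)); /* page held by each frame */
    int *ref  = calloc(nframes, sizeof(int));  /* reference bits, all 0 */
    int hand = 0, faults = 0;
    for (int i = 0; i < nframes; i++)
        page[i] = -1;                          /* frames start empty */

    for (int a = 0; a < n; a++) {
        int hit = 0;
        for (int i = 0; i < nframes; i++) {
            if (page[i] == accesses[a]) {      /* page access: just set */
                ref[i] = 1;                    /* the reference bit     */
                hit = 1;
                break;
            }
        }
        if (hit)
            continue;
        faults++;
        /* Replacement: advance the hand, clearing reference bits,
         * until we find a frame whose bit is already zero. */
        while (ref[hand] == 1) {
            ref[hand] = 0;
            hand = (hand + 1) % nframes;
        }
        page[hand] = accesses[a];              /* evict victim, load page */
        ref[hand] = 1;                         /* we just used it */
        hand = (hand + 1) % nframes;           /* don't evict it next */
    }
    free(page);
    free(ref);
    return faults;
}
```

Running the first example from this lecture, the sequence 1, 2, 3, 4, 5, 2, 3, 1, 2, 3 with four frames gives six page faults, matching the count we do by hand.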
In this case we initialize it as zero. And to make our circular list of pages, each page just points to the next, points to the next, and eventually it wraps around back to the beginning. So this is our circular list of pages; kind of looks like a clock, right? Clocks look like circles. And in the middle here, this pointer is our hand, the current iterator, what we're currently pointing at. So if we have an access to page one and we follow the clock algorithm, if we remember it, what should happen? Yeah, the 12 o'clock spot, whatever the hand currently points to. Its reference bit is zero, so we get rid of it. It's not an actual page, so we don't write it back to disk or anything, but we get rid of it. We put in page one, its reference bit becomes one because, well, we just used it, and then we update the hand to point to the next element. So what's going to happen if I access page two now? Yeah, same thing. I replace this element with page two, the reference bit should be one, and then I advance the hand, so it looks like this. Any questions about that? Can I skip two steps? Yep. And to be clear, the two on the top is a page number, and the shaded value is its reference bit; we keep them together in the same box. All right, so reference page three, what should happen? Same thing, right? Everyone's already looking kind of sleepy. So page three goes in, we set its reference bit to one, advance the hand, and the same thing for four, advance the hand. All right, now we get to the fun part, which isn't actually that fun, but let's try and make it fun. Reference page five, what happens? Yeah, everyone's skipping a bunch of steps already, so this lecture will probably be fast. Let me do it first in excruciating detail, following the algorithm: currently the iterator is pointing at page one, and its reference bit is one. The rule is I can't throw it out, so I have to change the reference bit from a one to a zero and advance the hand.
So I do this. I still haven't brought in page five yet; I need to find something to evict, so I do it again. Oh, the reference bit is one: I change it to zero, advance the hand. The reference bit is one: change it to zero, advance the hand. The reference bit is one: change it to zero, advance the hand, and now I've gone the whole way around the clock. So if you want a fast version of this: if all the reference bits are one, just skip the middleman, set them all to zero, and carry on from there. That's handy because if you draw this out step by step, your paper will probably look like a mess. So now, at this point, what do I do? Yeah: I throw out page one, put in five, set the reference bit to one, and advance the hand, so it should look exactly like this. So everyone's on the same... okay, I can't use the "same page" joke. The same tick? What do you call the hands of a clock? The same hand? No, that sounds way worse. Okay, the same tick. All right, we're all ticking. So now we reference page two, what happens? Yeah, we set the bit to one. Two is already in memory, so we wouldn't add a new entry or anything like that; since it's already in memory, all we do is update the reference bit. And this is where it starts to deviate, because up to now it did essentially the same thing FIFO did before. You can see the idea behind it: we changed the reference bit from zero to one, meaning we have used this page somewhat recently, so if we were trying to replace something, we'd skip over page two because we just used it. So it makes sense that it kind of looks like LRU, but not quite. All right, now we access page three, what happens? Yeah, the reference bit changes to one. Again, we're not replacing anything; it's already in memory. We're not messing with the hand, we're not messing with the iterator.
All we're doing is a quick change of the reference bit. In this case the reference bit is currently zero, so we change it to a one; that's it. Now we have an access to page one, what happens? Yeah, we throw out four. In excruciating detail: we are currently pointing at a page whose reference bit is one, so we change the one to a zero and advance the hand. Now we're pointing at an element whose reference bit is one, so we change it to zero and advance the hand. Oh look, now we have our victim. So we get rid of page four, swap it back to disk, load in page one, set the reference bit to one, and advance the hand. Our clock would look like this at the end. All good? All right. Now we use page two again, so we update its reference bit from zero to one; we access page three, zero to one. And now we're done. If we write it out the way we wrote it out before, it would look like this: for each access, we write out in red whatever page we brought into memory. For the first accesses that was one, two, three, four; generally you would fill this in as you're doing the algorithm, keeping track of things, it just doesn't all fit on a slide. When we accessed page five, we got rid of page one. Then, for this diagram, when we accessed page two nothing changed, since it was already in memory, but we know the algorithm itself updated the reference bit. Same for page three. Then we had to bring in page one, replacing page four, and we had two page hits after that. So how many page faults did we have? Six. All right, if I wanted to see how good this was, what algorithm would I compare it to? The optimal, yeah. So let's do the optimal as a bit of practice, because we've got the time. If we had the same accesses and we did the optimal algorithm, well, can we agree to skip the first four steps? Yes. So I just write it out like this: four, three, one, one, one, two, two.
All right, first four steps skipped. Now we do the optimal thing. We reference page five, so we need to kick something out, and if we're doing the optimal thing, that means we look into the future and kick out whichever page we are not going to use for the longest amount of time. So what do I kick out? Four. If I look ahead: can't kick out two, can't kick out three, can't kick out one, so I kick out four. So I replace four with five, in red, and it looks like this, and now, well, I did the optimal thing, so I get page hits the rest of the way. How many page faults do I have? Five. So the clock algorithm got pretty close; it's pretty good. All right, we want to do lots of comparisons, so we can do the clock algorithm again with the exact same sequence we saw in the last lecture and compare. Do we want to go through this quickly? Sure, we may as well, because we don't have anything better to do today. So if you were to have this on, I don't know, let's say an exam that's on December 8th at some ungodly hour, you might want to know how to do this. This is not a hint or anything; I am using my subtle voice. Last time we had room for four pages in memory, so we can draw out the fun clock, and it would look something like this. I will just leave the boxes empty initially instead of writing zero, because it saves me some writing. So here's my clock, and I'll draw the hand wherever I want; usually I like to point it up. Again, it's up to you, as long as you're consistent. So now I access page one, what happens? Yeah, we need to load the page from disk first. Currently there's nothing in memory, so we load page one in from disk; now it's there, and we set its reference bit to one. Am I done? No, I need to move the hand by one, because, well, I don't want to immediately try to get rid of whatever I just put in. So we had a page fault there. Now it's going to be the same thing.
If I try to access page two, I have to bring it in from disk, so can we skip some steps? Okay: we insert two, its reference bit is one, and then we advance the hand, and that's our page fault. So now it looks like this. All right, can I skip three and four? All right, three and four, skipped. So three and four both have their reference bits set, and the hand would now be pointing up here, so it looks like this. All right, did I screw up in any way? No. Will you screw up on the exam? No? Okay, some of you think you're going to. Hopefully this should be fairly free marks. All right, so now we access page one, what happens? Nothing happens, or, if it makes you feel better, we change its reference bit from a one to a one. Whatever lets you sleep cozy at night. Now we access page two, what happens? Nothing, or a one goes to a one; again, whatever makes you warm and fuzzy inside. All right, now we access page five, what happens? Yeah, we throw out one. If we want to use our shortcut, can I change all the ones to zeros? Sure, let's take the shortcut. I change all the ones to zeros and carry on from there. So I'm currently pointing at page one, its reference bit is zero, so it's out of here. I replace it with five and set the reference bit to one. Am I done? No, I have to move the hand, because I don't want to get rid of what I just put in there. So I move the hand, and what did I replace? I replaced page one, so memory now holds five, two, three, four. All right, now access page two, what happens? Yeah, the reference bit goes from zero to one; otherwise it was a hit. Access page three, what happens? Update the reference bit from a zero to a one. All right, now we access page five, what happens? Nothing happens, or again, one goes to one, however you want to think of it. Access page two, what happens? Nothing happens, or one goes to one, which, hey, this is good.
We're not page-faulting, which means our algorithm is probably fairly decent. All right, so now we access page one. Ruh-roh, we have to replace a page. What do we replace, or what's our first step? All right, you already skipped ahead: we're replacing four. Just in case, here's what we would do: we're pointing at a one, change it to a zero, advance the hand; pointing at a one, change it to a zero, advance the hand; you guys are much faster than me. Now we're pointing at four, whose reference bit is zero, so we get rid of it. We replace four with one, set its reference bit to one, tick up the hand, and this is what it looks like. It turns out we made a poor decision, because, whoops, we just accessed page four. What happens? Yeah: five's reference bit gets set to zero, we advance the hand, and oh no, two is our victim now, because its reference bit is zero, meaning it hasn't survived an entire revolution. So we get rid of page two and replace it with page four, and now we look like this. How many page faults have we got? Yeah, we got seven page faults. Last time, when we did optimal on this sequence, we got six; this is seven, so pretty good. I forget what we got with LRU; I think six as well, maybe, but FIFO was like 10, which is something atrocious. And, oh, LRU was eight, so this actually did better than LRU. So, yay. All right, any questions about that? If in some odd world this ends up on a final, we're good. Hopefully December 8th will be like some weird bizarre world. All right, so now, we talked about this before, but we can talk about it again: if you care about performance, swapping makes things go slow, and when you start swapping, you won't even notice it as a process unless you are keeping track of performance.
Suddenly your performance goes to crap, and memory is cheap anyway, so typically you might rather just know that you ran out of memory, so you can go buy more. So if the machine ran out of memory and you didn't have any swap space, or you filled up both swap and physical memory, what would your process do? What would happen, specifically, in your program? Yeah, so if your program is running along, the system is currently out of memory, and you call malloc, what will happen? Does malloc crash? How many of us have actually checked for errors from malloc? Everyone's looking away from me? Yeah, I thought so. Technically, if you run out of memory, your program doesn't immediately crash: malloc returns NULL, which says it has run out of memory, and you could handle that, do a graceful shutdown without using more memory, something clever like that. But for most of you, if you've never checked it, you would get a null pointer from malloc, you would try using it, and you would seg fault. It will probably take you guys a few years, but remember that in this course I told you to always check for errors, no matter what. It will probably take a few years of doing an actual job to believe me, so just remember I told you. Maybe I just don't remember my undergrad, but I don't ever remember being told that; it was like, "don't worry about it, it'll kind of work," and I had to figure it out on my own. So you can ignore it like I did in my undergrad, but eventually, remember that I actually told you. Yes: if malloc returns NULL, it means it's out of memory. And yes, perror would work for malloc; it's a standard C function, and malloc sets errno, so perror would work, as long as perror doesn't also allocate memory. Yeah, if there's no memory, it's done.
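Here's what that check looks like in practice. This is a minimal sketch; the wrapper name `xmalloc` is my own convention for illustration, not a standard function:

```c
#include <stdio.h>
#include <stdlib.h>

/* A sketch of checking malloc for errors. On POSIX systems a failed
 * malloc sets errno to ENOMEM, so perror can print a useful message. */
void *xmalloc(size_t size) {
    void *p = malloc(size);
    if (p == NULL) {
        perror("malloc");   /* prints something like "malloc: Cannot
                               allocate memory" to stderr */
        exit(EXIT_FAILURE); /* or shut down gracefully here */
    }
    return p;
}
```

A more graceful alternative to exiting would be freeing caches you set aside earlier and retrying, but as mentioned, whatever you do here must not itself need more memory.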
Your program could probably do something, but if you just malloc and use the result, then when the system runs out of memory, you just crash. It's probably better to know why things happen. The kernel will also intervene: if it has run out of swap space and physical memory, the kernel runs something called the OOM killer, the out-of-memory killer. And it's usually not even that sophisticated. If it were smart, it would pick the process using the most memory and send SIGKILL to it; that process immediately terminates and, boom, you've freed memory. If you kill Google Chrome or something like that, now you have an extra 10 gigs. But sometimes the out-of-memory killer is not that discriminating: it just kills a more or less random process, and if you run out of memory again, it kills another one, until eventually you have enough memory. It's that concept of shoot first, ask questions later, and that can be a reasonable strategy here, because being out of memory is a panic situation: you can't really make a decision that requires memory, unless you were smart and set memory aside before you ran out, and so on and so forth. So sometimes you just make a quick decision that doesn't require more memory. Another thing: the page size of four kilobytes was set in, like, the seventies, which is before I was born, if you can believe that, and in this day and age four kilobytes is basically nothing. So here's what tends to happen now on x86 machines and some RISC machines. Remember, if we go back to our page tables, our 39-bit address had three levels of page tables: nine, nine, nine, and 12, if we all kind of remember that. So we had three levels of page tables, an L2 pointing to an L1 pointing to an L0, and that pointed to a page. What more modern processors do is something called huge pages.
They get rid of a level: they get rid of level zero, and an L1 entry, instead of pointing to an L0 page table which points to a page, points directly to a two-megabyte page. Two megabytes is two to the 21, which should make sense, because it's 12 plus nine. That is what's called huge pages. I think even in Windows Task Manager you can see which processes are using huge pages. And all huge pages are is skipping the last level and having two-megabyte pages instead. The trade-off is that we trade more TLB coverage for more fragmentation. What does that mean? Well, our TLB has a set number of entries; say it can only hold 10. If our pages are four kilobytes, those 10 entries cover 40 kilobytes of actual RAM before translations get thrown out of the TLB, which is a cache, and things go slow. If we had huge pages of two megabytes and 10 entries, suddenly the TLB covers 20 megabytes, so it's much less likely a translation gets evicted, and we'd probably get some nice speedups. There is an extreme where they remove two levels of page tables and have one-gigabyte pages. And those are, unsurprisingly... all right, quick, come up with a name for that. If two-megabyte pages are huge pages, what are one-gigabyte pages called? Gigantic pages? That's too creative for us. Very huge pages? Just call it one gigabyte? Close. Gigapage: boom. Yeah, so now you're ready for a job in tech; you can name things now. Those are generally called gigapages, and you can imagine that if you have those, your TLB coverage will be pretty good, but that makes the fragmentation issue we talked about before even worse.
So if your process gets a gigapage (tm, I know, they already named it and it's not very creative) and you only use eight kilobytes of it, then most of that page is wasted, which is another example of internal fragmentation. So you can see there's a trade-off here. Nowadays, two megabytes is probably a better trade-off than four-kilobyte pages, but it will probably be a while before a one-gigabyte page is a good trade-off. All right, any questions about that? All right. So: we did the clock algorithm today. Know it. If it shows up on the final and you don't do well on it after being in this lecture... yeah, don't. The data structures: we had a circular list of pages; a reference bit for each page (it's in light gray here, draw it however you want); and a hand, an iterator that points to the last element examined. The algorithm was fairly straightforward. If you need to replace a page, check the reference bit the hand points to. If it's zero, put the new page there, set its reference bit to one, and advance the hand. If the reference bit is currently one, change it to zero and advance the hand, repeating until you find a victim. And for page accesses you do the quick thing: just set the reference bit to one. All right, so now, our exam is at a very, very stupid time on the eighth. I did the survey with the other sections, and I'm guessing it's the same for this one: do we want to just have two review lectures at the end, so you have almost a week to actually study for this course? All right, boom, it has been done.
So the way the lectures are going to happen: we have two more after this on memory management and memory allocation, then we'll talk about virtual machines, then we'll have two lectures of review, which gives you the most time I can give you for this course within the constraints I have, because, yeah, your final is on a stupid day. All right, any questions, comments, concerns about that? So we have the rest of the semester all planned out, where I don't try to do something mean to you guys, like use the December 7th lecture slot to talk up until 6 or 7 p.m. and then have you write an exam at 9 a.m. the next day. Technically I could do that, but it would probably not be cool. All right, with that, that's it for today.