Yeah, I think the way we had talked about this was students ask questions and we're happy to elaborate and answer those questions. Any questions fly. The moment the questions stop, we stop. So like I said, this is going to be very unscripted; we don't have any slides. But yes, if nobody else has anything, then yes. And if something is really going to take us a long time, what we might do is answer the concept there and explain how we came to that answer instead of going into all the details. You'll have to speak a little bit louder. Sorry, is this still on? Yeah, it's pretty good. Like this question, from quiz three? Okay. Where do you want to start with that? Here, we can start with this. Okay. So we created an experimental machine: 12-bit virtual addresses, 8-bit physical addresses, page size 16 bytes, page table entry size of one byte, and the page table entry has the following layout. So it is one byte, and that's the layout. Is there a specific part of that you have a question about, or just the whole thing? Okay. So if we're translating any address, we need to know how many levels of page tables we have, right? We have a 12-bit virtual address and a page size of 16 bytes, so for the offset we need four bits to address all 16 bytes, right? And then if we have a multi-level page table where each page table fits exactly in a page, we need some bits for the index at each level of that smaller page table. So each level would be four bits, and we have two levels, right? So if we have a 12-bit address, we know that four bits are the offset, and then we have two levels of page table that are each indexed by four bits. Do you understand why, in the eight bits that are required to store the page number, we're breaking that into two groups of four bits?
The trick for all of these, for knowing how many bits you need for each index level, is that we're trying to fit everything onto a page. So if this were in C and I said a page table entry was a char, which we know is only one byte, then the question is: how many chars can I store on a page if I know the page size is 16 bytes? Sixteen. And if the PTE were bigger, say two bytes, how many could I store on a 16-byte page? Eight. Right. So here in this question, we had said that a page table entry is one byte, and the page size is itself 16 bytes. That means we can store 16 page table entries in a page, which is where the four bits come from, in both the low and the high index. Because we have an eight-bit page number, we need to be able to cover all eight bits, right? But both the top-level and the bottom-level page tables can each only cover four bits' worth of page table entries. Does that make sense? Okay. So that's the approach for every question: the number of index bits you need follows from everything fitting on a page. However many entries I can hold, that's how many different pages I can point to. For this one, each entry is one byte and each page table is 16 bytes, so it holds 16 entries. And in order to address 16 things, I need four bits: zero up to 15, that's 16 things. As an example, let's say the page table entry was two bytes, okay? In this example, how many levels of page tables would we have needed? We have eight bits for the page number, all right? And the page table entry is two bytes; page size is still 16. How many levels of page tables would we have needed? Two? No — we have 12-bit virtual addresses, page size is 16 bytes, and the page table entry, we told you, is two bytes, okay? So how many levels of page table would I need?
You're almost there: each level can be indexed by three bits, but we have eight bits of page number in this example, right? So how many levels would we need? Three: three bits, three bits, two bits. Makes sense? Here, we had 16 page table entries per page, so in that case four bits of the page number can index each level; that's why we have two levels. So depending on the page size and the size of the page table entry, that tells you how many page table entries will fit in a page. Based on that, that tells you how many bits of the page number can be used to index into each page table. With eight bits there, if only three bits can be used per page table level, then we need at least three levels. So we made it simple by only giving you four and four. These were all the variants, so you got one of these. Now, the question said there always has to be a highest-level page table; that's the starting L1 that you'll be given. If this machine had multiple processes and stuff, when you context-switch processes, you would change whichever page table that points to, which would effectively change the address space. Some people got tripped up by assuming that it starts at zero; it does not start at zero. It says the highest-level page table starts at PPN 2, that's physical page number 2. So it is the physical page at index 2. This is page zero on the machine, these 16 bytes, because the page size is 16 bytes and everything is aligned to 16 bytes. So this would be zero, this would be one, and this would be two. These 16 bytes are all of our page table entries. Another way, just mathematically: we gave you physical page number 2, and what's the page size? 16 bytes. So you shift this left by four bits, which is one hex digit, because each of these hex zeros is four bits. That's why the highest-level page table starts at address 0x20.
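To make that counting concrete, here's a small Python sketch of the arithmetic — our own illustration of what's being said, not anything from the quiz itself:

```python
import math

def page_table_levels(va_bits, page_size, pte_size):
    """Levels needed when every page table must fit in exactly one page."""
    offset_bits = int(math.log2(page_size))      # bits to pick a byte within a page
    vpn_bits = va_bits - offset_bits             # bits left over for the page number
    ptes_per_page = page_size // pte_size        # entries that fit on one page
    index_bits = int(math.log2(ptes_per_page))   # page-number bits consumed per level
    return math.ceil(vpn_bits / index_bits)

def page_base(ppn, page_size):
    """Physical page number shifted by the offset bits gives the base address."""
    return ppn * page_size

# The quiz machine: 12-bit VA, 16-byte pages, 1-byte PTEs -> a 4+4 split, 2 levels
print(page_table_levels(12, 16, 1))    # 2
# Same machine with 2-byte PTEs: 8 entries/page, 3 bits per level -> 3 levels
print(page_table_levels(12, 16, 2))    # 3
# Highest-level table at PPN 2, 16-byte pages -> starts at 0x20
print(hex(page_base(2, 16)))           # 0x20
```

The same shift is the one used again later for the 4 KiB example: `page_base(2, 4096)` is 0x2000, and the page runs through 0x2FFF.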
Take a look too: in the general case of a virtual address and a physical address, the top bits of the virtual address are the virtual page number and the bottom ones are the offset. And for physical, it's the physical page number, or physical frame number, whatever you want to call it, and then the offset bits. So if I say, hey, it's page number two, that just means this top part is two, no matter what the offset is. Say the page size is four kilobytes — it's always helpful to just write the power of two, which is two to the 12, so there are 12 offset bits. If my page size is four kilobytes and it's physical page 2, the addresses it corresponds to run from 0x2000, with all the offset bits zero, up to 0x2FFF. And you can double-check yourself, because that's offset zero all the way to 4,095, which is the page size, which makes sense. Everything's byte-addressable, so nothing's going to change there. So, in this example, this is our highest level of page table. If you had any of the variants that started with a one, you just go to index one in there, and every entry is one byte, so I just move one byte at a time. Index one into this table: zero, one, so it's this entry. Okay, so again, to clarify: you've got that first page starting at 0x20, that's the top-level page table, right? Now, in this case, each of those entries, each byte, is a separate page table entry. That's why we have 16 page table entries there, right? And that is why the top four bits of the page number can be used to index there, okay? So this is why, for instance, this one is not the zeroth entry but the next one: that's the zeroth entry, this is the first entry, okay? Now, just to be clear, if you had page table entries that were two bytes, not one byte, then where would this entry one be? It would have been the two bytes at offsets two and three.
Those two bytes together would have been the page table entry, if the page table entry was two bytes, all right? So we're taking into account that each page table entry is one byte, and then using this value to index into the page table. It's like, instead of a PTE being four bytes, I just said it was an int, and this is an array; you could tell me how many fit within a certain number of bytes, right? It's the same thing, so don't get tripped up on that. So for this, anyone whose address starts with a 1: index 1, so it's this entry, 0x1F. If you look at the layout of the PTE, the bottom four bits — nicely, one hex character is four bits — the bottom bit is a 1, so it means it's valid. For most of these, we only care if it's valid. And then the physical page number is 1. And because this is a multi-level page table, that is the physical address where the L0 page table starts. It's the same principle that we used to get to the L1 table. So the L0 page table starts at address 0x10 and goes all the way to 0x1F; it's also 16 bytes, which corresponds to this one — sorry, I wrote over it before. So now for address 0x123, our L0 index is 2. This would be index 0, because everything is one byte, index 1, index 2. So this would be the entry 0xFF. Because it's 0xFF, we look again: it's valid, because the last hex character is an F, so the valid bit is definitely a 1. And the leading F is the physical page number. This is the lowest level of page table, so we've essentially hopped through everything. And to get the physical address, we just take that physical page number and keep the offset the same. So this address would be — we found it, it's F — it would be 0xF3, because the offset stays the same. The last four bits we used for the offset; we don't touch that, because it's still referencing some byte in the page.
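Here's that whole 0x123 walk as a Python sketch. The two memory bytes are the ones read off the board in this example, and the PTE layout assumed (high nibble = PPN, low bit = valid) is the one from the quiz:

```python
# The two page-table bytes we know from the walkthrough; everything else omitted.
# L1 table is physical page 2 (bytes 0x20-0x2F), entry 1 = 0x1F.
# L0 table is physical page 1 (bytes 0x10-0x1F), entry 2 = 0xFF.
memory = {0x20 + 1: 0x1F, 0x10 + 2: 0xFF}

def translate(va, l1_base=0x20):
    l1_index = (va >> 8) & 0xF                 # top 4 bits of the 12-bit VA
    l0_index = (va >> 4) & 0xF                 # middle 4 bits
    offset   = va & 0xF                        # bottom 4 bits, never translated
    pte = memory[l1_base + l1_index]           # PTEs are one byte each
    assert pte & 0x1, "page fault: L1 entry not valid"
    pte = memory[(pte >> 4) * 16 + l0_index]   # high nibble of the PTE is the PPN
    assert pte & 0x1, "page fault: L0 entry not valid"
    return (pte >> 4) * 16 + offset            # new page number, same offset

print(hex(translate(0x123)))   # 0xf3
```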
So you never have to translate the offset, yeah. And an entry like 0x2F can't exist anywhere in this entire thing, right? Because I can't have something in here that references back to the page tables themselves. Yeah. That would be terrible, because that would allow an application to write to the top-level page table, its own page table, which means what could happen if that was the case? Exactly. If that page table were writable by the application, it could essentially map any of its virtual addresses to any physical address. Whatever the hell you want. So his comment is: if this entry were 0x2F instead, it's pointing back to the top-level table itself, which is kind of weird, and if you can access that memory, you can change your own page table. Especially if the second-level page table had an entry which pointed to any of the page tables, any of those physical pages, then you're essentially allowing an application to overwrite its page table. That's what he was asking. That was for the 1-2 variant. So we do the same thing: for anyone whose address began with a one, we know, following this, that this is our L0 page table. So if we had that variant, we would just look up that index, keep the offset the same, so the translation would be 0xE2. And then for 0x113, it would correspond to the entry 0x90; the 0 means it's not valid, so we would get a page fault there. Yes. The question about, from the quiz, simulating bits. The one where it was simulating a dirty bit in software. What's your question specifically? For the nuances of tracking a dirty bit in software, the answer was: when a write fault happens on a read-only page, mark it dirty. But the OS is also implicitly tracking whether the read-only marker in the entry is actually read-only, or whether it's just a pseudo-read-only. Yeah.
So I'm not sure I understood the question — what was implicit about the pseudo-read-only? So think of it this way. Say you have a page table entry, which has all these bits, right? A valid bit, all right? And let's say it has a read-only bit. But it doesn't have, let's say, a referenced bit and a dirty bit. And this is the frame number. So let's say the hardware doesn't support a referenced bit and a dirty bit, which is what you need for page replacement algorithms. So the idea is: can the software, meaning the OS, simulate these two bits? Now imagine that these bits are currently unused — the hardware doesn't use them. So these bits are control bits that are used by the hardware, these are unused bits, and then this is the frame number in the page table entry. So the question is, can we simulate this in software? The idea is, once you have, say, a write fault — you do a store to a page and the page is not in memory — in that case the software, when it gets that page fault, can set both of these to one. So don't worry about the TLB for a second; imagine there's no TLB, because that can be dealt with independently. So imagine that a store is attempted to a page that's not mapped right now, that's not in memory. You get a page fault, and the software, the OS, can set both of these bits, saying this page is referenced and dirty. The problem is what happens if you have a load followed by a store. With the load, you map that page, you mark it referenced, you don't mark it dirty. But now when the store happens, you don't get a page fault, because this page is already in memory; it's mapped. So you don't get the fault, and you are unable to track the dirty bit in software. So what you want is, in essence, for this store to also cause a page fault. The way you do that is you make the page temporarily read-only. And perhaps you use another unused bit to say this is a temporarily read-only page.
Yes, you use an extra bit here to say this is temporarily read-only, not a real read-only page. It's not a text page; it's just temporarily marked read-only. Now, because you've got a read-only page, when you get a store, that'll cause a page fault — a protection fault. On the protection fault, you can say, aha, this is a temporarily read-only page. I'm going to make it no longer temporarily read-only; I will make it read-write, but I am going to set the dirty bit. So this is how you can simulate both those bits in software. A load is simply an instruction which reads data from memory, from physical memory. The issue is, when you do a load, you get a page fault and you mark that page as referenced. But now the page table entry says it's a valid page, so the next time a store is attempted, there's no page fault. So there's no way for software to simulate that dirty bit unless there's some way to force a page fault. And the software can do that by taking advantage of the fact that paging has these protection bits, like the read-only bit. So we take advantage of that to simulate the dirty bit. This would not be an issue if the hardware supports referenced and dirty bits; in other words, if on a load and a store the hardware itself sets these bits, software doesn't have to simulate them. Next question: a number of factors favor larger page sizes, and it says more efficient I/O on a per-byte basis to swap is not one of the factors. What were the other factors? Oh, sorry — the correct answer was that the working set, when measured in bytes, is smaller. The "more efficient I/O" option is a genuine factor; it's an incorrect answer only in the sense that the question asked which is specifically not one of them, and with larger page sizes the data can in fact be read from disk more efficiently.
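Going back to the dirty-bit simulation for a second, the fault-handler logic just described can be sketched like this. All the names here are invented for illustration; a real kernel would keep these flags in the PTE's unused bits rather than in a Python object:

```python
class Page:
    """Software-visible state for one page; 'referenced' and 'dirty' are the
    two bits we're simulating because the hardware doesn't provide them."""
    def __init__(self):
        self.mapped = False
        self.read_only = True        # map read-only first so the first store faults
        self.temp_read_only = True   # unused PTE bit: the read-only is only a trick
        self.referenced = False
        self.dirty = False

def on_fault(page, is_store):
    if not page.mapped:              # true page fault: page wasn't in memory
        page.mapped = True
        page.referenced = True
        if is_store:
            page.dirty = True
            page.read_only = page.temp_read_only = False
    elif is_store and page.temp_read_only:
        # protection fault on a page we only *pretended* was read-only
        page.read_only = page.temp_read_only = False
        page.dirty = True            # the simulated dirty bit gets set here

# The tricky case: a load maps the page, then a later store must still fault.
p = Page()
on_fault(p, is_store=False)          # load -> referenced, still (temp) read-only
on_fault(p, is_store=True)           # store -> protection fault -> dirty
print(p.referenced, p.dirty)         # True True
```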
So that option is not correct in the sense that it actually does help: having larger pages means you can read them from disk more efficiently. Exactly — with a larger page, you will have more sequential accesses. The bigger the page size, the more of that page's data you're reading from disk sequentially, so there are fewer seeks happening. Well, even on SSDs, sequential accesses are certainly still faster. But yes. Another factor: for the same number of TLB entries, you get larger coverage, because your page size is bigger. With the same number of entries, you cover that much more memory. So on your phone and machines like that, they'll actually drop a level of page table so you can have two-megabyte page sizes, which are called huge pages. What about "the working set when measured in bytes is smaller"? Yeah. So the working set is just the amount of memory your program uses in a certain amount of time. No matter what your page size is, it's going to be the same. So it will not be smaller — not bigger, not smaller, right? Remember, the working set is the amount of memory your program needs, the number of unique bytes of data that your program has accessed in the recent past, right? That doesn't depend on the page size. That's true, and that's why it said measured in bytes, not measured in pages. So the page size is sort of irrelevant when you're measuring in bytes, right? Most of the time, you'll just have a certain stack size set for you that's going to be all marked valid. Okay. There's not really any other trick with it. For pthreads — you can go ahead. I think in one of my lectures I looked at the stack size; it's 8 megabytes. Yeah. Well, here's why it's a little bit tricky: I've done that same experiment on Linux, and it's 8 megabytes, fixed. On Cygwin, it's ten pages — it's literally 40 kilobytes — and it grows on demand.
As you keep going down the stack, on Cygwin, it's actually different. The point is that normally, stacks can grow without requiring users to do system calls. That's the key difference between the heap and the stack: in the case of the heap, to grow it you have to do the brk system call, right? For the stack, that's not required. The OS will essentially give you a big stack, or grow the stack on demand, without requiring you to do system calls. So on Cygwin, the way it works is, as the stack is growing — meaning going downwards — the OS seems to just keep extending it a little bit at a time. But if you go maybe 10 or 15 pages down and try to access data there, it crashes; it'll say segfault. For the stack, the kernel will mark those entries in the page table invalid, and then the rest of the bits it can use for whatever it wants. So it'll put a special pattern there that means: this is fine, this is going to be used for the stack. So if you get a page fault there, the kernel goes, okay, I can handle this — I'll actually back it with a physical page and then let the program carry on its merry way. There was a little question from quiz three: which of these is not a factor favoring a smaller page size? Faster TLB handling — and that's because of the way a TLB lookup works. If you think about what TLB handling is, it's just a TLB lookup, and that depends on the number of entries in the TLB. A TLB is essentially a fully associative cache, a hardware cache, where the virtual page number is looked up in parallel against all the entries. So the speed of a TLB lookup is essentially determined by the number of entries in the TLB, and that's fixed by the hardware. It has nothing to do with the page size, exactly. The page size makes no difference, because the speed at which the TLB can do the lookup depends entirely on the number of entries.
The more entries there are, the slower it is to look them up, even in parallel — that's just the way hardware works. That's why TLBs tend to have very small numbers of entries, 64 to 256, because they really want that fast, sub-cycle, parallel lookup. Can you speak a little louder? Yes, that one does help: a smaller page size will reduce internal fragmentation. As for the exam format, it's a bit confusing here; we're still working on it. Because we're switching from the online format back to hard copy, we're still working on it. I think it'll be closer to some of our older hard-copy exams than these recent papers, so we're not going to have as many of these multiple-choice and true-false kinds of questions. It's really in flux — honestly, I haven't looked at the questions yet, how about that? We are trying to keep this exam similar to these, but not completely, just because of the nature of the exam. But we'll try to be nice to you guys, how about that? Yeah, we'll try to be nice. The exam covers the whole course, hopefully equally weighted across everything. Is there less weight on topics that were already on quizzes? No, the same weight for the entire course. That's right. So there was a quiz question about using fork and exec and what the program would print out, which is similar to a past midterm question that talked about forks and what would be printed out. If you explain your thought process for how the trace works and what gets printed — how do part marks work for all of this? Sorry? How do part marks work? Like, let's say my answer is wrong because I messed up my first fork and then everything's wrong because of that — how does that get marked? We don't have a marking scheme until we have a question. It depends what you write. If you just write a bare answer, it's like, I don't know. Yeah, if you write a totally wrong answer with nothing else, exactly right.
If you start writing out the parent-child relationships and things like that, and I can follow your answer, it'd be like, okay, they knew what was going on. A lot of marking is a judgment call. Some stuff you can't even make a rubric for in advance, because you get some interesting answers. By reading them, you can kind of tell: is this person just making something up, or did they make a mistake? Sometimes during the marking we make up a rubric based on seeing the same error again and again, and then we reverse-engineer the thought process, because you haven't written it down, right? We're like, oh, the answer should have been zero, one, two, three, four, but a lot of people wrote one, two, three, four without the zero, or something like that. So then we say, okay, since many people are misunderstanding in the same way, we'll give some part marks for that case. So it always helps, when writing, to explain your thought process if you have time. Sorry? Are you expected to write any code? So, fair things: writing code just from scratch, probably not; reading code, adding locks and other synchronization stuff, or looking for bugs, yes. The idea is we try not to have you write code, because it's virtually impossible to debug your code. Our TAs go crazy, because even if we ask you to write five lines of code — remember, five lines of concurrent code can have literally infinite interleavings, and now we're trying to figure out what you were thinking. So we try not to do that, because invariably code is buggy. Instead, we give you the buggy code you've shown us in the past and tell you to fix it. Once again? The reference bit? Yes — so if you remember, our page replacement algorithms, the good ones, took locality into account. The simplest was FIFO, which didn't take locality into account.
FIFO just says whichever page was brought into memory first, the earliest, is the one we're going to kick out, but that doesn't work so well because it doesn't take locality into account, right? Now, if you want to take locality into account, you need to know which pages have been accessed, and that's the point of the reference bit. It says this page has been accessed recently, and because it's been accessed recently, the page replacement algorithms that take locality into account will try not to evict it. And as part of not evicting that page, when they look at it and find that it's referenced, they unset that bit. So typically hardware sets the reference bit and the software unsets it, saying: okay, I looked at you, I've given you an extra chance, I've not evicted you, but the next time I look, if I see you're still not referenced, then you're eligible for eviction — for instance, with the clock algorithm. On the other hand, if between the last time I saw you and now you've been accessed again, the reference bit will be set and you'll get another chance. Anything that uses a reference bit is there to make the actual implementation fast, because if you try to implement LRU straight up — with timestamps or something like that, or scanning every single page every time you have to make a decision — it's going to be really slow. So the clock is kind of an approximation of LRU that you can implement way faster. As some of you probably noticed from lab five, if you implemented LRU it was probably slower. Although in the case of file caching, LRU is not as slow; with page replacement, LRU is simply infeasible, because in principle, with LRU, every load and store access changes the LRU order. Whereas in the case of lab five, it's every file access, and a file access means accessing a lot of data from disk. So manipulating an LRU list in memory is pretty fast compared to accessing data from disk.
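A minimal sketch of that clock ("second chance") scan — our own illustration, not the lab code:

```python
def clock_evict(ref_bits, hand):
    """ref_bits: per-frame reference bits; hand: current clock position.
    Clears reference bits as it sweeps; returns (victim_frame, new_hand)."""
    n = len(ref_bits)
    while ref_bits[hand]:
        ref_bits[hand] = False       # referenced: clear the bit, give a second chance
        hand = (hand + 1) % n
    return hand, (hand + 1) % n      # first unreferenced frame is the victim

ref_bits = [True, False, True, True]
victim, hand = clock_evict(ref_bits, 0)
print(victim)        # 1: frame 0 got its second chance, frame 1 was unreferenced
print(ref_bits)      # [False, False, True, True]
```

Note the cost model: the hardware sets a single bit per access, and the sweep only runs at eviction time, instead of updating an ordered list on every load and store the way true LRU would.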
But in the case of page replacement, LRU is just not feasible, because every memory access potentially changes the LRU order. Do questions actually ask about those bits, or are they usually just bits occupying space in the entry? I'm not sure I understand your question — how would a question ask how the dirty bit works, how the reference bit works? We'll just give you a page replacement algorithm to simulate, and then we get to know whether you understand how these bits work, right? Yeah, and page tables, if we give you those dumps. That's called a page table memory dump, and it invariably leads to problems for two reasons: one, people don't understand where the page tables are in the dump, and two, they often do the wrong indexing. If you start indexing from one instead of zero, that becomes an off-by-one error. So, given a page table: in the virtual address, the lowest bits are the offset, right? The page offset. The rest of the bits are the page number, and you take pieces of that. The highest bits are for the top-level page table, the next set of bits are for the second-level page table, and in a three-level page table scheme, the last set of bits would be for the third-level page table. Okay, here, let's make up a bunch of numbers — we'll just make up a question. So you want a 12-bit physical page number? Yeah, sure. Okay, what else do you want? How many bits — for the virtual address or the virtual page number? How big do you want it? Nine bits? Okay, this will be a weird machine. Page size — that has to be a power of two. Yeah. Okay, four bytes. And the physical page number is 12, you said? Okay. Do you want to tell me your page table entry size, or should we come up with something reasonable? Oh, see, the page table entry size is entirely dependent on the amount of physical memory you want to support.
It has nothing to do with the size of the virtual address. It's based on those 12 bits, right? Your page table entry must hold 12 bits to be able to address 12 bits' worth of physical pages, plus a bit for the valid flag — that's the minimum you have to have. So it has to be at least 13 bits in this case, and you'd probably just round up to the next power of two and call it two bytes. No, no — careful with the terminology, I want to be clear: we have 12 bits for the frame number in the data of the page table entry, not the address. Sorry? Yes, the page frame number is those 12 bits, plus at least a valid bit, and let's round those 13 bits up to 16 bits, right? So we have three unused bits; three unused control bits plus one valid bit are the four control bits, plus 12 bits for the frame number. The number of levels — that's an independent thing. Yeah, so for a multi-level page table, the design is always: onto a single page, I want to fit a bunch of page table entries, and I want to know how many I can fit. Here, our page size is four bytes — weird machine — and our page table entry is two bytes. So how many can we fit? Two. So this will be a very deep page table. We can fit two, so we have two page table entries per page. How many bits do we need to distinguish between two things? One. So for every level, I have one index bit. So if I have an eight-bit virtual address, what does that look like? I have two bits for my offset, and then we fill up the rest of those eight bits. Let's ask the question: we have two bits for the offset, so how many bits are there for the page number? The virtual address is eight bits, so how many bits do we have for the page number? Six. Six, all right. So how many levels of page table do we need in this example? Six. Each bit takes you through one level of page table. That's a great example; that should clarify things.
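Here's that made-up machine's address split as a quick Python sketch — two offset bits and one index bit per level for six levels, just the numbers we picked on the spot:

```python
def split_va(va, offset_bits, index_bits, levels):
    """Break a virtual address into per-level page-table indices plus the offset."""
    offset = va & ((1 << offset_bits) - 1)
    vpn = va >> offset_bits
    indices = []
    for _ in range(levels):
        indices.append(vpn & ((1 << index_bits) - 1))
        vpn >>= index_bits
    return list(reversed(indices)), offset     # top-level index first

# 8-bit VA: 6 page-number bits, and each single bit picks between the
# two 2-byte PTEs that fit on a 4-byte page at that level
print(split_va(0b10110001, 2, 1, 6))   # ([1, 0, 1, 1, 0, 0], 1)
```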
The important thing to realize is: the size of the page table entry is determined by the amount of physical memory, and the number of levels of the page table is determined by the virtual address. Those two are orthogonal. And that's where I think the confusion is — you're linking the size of the page table entry with the virtual address. There's no correlation there, more or less, because the entry size entirely depends on how many bits you need for the frame number. If you need another page table, you just grab another page. That makes the kernel allocator easier: all you have to do is allocate one page at a time, and there's no special handling, nothing. No, we just wanted to mess you up even more. The virtual address is longer than the physical? Yes. In fact, on modern 64-bit machines, you have 64-bit virtual addresses, and the physical addresses that are supported on Intel can be far less. No, generally on machines you want fewer virtual address bits in use. On ARM, they'll use a 39-bit virtual address, because every time you have a larger virtual address, you're probably going to add another level of page table, which is going to make everything slower whenever you actually take a TLB miss or anything. So ARM will do a 39-bit virtual address, and if you want more, you can step up to a 48-bit virtual address. It'll be slower, but the hardware can support addressing 64-bit things if you want. But even on ARM, they support up to 52 bits of physical address. Yeah. The linkage between the two is not there, for exactly the reason that, with virtual memory, you get this nice property that you write to the architecture, not to the amount of physical memory that you have. Could the page table entry not fit — like, could the page table be larger than the actual page? Yeah, that's just weird. It's possible, but most kernels won't do that, just because it complicates memory management.
The question was trying to be really tricky — one of the options was like, yeah, I split a page by dividing it by four, so one page actually holds four page tables. There has been hardware that does that; it improves internal fragmentation. All right, so there has been some older hardware that does things like that. When memory is expensive, you want to think about these things, right? Today, DRAM is still expensive, but maybe not as expensive, so you want to simplify all of this as much as possible. Those bits are unused. The entry has to be at least two bytes, because our physical frame number, the physical page number, is 12 bits. We always store that in our page table entry, and we always have to store at least one bit to say whether the entry is valid or not. So it needs to be at least 13 bits, and you typically just round up to a power of two. So I can even use four control bits and fit it in two bytes, no problem. Yeah, I couldn't make it one byte — any smaller and I'd run out of bits — but it could be four bytes if you want; you'd just not use a bunch of the bits. But then you're going to fit fewer page table entries on a page, so you probably don't want to do that. And that decides the number of rows. That's right. One way to think about it: the number of page table entries is dependent on the virtual address; the size of each page table entry is dependent on the amount of physical memory. So the other type of question that people tend to have problems with is synchronization and locking, because that often involves thinking through interleavings. And as you've seen, we've asked you questions like that — for instance, that printer problem, with the if-versus-while example: if that happens, what kind of problems will occur? And you have to really think through the potential interleavings: what's happening when somebody's going to sleep, while somebody is sleeping, what happens if somebody's arriving, somebody's leaving.
So that takes a little bit of time, but just be systematic: think of the key events that are happening, and that will help you see the potential interleaving issues. Where are we, time-wise? Okay — maybe the last question. Yeah, so the reference bit: when you have referenced a page, you set its bit to one, but you mentioned locality — if I reference one page, do I also set the neighboring pages' bits? No. Locality has to do with what the program is doing, right? If the program accesses the next page, that page's bit gets set, and we'll say that the program has spatial locality. On the other hand, if the program accesses random pages, then the reference bits of random pages get set, in which case we'd say the program doesn't have very good page locality. Locality refers to the program's behavior, exactly — whether the program is accessing data that's spatially close, which is spatial locality, or temporally close, meaning it keeps accessing the same data in short periods of time. Because for the corresponding page, when the page eviction or page replacement algorithm runs, it says: I won't evict this page, because you've accessed it recently. Please. Yeah. In previous exams, you've asked questions where you want the students to extend — you give them something they haven't seen before, but they apply what you taught. Are those fair game? Of course they're fair game. Of course. But we won't get a question on SSDs asking us how they'd apply, or... No, no. But a completely new file system could happen, of course. We'd tell you: here's the basic file system we've talked about, and here's a new design you're coming up with — think about it. Yeah, no, SSDs are real; it would need another week or two to really describe what happens in an SSD. No, the extension won't be on things we haven't covered together.
The extension would be on whatever we have covered, possibly something extended. Yeah. Yeah, the list I posted.