All right, good morning, everyone. Today we'll do the quiz three review: I'll quickly go over what's on it, and then show you an example that will probably be more difficult than anything you'll actually see on the quiz. We'll announce the details once we finalize the quiz. It's going to cover four general topics: scheduling, page replacement, page tables, and the TLB — in other words, virtual memory. There will be no sockets and no threading; topics from previous quizzes won't repeat. This quiz will probably fall somewhere between quiz one and quiz two in length and difficulty. So let's quickly go over the topics, then dive into a page table question that's harder than what you'll probably see and make sure we all understand it. Scheduling, we saw, involves a bunch of trade-offs, and we looked at a few different algorithms. First come, first served and shortest job first were non-preemptive, meaning we cannot switch away from a process while it runs. For the remaining algorithms we assumed preemptive scheduling, which means we can stop a process, take the CPU away from it, and context switch it out. Shortest remaining time first extends the idea of shortest job first: it optimizes for the lowest waiting time by always selecting the process with the least time remaining, and keeps doing that until it runs out of processes. The issue with this is called starvation, a bad thing in scheduling where a process that is ready to run may never actually get the CPU. You should be able to identify which algorithms can suffer from starvation — shortest remaining time first is one of them. The last one we saw was round robin, which optimizes for fairness and tries to get good response time by just cycling through all the active processes.
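To make the waiting-time trade-off concrete, here's a minimal sketch — just illustration, not quiz material — that totals waiting time for processes that all arrive at time zero and run non-preemptively in the order given; running the short jobs first is exactly what drops the total.

```c
#include <assert.h>

/* Total waiting time when all processes arrive at time 0 and run
 * to completion, non-preemptively, in the order given. */
int total_waiting_time(const int burst[], int n) {
    int elapsed = 0, waiting = 0;
    for (int i = 0; i < n; i++) {
        waiting += elapsed;   /* process i waits for everything before it */
        elapsed += burst[i];
    }
    return waiting;
}
```

With bursts {8, 4, 1} in arrival (FCFS) order the total wait is 8 + 12 = 20, while running them in SJF order {1, 4, 8} gives 1 + 5 = 6. Round robin, by contrast, just cycles the ready queue, trading some total waiting time for fairness and response time.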
Round robin doesn't have any issues with starvation, and we saw how the algorithm works. With more complex scheduling come more solutions and more issues. We saw priority scheduling and its problems, like priority inversion. We also worked through an example of dynamic priority scheduling — the question lays the rules out for you and then asks, like all scheduling questions: at this particular time, what process is executing under this algorithm? We saw the trade-offs: some processes need good interactivity because you're actually interacting with them, which means they need good response time, while others are just doing some computation that you want finished as fast as possible. So there's no one true scheduling algorithm that's optimal for every process in existence. We saw a little bit about the complications with multiprocessors — you might need a separate scheduler per CPU or one big global scheduler. We saw what real-time scheduling is; you won't have to do any real-time scheduling, because that involves actually counting clock cycles, but you should know the term. And then we saw the Linux scheduler, which models the idea of ideal fair scheduling — that was just to show you what the kernel actually does; you won't have to reproduce it on the quiz, but it's good to know about. Okay, then we saw page tables, whose whole job is to map virtual addresses to physical addresses. The MMU is the hardware that actually walks your page tables. You could have one big single-level page table, or everyone's favorite: multiple levels of page tables.
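Concretely, a multi-level lookup just slices the virtual address into one index per level plus the untranslated offset. The 10/10/12 split below is a hypothetical 32-bit layout with 4 KB pages — a common textbook example, not this course's specific numbers.

```c
#include <stdint.h>

/* Hypothetical 32-bit layout: 10-bit level-1 index, 10-bit level-2
 * index, 12-bit page offset (4 KB pages). */
uint32_t l1_index(uint32_t va)    { return (va >> 22) & 0x3ff; }  /* top 10 bits    */
uint32_t l2_index(uint32_t va)    { return (va >> 12) & 0x3ff; }  /* middle 10 bits */
uint32_t page_offset(uint32_t va) { return va & 0xfff; }          /* low 12 bits    */
```

For a virtual address like 0x00402abc, the walk reads level-1 entry 1, then level-2 entry 2 of that level's table, then adds the offset 0xabc — the page tables for unused regions of the address space simply never get allocated.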
A single large table can be wasteful: you essentially have to allocate space and entries for the possibility of mapping every single virtual address on your machine to some physical address, even though most programs don't use their entire virtual address space — unless it's something like your web browser or VS Code, which seems to love virtual memory. So mostly you want a multi-level page table to save space for sparse allocations. The idea is that you have multiple levels of page table, each of which fits exactly on a page, and you use the virtual address bits that you aren't translating as the index into each level's table. We'll see that in gory detail when we go over the question — the kind of thing you might have to do on an exam or quiz. From the kernel's point of view, all it cares about is allocating pages, so it can use something simple, like a linked free list of pages, and assign physical pages to virtual pages from that list. Finally, we saw the TLB, which speeds up memory accesses by acting as a cache from virtual page number to physical page number — or it might contain the entire page table entry, which itself contains the physical page number (or physical frame number, depending on what you want to call it). Then we saw page replacement algorithms, whose goal is to reduce page faults. First there's optimal, which is good for comparison but not realistic: we replace the page that will be used furthest in the future, minimizing the number of page faults we get. Remember that with page replacement, we assume pages live on disk and we bring them into memory.
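The kind of trace you'll do by hand can also be sketched in code. Here's a minimal FIFO fault counter — a sketch for illustration, not something you'd need on the quiz — that just checks each reference against the resident set and evicts in insertion order.

```c
/* Count page faults under FIFO replacement with `nframes` frames
 * (assumes nframes <= 16 for the fixed-size array). */
int fifo_faults(const int refs[], int nrefs, int nframes) {
    int frames[16];
    int used = 0, next = 0, faults = 0;
    for (int i = 0; i < nrefs; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (hit) continue;
        faults++;
        if (used < nframes)
            frames[used++] = refs[i];            /* free frame available */
        else {
            frames[next] = refs[i];              /* evict oldest resident page */
            next = (next + 1) % nframes;
        }
    }
    return faults;
}
```

For the classic reference string 1 2 3 4 1 2 5 1 2 3 4 5, this counts 9 faults with three frames but 10 faults with four frames — more memory, more faults, which is the FIFO anomaly.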
We want to reduce the number of page faults and keep the pages that will be used in memory as much as we can. Optimal requires predicting the future, so we just use it as a benchmark — but you should know how to run it; everyone seemed pretty bored when I went through the examples over and over. We saw random, which obviously won't be on the quiz, but you should know that it works surprisingly well. Then we saw FIFO: easy to implement, easy to trace when you have to write out which page faults happen and which page gets replaced. But there's an anomaly with FIFO: sometimes with less physical memory you actually get fewer page faults, and that anomaly is specific to FIFO-style algorithms. Then we saw what everyone essentially uses, least recently used, which gets close to optimal but is really expensive to implement exactly — though for quiz purposes you can easily work out on paper which pages get replaced under LRU. And throughout all of this, if you want to name the place the pages get swapped to on disk, there's going to be a big swap file on the disk, and that's where the pages go. The swap file contains all of the pages, which may or may not currently be swapped into memory — you may want to know that term for the quiz. Then there's something I talked about but didn't explicitly name: a bitmap. It's what the slab allocator uses, and what a bunch of different algorithms may use, to keep track of things. All a bitmap is is a contiguous block of memory that tracks slots, one bit per slot. If we assume a bitmap that is 512 bytes large, that's 512 times eight — because there are eight bits to a byte — which is 4,096 bits, so we can keep track of 4,096 slots. You could check whether slot 500 is filled by checking whether bit 500 is zero or one.
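The bit-500 check can be written out directly; a minimal sketch, assuming a 512-byte bitmap:

```c
#include <stdint.h>

/* 512-byte bitmap: 512 * 8 = 4,096 bits, so 4,096 trackable slots. */
static uint8_t bitmap[512];

/* Bit n lives in byte n / 8 (integer division rounds down),
 * at bit position n % 8 within that byte. */
int slot_is_filled(int slot) { return (bitmap[slot / 8] >> (slot % 8)) & 1; }
void fill_slot(int slot)     { bitmap[slot / 8] |= (uint8_t)(1u << (slot % 8)); }
void clear_slot(int slot)    { bitmap[slot / 8] &= (uint8_t)~(1u << (slot % 8)); }
```

Bit 500, for example, ends up in byte 62 (500 / 8 rounded down) at bit position 4 (500 % 8) — exactly the arithmetic described next.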
So you'd have to do some arithmetic to figure out which byte to address: divide by eight and round down to get the byte, then figure out which of bits zero through seven within that byte is actually bit 500. It's one contiguous bitmap, every bit represents one slot, and in the slab allocator's case the slots are all some fixed size of your choosing. Okay, and then there's a data structure behind the scenes that we didn't name, because we understood the idea and we're not actually implementing a kernel. Some kernels have a data structure called a core map, which manages the physical pages — and remember, physical pages can also be called frames; just pick one word and stick with it. It keeps track of which processes reference which frames, because processes can share frames: two processes could map different virtual pages to the same frame, and that way they'd also be sharing memory. You can think of the core map as keeping a reference count for each physical page. It tracks, for each frame, which processes have access to it, so it knows it can free the physical frame once no process references it — as long as one process points to it you can't free it, and as soon as zero processes point to it you can go ahead and deallocate it. And then, good numbers everyone should know. Remember the memory hierarchy: each level tries to get the capacity of the level below it with the speed of the level above it. So here are some rough numbers. CPU caches are really, really fast — on the order of nanoseconds. Memory is on the order of hundreds of nanoseconds, maybe even a microsecond.
So memory is almost a thousand times slower than your CPU cache. SSDs are not quite as fast as memory — only about a factor of 10 slower, so maybe 10 microseconds depending on what SSD you have. And then disks, or hard drives — the big spinning magnetic things — might take up to milliseconds, which is much, much slower: another factor of a thousand above a microsecond. So one number you might want for the quiz is that disk accesses take milliseconds, if you're ever wondering what the latency is. Okay, another thing we talked about but didn't name: spatial and temporal locality. These we're kind of familiar with. Spatial locality just means two memory accesses are close together in terms of addresses — they're accessed almost contiguously. Since we now know the operating system only cares about pages, if you're using a bunch of small allocations of a few bytes here and there, they should probably be located on the same page. With that spatial locality you get all the advantages: do one translation and you'll have a TLB entry, and past that everything will be cached, so it's all nice and fast for you. Temporal locality means accesses are close together in time. You might know that if you access this variable, you're going to access this other variable shortly after. So when deciding which pages should be in memory, accesses that are close together in time should be in memory at the same time; otherwise you might have to do page swapping, and you'd at least have TLB misses all the time.
So you want things that are accessed close together in time to also be close together in memory — but they are two different terms, and while hopefully both apply, they might not always. Another problem you'll really see on systems that page out to disk all the time because they don't have much memory is an issue called thrashing, which is basically when your page fault ratio is super high. It occurs when a process does not have its working set loaded in memory, where the working set is just the set of pages the process uses over a given period of time. If a process's working set can't fit in memory while it's executing, it's going to have to swap back and forth to disk, and it's going to be really, really slow: you constantly evict pages and run page replacement, which goes to disk, which takes on the order of milliseconds instead of microseconds. This makes your whole system slow and unresponsive. One thing you can do to detect this, if you have multiple processes on your machine, is track the page fault frequency — how many page faults each process takes — and use some heuristic: if it's over a certain threshold, you probably want to evict pages from another process and give this process more memory, because it's running very slowly without enough memory for its working set. Increase the memory you give it, and hopefully its working set fits, and then it's not constantly replacing pages over and over again.
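A toy version of that page-fault-frequency heuristic might look like this — the struct, thresholds, and two-process setup are all invented for illustration, not how any real kernel is written:

```c
/* Hypothetical per-process bookkeeping for a page-fault-frequency check. */
typedef struct {
    int faults_this_interval;   /* page faults since the last check */
    int frames;                 /* physical frames currently granted */
} proc_t;

/* At the end of each sampling interval, move one frame from a quiet
 * process to one that is faulting heavily, then reset the counters. */
void rebalance(proc_t *hot, proc_t *cold, int high_water, int low_water) {
    if (hot->faults_this_interval > high_water &&
        cold->faults_this_interval < low_water && cold->frames > 1) {
        cold->frames--;     /* steal a frame from the quiet process */
        hot->frames++;      /* give it to the (possibly thrashing) one */
    }
    hot->faults_this_interval = 0;
    cold->faults_this_interval = 0;
}
```

The design point is just that the kernel reacts to measured fault rates rather than trying to compute working sets exactly.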
I think the last thing before we get into the example is demand paging. It's basically the default we've been assuming, but you should have the term in mind. All it does is this: initially, when you create a process, you don't actually give it physical memory — you just create a page table for it and mark all of its entries invalid, meaning they don't currently have memory behind them. Then whenever the process tries to access some memory, it generates a page fault, and at that time the kernel actually allocates a physical page and populates the page table with a valid entry, so the address translates to the page you just gave it. It's an example of doing something really lazily: memory isn't allocated until the first time you use it, so if you never use a bunch of your address space, it never gets allocated and you're not actually wasting space. This is also why, when I showed you VS Code on my machine, it said it was using some crazy amount like 20 gigabytes of RAM — which isn't possible, because I don't have 20 gigabytes of RAM on this machine. Those are all virtual addresses, and since it never actually uses most of them, they never take up any physical memory.
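A sketch of that lazy allocation path — every name here, including the trivial bump allocator, is invented for illustration and not a real kernel API:

```c
/* One page table entry: frame number plus a valid bit. */
typedef struct { unsigned frame; int valid; } pte_t;

#define NPAGES 32
static pte_t page_table[NPAGES];      /* zero-initialized: every entry invalid */

static unsigned next_free_frame = 0;
static unsigned bump_alloc(void) { return next_free_frame++; } /* stand-in allocator */

/* Called when the process faults on virtual page `vpn`. Returns 0 if the
 * fault was handled, -1 if the access was out of range (send a segfault). */
int handle_fault(unsigned vpn) {
    if (vpn >= NPAGES)
        return -1;                          /* address the process shouldn't touch */
    if (!page_table[vpn].valid) {
        page_table[vpn].frame = bump_alloc(); /* allocate physical memory only now */
        page_table[vpn].valid = 1;
    }
    return 0;
}
```

The point of the sketch is the order of events: nothing is allocated at process creation; the first touch faults, and only then does a frame get assigned and the entry marked valid.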
So yeah, this is just what the kernel does: a page fault happens because the process accessed a virtual memory address that isn't currently backed by a physical page, and the kernel goes ahead and backs it. But as part of that, the kernel also knows which virtual addresses that process should be able to access. If you access a virtual address you shouldn't be able to, it still generates a page fault, but then the kernel doesn't back it with a physical page — instead it sends a segfault to that process, indicating it used an address it shouldn't have. That's what happens to our programs all the time. The alternative is to load everything into memory when the process starts, which would be super wasteful: just because you load everything doesn't mean you're going to use it all at once. So you'd want demand paging, which only uses memory when it's actually touched, as opposed to preemptively loading everything. Related to demand paging, one of the nice things you can do is copy-on-write — a useful technique not just for virtual memory; you could also apply it to file systems, once we get there, or other things. The idea is similar to what we saw with data races: we can safely share data as long as there are only reads, and because of that there are a lot of interesting optimizations you can do.
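As a toy sketch of that share-until-write idea — the frame pool, struct, and function names are all invented for illustration:

```c
#include <string.h>

#define PAGE_SIZE 256
typedef struct { unsigned char data[PAGE_SIZE]; int refs; } frame_t;

static frame_t pool[8];
static int next_frame = 0;
static frame_t *alloc_frame(void) { return &pool[next_frame++]; }

/* Write one byte to a possibly-shared frame. Returns the frame the
 * writer's page table entry should now point at. */
frame_t *cow_write(frame_t *f, int offset, unsigned char value) {
    if (f->refs > 1) {                  /* shared: copy before writing */
        frame_t *copy = alloc_frame();
        memcpy(copy->data, f->data, PAGE_SIZE);
        copy->refs = 1;
        f->refs--;                      /* the other process keeps the original */
        f = copy;
    }
    f->data[offset] = value;            /* sole owner now: write in place */
    return f;
}
```

Reads never trigger the copy; only the first write to a shared frame does, and after that each process sees its own independent view.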
The first one is just default sharing after fork. Ideally both processes are completely independent of each other, but as an optimization you can have both processes map the same frame, and as long as they only read, they both read from that same physical memory and you don't have to allocate anything new. Then, as soon as the first one tries to modify that page, you actually copy the page, perform the write on the new copy, and set up that process's page table to point to the page you just copied, while the old process keeps pointing to the unmodified one — so they end up with their two separate views. You can also do other optimizations if you know there are read-only frames, especially when some of the virtual memory is backed by files on disk — think of your executable code, for example. If you have a page that is read-only, say code you're just reading, then instead of wasting time writing that page back out to disk when you replace it, you can just reuse the frame straight away; if you need that page again later, you simply reread it from disk, since you'd only be writing back information the disk already has. So that's one optimization: if a page is read-only and otherwise lives on disk, replace it directly without writing it back out, and reread it from disk if it's needed again. Okay, now we have enough time to do a big virtual memory example. This is from question six of the 2016 fall final, and we'll do it live — it will probably be more challenging than the one you'll see, and it is fairly weird. So let's take a look. It says: an early 16-bit processor — so this is really, really old; it's mostly just small enough that you can actually write everything out — uses a single page table with
256-byte pages. We can get ahead of it a little and write things as powers of two, since we'll probably need to at some point: 256 is two to the power of what? Eight — sweet. Let's put that in our back pocket. The maximum amount of physical memory it can support is two to the 16 bytes, so your physical addresses will be 16 bits. Then it says the TLB entry and the page table entry have the following format: the TLB entry has the virtual page number in front, and otherwise it's the same as the page table entry. The page table entry has the physical frame number — we saw PPN for physical page number; this just calls it something else, and you can use the terms interchangeably depending on how picky you are. And this is going to be a tagged TLB, which we touched on a bit before: remember, each process has its own virtual address space, independent of the others, so you can put the process ID in the entry to keep track of which process this translation is valid for. So the second field in the page table entry is the process ID, and it doesn't tell us how big it is. The rest are single bits: S is set if this page is currently in swap, O is set if it's read-only, the next bit is unused, and the last bit is the valid bit. Then it says that if the S bit is one — the page is currently in swap — the physical frame number should be interpreted as a swap location, just somewhere on disk given by an address, and the OS looks at the swap bit only if the valid bit is zero. Okay, cool. So first it says: calculate the maximum number of processes whose page table entries may exist in the tagged TLB at one time. Anyone want to take a guess? The phrasing is kind of weird, but to know the maximum number of processes, you have to know how many process IDs you can support, which means you have to figure
out the number of bits the process ID takes up. We know the page table entry is two bytes, because it tells us right there. We also know physical memory is two to the 16 bytes, and since we always assume byte addressability, you need 16 bits to address all of it. Of that 16-bit physical address, eight bits are the offset, because that's the page size, which leaves eight bits for the physical frame number — or physical page number, whatever you want to call it. Now look at the page table entry: two bytes is 16 bits total. The physical page number takes eight of them, and the four flag bits take four more, so the process ID has to be the remaining four bits. So the number of processes whose page table entries might exist in the TLB is two to the four, which is — what's two to the four? 16. So we can have up to 16 processes. Okay, the next question is: what is the maximum size of the swap area that is supported? The question just says the physical frame number should be interpreted as a swap location. The physical frame number is eight bits, so it can hold two to the eight different values — two to the eight different swap locations — and each of them is the size of a page, which is also two to the eight bytes. Two to the eight times two to the eight: we
can just add the exponents, because that's how exponents work, giving two to the 16. And two to the 16, if we want to write it out in nicer numbers, is 64 kilobytes: a kilobyte is two to the 10, so two to the 16 bytes is two to the six kilobytes, and two to the six is 64. So the maximum size of the swap area is 64 kilobytes. Then it says this processor supports a limited amount of physical memory, and as a result the page tables of four processes are packed into a frame — which is really weird, and only for this question, because actually implementing this would be awful. So one frame holds four page tables, meaning each page table is a fourth of a frame. A frame here is the size of a page, 256 bytes, so the page table size is 256 divided by four, which is 64 bytes. Each page table is 64 bytes — everyone good with that? It gives an example: the page tables of processes four, five, six, and seven are located in one frame, and process five uses the second chunk as its page table. Actually implementing that would be a mess, but hey, we're just writing it by hand. Then it asks: what is the maximum amount of virtual memory that is available to each process? There's only going to be one page table per process, and each page table is 64 bytes — so how many entries can it have? How big is a page table entry? Two bytes. So if we have 64 bytes for a page table and each entry is two bytes, we divide: 64 over two means we have 32 entries. With 32 entries — written as a power of two, that's two to the five. Remember, for our translation we have our offset, which is the size of a page, so eight
bits, and since we can address 32 virtual pages — two to the five — we have five bits for our virtual page number. So our entire virtual address is 13 bits, eight plus five, and two to the 13 is eight kilobytes: two to the three is eight, and the remaining two to the 10 is the kilobyte. So the maximum virtual memory available to each process is eight kilobytes. Next it says: if the processor uses 64 bytes for the TLB, what is the maximum number of TLB entries that can be supported? This one is a bit messier, because it's not going to come out to powers of two. If we scroll back up, it tells us what's in a TLB entry, and now we know how many bits each field takes: the virtual page number, which we just found, is five bits, and the rest is essentially the same as the page table entry, which we know is two bytes, so 16 bits. So our TLB entry is 21 bits. Would you actually be able to implement that on any realistic machine? Hell no, but it's this weird question, so we go with it and just do some arithmetic. Every entry is a minimum of 21 bits, and 64 bytes is 512 bits, so we divide 512 bits by 21 bits per entry to get the maximum number of TLB entries. That's about 24.3, and we can't have a partial entry, so it's 24 entries max. Would that be realistic to implement? No, but it fulfills the question: 24 max entries. Any questions so far? All right, this is where it gets kind of ugly, because it gives you a gigantic page — it gives you a whole frame of memory. Remember, it said
there are four page tables in a frame, so this is an entire frame: 256 bytes. They actually divided it up nicely — each cell is two bytes, so each cell is a page table entry. It says: the following frame is used to store the page tables for processes zero to three; use an outline to show the location of the page table for process zero. You'd assume it splits into four, in order: the page tables for processes zero, one, two, three. So the first fourth of the frame — the first four rows — is the page table for process zero. A few ways to double-check: first, if this is a frame, each page table has to be a fourth of it, so we can split the frame in half and then split each half in half again; those quarters are our four page tables, for process zero, process one, process two, and process three. Another way to double-check: each page table has to be 64 bytes, and each entry here is two bytes, so there should be 32 entries, like we said before. There are eight per row, so 32 entries means four rows — eight times four is 32 — so that checks out as well. That gives us our 32 entries. Any questions about that? Okay, so it says: using the page table above — and we're assuming we're using the page table for process zero — translate the virtual addresses shown below for process zero to their corresponding physical addresses. For each, show the corresponding page table entry and the physical address. If the virtual address is invalid, write "fault" under PTE; if the page table entry is not valid, write "invalid" under the physical address; and if the page is in swap, write "swap
x", where x is the disk location. You have to scroll back, because this is the weird part: if that swap bit is one, the physical page number is a swap location, and you just write that out. This is annoying because we actually have to keep track of the last four bits of each entry. It also gives you an example. One thing we can do: we're given virtual addresses, and we know there are eight offset bits. Thankfully we don't have to write them out in binary, because each hex digit is four bits, so with an eight-bit offset the last two hex digits are our offset. We can divide each address up like this: everything to the right of the line is the offset, and the part to the left of the line is the virtual page number. In the example, the virtual address starts with 01, so the virtual page number is one — entry one in our table, with entry zero right before it — and they nicely give you the page table entry so you can double-check. That entry is where the physical page number comes from: in the page table entry, the top two hex digits are the physical frame number, which we just substitute in to form the physical address, keeping the offset the same. The only thing we have to do first, because they're being super explicit, is check the valid bit, which is the least significant bit of the page table entry. If it's valid, we can actually do the translation — and in this case it's a one, so we swap the virtual page number for the physical page number: 01 becomes 54, and we keep the offset the same, which was just
00. Any questions about how they got this physical address? Okay, then let's do the next translation. The next virtual page number is 15. Does that mean I can just look at row one, column five? No — I'm seeing shaking heads. They have this laid out in a weird way: the numbers on the left are actual physical memory addresses, and the numbers across the top are just column positions, and they don't correspond, because each entry is actually two bytes. The cell at the beginning of the second line is actually entry eight, even though it sits at physical address 16, because there are two bytes per entry — I think they just put that there to screw with you. So for virtual page number 15, let's bring that into decimal: 16 plus five, so index 21. The first row starts at entry zero, the second at entry eight, the third at entry 16, and the fourth at entry 24. So starting from 16 in the third row: 16, 17, 18, 19, 20, 21 — this one. That's the page table entry for index 15: ac03. Its valid bit is a one — if we look, the valid bit is set, and that unused bit is also set, but that doesn't really matter. So it's valid, and for the physical address we take the physical frame number, the first two hex digits, ac, and attach the offset from the virtual address: acb4. Okay, everyone good? The next one sends you off on a wild goose chase. The index is 20 — what's that in decimal? 32. Is that valid? There are 32 entries, and they start at zero, so only zero through 31 are valid. This doesn't even make sense — it's outside our range — so we just write "fault" there,
It's not supported: we don't have an entry 32, we're out of virtual page numbers, so we can't translate it.

Okay, the next one is 05. 05 is a bit easier to find; it's right here. If we go along the top, 0, 1, 2, 3, 4, 5, the page table entry is DB08. This is going to be annoying, because if you write 8 out in binary, it's 1000. That means the valid bit is not set and the swap bit is set, so this page is in swap, and we have to write "swap". Part of the question says that if the page is in swap, we should just interpret the physical frame number as its address in swap, so we write "swap DB", because that's what the question told us to do. All right, any questions about that?

All right, the next one is 0A. A converted to decimal is 10, so this is entry 8 here, then 9, 10, and our page table entry is 030E. If we write E out in binary, it's 1110: again, its valid bit is not set and its swap bit is set, so again we say swap and do the same thing as last time. It's going to be "swap 03".

All right, the last one is 1E. That's close to the end of the range; the last element is 1F, so we just back up one. So 1E is this element in our list, at, what's that, 30. We look at entry 30, the second to last one, and the page table entry is, whoops, that is a terrible-looking C, C607. If we write out the bits of 7, that's 0111, which means the valid bit is set, so we can go ahead and actually do the translation like we think we should. The physical page number is given there: it's C6. Then the offset we pull out of the virtual address is F0. Any questions about that?

Okay, four minutes, let's wrap this up, because this last one is weird.
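All three outcomes we just walked through (translate, swap, fault) come from the low nibble of the entry. Here's a small sketch of that decode, under the bit layout this question uses (bit 0 is the valid bit, bit 3 is the swap bit); treat it as an illustration, not the quiz's reference solution.

```python
def decode_pte(pte: int) -> str:
    """Decode a 16-bit page table entry using this question's layout."""
    pfn = pte >> 8              # top two hex digits
    valid = pte & 0x1           # bit 0 of the low nibble
    swapped = (pte >> 3) & 0x1  # bit 3 of the low nibble
    if valid:
        return f"frame {pfn:02X}"
    if swapped:
        return f"swap {pfn:02X}"  # the PFN field holds the swap location
    return "fault"

print(decode_pte(0xAC03))  # frame AC
print(decode_pte(0xDB08))  # swap DB
print(decode_pte(0x030E))  # swap 03
print(decode_pte(0xC607))  # frame C6
```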
The table on the left shows the current contents of the TLB. For each operation shown on the right, indicate whether a TLB hit or miss occurs, and assume the TLB contents are not modified on a hit or a miss. So let's divide the entries up so we know what is what: all the way on the left of each TLB entry is the virtual page number; the two hex digits in the middle are the physical frame number; the next hex character, four bits, represents the process ID; and the last hex digit holds the permissions, which we annoyingly have to check here. So for each operation we have to look up whether the virtual page is actually mapped, and whether it's mapped for that process.

The first one is process 1 reading from 0349. The virtual page number is 03, and that is an entry in our TLB, right here. But it's process 1 doing the read, and if we look at the process ID field of that entry, it's a zero, so it's not tagged for this process. That's just a straight-up miss: we can translate that virtual address for process 0, but not for process 1. Big ol' miss.

Next, process 1 reads from address 19-something, so 19 is the virtual page number. There is an entry here that starts with 19, which is good, and it's tagged for process 1, so it's actually in here. Then we have to check its permissions. The permission digit is a 6, and we can easily check the valid bit by whether the digit is odd or not: 6 is even, so its valid bit is not set. That's a big ol' miss, because the entry is invalid.

Okay, next one: process 0 writes to address 1D-something. 1D is here, so there's a virtual page number for 1D, the process ID lines up, and if we look at the permissions, its valid bit is a one. So it's actually a hit, and since it doesn't say to translate it, we can just write "hit". But if we wanted to translate it, it would just be EE49, if we really want to be that thorough.
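Every lookup in this part follows the same rule: the entry must match both the virtual page number and the process ID tag, its valid bit must be set, and a write must not hit a read-only page. Here's a sketch of that logic; the entry tuples, the unspecified frame/permission values, and the choice of bit 2 as the read-only flag (my reading of the chart, since permission 5 = 0101 means valid and read-only) are all assumptions for illustration, not values copied from the quiz table.

```python
VALID_BIT = 0x1      # bit 0 of the permission digit: entry is valid
READ_ONLY_BIT = 0x4  # assumed read-only flag (permission 5 = 0101)

def tlb_lookup(tlb, vpn, pid, is_write=False):
    """Tagged TLB lookup: the entry must match both the VPN and the PID."""
    for e_vpn, e_pfn, e_pid, e_perm in tlb:
        if e_vpn != vpn or e_pid != pid:
            continue                       # wrong page or wrong process
        if not e_perm & VALID_BIT:
            return "miss: invalid"
        if is_write and e_perm & READ_ONLY_BIT:
            return "miss: permission"      # write to a read-only page
        return f"hit: frame {e_pfn:02X}"
    return "miss: no matching entry"

tlb = [
    (0x03, 0x12, 0, 0x3),  # tagged for process 0 (frame/perm invented)
    (0x19, 0x34, 1, 0x6),  # permission 6 is even: valid bit clear
    (0x1D, 0xEE, 0, 0x3),  # valid, writable entry for process 0
    (0x1B, 0x77, 0, 0x5),  # valid but read-only (frame invented)
]

print(tlb_lookup(tlb, 0x03, 1))                 # tagged for process 0, so a miss
print(tlb_lookup(tlb, 0x19, 1))                 # invalid entry, so a miss
print(tlb_lookup(tlb, 0x1D, 0, is_write=True))  # hit: frame EE
print(tlb_lookup(tlb, 0x1B, 0, is_write=True))  # write to read-only: a miss
```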
That translation is just a bonus, though. Now, the annoying read-only bit comes into effect for the last one, which says process 0 writes to address 1B88. We check 1B: it's this entry here, and its process ID tag matches our process, so the entry is actually there. Then we check its permission: it's a 5, which is 0101 in binary, and if we look at this chart, that means the page is read-only; that bit means read-only. The operation is trying to write there, so it's a miss, specifically a miss on permissions, because it's trying to write to a read-only page.

Cool, well, that's it, we're out of time. Your quiz will probably have easier questions than this, since this one took most of the lecture, and we'll have more details about the quiz next time, but it shouldn't be too bad. All right, just remember: I'm pulling for you; we're all in this together.