This is an unusually large number of people here today; I wonder why that is. So, it's midterm week. I'm sure you guys were at home over spring break thinking, "I can't wait to get back to beautiful Buffalo so that I can take the midterm in 421/521." But that's what we're going to do this week. Today we'll review, Wednesday is the exam, and on Friday we will go over the exam and solutions in class, and then we will resume our normally scheduled programming next Monday.

As I've announced several times, there are reduced office hours this week, partly because we don't have an assignment out, and partly so that the TAs won't kill me when I ask them to grade a hundred and fifty exams starting on Friday. Office hours will probably be a little skimpy next week as well. Assignment 3 will be out; I think we'll keep to the original schedule, but some of the normal TAs may be busy grading your exams rather than helping you with Assignment 3.

So today we'll go over a couple of practice exams. There are now, I think, six previous midterms posted online: there were two years where I gave two exams, last year I gave one, and there was a practice exam from the first year. So there are plenty of ways to prepare. I would suggest that if you want to practice, you sit down with one of these and try to do it in 50 minutes. All the lecture slides are up this year in a nice format, so I feel like you guys have plenty of information to prepare with.

Anyway, let's just go through the exam, for anyone who hasn't taken a look at it before. The first ten questions are supposed to be quite easy, and usually people find them that way. (Hello... oh, hello. Okay, I forgot to do something. Whoops, my apologies.)

The place where people will probably have the most fun is the long-answer questions; we'll get to those in a minute. The short-answer questions are not intended to be very difficult. So here's an example: we've talked about several cases where operating systems provide some sort of illusion to processes. What are some examples of these? What's one? Somebody? Anybody?

Right: by time-slicing the processor we can make it look like each process has its own CPU, and potentially we can make it look to processes like there are many more CPU cores on the machine than the machine physically has. So that's one illusion. What's another one? Yes — atomicity, that's not a bad one. Using synchronization primitives, we can make a set of actions that actually requires doing multiple things that aren't atomic from the perspective of the machine look atomic with respect to the code that's executed; they complete as a block. Another good one? What's another one — more recent, from the memory lectures? Right, we've done concurrency and atomicity. What else? Yes: address spaces. I'm going to make it look like you have this huge, contiguous 2^31 bytes of virtual memory. We know now that those addresses are virtual addresses and that the memory isn't necessarily real, but that's the illusion we provide to processes, because it's useful. Those were probably the answers we were looking for; there may be some other ones as well. All right. (This thing's going to drive me nuts today.) Okay, so here we go.
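A minimal sketch of that address-space illusion, for anyone who wants to see it concretely: after fork(), parent and child see the same virtual address for a global variable, but a write in one is invisible to the other, because the OS backs that virtual address with different physical memory in each process.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int counter = 0;

    int main(void) {
        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            exit(1);
        }
        if (pid == 0) {
            counter = 42;   /* the child writes its own private copy */
            printf("child:  &counter=%p value=%d\n", (void *)&counter, counter);
            exit(0);
        }
        waitpid(pid, NULL, 0);
        /* Same virtual address in both processes, but the parent still sees 0. */
        printf("parent: &counter=%p value=%d\n", (void *)&counter, counter);
        return 0;
    }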
I like this question. There may be a question like this on the exam — I've written the exam, so I know exactly what's on it. This exact question is not on the exam, of course, because I never reuse questions, which unfortunately for you guys means the exams get more and more bizarre every year: stranger and stranger things. I'm just kidding; the exam this year is fine, it's not bizarre at all.

So, we've talked about how locking shared data structures can cause things to slow down. Ideally, when I have multiple cores, I want things to be able to proceed independently on those cores so they don't get in each other's way, but once I have to grab a lock, effectively what I'm doing is serializing multiple threads that I want to be able to execute concurrently. So here's some example code that we gave you, and the question was: how do you change the locking strategy in this piece of code to improve concurrency? Let's scroll down a little bit so I can show you the code snippet.

What's happening here is that there's an array of some structure, and threads are looping through this structure in order to deposit some value into it: looking for a free slot, claiming it, and writing some data into it. In this code example they're doing that while they hold some lock that protects the data structure from concurrent accesses, because if I didn't have a lock, it's possible that two threads would overwrite each other's deposits. I want to make sure that every thread finds a unique spot in this data structure to use.

So first of all, what's the problem here? The question also instructed you to assume that this array is usually pretty crowded — it has a lot of values in it — so it's unlikely that I'm going to find a free slot quickly; I'm probably going to have to investigate a number of different entries. So what's the problem? How is this going to slow things down on a multi-core system? Yes?

Okay, we have the beginnings of a solution there, but let's describe the problem first. Yes? Right — so there's one minor nit, which is that we're starting from the beginning every time. You could fix that; it's not necessarily going to make a huge difference, but if I had written this a little more carefully I might have chosen a random index to start at. It's a good point: if I'm putting things into an array like this and optimizing for performance, I don't want to always start at the beginning. So that's one problem, but what's the more fundamental problem with this code? What is it doing that we don't want to do? Yes — great answer. There's a big critical section. For performance reasons we want to keep our critical sections as small as possible, and this code is looping over a potentially large and crowded array while holding a lock. So the critical section now includes this whole loop, which, as I told you, could take a while to terminate, because there may not be very many empty entries in this particular data structure.
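The exact snippet from the exam isn't reproduced here, but the pattern being described looks roughly like this. All the names (table, entry, table_lock, deposit) are made up for illustration, and pthreads locks stand in for whatever lock_acquire/lock_release primitives the original code used:

    #include <pthread.h>

    #define N 1024

    struct entry {
        int valid;               /* nonzero if this slot is already claimed */
        int value;
    };

    static struct entry table[N];
    static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Claim a free slot and deposit a value; returns the index used, or -1. */
    int deposit(int value) {
        int i, found = -1;

        pthread_mutex_lock(&table_lock);     /* lock held across the WHOLE scan */
        for (i = 0; i < N; i++) {
            if (!table[i].valid) {
                table[i].valid = 1;
                table[i].value = value;
                found = i;
                break;
            }
        }
        pthread_mutex_unlock(&table_lock);
        return found;
    }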
Okay, so starting from that answer, how can I do better? Ideally I want to protect only the write operation. So what's one maybe obvious solution that comes to mind? What could I do rather than having one big lock for the entire table?

Right: I could have a single lock per entry. Unfortunately, the question text — and your intuition as programmers — should push back against that idea a little bit. Why? It's going to cause your array to get quite a bit bigger, probably at least twice as big, depending on how efficient your locking data structures are. So putting a lock on every entry is not really appropriate, and in fact the question text instructed you not to do that. Well, it didn't say that specifically; it said don't dramatically change the space requirements of this particular piece of code. So I can't do that.

What can I do? In theory I'm going to use one lock — I can't use many, so I might as well stick with one. What can I do to optimize how it's used? A reader-writer lock? Okay, I think I could get that to work, but we can actually do something even simpler. Yes?

Right: one thing I could do is move the lock acquire inside the for loop, so I only grab the lock while I'm checking a value. How is this better? It is maybe better — it's not the full-credit answer — but we're going in the right direction. What's better about this? Say there are three or four threads trying to do this simultaneously. What's better about moving it inside the loop? Right: the idea is that with multiple threads, I no longer have what in systems we sometimes call head-of-line blocking. I don't have one guy at the beginning of the line who is spinning through the entire array holding the lock and preventing everybody behind him from making progress; instead, multiple threads can be moving through the array more or less simultaneously.

So that sounds good, but have I really reduced the size of the critical section here at all? No — answer my question first; I want to do this step by step, even if people already see what the final answer is. If I move the lock acquire to here, and I drop it and reacquire it every time I move through the array, have I shortened the critical section? Remember, I'm grabbing it even before I check anything about the array. Okay, we'll get there: one improvement is that I could acquire it right after the check, which would allow more interleaving and improve the ability of multiple threads to search simultaneously, but the total amount of time a thread holds the lock would still be about the same.

So what can I do on top of that? Okay: I can move it inside the if statement. I take my lock acquire and put it here. Now, where am I going to drop the lock? Before the break — I could actually probably drop it right here, once I'm done modifying the shared structure, but it doesn't really matter.

So is this safe, if that's all I do? Let's say all I do is take my lock acquire and put it inside the if, and put the lock release here, or after the write — it doesn't really matter. What can happen? Right: I can have two threads that check the same index essentially simultaneously.
They both see that it's not valid. Now one of them grabs the lock, sets it to valid, and stores an entry, and then the other one grabs the lock and overwrites the entry that the first one just wrote. This is exactly what I didn't want to happen: this solution is essentially equivalent to not using a lock at all. But I've made some progress here. What else do I need to do? No — you could do that, but there's a better solution, and I don't get a new locking structure; I have to use the locks I have.

Right: remember, it's not safe to rely on the value stored inside the shared data structure until I have the lock, but that doesn't mean I can't use that value as a hint. So the solution is to move the lock acquire here, and then, once I grab the lock, re-check valid. The first thing I do after getting the lock is make sure that valid is still zero. If valid has changed, that means somebody else raced in and beat me to it, and I have to drop the lock and keep looking. But the idea is that I only acquire the lock when I think I've found an entry I can use. Does this make sense? Does anyone have questions about this?

Again: as long as I only read the value of valid as a hint, this is fine. I can never write it without the lock — I cannot modify the shared data structure without the lock — but I can read the shared data structure without the lock, as long as I only use what I read as a hint about when to acquire the lock. That solution works quite well, and in most cases it reduces the critical section to only a couple of instructions.

Any questions about this example? Yes — that's a good point: it's possible that if I do it this way, I may need to loop through the array again, or do something else, to make sure my hint hasn't failed. Let me think about whether that can actually happen... I think I'm only going to misfire if an entry looks taken when it's actually free. But you're right, it's also possible the value changes underneath me: I read it without the lock, and right after I read it, it changes, and I've missed a chance to find that entry. This is a good point — it may affect the ability of this approach to find a valid entry, and there might be another way to fix that. For example, I could maintain a count of the number of valid entries in the array, so that when I fail out, I know I failed out because there was really no space.

On the other hand, with an example like this, given that items are always coming in and out of the array, it's always a little unclear when to stop looking, because even in the first case, with the global lock, I might need to drop the lock, let some other threads take some items out of the array, and then reacquire it and look again. Whenever I have a shared data structure like this, it's always possible that I need to wait a little bit — and of course I could use a CV to do that. It's a good point.

Any other questions about this particular example? This is a common approach to safely using read access to a data structure as a hint: read, lock, read again. And the idea is that it allows multiple threads to search the array simultaneously, without one thread blocking everybody else from proceeding.
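Here's a sketch of that improved strategy, using the same made-up names as the earlier snippet: read valid without the lock purely as a hint, then acquire the lock and re-check before writing, so only the write itself sits in the critical section.

    /* Same hypothetical table and table_lock as above. */
    int deposit_hinted(int value) {
        int i;

        for (i = 0; i < N; i++) {
            if (table[i].valid)              /* unlocked read: only a hint       */
                continue;

            pthread_mutex_lock(&table_lock);
            if (table[i].valid) {            /* re-check: someone raced in first */
                pthread_mutex_unlock(&table_lock);
                continue;                    /* keep looking                     */
            }
            table[i].valid = 1;              /* safe to write: we hold the lock  */
            table[i].value = value;
            pthread_mutex_unlock(&table_lock);
            return i;
        }
        return -1;   /* might retry, or wait on a CV, as discussed above */
    }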
All right. This one is maybe a little bit easier: name three system calls that allocate virtual addresses. What's one? Two? sbrk. Three? mmap. There are others; those are the big ones. And the semantics associated with each — I'll let you guys review that; some of these are pretty much straight from the lecture notes. mmap: give me some virtual addresses that point at a file.

Okay, so question five. This question asks you to think about NUMA. Yes, Diana? Right — like, how do they allocate virtual addresses. You know, sbrk: I ask the kernel to move the break point so I can have more heap. That's the semantics: what do they do, how do they work.
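For a concrete feel for two of those calls, here's a minimal user-level sketch. The file path is just an arbitrary example, and on many modern systems sbrk is a thin legacy wrapper — the point is only the shape of the two APIs:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        /* sbrk: ask the kernel to move the break, growing the heap by 4096 bytes. */
        void *old_break = sbrk(4096);
        if (old_break == (void *)-1)
            perror("sbrk");

        /* mmap: give me virtual addresses that are backed by a file. */
        int fd = open("/etc/hostname", O_RDONLY);   /* arbitrary example file */
        if (fd >= 0) {
            char *p = mmap(NULL, 4096, PROT_READ, MAP_PRIVATE, fd, 0);
            if (p != MAP_FAILED) {
                printf("first byte of the file: %c\n", p[0]);
                munmap(p, 4096);
            }
            close(fd);
        }
        return 0;
    }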
All right, so in this question I'm looking at a NUMA system. This is something I will commonly ask you to do on exams: think about a system with some properties that are different from the systems we're used to talking about, and apply some of our design principles to it. I find these questions fun to write, and I think they're a good chance for you to think about how you're actually going to use the knowledge from this class in the future. Ninety-nine point five percent of you are never going to develop an operating system, ever, so what's the point of me asking you all these little minutiae about operating systems? I want to see you applying the design principles you're learning to new problems.

So here's an example: I have a NUMA machine. What that means is that the latency associated with accessing memory is non-uniform. On most of the machines you're used to thinking about, memory accesses are relatively uniform: I've got memory, and there are some low-level details, but all else being equal, reading a byte of memory performs identically regardless of which byte of physical memory I'm actually reading. On a NUMA system that is not the case, and there are all sorts of reasons why: memory could be on another machine, I might have a system with so many cores that I can't provide the same memory latency to all of them, memory might literally be farther away on the physical chip, or something like that. The point is that not all memory on a NUMA machine performs identically for every core. And imagine that the way this machine is set up, as described in the question, is that I have some memory that's fast and a larger amount of memory that's slow.

So the first thing the question asks is: how does this particular design complicate the address space abstraction? What additional consideration does it introduce that processes haven't had to deal with? What was one of the things that I liked about the address space abstraction? I should look up the statistics; maybe nobody answered this part — maybe it's too weird — but it's not that complicated. Yes, that's right. That is something we've noticed in the past: the smaller the number of people who answer a question, the higher the average is, especially with these questions where you can pick between multiple ones.

Sure — I mean, this is not designed to be a trick question. The idea is that now, where the OS puts stuff matters even when it's in physical memory. I might have two pages that, to the process, are side by side, but even when both pages are paged in, their performance differs. It's one thing to stall a process while I go get a page from disk and move it back into memory; it's another thing when the pages are both there, both physically resident, but one page is slower than the other.

The second part of the question — which I think is what you're answering — asks you, at a high level, to describe a way to make use of this NUMA property, and it asks you to use a system design principle that we've talked about. At a high level: if I have a small amount of memory that's fast and a large amount of memory that's slow, what do I do? Yes — I'm after something more general than that, though. These are all variants of what principle? Make it into a cache, right. The computer is a series of caches: I've got registers, I've got L1, L2, L3 caches, and now I've got a cache in memory too. I've got a small amount of fast memory that I'm going to use to cache the contents of the slow memory, and I'm going to use the same cache management principles I use everywhere else: when something is accessed a lot, I try to move it into the fast part of memory so that performance improves; when something is accessed less frequently, it's okay to put it in the slower part.

That was it. Maybe people answer these questions based on word count — that's another deceiving property. I will point out that the longer a question is, the more likely it is that I've told you something about the answer in the question; the short ones are the ones that get you. "Use it as a cache" — I think we accepted that answer for basically full points.

All right: scheduling algorithms. How about three or four that we've talked about? What's one? Random. Two? Round robin. Three? Multi-level feedback queues. Rotating staircase — there's four. Not a difficult question.

Okay, I think we have one more short answer, and then we get to my favorites. I'm not even going to go through all of them — well, okay. This is the part of the exam where you can rest your brain a little bit; these questions are not designed to be that hard. So here's my process page table. Like I said before, because I'm the nicest person on earth, I always use base-10 arithmetic on these; nothing has changed this year, I haven't gotten any meaner. I've got 100-byte pages, and there's my page table: I've got permissions in there, a physical page number, a virtual page number, and it just asks you to translate these addresses. So let's do one of them together: a store to address 36,504. What is the virtual page number? I do assume you can separate base-10 numerals into their significant figures: with 100-byte pages, the last two digits, 04, are the offset, so 365 is my virtual page number. What does that map to? 8308 — that's my physical page number. And do I have permission to store a value to this address? Yes, it's RWE. So what results is a store to physical address 830,804. That makes sense? This year's version is a little more interesting, but not terribly interesting.
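The arithmetic from that walkthrough, written out as a tiny sketch; the page-table lookup 365 → 8308 is just the example mapping used above:

    #include <stdio.h>

    #define PAGE_SIZE 100   /* base-10 pages, as on the exam */

    int main(void) {
        unsigned long vaddr  = 36504;                     /* store to this address          */
        unsigned long vpn    = vaddr / PAGE_SIZE;         /* 365                            */
        unsigned long offset = vaddr % PAGE_SIZE;         /* 04                             */
        unsigned long ppn    = 8308;                      /* page-table lookup: 365 -> 8308 */
        unsigned long paddr  = ppn * PAGE_SIZE + offset;  /* 830804                         */

        printf("VPN=%lu offset=%02lu -> physical address %lu\n", vpn, offset, paddr);
        return 0;
    }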
Anyway, you guys will get it. Okay, any questions about this? What's that — what's E under permissions? Execute. That comes in handy when I'm doing a fetch: a fetch is going to fetch and execute a byte from that particular address. It's a good question.

All right, onward. There will be two long-answer questions, of which you are free to choose one. Please don't answer both of them. So, last year's: the first one was about smartphone interactivity detection, and we had discussed interactivity detection in class.

First, one piece of advice — the TAs definitely noticed this last year while grading: follow the instructions. These questions say "first," "second," "next"; you can circle the parts of the question you need to answer. It's amazing how many people don't bother doing that and just ramble about something semi-related to the question. It's very difficult to assign them points, because they didn't do the things we asked them to do. The questions are very clear — in fact, this year there are even point values embedded in the question, so you know how we're going to grade it and you know what you need to do. So please make sure you complete the question.

So, first: describe the interactivity detection problem, and explain why this information is generally important to thread scheduling. You want to take a run at that? Okay, so what is interactivity detection? Broadly speaking, there are two types of tasks going on on your system, and interactivity detection is distinguishing between them — you want to continue? So what are interactive threads doing? No — that's a heuristic we use to try to figure out which ones they are. At a high level, what are they doing? What's that? Even higher level than that; there's a very simple answer to this question. Well, they could be doing that, but they might not be. Maybe. They're doing something you're going to notice. Right: they're interacting with you; they're doing something that you're going to notice if it slows down. All the things you mentioned are examples of that, but interactive threads are the ones somehow involved in communicating with you, or somehow driving information to your eyeballs. A thread that's going along in the background doing a bunch of reorganization of your disk is not typically something you're going to notice, unless you're sitting there staring at the disk-defrag screen for some reason.

Now, the fact that this is called the interactivity detection problem should maybe give you a hint that something like "the thread is sleeping a lot" is not high-level enough.
What makes this challenging is that it's very hard to determine which tasks are which. It's very easy to think about conceptually — it's not a hard distinction to make — but from the perspective of the system, given the information the system has available, it's very difficult to determine.

Okay, so: two reasons why interactivity detection could be even more important on a smartphone. Tell me why this problem is important on mobile devices. Exactly right: one big reason is that these are battery-constrained devices. If I make a bad decision and allow a task — a process you don't even notice, that you're not interacting with, that you don't even know is running — to do all sorts of work, then first of all the phone is going to get warm; you're going to know something's wrong because you have this little pocket warmer all of a sudden. Then you're going to pull it out, and the battery is going to be almost dead, and it's noon. That's when something has gone wrong: I've made a mistake, there was all this activity going on that you didn't even notice. So that's one reason.

The other possible reason is simply the way people interact with these devices: I pull out my phone, I want to check my email, I do it quickly, and I put it back in my pocket. I don't spend a lot of time staring at it; it has this sort of passive role a lot of the time. With my email client or some interactive thing on my desktop, if it takes a little longer to deliver a notification, that's okay. With the phone, when I'm interacting with it, I may be trying to put it away as soon as possible — I hope that's what you guys are doing, rather than navigating around while staring at it; you might walk into something.

Okay, so: now consider how the differences between smartphones and traditional devices affect the interactivity detection problem, and propose a new approach to interactivity detection that responds to these differences. It asks you to compare with traditional devices. So what are some of the things that are different about smartphones, other than the things we've already talked about? The things we've already talked about are problems — the device runs out of battery too soon. What are some opportunities that I might be able to take advantage of on a smartphone to improve interactivity detection? Yes — that's a great point. Smartphones have a smaller screen, so there's just one thing that I'm doing at any particular time. So there's a really easy heuristic — and this is actually something that Android does: whatever's in the foreground, meaning it's painting the screen, is the thing that gets higher priority. On a desktop, I've got two monitors that are this wide; who knows — the desktop doesn't know what I'm looking at. The smartphone does: there's only one thing that's painting. So that's one thing the solution gave people credit for noticing; a toy sketch of that heuristic follows.
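To be clear, this is not Android's actual implementation — all the names here are hypothetical — it's just the shape of the "foreground app gets an interactivity boost" idea:

    /* Hypothetical scheduler bookkeeping; only the heuristic matters. */
    struct task {
        int base_priority;
        int effective_priority;
    };

    static struct task *foreground_task;    /* set by whatever is painting the screen */

    #define FOREGROUND_BOOST 10

    void recompute_priority(struct task *t) {
        t->effective_priority = t->base_priority;
        /* On a phone only one app paints the screen, so treat it as interactive. */
        if (t == foreground_task)
            t->effective_priority += FOREGROUND_BOOST;
    }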
So what are some other things? Yes? Okay — so, the touchscreen: you could probably map mouse events onto touch events, but I think you're headed in the right direction. Smartphones have some capabilities that typical devices don't have. They have cameras on them, sort of by default. They know where they are. They have sensors that can detect orientation and other things. On some level there is a lot more information available to a smartphone that you could potentially plug into this problem. By default your laptop doesn't have a GPS chip — maybe some of them do now, but most don't. Fancy laptops have, I guess maybe all laptops now have, forward-facing cameras, but in the past they didn't, and a lot of desktops still don't. So these were some of the differences we were asking you to notice, and if you noticed them, you didn't really have to go much further. We weren't asking you to come up with the next latest, greatest scheduling algorithm; we just wanted you to see some of the opportunities and challenges of doing interactivity detection on this class of device. Any questions about this problem?

All right, let's look at the second question: jumbo pages. I think we got to about the same point last year with VM, so this question should be something you should be able to think about. We've talked about 4K pages — that's been a very traditional page size — but there are systems now that are starting to support larger pages. First, explain why, and in what cases, 64K pages would improve or degrade OS performance. So give me a case where they're going to improve performance; I think we actually talked about this in class. What's the general principle here? Bigger pages are going to be good if a certain property holds. Right: in general, spatial locality is what helps me. If I use one byte on a particular page, what's the probability I'm going to use all the other bytes on that page? Imagine one-byte pages. With a one-byte page, the probability that I'm going to use another byte on that page is, well, zero, because I've already used the one byte that brought it in — or, put another way, it's essentially a hundred percent: I use one byte and I've used everything on the page. With an infinitely sized page, the probability that I'm going to use every other byte is zero. Every size in between is some trade-off.

So with pages like the ones that might be used to store video or audio, the data is so big that a page might store less than a second of high-def video. What's the probability you're going to hit pause right in the middle of that page? Very small. Clearly, when I hit pause, I'm going to be on some page, but even with 64K pages, the probability is very, very high that once I touch one byte on that page I need the whole thing, so I shouldn't have it broken up into sixteen smaller pages.

Okay, and this is essentially the answer to the second part of this question, which is: what information about virtual memory use could help the OS decide whether to locate content on a jumbo or a regular-size page? What I want to know are the access patterns on that page — the spatial locality on that page. That's not necessarily information that the OS can gather; we'll talk about that a little bit after midterm week. But it is the information I would want: the probability that I'm going to use one or more other bytes on that page.
Ideally, the entire page.

Second: explain how, in certain cases, you can implement jumbo-page-like functionality on top of an existing system that supports 4K pages, without changing the underlying memory management hardware. So you're working at this silly software company, and your boss is not going to let you actually change the hardware. Pretend you have an MMU that only supports 4K pages: how can you get jumbo-page-like functionality without actually having jumbo pages? Right: whenever I get a page fault — actually, sorry, a TLB fault — in a particular region, I load a bunch of other pages too. While I'm in the process of paging in that particular page, I also grab a bunch of other pages. So when I take a fault inside a particular super-page region that consists of 16 pages, I bring in every other page as well. Does that make sense? Actually, technically this is a little more flexible than 64K pages, because I can grab, say, eight pages on either side of the page that faulted; I don't have to break memory up in that very static way, I can just grab a little bit of data before and after. A sketch of that fault-handler idea is below.

So, what MMU features are required for this to work? I think the answer was essentially that I need to be able to see TLB faults — I don't remember exactly; you'd have to look at the solution. And which benefits of jumbo pages are preserved or lost by this approach? The benefit that's preserved is what we just talked about: I should reduce the fault rate, because when I get one fault, if the pages around that page were actually needed and I go get them, the process is not going to fault again for the next several pages. That's good. But what's the real reason, at the MMU level, to use jumbo pages? What's the benefit I'm going to lose if I don't really have jumbo-page support in the MMU and in the TLB? The translation speed will be the same; I can implement the contiguity if I want to. What was our other trade-off with page size? Right — remember the trade-off with page size: if the page size is too small, I get great spatial locality but I can't map very much memory with a fixed-size TLB; if the page size is too big, I can map a lot of memory but get very little spatial locality. So the benefit of 64K pages is that one TLB entry can now map 16 times more memory than with a traditional 4K page. If I use this hacky way of preloading entries based on fault behavior and surrounding pages, I lose that benefit — but I gain the other ones. Okay, questions about this question?
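Here's that fault-handler idea as a pseudocode-flavored C sketch. Every helper (lookup_pte, pte_is_resident, page_in, tlb_write_entry) is hypothetical, standing in for whatever the real kernel provides:

    #define PAGE_SIZE 4096
    #define NEIGHBORS 8               /* pages to grab on each side of the fault */

    struct pte;                                       /* page-table entry            */
    struct pte *lookup_pte(unsigned long vpn);        /* hypothetical page-table walk */
    int  pte_is_resident(const struct pte *pte);
    void page_in(unsigned long vpn);                  /* bring the page into memory  */
    void tlb_write_entry(unsigned long vpn, struct pte *pte);

    void region_fault_handler(unsigned long fault_addr) {
        unsigned long vpn = fault_addr / PAGE_SIZE;
        long d;

        /* Handle the faulting page itself. */
        struct pte *pte = lookup_pte(vpn);
        if (pte == NULL)
            return;                       /* a real fault: not handled here */
        if (!pte_is_resident(pte))
            page_in(vpn);
        tlb_write_entry(vpn, pte);

        /* "Jumbo-like" part: while we're here, grab the neighbors too, so the
         * process won't fault again on nearby addresses. Unlike a real 64 KB
         * page this isn't a fixed, aligned block; we take pages on either side. */
        for (d = -NEIGHBORS; d <= NEIGHBORS; d++) {
            if (d == 0 || (long)vpn + d < 0)
                continue;
            unsigned long nvpn = (unsigned long)((long)vpn + d);
            struct pte *npte = lookup_pte(nvpn);
            if (npte == NULL)
                continue;                 /* not mapped in this region */
            if (!pte_is_resident(npte))
                page_in(nvpn);            /* prefetch from disk/swap   */
            tlb_write_entry(nvpn, npte);
        }
    }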
All right, any other general questions about this exam? I'll switch to one of the 2013 exams for just the last five minutes. What do you guys want to see — short answer? Long answer? Okay, nothing? All right, long answer it is.

This is 2013, and the first question has to do with being able to predict wait times. Remember, when we were talking about schedulers, one of the things we wanted to know was: when a thread sleeps, how long is it going to sleep? This question asks you to think about a couple of different ways to do this. Very quickly: what are some cases in which thread sleep times might be extremely predictable? Remember, when I put a thread on a wait queue, it's waiting for something to happen. What are some events that would land a thread on a wait queue where I'd probably be able to predict how long it's going to take?

Yes? What's that? Okay, I want something a little more specific. Okay, that's interesting: I could potentially try to use the past to predict the future here, but in certain cases I have threads that might be doing a variety of different things, with different distributions, and those are all going to get combined. (A toy sketch of that idea appears at the end of this discussion.) Let's say I just want to know something about what the thread is doing. Yes — so, certain things on your system are probably things you can model. One would be disk I/O. Is there going to be some variance in disk I/O? Yes, of course, because other threads and processes are also using the disk, but it's likely that if I know how long the disk queue is, I might be able to guess, and certainly I can produce a bound: I can probably say it's very likely that this thread will wake up within this amount of time. What's another really simple example here? Okay — a lock? I like the second answer better; locks, not so much, because that depends on other threads. Oh, a clock! Sorry — a timer, right. If it's waiting for a timer to go off, at that point I know exactly how long it's going to wait; I don't have to do any prediction at all. Sorry, it sounded like "a lock." A clock — yes, it's waiting for a timer. Great.

What are some cases where this would be highly unpredictable, where I might as well just give up and not even try? Yes — waiting on the user. What else? Network traffic — not going to be doable; it depends on the quality of the website you're visiting.

Okay, so if I could have this information, how would I incorporate it into a scheduling algorithm? Let me give you an example: a really, really high-priority thread goes to sleep. What do I want to ensure for the highest-priority thread in the system? Right: I want to be there when it wakes up — immediately. I don't want another thread to be in the middle of some big timer quantum that I've set up; I want to be ready. So ideally, as the time it's going to wake up approaches, I want to schedule a thread to run only until the point where the high-priority thread is going to wake up, so that when that thread context-switches out — literally the moment the high-priority thread gets back from the wait queue — it is immediately chosen by the scheduler. Essentially, I want the scheduling decision to happen as soon after it gets off the wait queue as I can manage. That's one of the examples; I think there was maybe one other one.
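One hedged way to "use the past to predict the future," as suggested above, is to keep a per-wait-channel exponentially weighted moving average of observed sleep times. This is a toy sketch, not a real scheduler API, and as noted it works poorly when very different kinds of sleeps get averaged together:

    #include <stdio.h>

    struct sleep_predictor {
        double avg_ms;      /* current estimate of the next sleep time, in ms */
        double alpha;       /* weight given to the newest observation         */
    };

    /* Fold a newly observed sleep time into the running estimate. */
    void sleep_observe(struct sleep_predictor *p, double observed_ms) {
        p->avg_ms = p->alpha * observed_ms + (1.0 - p->alpha) * p->avg_ms;
    }

    double sleep_predict(const struct sleep_predictor *p) {
        return p->avg_ms;
    }

    int main(void) {
        /* e.g., one predictor per wait channel; "disk_wait" is a made-up example */
        struct sleep_predictor disk_wait = { .avg_ms = 5.0, .alpha = 0.5 };
        sleep_observe(&disk_wait, 8.0);
        sleep_observe(&disk_wait, 6.0);
        printf("predicted next disk wait: %.2f ms\n", sleep_predict(&disk_wait));
        return 0;
    }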
Okay, last but not least — well, I don't think we talked about this this year; it's one of the lectures I skipped over. I'm not sure we really covered threading models — actually, I think we did, but it doesn't matter much; you can look up this question in the solution. I didn't spend as much time on this particular problem this year.

All right, the couple of minutes are up — any general questions? Yes? Yeah — if you asked during the exam, the TAs would tell you what those instructions did. I don't expect you to have memorized the entire MIPS instruction set. The exams I give, I try to make as conceptual as possible. Other than the stupid plug-and-chug VM translation example every year, the rest of the exam is really aimed at getting you to think, not regurgitate random stuff from a MIPS datasheet that you forgot to memorize. So, yeah, that wasn't really the point of that question. But it's fair, right? The goal of the midterm is to be fair. I don't want to penalize people because they didn't memorize some silly thing. There are things you need to know, but they're things that we've covered in class.

All right, any other questions? Good luck. The process will be the same as in past terms: we will get in here early and have exams on the seats, there will be a seating chart an hour beforehand, and please plan on leaving coats in the back or in the front. We will start once a reasonable number of people are seated and ready to go. I will see you on Wednesday.