All right, good morning everybody. Everybody have a good weekend? Enjoy the Super Bowl? How many people were rooting for the team that won? How many people were rooting for the team that didn't win? Okay, and how many people didn't care? All right. At least it was a somewhat entertaining game. Okay, so today we're going to talk about CPU multiplexing. I'm going to try to give you guys a historical perspective on why we do this. So we'll talk about some old systems that did not multiplex the CPU, and then we'll start to talk about relationships between the CPU and other components that make CPU multiplexing important and potentially useful. And then we're going to talk about the actual mechanism that the operating system uses to create the illusion of concurrency: the illusion that multiple things can be happening on a system when physically there's really only one thing, or four things, or eight things that could be happening. So, expanding the notion of concurrency past the limits of the actual physical hardware. And then we're also going to talk about the mechanics of saving thread state and moving things back and forth in OS 161. Okay. So, the grading for the code reading questions should start today. I'm going to give the TAs instructions after class, but the online grading is ready, so some of you might have already seen some grades come back; I was playing around with it, just grading two questions for fun. We're going to start grading today, and people who have submitted answers should start to see marks come back. For the code reading questions, you'll then have one more chance to submit answers before we finalize your scores and show you the correct answers. So that's how things are going to work. Hopefully everything is self-explanatory, but please let us know if it's not, or if it's confusing in any way. I also put up a handout.
I didn't send out email because it was late Friday night, but I put up a handout on getting Eclipse set up for OS 161, which might be something that some of you guys want to do. It's definitely not too difficult to get it set up to browse and build OS 161 kernels; the debugging support is a little rougher. I had really never used Eclipse as a debugger, and I was a little disappointed in it, but I don't know what the problem is. Maybe somebody out there who's a big Eclipse hacker can fix it for us, but for now it's just a little bit slow. I don't know what to do about that, but it does work, kind of. Anyway, go follow the handout and see how far you get. All right, so last week we talked quite a bit about some of the low-level CPU mechanisms, like CPU privilege, that allow the kernel to do its job, and the reason that we privilege the kernel. We talked about the ways that the kernel gets control of the system: specifically, interrupts generated by either hardware or software, and then exceptions. So, cases where the operating system will begin to run either because something interesting happened in hardware that needs to be dealt with, or the software did something interesting, or the software did something wrong. Okay? Any questions about the material that we covered last week? We'll do a slightly longer review today just to get everybody back up to speed. Any questions about last week's material? Okay. So, why does the operating system need special privileges? Maniche? Okay, we're getting close to a correct answer. Tam? To divide the resources, right. So the applications are trusting the operating system to divide the resources in an efficient way. And then once I've divided the resources, what else do I have to do? Yeah?
Enforce those divisions, right. So: divide the resources, and enforce those divisions. So let's see. Sarah, true or false: operating systems need privileges to create abstractions for programs to use. False, right, and there are several pieces of evidence to the contrary. Okay, so what features of the processor exist to allow the kernel to gain these special privileges? Masakazu? What feature does the processor provide that allows the kernel to do this? You don't remember? Yeah, Sean. Kernel mode, right. Kernel or privileged mode, which gives the kernel special powers on the processor itself, and also some other special powers on the machine: potentially a different view of memory, et cetera. Okay, good. What happens when an interrupt is triggered? An interrupt or an exception will cause what to happen? Three things. Tau, give me one of them. It enters privileged mode, yeah. What's your name? Chica? Okay, that's a vague beginning. Daniel? Record the state necessary to process whatever happened. And Jeremy? It jumps to a specific place, what we call the interrupt service routine, and begins to execute instructions at that memory address, right. What causes hardware interrupts? Give me one thing. Yeah, like a disk read finished. What else? Something else, Andrew? A network packet arrived, yeah. Okay, so a device: something happened, a device needs some sort of attention, right? So a disk read completed, a network packet showed up, and today we'll get to timers, why we have timers, and one of the main uses for timers on the system. Timers also fire regularly, causing the CPU to enter the interrupt service routine. All right, software interrupts. Give me some examples of software interrupts. Yeah? Nope, software interrupts. We'll come back to that answer.
Yeah, fork. So this is how user applications get the kernel to run when they need something to happen, right? We have this mechanism that allows the kernel to run, and that mechanism is also used by user applications to get the kernel to run and do something for them. We call these system calls, right? So fork, exec, wait, things like this. These are all examples of cases in which a user application will trigger the kernel intentionally and cause it to run. Okay, software exceptions. Yeah, so what is a software exception? Distinguish it from an interrupt; let's do that first. Yeah. So for a software exception, the big way you can remember this is that processes expect a system call to happen, right? Processes set up a system call and initiate it themselves, so they are not surprised that the kernel will begin to run. A software exception, on the other hand, is normally a case where the application would not have expected the kernel to start running at that point. Jeremy? Yeah, and essentially a good way of thinking about this is that software exceptions, or CPU exceptions, happen when the CPU cannot continue to execute instructions without some sort of help, right? And what could cause something like this? Yeah, divide by zero, right. So an example of a software exception: divide by zero. I don't know what to do, I don't know what to return, there's no way to return a value for this, so I'm going to ask the kernel for help. Jeremy, something else? Yeah, a null pointer exception would be an example of this, and we'll get back to that when we do memory management. The user application has tried to use a memory address, and I don't know where it should go, right? It's given me a memory address and said to store something there or load something from there, and I don't know where that memory should be. Another example is an attempt to use an instruction that should only be usable in privileged mode.
All right. So we talked about masking interrupts, right? And I think this week in recitation you guys will walk through some of the interrupt handling code with Aditya, and you'll see certain places where your OS 161 kernel manipulates the processor interrupt mask. But why would the operating system want to ignore certain interrupts in certain cases? No hand-raisers? Greg doesn't remember. Sam, do you remember? Sure. So masking allows me to establish priorities between hardware devices, right? And actually today, as we get back to talking about concurrency and scheduling and the mechanisms behind scheduling, we'll come back to this, and we can talk about a specific example where, while I'm handling a certain type of interrupt, I definitely do not want to handle another interrupt of that same type. So, I've got these interrupt handlers, right? And the interrupt handlers are what run when something happens on the machine that needs to be dealt with by the kernel. But those interrupt handlers just live in memory, and so applications could overwrite them, right? And that would allow them to take control of the system and do things that they shouldn't be able to do. How do I prevent that? Frank? I'm not sure. What's your name? Vinay, right. So, and this is something that the CPU also helps me with, these handlers are located in areas of memory that are protected from access by user applications, right? So the code that lives at that particular memory address is not code that a user application can modify. What would happen if it tried to overwrite the interrupt handlers? That's a good question. Yeah, it would actually cause a page fault, right?
We'd get a page fault; the CPU would say, you know, either I don't think this application is supposed to be able to modify this memory, or I don't know where this memory address is supposed to be. The kernel would notice what had happened, and it would kill that bad process, right? It would say: bad process, definitely not allowed to take control away from me. I mean, the kernel is kind of a domineering tyrant of the system, right? Any challenges to its authority are met with swift and certain retribution. That is, if your kernel works properly; if it doesn't, then you have anarchy, and things go bad. Okay? You guys should be designing extremely dictatorial, you know, cruel kernels in this class. Don't allow user processes to get away with stuff. Don't think the kernel is being nice if it lets user processes write over random parts of memory, okay? All right, questions about this stuff? That was last week in four slides, five slides. All right. So let me take you back to sort of the dawn of time. Many of you guys, well, I shouldn't say many, probably all of you guys, including me, weren't alive back when some of these things were happening. So, you know, until quite recently... we talked about this, right? What are the limitations of the CPU that the operating system is trying to work around? What are the problems with the CPU that we're going to create abstractions to try to fix, or to try to address?
Historically, you had single-processor systems. And this would actually be a good question to figure out, if anybody can look this up for me: the first emergence into the mainstream consumer market of multi-core, multi-processor systems. I think when my brother was starting college, so this would have been in the early 2000s, his roommate had a dual-processor machine, and that was kind of a big deal. Yeah, Jeremy. Yeah, so there were these clunky ways of supporting this, right? Some of the motherboards maybe didn't have sockets for two, so you had to do some workaround. But the funny thing about my brother's roommate's computer, I remember, was that he had been sold this computer by Dell, but they had sold him a version of Windows that only supported one processor. And it took him a while to figure this out. There were two processors inside the box, and he had probably paid a fair amount for those two processors, but only one of them was actually being turned on and used. So that was a little disappointing when he found that out. So recently we've seen this move toward multi-core systems. I mean, how many people have a multi-core phone? Anybody? Okay, so why? What happened? We had these CPUs, and they were great, and they were getting faster and faster and faster. We had Moore's law, and things were going up and to the right fairly quickly, and now suddenly we have these two-core, four-core, eight-core processors. Why did we start going in that direction? Yeah, Wembley. Well, yeah, it's a little more complicated than that. So there have been these thermal design problems, right?
And partly these also emerged because of leakage current and some transistor physics issues and things like this. But yeah, I mean, that's not a bad answer. So, after years and years and years of going faster and faster and faster... how many people remember those old Pentium 4 processors? Right, I mean, they were up to, what, like three and a half gigahertz, four gigahertz? They were going fast, right? Does anyone remember what the heat sinks for those things looked like? Yeah, they were like a rocket launcher, you know, they were a foot tall, and those things got really, really, really hot. So to some degree what we started to do is spread out, right? We started to trade off temperature and speed for area. And it's an interesting design point, right? Because for a long time concurrency was only an illusion that was provided by the CPU, and now it has actually become a reality embodied by the architectures themselves. But again, even if you have a four-core phone or an eight-core laptop or whatever, what is still true about the relationship between processes and cores? I've got more processes, more tasks to be run, than I have cores, right? In fact, if you're like me and you abuse Firefox tabs, you probably have more tabs, more threads, in Firefox (or whatever the new browser is that all the cool kids are using, but I'm still using Firefox since I'm old and crusty), more tasks in Firefox than you have cores on your system, right? So even if all your system did was run Firefox or Chrome or whatever, you would still need some illusion of concurrency provided by the operating system, right?
So, one of the things we're going to keep coming back to throughout this semester, as we work our way through the hardware components of the system, is looking at the relationship between the properties of these components and how that relationship has changed over time. To some degree, those ratios between different things have driven a lot of operating system design, and a lot of architectural design as well. So if you had to use one word to compare the CPU to every other part of the system, what would it be? Way faster, right? It's not even in the right ballpark. So, it's way faster than memory. Okay, it depends on which level of the cache hierarchy you're hitting, right? Now you have these architectures with three levels of cache, either on-chip L1 cache or near-chip L2 and L3, before you actually have to go all the way to main memory. But going out to main memory can stall the processor for how long? Anybody know, expressed in number of cycles? So how many cycles do you think it takes to access a register on the CPU? Five? Anyone want to go lower? Like none, right? I mean, registers are usually immediate operands to things like adds and subtracts. An add will be, you know, a one- or two-cycle instruction that takes its operands from registers and puts its result into another register. So registers are just there, right? But then main memory? Anyone want to venture a guess at cycles for main memory? A couple hundred cycles, right? That's not a bad rule of thumb, and it could be longer depending on where in memory it is and whether or not it's cached somewhere. So yeah, when your processor executes a load or store, it will potentially have to wait hundreds of cycles for the result to be available, right?
So it says: load from an address that turns out to be in main memory and put it into this register, and I may have to sit there for hundreds of cycles waiting for this to complete, right? Now, this is not a problem that's normally addressed by the operating system. How many people have taken some sort of computer architecture class? Okay, great. So you guys know about out-of-order execution and multi-stage pipelines and all sorts of things, right? This is typically something that is addressed by the CPU itself. CPUs today have all these clever features that try to hide the latency associated with reads and writes from memory, and they do this in very clever ways, but this is not something the operating system gets involved in, right? To some degree, it's because it just happens on timescales that are too fast. So, what could you do here? Let's say you wanted to get the operating system involved. What could you do to try to solve this problem by having the operating system run? Anyone want to take a guess? So I have a thread running, and it's been running some sort of computation, and now it wants to write or read a result from memory, right? So it's going to do that, and then what could I do? Yeah, I could switch in another thread, right? I could switch out that thread and start up something else on the CPU. And then how long would I have taken to do that? How long do you think it takes to execute a context switch? Let's express it in terms of cycles. Yeah, Jeremy. Yeah, the short answer is it's way, way longer, right? If I have a hundred cycles, I don't even have time to get into the operating system before that memory read is going to be done. Right? So it's just too fast.
It's too fast for the operating system to react. When we start to talk about things like I/O, right, if I'm going out and I have to write something to disk, well, that might take, comparatively, days, right? So I've got plenty of time to leisurely make my way into the operating system and choose another thread to run. But these memory access latencies are way too fast. In fact, I'll show you today some of the context switching code from OS 161, and it's hundreds of instructions, right? So again, memory is slow compared with the median instruction on the processor, but it's fast compared with context switching. This is just a demonstration of why we can't involve the OS in this process: it's too slow, okay? All right. And like I said, the CPU is way, way, way faster than disk. So this is like one of those... do they have one of those in Buffalo? I know they have one in Boston, where there's a scale model of the solar system. The Sun is placed at the Museum of Science, and the planets are out, like Pluto is out in the suburbs somewhere, right? They do a mock-up where, obviously, the distances aren't the same, but the distances are in scale, so you can see how big the solar system is and how much space there is. So the disks are like out in the suburbs, right? Maybe memory is like Mercury, orbiting kind of close to the Sun, but the disks are way, way out. And this is addressed by the operating system, and we'll see how, today and later in the class. And then the final limitation of the system is you, right? Human processing and conceptual delays.
And this is something that people usually don't think about when they think about things that the CPU and the operating system are trying to work around. But one of the big ones is you. You are a very sophisticated processing machine, if you want to think about yourself that way, but you are also slow. Slow, slow, slow. And so while you're sitting around waiting to figure out what to do, the operating system has plenty of time to work on other things. To some degree, some of these illusions of concurrency have emerged because of your limitations, not just the operating system's, okay? It's important to keep that in mind. So when I was preparing these slides, I went and looked, tried to find some information, because I was just curious about human perceptual limitations: what are some ballpark figures for delays, in particular delays that people will notice? And what I found is that there's this rule of thumb: 15 milliseconds is a latency that, once you start to get slower than that, people will start to notice. So if you think about it: apparently, in order for people to perceive video as smooth, it has to be at least 25 frames per second, so that's a 40 millisecond delay between frames. For the old telephone systems, apparently they had this 100 millisecond delay budget, which is actually pretty long, right? What they found is that if you start to delay... how many people have ever called a foreign country where there's some pretty serious delay on the line? Like your signal is being bounced up to a satellite and off the moon and coming down on the other side of the world. And you guys know that at some point, when that delay gets long enough, regular patterns of speech start to fail, right?
Because you're used to waiting a certain period of time, based on face-to-face conversations with other human beings who are right there, and once the delay starts to get long enough, it starts to become difficult to have a fluid conversation, right? So let's express these delays relative to a one gigahertz processor. You know, 15 milliseconds is 15 million clock cycles, right? And that's the shortest delay on this slide. So this gives you an idea of how slow you are compared with your computer. I mean, your computer can potentially run 15 million, or, you know, seven and a half million, instructions while it's waiting for you to notice that it hasn't done anything for a little while, right? Okay. So, I think if I didn't include these next couple of slides in the class, there would be some committee on operating systems instruction that would come here to campus and march me away, right? So bear with me, because I think this is just obligatory material that everybody has to learn. So again, back at the dawn of time, we didn't really have modern operating systems, because we didn't have modern computers. We had these fairly complex machines that really ended up doing one thing at a time, right? They were set up to perform a particular calculation, they ran that calculation, they produced a result, and then they were reconfigured to do something else. What do you guys think is the best analog for this type of device today, Jeff? You probably have one. Maybe you don't have one, maybe you just have one on your phone now. Maybe when you grew up you had one. Jeremy, I'm ignoring you. The best analog to this kind of device? Yeah, a calculator, right? How many people had a programmable calculator when they were in high school? I thought that was really cool.
Yeah, I did. So, I mean, these were essentially pretty sophisticated programmable calculators; that's what they were. And there were teams of, you know, dorky white guys like those dudes whose job it was to program them, which really just involved reconfiguring: pulling out some wires and pushing in other wires, right? It's funny, because now that I look at this, I feel like it looks a lot like the job that telephone switching operators used to do, and I think those jobs were primarily done by women. Apparently this is a more sophisticated version of telephone switching, which could be done by men. But anyway, it looks pretty similar to me. So, you know, they sat there and pulled out wires and pushed them in other places, and eventually they had the computer set up to run some calculation, and then it would sit there whirring and clanking for a couple of days, and then some number would come out, right? And essentially, on these computers there was barely even what we would think of as a modern programming language, but to the degree that there was, these early systems were not designed to support multiple users or even multiple processes, right? So multiprogramming and supporting multiple users are things we can talk about in different ways, but these machines just didn't resemble a computer that you guys grew up with, right? You wouldn't really know what to do with this. And when people started to try to reuse pieces of their computation, when we started to see the emergence of languages and other toolkits to program and use these systems, it was really just all abstraction, right? Because there was no multiplexing: these things ran one thing at a time.
And at some point, these computer systems got powerful enough that they weren't just toys for geeks to play with anymore, and people wanted to do things with them. And there were people who were kind of out there and... okay, these graphics are stupid, right? Like, no one wanted to use Firefox on the Mark I. But you see my point. So people lined up with their jobs, and they submitted the job, and the job ran on the computer, and then you picked up the results later. This was sort of punch card computing, right? How many people know someone who ever programmed on a punch card, or heard about this? So this, I mean, it kind of starts to sound like the walked-uphill-to-school-both-ways sort of story, but you guys should listen to those people and respect their experience with computing, because you guys have it pretty good compared with them. So really what we had here was this idea of what's called batch scheduling, right? And we'll talk a little bit about batch scheduling in a few slides, I think, and we'll think about it in relation to what we do now. But this was batch scheduling, and the computer did one thing at a time, right? And at some point, again, we started to see the rise of a more interactive type of application, and now, I would argue, you really had for the first time real computer multiplexing, because you had multiple people, multiple applications, multiple processes that had to share the machine, right? And so again, those two questions that are going to absorb us this semester started to arise: How do I partition the resources? Which processes do I give what? And then also, how do I enforce those partitions?
How do I make sure that the processes don't misbehave and interact with each other in not-nice ways, right? So again, operating systems were really a response to the fact that computers got fast enough and powerful enough that they could start to do multiple things at once. There started to be enough resources on the machine that you could support multiple users, you could support multiple programs, and that's what gave birth to these classic scheduling ideas, right? And one way of thinking about this is that it used to be that computing time was very valuable, and human time was, in some sense, less valuable. In that sort of environment, what you do is you schedule the humans, right? So in general, what we try to do in operating systems is prioritize the things that are valuable. If the computer is valuable, then you just have the humans form a line outside, right? You try to keep the computer going 24/7 by causing the humans to queue up and wait. And this is kind of how we did this with this batch scheduling idea. People would come, there was no interactive use, you would submit your job and pick it up later. And batch scheduling still has some nice features that we will talk about when we get back to scheduling in a lecture or two. But then, let's talk about: what is one of the problems with batch scheduling? So I have this idea that I'm going to devote all of the resources on the machine to a single job, right? And I'm going to run that job until it completes.
And again, these were the types of jobs, really computational jobs, that ran to completion, right? There was no interaction. You were computing something, maybe a missile trajectory or some big, complicated calculation, and it was going to sit there running for a while and then pop out a result. But we talked about the fact that the CPU was the fastest component, and even on these early systems this was still true, and things have gotten worse to some degree over time. The CPU is still the fastest component of the system. So what's the problem with this sort of approach? I take one job and I run it on the machine until it completes, with whatever resources I have on the machine; I'm just going to let that job go until it's done. Gina? Right, at any given point it doesn't need all the resources. At any given point in its execution, something is going to be the bottleneck, and this is still true on your systems today. Something is the bottleneck for every process on your system at any given moment in time. Maybe it's the CPU, maybe it's memory bandwidth, maybe it's the network, maybe it's the speed of your disk, but something is the bottleneck. One component is holding it back, one component is determining the speed at which it is making forward progress, and the rest of the components are underutilized if they're not being shared, right? So if they're not being shared, there's one component out there that's holding you back, and all the other ones are saying, hey, I'm available. So here's an interesting question. The CPU is the fastest, most expensive component of the system. If you're doing things like big clunky calculations, why might it not be such a bad idea to batch schedule a system?
Yeah. I mean, to some degree these early jobs were probably really CPU bound, right? They were doing big calculations. Maybe they were memory bound too, because they might have been reading big data sets, but they probably weren't; they probably took small inputs, did a huge amount of computation, and produced a small output. So for most of the time during their execution, they were CPU bound, and so, okay, maybe this isn't so terrible, right? But in general, as Gina pointed out, at any given point in time, some component on the system is the bottleneck and the other components are underutilized. Okay. So the solution we came up with for this problem is this idea of context switching, okay? And the goal of context switching, the goal of really almost everything (until we get to file systems, where the goals become a little bit different), the goal of the CPU and memory abstractions and multiplexing techniques we're going to talk about for the next few months, is to take a single set of resources and partition them in a way that makes the sum of all these parts look way, way bigger than what's actually there, right? So it's this kind of clever magic trick, and if we can pull it off, we can make one computer look like 18 computers, or 52 computers. Every process looks like it has its own computer that is as fast and as powerful as the underlying machine, but in reality all the processes are sharing the same machine, right? So again, it's this pretty intense sort of magic trick, you can think of it that way. We're trying to divide the resources so efficiently and so cleverly that applications don't even notice that they're sharing the machine, right?
That's the overall goal. And one of the main reasons we do that is these slow devices. We talked about memory, and memory isn't really our problem here, but slow devices, frequently disks. For long periods of time, disks have just not gotten very fast. I wish I had this up on a slide, but if you look at the spread between disk latencies and CPU latencies, it's been fierce. CPUs have been going up and to the right, Moore's law; now we're spreading out a little bit into multiple cores, but we're still doing a really good job of making that component faster and faster. Disks aren't getting that much faster. They've gotten a little bit faster, flash and solid-state disks have helped, but the CPU has still outstripped them and continues to do so at a much greater rate. And we'll come back to this when we talk about file systems, because it explains parts of file system design.

Yeah, Rima? I mean, it would be great if they did, in theory, and it's too bad. What typically tends to happen, I would argue, is that you can do a very good job of system design for a specific set of relationships between components. Early systems had a certain ratio that was established between the CPU and main memory, or between the CPU and the disk. When those relationships change, some of your design assumptions break down and the system doesn't work very well anymore. So over time those relationships have changed, and even if we came up with some new technology, and it's possible, people are working on it, there is research going into essentially memory-like chips that save state. I can't remember what it's called.
But it would essentially mean that your system would never have a disk again, because it would have memory that was as fast as memory but was able to save its state like a disk. Yeah, something like that; I can't remember the name of the technology. But you can imagine how much that would change system design, if the contents of memory were persistent. It's actually kind of fun to think about all the changes that would bring about. But again, even if a technology like that came out tomorrow, we would have to revisit the architecture of the way that we build systems, because all these relationships would have changed, and some of the things we do now to hide these latencies we wouldn't need to do anymore. And then there would be new opportunities as well that would emerge. Jeremy, did you have a question or comment? Yeah, so the idea is non-volatile RAM: memory that is as fast as RAM, but that doesn't lose state like RAM does when it's powered off.

So a lot of the goal of these early systems was trying to keep the processor active: hide the latencies of these slow devices and try to keep the processor busy. And that makes some degree of sense, because the processor is really the component of the system where something is happening. If you think about it, the processor is the part of the system that changes things. Well, there are certain things that can bypass the processor if you have clever architectural tricks, but a lot of what happens on the system goes through the processor in some way. And so if the processor is busy, the assumption is that the system is well utilized. That's not a perfect assumption, but it's not a bad one either.
So, I don't know where I was trying to go with this; I'm trying to remember exactly what the point of this slide was. All right, I think you get the point. The point is that we have multiple tasks on the system now, and those tasks can be distributed either among multiple users or among the tasks of a single user. I think I just did the slide because I wanted to have multiple icons of the business-looking woman standing there.

Okay, so now what I really want is this illusion. Let's say I have two users on the system: this is what this user is doing, and this is what that user is doing, over time. But in reality this is a single-core system, and so there's only one thing going on at any point in time. So how, and you guys know this from your earlier classes, how is this actually accomplished? How do I get the system to look like this, despite the fact that in reality only one thing is happening at any given point in time? Yeah, Frank?

Yeah, so remember those perceptual limits we talked about before? What I can do is exploit them. This is what's actually happening on your system if you're using multiple applications, or, a better example here, if something long-running is in the background. Say you're playing an MP3, you're listening to music, and you're also browsing in Firefox. What is the system actually doing? It's rapidly switching between tasks, so fast and so well that you don't notice. Because again, you're limited in your perceptual abilities, and so if the processor sneaks away from running Firefox for a couple million cycles in order to write some audio data to the buffers so that your song will continue to play smoothly, you don't notice that.
It's too fast for you. So if you zoom out far enough, this starts to blur into looking like both things are happening at the same time. That's the goal.

So, let's just go through this before we get done today: in order to perform a context switch. A context switch here means that I'm going to stop Firefox. Firefox is running, it's going about its business, it's rendering some web page, and it thinks that it's the only process on the system. That's because I've allowed it to think that, and I'm going to be very careful to make sure that it keeps thinking it. But suddenly I realize that my audio buffer is draining, or the user is typing at the terminal, and I need to allow another process to run.

Okay, so the first problem here is: how does the operating system get control in the first place? Remember, I said that Firefox is running. It's running on the CPU, going about its business. So what are ways that the operating system might get control? Jeremy? I'm going to ignore you. Going to ignore you too. Yeah, some kind of interrupt. So what could generate an interrupt? Tim? Oh yeah, you picked the wrong one; I didn't want to start with that. What's that? Okay. What else? You had one. Okay, you guys with your timer interrupts. Okay, fine. But there are other things that could cause this; pretend I didn't have a timer. What else could happen that would cause the operating system to take control? Yeah, Manny? Well, okay, so a trap is what would cause this, but what would cause the trap? What's something other than a timer that could happen, associated with Firefox, that might cause me to take control? Yeah, like for example a network packet that Firefox had requested might have arrived.
So I might get an interrupt that was generated on Firefox's behalf, saying, hey, there's another packet here for the web page that you're trying to render. So it's not necessarily the case that I need a timer. Or what else could Firefox do that would cause me to take control? Jen? Firefox is running along happily, and then the operating system starts to run. Okay, it could have an exception, in which case it might not keep going; it might have done something wrong and it would crash. But let's say I wanted it to keep running. Simon? Doesn't know. What could it, yeah, AJ? Yeah, and which would that generate eventually? What kind of interrupt? A system call, right. Firefox could make a system call. Firefox could say, hey, I want to read something. I'm done rendering that packet, I want to read another packet from my network socket, so I'm going to make a read call, and the operating system would start running.

But let's say that none of those things happen. Let's say that Firefox is computing digits of pi or something, and it doesn't read anything from disk. It doesn't need the operating system's help; it's just doing something that's completely CPU bound. It is never going to receive an interrupt, there's no hardware interrupt that's ever going to happen, and it's never going to make a system call. This is a kind of contrived example, because that's very unlikely on modern systems, but what would happen then? Pretend I don't have a timer. I know everyone knows the answer to this question: if I didn't have a timer, and Firefox never made a system call, and there was never anything that happened with hardware ever again, what would happen? Spencer? Firefox would just go. Yeah, you guys, I need to learn your last names. Firefox would run forever, right?
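To summarize the three ways the kernel can get control, here is a sketch in C of how a kernel's common trap handler might classify its entry paths. None of these names come from a real kernel (the enum values and `kernel_entry_reason` are made up for illustration); it just restates the hardware-interrupt / system-call / exception distinction from the discussion above.

```c
#include <string.h>

/* Hypothetical classification of the ways control transfers from a
 * running user process into the kernel. */
enum trap_cause { TRAP_HW_INTERRUPT, TRAP_SYSCALL, TRAP_EXCEPTION };

const char *kernel_entry_reason(enum trap_cause cause) {
    switch (cause) {
    case TRAP_HW_INTERRUPT:
        /* A device needs attention: timer tick, network packet
         * arrival, disk transfer complete.  The process did nothing. */
        return "hardware interrupt";
    case TRAP_SYSCALL:
        /* The process asked for service, e.g. a read() on a socket. */
        return "system call";
    case TRAP_EXCEPTION:
        /* The process did something wrong: bad address, divide by
         * zero.  It may not get to keep running afterward. */
        return "exception";
    }
    return "unknown";
}
```

The point of the "Firefox computes pi forever" scenario is that a purely CPU-bound process triggers none of these on its own, which is exactly why the timer exists.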
Or until Firefox gives up the CPU itself: on certain systems, Firefox might be designed to periodically say, hey, you can run now. So there are systems that were built to do what's called cooperative multitasking, where Firefox might actually have to give up control of the CPU voluntarily. But let's say I don't want that to happen. I'm the dictatorial operating system, and I don't want Firefox to run forever. So what do I need to do to take control? This is what we just talked about: it's possible that none of these things will happen, so I need some way to ensure that the operating system will always be able to run within a bounded amount of time. And the way I do this is very simple: I set up a timer, and those timer interrupts give the operating system a chance to run. And yes, in general, every time the timer on your system fires, the operating system will run. Every time. Sometimes it won't do much; sometimes it'll just trap into the kernel, the kernel will look around, decide it doesn't need to do anything, and just let the interrupt complete and let the process that was running continue. But every time the timer fires, and it fires a lot, you have this overhead of having to enter the kernel, save some state, look around, figure out what's happening, and then allow things to proceed.

Yeah, exactly. So this is the mechanism that allows me to do this. Even if these processes weren't generating interrupts, even if they weren't making system calls, I would always be able to time slice in this way if I want to, because I have some timer that fires fast enough to allow me to take control of the system at regular intervals.
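The timer mechanism can be sketched like this. Everything here is hypothetical (the tick count, the `thread_yield` stub, the names); it only illustrates the flow just described: the timer fires, the kernel runs every time, usually does nothing, and periodically forces the running thread to give up the CPU.

```c
#define TICKS_PER_SLICE 10   /* assumed: preempt every 10th tick */

static unsigned long ticks_since_switch = 0;
static unsigned long yields = 0;   /* bookkeeping for the stub below */

/* Stub standing in for the kernel's real context-switch entry point. */
static void thread_yield(void) {
    yields++;
}

/* What might run every time the hardware timer fires. */
void timer_interrupt(void) {
    ticks_since_switch++;

    /* Most ticks: nothing to do; return to the interrupted thread. */
    if (ticks_since_switch < TICKS_PER_SLICE) {
        return;
    }

    /* Time slice expired: force the current thread off the CPU so
     * the scheduler can pick another one. */
    ticks_since_switch = 0;
    thread_yield();
}
```

Note that even the nine ticks that "do nothing" still pay the cost of entering and leaving the kernel, which is the overhead mentioned above.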
So frequently, on many systems, the timer is one of the highest-priority interrupts. It can be masked in certain cases, but it usually isn't, and it always fires, so it is always going to make sure that the kernel will have the chance to run.

So here's the last thing I'm going to leave you with. Yeah, Rima? Okay, well, this is a good question, so let me do this first and then we'll come back to that. This is what I want to leave you with today, and then on Wednesday we'll talk about how we do context switching. This is an important thing to keep in mind as a programmer, and a particularly important thing to keep in mind during this class and any other time that you're using these types of systems, because it creates a huge number of headaches for you as programmers. Here's the big thing to keep in mind: if you write a user process, that user process can be stopped at any time, and an arbitrary amount of time can go by before it is started again. Dealing with this is really the subject of assignment one, and all sorts of programmer effort in language design and other things has gone into just solving this simple problem.
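One concrete headache this creates: `counter++` compiles to a load, an add, and a store, and a context switch can land between any two of them, so two threads incrementing a shared counter can lose updates. The sketch below, plain POSIX threads and not anything from the lecture's system, shows the standard fix: a mutex, so the three steps happen as a unit with respect to the other thread.

```c
#include <pthread.h>
#include <stddef.h>

#define INCREMENTS 100000

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < INCREMENTS; i++) {
        /* Without the lock, the load/add/store of counter++ in the
         * two workers can interleave and increments get lost. */
        pthread_mutex_lock(&lock);
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

long run_two_workers(void) {
    pthread_t a, b;
    counter = 0;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return counter;   /* with the lock held, always 2 * INCREMENTS */
}
```

The "explicit steps to make sure certain things stay the same" mentioned below are exactly this kind of synchronization, which assignment one digs into.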
When you write your sequential programs, in C or Java or whatever they are, unless you arrange things otherwise, at any point in time, between two lines of code, sometimes in the middle of one line of code, your program can be stopped, an arbitrary number of things can happen, and then your program will start to run again. This is something that's very difficult to think about as a programmer, because you're thinking sequentially. It's like, well, I put that value into that register, and then I do this, and then I run this calculation. That's not actually what happens. What happens is that, sometimes in the middle of a line of code, and certainly in a language like Java a single line of code is many instructions, things can stop. Almost anything can happen to the system, unless you take explicit steps to make sure that certain things stay the same, which we'll talk about. Anything can happen, and then your process gets restarted.

So I'm going to go back to Rima's question, just to finish class. Would there be a case where I could have timer interrupts that fire too rapidly? You guys can start packing up; this is just an interesting question.
So during the timer interrupt, one of the things that's going to happen, which we'll talk about on Wednesday, is that I'm going to save the state of the thread that was interrupted, and then frequently I'm going to run some code inside the operating system kernel that's going to try to figure out what to do next. And what to do next might mean that I need to schedule a different thread to run. So scheduling is the policy that goes along with context switching as a mechanism: context switching is the mechanism for switching between threads, and scheduling is figuring out which thread to run next.

So think about it this way: every time there's a context switch, every time this timer fires, there's a fixed amount of work at minimum that the operating system will do. As the interval gets smaller, that fixed amount of work starts to dominate the total amount of work that happens on the system. If I schedule twice as often, then the amount of time I spend in the context switch code doubles. So if I make the interval smaller and smaller and smaller, eventually I might be spending 50% of my time just context switching back and forth, in and out of the kernel. And that would be kind of dumb, because the whole point of the system is not to run context switch code. The point of the system is to do the stuff in between, when the threads the kernel chooses actually get to run. So there is a balance here: if I make the interval too short, then I'm basically spending all my time scheduling and context switching, and I don't do any useful work. What happens if I make it too long? What happens if I make the context switch interval, let's say, one second? Yeah, and now I'm way past my human perceptual limit of a hundred milliseconds, right?
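The tradeoff above can be put as back-of-the-envelope arithmetic: if every switch costs a fixed amount and useful work runs for one quantum between switches, the wasted fraction is cost / (cost + quantum). The function and the numbers here are made up for illustration; only the shape of the curve matters.

```c
/* Fraction of CPU time lost to context switching, assuming a fixed
 * per-switch cost and one quantum of useful work between switches.
 * Both arguments in the same time unit, e.g. microseconds. */
double switch_overhead(double switch_cost_us, double quantum_us) {
    return switch_cost_us / (switch_cost_us + quantum_us);
}
```

For example, with an assumed 5 microsecond switch cost, a 10 millisecond quantum wastes only about 0.05% of the CPU, while a 5 microsecond quantum wastes half of it, which is exactly the "50% of my time just context switching" scenario. Halving the quantum roughly doubles the overhead fraction as long as the quantum is much larger than the switch cost.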
So if my terminal is running and Firefox is sitting there trying to paint, a second goes by, and then boom. It would actually be kind of fun to see what your system would feel like that way. You'd notice: the clock would move forward very jerkily. It would be weird. So you have to be fast enough to get below those human perceptual limits, but slow enough that you don't overwhelm the system with context switching. That's the balance.

All right, I'll see you guys on Wednesday. Jeremy?