Achtung! Time! Hello! Hello! Thank you! I can't offer you a free pizza, so Angus definitely has me beat in that particular realm, but what we can go over is some juicy tidbits of operating systems. There's still some conversations going on... thank you. Okay. Again, what I'd like to do today is review a little bit of what we talked about in the last session, in terms of: we've got the CPU thingamabob and threads, and how we can coordinate between the two. The balance of the class is going to tip into how we actually handle scheduling, threading, and processes in the context of modern operating systems. So, questions before we actually begin, from either last class, Monday's class, what have you? We talked about multiplexing. We talked about threads. We talked about interrupts. We talked about syscalls. Going once, going twice. Everything clear as crystal? Ready.

General questions? Again, I can't get into details for obvious reasons right now, but questions about what's coming up? Obviously, checkpoint 2.1 is due at 5 p.m. If you're here today, I assume that is probably a pretty good sign on that. We do have office hours after class, right through the deadline itself, so if you're still pounding away you can certainly make use of that too. But again, just a reminder on checkpoint 2.1, and then, to reiterate, the balance of assignment 2 is going to be due in three weeks. One small announcement about that: I know we've been hitting you with a couple of deadlines, but watch it with the balance of assignment 2. It is a big one; this checkpoint was honestly only a small part, to kick-start everyone. I would definitely not wait until Monday, or next week, before you start looking at the rest of the assignment.
As a matter of fact, I got asked what a good time checkpoint is, in terms of where we should be. Ideally, assuming that you're completing the console test today: if you could have most of the file stuff wrapped up and pass the file-only test, which basically means read, write, open, close, and lseek (maybe not all of the file functionality, but at least the basic stuff), by early to middle of next week. The reason I mention that is that the balance of the project, the process-control syscalls like fork, waitpid, getpid, and execv, those are big ones, okay? They are much more difficult, and they're going to take considerably more time than the file ones, so you don't want to divide it up two weeks and two weeks, because you're going to be shortchanging yourself on the other end. Plus, I know you have these things called other classes, and most professors have this bright idea about scheduling midterms either right before spring break or right after spring break. I know Jeff has scheduled your midterm right after spring break, but most of you probably have at least one test before spring break, so that's also going to interfere with your completion of assignment 2. So again, to the extent that you can, work ahead. I'll try to post some suggested timelines on the discourse, but an initial checkpoint to hit after this one would be early to middle of next week: have the file-only test passing. If you do not have it passing by that point in time, speak to the staff about exactly where you are and what we can do to get you back on track. Just a short FYI about that. Questions about that stuff? Right.

Transition between threads, coming up. There we are. There we go. Context switch. Okay. Now, what's a thread again, and what's a process again, and how does this all wrap up?
Well, we've got this CPU thing up above that is being chased by a lot of these programs that we're trying to run. Simply put, a thread, think of it as an execution context. It's where one particular processor is running one particular thing at a point in time, and then at another point in time it runs another thing, to be really technical about that. So how do we switch from one thing to another thing? That is what we call a context switch: going from one thread to another. And again, the mechanism we use to enforce switching from one thread to the other is what we talked about: getting into the kernel. Well, it could be a syscall, okay, but if a syscall is not forthcoming, we know there's going to be one type of interrupt, and that's a... what type? A timer. Exactly right. So again, be clear on the distinction between a context switch and an interrupt. Roughly put, an interrupt is any sort of method to get into the kernel, and once we're in the kernel, we're going to run a scheduler (more on schedulers in subsequent weeks; that's the subject of PhD dissertations). Then, once we pick a thread to run, we do a context switch in the kernel, from one thread to another. That's where we go from one thing to the next thing.

All right, CPU limitations. Remember, it's a little bit weird here, because what is it? There's only one. And, as odd as it sounds, being faster can be a bit of a limitation. It's good because we can run lots of stuff, but what's the problem with having a CPU that's much faster than the rest of the system? It races ahead, and then what does it do? Exactly: it twiddles its silicon thumbs. Okay, so remember we talked earlier about batch scheduling; this is the old model of computing, before we had modern operating systems, and we often had to wait.
In other words, let's say programmer A walked in with a stack of punch cards and ran, say, some insurance-processing job, and while the job is running, the CPU finishes its calculations and is waiting for something from the disk drive. And it waits. And waits. And waits. I quote Casablanca on that. The upshot is, at this point we only have one thing to run, so that's going to result in a lot of inefficiency, and that means money down the drain.

All right, so instead, what do we do? There we go. Instead of having just one thing, we're going to run a bunch of things, and how we can actually do that, again, is with an interrupt. And if an interrupt is not forthcoming via, say, a syscall, one way we can always make sure the kernel will eventually get control is with a periodic timer tick. I think I did mention in the last class that there are, roughly speaking, two types of clocks. There are the timers that you people, as programmers, are very often going to set yourselves; we're not talking about those. We're talking about the periodic timer that the system is always running in the background, to make sure we can preempt into the kernel.

All right, there we are. Oh, one other thing: the illusion of concurrency. Remember, Jeff talked a lot about that before. Strictly speaking, if we only have one CPU, we're not doing more than one thing at a time, but it appears to be more than one thing at a time. Why? Because we're doing a lot of things really, really quickly; it's like the illusion of motion in the movies. One problem with that is, since we do have a lot of things running, and they're being interleaved amongst themselves, we have to deal with synchronization.
So this is one of the downsides to the whole threading model; we'll be alluding to this at the end of the class. Threading is great, in the sense that we've got the illusion of concurrency, and now, with multiple CPUs, we even have real symmetric multiprocessing, but it makes things a pain to program. That's what you're dealing with right now in OS/161. Kernel programming is a lot harder because we've got these things: locks and spinlocks, avoiding deadlocks, and all that fun stuff. We have to coordinate this mess.

Okay, last bullet point here. Now, we stop a thread, and we know we're going to start it later on; in other words, we want to move from one thing to the next thing, but then move back to the first thing at a later point in time. So how do we make it appear that nothing has actually happened? Again, this is back to the analogy of poor Han Solo frozen in carbonite. Hopefully, when Han Solo gets woken up again, Han will think it's the same time and day as at the point the actual freezing took place. That's what we want to happen, and how can we actually do that? We want to make sure that this thread gets restored back to its original state.

So, thread state: again, registers and stack. Those are the two main components of any thread, or execution context, and again, this is the thing that runs on the CPU. Now, I should say there is also some additional administrative goo that goes with a thread to maintain its state, and you can take a look at some of that goo yourself: pull up the definition of the thread struct in OS/161 and you'll see there's a whole bunch of different fields in there, things like the thread state. Is it currently sleeping? Is it currently a zombie? Is it currently runnable? Whatever it is.
So, okay, we need a whole bunch of these administrative fields, but for practical purposes, again, it's the registers and the stack. Think of registers as being, honestly, really a type of local variable. This is how it all blends together. Let's say that you're just writing a regular program in Java and you've got a bunch of local variables; remember, local variables never have to be synchronized, because each thread keeps its own copies, eventually saved on its own stack. Well, this is the same can of worms here, because registers are often used to hold (to the extent possible, we'd like them to hold) local variables, so that's something we need to save on the stack. That's why we were talking about a trap frame. Remember, when we have an interrupt and we want to switch from one thread to another, we've got to save the registers, the local variables, onto the stack, because again, the stack is where we put local variables when they're not in registers, when we have too many of them, yada yada. I should also say, let me digress for a wee bit here.
We're talking about operating systems, and we're talking about trapping into a kernel. Let's say your program calls read: I want to poke the kernel, I need to ask for help from the kernel. You can think of it as a type of subroutine call, and in one sense it really is, because I am calling what looks like a function, it does stuff, and then it returns to me with a return value, and I have parameters that I have to populate. It's very similar to what goes on when, say, routine foo calls bar. What happens? Inside this user context, you push another stack frame onto the stack and go to the next subroutine, and if you took 341, remember, you have to save out some registers; other registers you can use for parameters and whatnot. So it's a good analogy. It's not exact, in the sense that when we're trapping into the kernel we've got to make sure that everything is restored, because we don't know what the user is going to be doing in between, and vice versa; in other words, we have to have this huge firewall between the kernel and user space. But again, the rough analogy is that you can think of it as a type of subroutine call, in terms of you still saving registers and returning values.

Question? Comment? In other words, where do we save the trap frame? Yeah, good question; that's what we talked about, I think it was on Monday or so. Where do we actually save the trap frame? Again, think of the trap frame as being, well, actually it is the registers, and, especially if we're doing a syscall, those registers contain the parameters, and, like any subroutine call, that gets crammed onto the stack. But we cram a lot more onto the stack.

Now, one other thing I should mention is: why do we put it onto the stack? I think someone asked me that. Why couldn't we, say, simply save the trap frame onto the heap or whatever else? Well, think again of the subroutine model: partly because we can have nested traps, in the same way we can have nested subroutine calls. In other words, I could call read; that generates a trap, and I have to cram a trap frame onto the stack. While I'm in the middle of sys_read, let's say I take a page fault because I'm trying to access memory. Bingo, that generates another trap, and another trap frame gets crammed onto the stack; it's a type of subroutine call (not really, it's going on behind the scenes), but we go to our memory manager. And while I'm in the middle of my memory manager servicing my page fault, a timer fires. Well, now a third trap frame gets crammed onto the stack, and I have to return back up, peeling off these various trap frames, until I eventually return to the caller. So one reason why we use the stack is that, again, you can think of it as analogous to a subroutine: we don't know how many levels we're going to have, and where do we put those levels? That's one of the main purposes of the stack.

The other thing, too, is that remember, the stack is private to a particular thread. If we crammed the trap frame onto the heap, well, now we're going to have to coordinate: this is the trap frame for which particular thread? We'd have to keep it separate from other threads, and that just introduces coordination problems. Let's just stick it onto the stack. Does that answer your question? There was a hand; thank you, good catch. Okay, basically, when you have an interrupt, let's say you are in user space and you take a trap, that trap frame is put onto the kernel stack. All right, there was a question about that last time; the quick version is, because we want to make sure that our context switch
stuff is not polluted, and we have control over the kernel stack, et cetera. Actually, if you want (it's a bit of a digression from your projects, but it really does help, at a low level, to understand what's going on), look at how the thread subsystem works, and that explains a lot. So, nutshell version: if you are in user space and a trap happens, that trap frame gets crammed onto the kernel stack, and then, obviously, while you're in kernel space, if there's another trap, yet another trap frame gets crammed onto the kernel stack too.

Other questions? Yes? Essentially, no. Okay, if we're talking about signal handling, right, yeah. And, by the way, since in that case we do save a trap frame out to user space, we then have some security and sanitization issues, but essentially, no, that is correct. Yes. All right, there we go.

Okay, now, context switches are good and bad. One of the nice things is when we're going from user space to kernel space, or from one kernel thread into another kernel thread, or what have you. Oh, and by the way, I should be very clear, I'm glossing over something: there are trap frames, and there are also switch frames. If you see some code in OS/161 about that (I think I mentioned this last time), the switchframe is for the moment the context switch itself happens, so just make a mental note of that right now; they're not quite the same thing. But back to context switches. One of the upsides is that, because we save all this stuff, we have a pure, unadulterated version of the thread state. We save a lot more registers than we would if we were, say, doing a simple subroutine call from user space to user space, because we need to make sure everything gets saved and restored in total. That is great for security, that's great for stability, but again, we're saving an
awful lot of junk. So what Jeff is trying to get at here is that these context switches ain't free; simply put, they take a lot of time. Well, wait a minute, I guess there are two ways of going at this. Say each context switch takes a lot of time, so I want to minimize that. One thing I could do is lengthen the time between context switches, right? So, instead of the timer tick firing 100 times a second, I have it fire two times a second. That is going to greatly decrease the overhead of context switching, at the downside of what? If I have the timer tick only twice a second, we're definitely going to have some inefficiency, because you're going to have a lot of cases of: okay, I finished up a calculation, I'm now waiting, and I have to... well, wait a minute, that's a good question. Let's say I'm doing some calculations and I now need to wait on some data from the disk. Do I need to wait for the timer tick? Why am I hesitating? Because what am I going to do? Well, I could busy-wait, but I want information from the disk, and how do I get information from the disk? A syscall. So I'm going to issue a syscall, and that's going to trap into the kernel right then, so I don't have to wait for the timer tick. In that particular case I get lucky: the kernel will put me to sleep.

All right, but there might still be another problem with that. How about you people? You are playing... what is the latest version? Duke Nukem? I'm really dating myself. Or whatever it happens to be; Castle Wolfenstein, even worse. All right, let's say that you are playing a first-person shooter, and you have context switches twice a second. Problems? Why did I pick something like that? Yeah, okay, in other words, we've got probably a bunch of stuff going on, and again, remember the movie illusion here. There's a reason why, I think it was Thomas
Edison, picked 24 frames a second: it was the slowest speed at the time that they could manage with the really old mechanical devices that was still fast enough to maintain the illusion of motion to the human brain. In other words, if we switch only twice a second, the user interface is going to appear really choppy to you, so we want it faster than that. My point is, there's a trade-off here. We don't want to go too fast, because then all we're going to be doing is... what? If I have a one-gigahertz processor and I have the timer tick one billion times a second, what am I doing? Right: all I'm doing is servicing timer ticks. In fact, I probably can't even keep up with that, so I'm going to have to adjust the firing rate. So far, so good? Makes sense? Okay. This, by the way, is the sort of thing that Jeff is going to be asking you to digest when it comes time for the test: the trade-offs, the pros and cons of some of these policies.

All right, wait a minute... there we go. Oh, just general questions? Complaints? No hands, okay.

Threads. Okay, we talked... there we go. Okay, the thread context: registers and stacks. Now let's talk a little bit about what gets shared: threads and processes. And again, what is a process? Roughly speaking, it is a bunch of threads that share the same user context. And one thing you can ask yourself is: remember fork, and remember Jeff's discussion about fork? What does sys_fork duplicate? When I want to duplicate a process, does it duplicate the registers, or the stack? Can you talk about that? Interesting... maybe. Well, what are registers and stack? They are part of the thread state, right? So let's think about this for just a little bit. Maybe go down to the bottom here: does the memory get duplicated in fork? Okay, the address space gets duplicated. So, in other words, what is shared
between threads? Memory. So if we have one process with two threads, they get to share the same address space. Similarly, remember, fork duplicates the file descriptor table; same thing here: the threads in a particular process get to share a file table. So, to bubble back up to the top: we duplicated those things, but did we duplicate the registers and the stack? What's a thread? It's an execution context. If I duplicate an execution context, I now have two threads that are doing what? The same thing, right? That defeats the purpose of having multiple threads. In other words, we want multiple threads within a process, but we want them doing different things. So a thread's stack does not get duplicated by, say, a call to sys_fork. As a matter of fact, it's like this: you call thread_fork, right? But what does thread_fork do? It creates a new thread.

And actually, I lied, because if we have one process with one thread, we do have to start the child process off as a clone of the original process. Makes sense? But if I have one process with multiple threads inside it, it makes absolutely no sense for the threads to be doing the same thing, because the whole point of multiple threads within the same process is that they're doing different things. But again, just to clarify: when we do call sys_fork, at least initially, we want the child to be doing, at least apparently, the same thing as the parent.

Yes? And... yes and no. It's like this: since we're returning to user space... if you take a look at OS/161, what we really care about is returning to the point in user space where sys_fork was called. And what OS/161 actually does is, when you go back into user space, it resets the kernel stack pointer to the top. So, in essence, a
thread can never really return to where it was in the kernel once it has gone back out to user space. That's why I was mentioning earlier that you can think of user space calling a kernel syscall as a subroutine. Does that make sense? In other words, whatever our kernel thread context was, we really don't care, because we're going out to user space, and when we come back from user space, we're in essence going to have a clean kernel stack. That is the specific OS/161 implementation of it.

Other questions? Yes? You know what, I do not know the exact numbers on this. I know you can actually fiddle with it in OS/161; I think it was at like 25 megahertz, but again, that's an emulated system... wait, no, that's actually the clock frequency. So you've got me on that; I really do not know. It's going to be, I know, faster than the usual human perceptual limit and slower than the clock frequency of the processor. I gave myself really wide bounds on that one. Other questions that I can try to answer? And I should also say, that's something you can change dynamically, too, depending upon what's going on.

Already... ah, this is a nifty little... oh, there we go. Okay, this is a nice little diagram, and initially we have a process, actually two processes. What does a process consist of? Well, we have this address space (remember our different address segments; you'll be toying around with those in assignment 3), we've got our file table, with handles that are pointing to open files, and as of right now we just have one thread. We can add more threads to this; again, for the purposes of OS/161, you're not responsible for implementing multiple threads per process. If you want to, go right ahead: that is part of assignment number seven, due on August 15th. Aside from that, though, right now this is the model you're dealing with in OS/161, and it certainly did represent the model of historic
operating systems, before they decided to get funky with multiple threads.

Okay, why do we want to use these threads? Part of it goes back to the history: remember, batch processing was inefficient, so we need to use that CPU to schedule other things, plus we have this whole thing about the illusion of concurrency. But there's another thing, from a philosophical aspect; take a look at number two here. Sometimes it's a good way of thinking about things, of conceptualizing the problem at hand that you're trying to solve: sometimes you have a problem that can easily be divided up into parts, and so, you know what, we're going to stick different threads on them. An example or two of that in just a little bit. Number three here, we've already talked about that a lot: simply put, if we've got delays, say the disk drive is holding back the CPU, well, let's not hold back the CPU; we'll schedule the CPU to do something else.

Ah, okay, there we go, the kitchen analogy. The kitchen is the process, and the cooks, roughly speaking, are the threads. They have their own address space, in this case the room here, and within a particular process they're doing related but somewhat different things. They're all preparing food: maybe you've got the pastry chef, you've got, who knows, the entrée chef, and they're all preparing something for one particular meal, but again, they are doing different things at different times. The last part here bears mentioning: they have private state, but they can communicate easily. What we're talking about is that, because they share the same address space, even though they have individual stacks, they can still pass data back and forth between each other very quickly. In other words, the pastry chef can yell at the entrée chef to shake a leg if things
are going too slowly, or what have you; they're standing right across the room from each other, right? Oh, "must coordinate": coordinate what? We're also talking about synchronization there. In other words, ideally we want to make sure that the dessert comes out not before the entrée, but not too long after it, either. So again, this is timing, coordination, communication. That was correct. Oh, this did not show up on my computer; I was wondering what happened to that meme. I just got a completely white screen when I was preparing for this lesson. Okay, that is the meme.

Ah, a philosophical way of looking at things: threads versus events. Now, ultimately, on most operating systems, things are implemented with threads, but that's not necessarily always the best way of thinking about things. Sometimes there are better ways of engineering things, of designing things, and events are a good example. Think about it: you've got a GUI, and remember back to, say, 115 and 116, you've got all these buttons on the screen for whatever game it was that Carol or Alphonse had you doing for that particular project, and each button is going to be linked up to some sort of an event handler, an event servicer. I really don't care about the threading abstractions going on in the background; what I care about is: if this button is clicked, how do I handle that event? As far as I'm concerned, I write code that services that event, and then it just runs to completion. What's going on with the threading subsystem? That's something I really just don't want to think about. As opposed to, if I take a threading model, I'm probably going to be much more explicit about things like synchronization and blocking. Sometimes that's good, sometimes that's bad. So again, it depends what you're doing and what your approach is on
that. Okay, naturally multithreaded applications, here we go: things like a web server. In other words, let's say you go to, who knows, foo.com, and at foo.com you're requesting a whole bunch of, who knows, JavaScript snippets and JPEGs, GIFs, what have you, and in effect what the web server is going to do is send you a whole bunch of snippets over the web. So it's going to have a whole bunch of worker threads to send you each of these individual things. They're all part of the web server's process, but while one thread is grabbing one graphic, another thread is grabbing another graphic, and that'll speed things up from the web server's standpoint: I've got a request for 10 items, so rather than servicing them seriatim, I'm going to have 10 threads work on them simultaneously. That's what we mean by something that lends itself easily to multithreading.

Similarly, on the browser end, you have different tabs within the same application. Well, that application is a process, so we want, say, Firefox, Chrome, what have you, to communicate within itself about its state; but within that state, these particular threads, one is at foo.com, another is at bar.com, and they're doing different things, so we want different threads running them.

Then there's scientific or big-data crunching. This is kind of what Jeff's research lab does: a lot of data processing for data from smartphones. We have a testbed of 100-ish or so phones out there, and we get these big blobs of data, and periodically I and others have to go through and crunch lots of it. Well, you know what we need to do? We've got data collected from 100 phones: let's crunch the data from phone one, from phone two, phone three, what have you. That's something that is what we like to call embarrassingly
parallelizable: if we simply want to see what happens on each of these phones, we can handle them as individual threads. Makes sense? My point being, sometimes things like buttons on a GUI are easier to visualize and code for in terms of events; other things, like servicing web requests or crunching data from mutually independent sources, are better to think about in terms of threads. With the caveat, I should say, that under the hood, event-driven programming is still implemented with threads; hence, if you look at Java, remember there's this thing called the "event dispatch thread" (EDT). But that's another story.

All right, why not processes? On this last one here, we can use threads to handle tabs, or threads to parallelize stuff, or what have you. That's great, so why don't we turn embarrassingly parallelizable work into processes, just fork off a whole bunch of independent processes? Well, in theory you could do this. In other words, it's not so much that we can't do it in principle, but we get into some, what's the word I'm fishing for, time and performance issues. Because what's the whole point of a thread versus a process? A process is a bunch of threads that share, among other things, the same memory space. It's easy: remember, the pastry chef can shout at the entrée chef because they're right in the same room. Whereas with a different process, let's say the kitchen didn't receive its order: you're going to have to pick up a phone and call a supplier, and that's going to be more overhead, more cost, more delays. Because remember one of the biggest things in modern operating systems: multiplexing, security, process isolation. Threads trust each other, so maybe make a note of this, on paper, what have you: threads within a process trust each other; processes, even if they want
to trust each other, are not allowed to trust each other, and the kernel enforces that, with very good reason: stability, security, what have you. Think about Timberlake: that machine has processes running from multiple different users. Do you want some other user to have access to your address space? No, you do not. Hence the concept of process isolation.

The thing is, this is great for security and stability, but IPC, inter-process communication, becomes more difficult, because all these walls are in place and the kernel has to enforce them, which adds overhead. Yes, we can do inter-process communication; anything we can do with threads we can do with processes, but in general it's going to be much slower and less efficient. If you've ever programmed in Python and had to deal with the GIL, the global interpreter lock, you know what I'm talking about: it's a long-standing limitation of the interpreter that keeps threads from running Python code in parallel, so people fall back on processes, and it's one of those holy wars. It is what it is.

Questions, comments, complaints? So far we've covered threads versus processes: why threads are sometimes better, what the advantages of processes are, and two different ways of thinking about things, the event-driven view versus the threading view. Moving right along.

Okay, implementing threads. There are a couple of ways of doing this. Down at the bottom here is essentially what we've already been talking about: threads implemented through the kernel directly, the one-to-one threading model. In other words, I'm in a process and I call the pthread library to fork off another worker thread; the kernel handles creating the thread, scheduling the thread, yada yada. So this is the same diagram we had before: one process, same address space, same file table, but now with three threads, each one created and scheduled by the kernel.
Threads managed by the kernel: that's one way of approaching things. Now go back up to the top of the diagram, the many-to-one (M:1) threading model. Here, rather than the process calling the pthread library, the process has one and only one kernel thread; this goes back to the historic model, and we do our own management of threads. In other words: I know what I need to do. I know I need to do a bunch of mathematical calculations, and I also need to fetch stuff from the disk, so I'm going to arrange the timing myself to make sure I never waste time. If I'm waiting for information from the disk, maybe I'll kick that request off and do something else in the meantime. Basically, I am one thread within one process, and while I'm in the user context I'm going to subdivide myself into user-level threads. There are pros and cons to that, as you can imagine, so let's look at them.

First off, anything that involves multiplexing and security carries a lot more overhead. One thing I can do, since I'm in user land, is skip kernel-level multiplexing and kernel-level context switching. Instead, take a look: we have these routines called setjmp and longjmp. I can use user-level routines to switch from one thread to another, and I don't have to save out all those registers through the kernel, because I trust myself. What about preempting other threads? There are a couple of options. One is that I can be nice to myself; this goes back to old versions of Windows, cooperative multitasking, where threads are counting on each other to periodically
yield. Or I can ask for a little bit of help, say a signal or something like that. That's not as overhead-heavy as the periodic timer tick used for multiple kernel-managed threads, but the upshot is I can still implement threads in user space with a lot less overhead.

Here's an example of how we can effect a kind of context switch in user space. Jeff has written up a little nugget here illustrating the setjmp and longjmp routines. TL;DR: I'm going through a loop, iterating ten times. Well, almost: I get to the halfway point, and there I break out of the loop and drop down below. So I print values from zero up through five, then break, and then something else happens. Later on I run this command down here, longjmp, and basically it takes me back to where I was so I can continue. See what's going on? I'm in user space. I went through half of my loop, and in effect I have a programmed, self-implemented save of my context; later on I restore back to it. So what do we actually print? Up to five, I hit my saved state, I break out, and then, guess what, I restore and print out the remainder of the loop. Nice. That's a user-level way to save and restore state: no kernel context switch needed, much less overhead. (This demo didn't come up on my machine at first, but there we go: we nailed longjmp, and don't forget setjmp.)
Okay: the pros and cons of user-space threads. The pro is essentially that the threads can be smaller and much faster; we've already talked about that. We lose something, though, by leaving the kernel out. We lose the overhead, yes, but among other things we can't use multiple cores. Why? Remember, I've got one process with one thread, and that thread operates in a user context; the kernel has scheduled it on, say, core zero. I, as a user, can't say I want another thread to run on another core; I need the kernel's help to do that. And even if I did get that help from the kernel, I'd still have to worry about coordinating among the different threads. So essentially I'm limited to a model where all my user threads run on, effectively, one core; I'm giving up some resources.

Another con: the operating system can't schedule the application. The operating system knows more about the system as a whole, but it knows less about what's going on within a particular process or thread; you can view these as two sides of the same coin. Simply put: with user-space-managed threads, the process can hopefully do a better job of scheduling the threads within itself, but it has a much poorer view of the system overall.

One more thing: since I don't have multiple cores and I have much less control over scheduling, what happens if I block? Say my one kernel-level thread, which appears to be multiple user-level threads, calls read. Well, guess what: I'm suspended, and so are all the other user-created threads, so no work gets done. So it's
more efficient, but it's also less efficient. Make sense? This is the sort of thing that makes another good exam question.

Okay, kernel-level threading is the mirror opposite of that: the kernel has a better view of the entire system and can schedule among different cores, but we pay the cost of kernel context-switch overhead.

Ah, thread scheduling: policies. This is where we get into questions like: how fast should the timer tick run? In what order should we run which threads or processes? That's critically important to the user experience, and critically important to system efficiency too. If you look at OS/161, it really does not have much in the way of policies; it's a didactic operating system, and the purpose is to implement base functionality. In a real operating system you're going to see lots and lots of code devoted to optimization: detecting the current usage pattern, like whether the user has been blocking on a certain type of I/O, or whether there's a particular tweak we can make, say giving more memory to one process or more clock cycles to another, that might optimize things. You can get into a lot of AI and different algorithms there, but that's something we mostly gloss over in OS/161, so if you're looking for that optimization code, you're not going to see much of it, if any.

Questions on this? Yes, good point: how do we even have a scheduler in the first place? The question is: if we have user-level threads, how do we implement a scheduler? There are two parts to that. First, how do we actually get to the scheduler? We don't have a kernel-enforced timer tick, so we either
have to have threads yield, or we have to ask the kernel for help, maybe with a signal or something like that, as we talked about. Second, yes, we now have to have our own internal scheduling algorithm. Say I'm one process with one kernel-level thread, and I've subdivided that kernel thread into ten user-managed threads: deciding in what order they run and how much time they get is exactly code I'm going to have to have in my own user space. And again, you can argue that user space can probably tune itself better, but it's still going to be blind vis-à-vis the rest of the system. Does that answer your question? Other questions?

Okay, that's just about it, and I won't keep you long. Thank you very much for coming; I'm glad to see you held out right through to the end. See you next week. Well, actually, Jeff will see you next week. 5 p.m., checkpoint 2.1. Take care, people.