So, I mentioned this already; let me just quickly take you through it. The requirement here is that pilot input should be polled at intervals no greater than 16 milliseconds, which means no more than one input per 16 milliseconds, and when an input comes it should be written out within 16 milliseconds. The second is that the state of the flight dynamics is updated every 12.5 milliseconds; I mentioned this when we discussed it. The third is the end-to-end constraint: I think I said 150 milliseconds, and 100 for fighter aircraft. By the way, fighter aircraft are by design unstable, more unstable than commercial aircraft, which means they depend on the computer working much more critically than commercial aircraft do. In a commercial aircraft the pilot can take over at short notice; in a fighter aircraft, if the computer dies, the plane is gone, which is why these constraints are much more critical for fighter aircraft. Let me give you the picture here. This is the temperature profile of the reactor. If you do not have power to the coil, the temperature starts to fall at this rate. So if the lowest temperature you want the system to reach is T_low, at which point flow can stop, you want to catch the falling temperature before that point is reached. You have this much time to deal with the falling temperature, which means that once you detect that T_low has been reached, you want to turn the heater on again within that amount of time. So knowing the slope is important; this is where the physical characteristics of the environment come in. Similarly, when you turn the heater on, the system takes a certain amount of time to reach the maximum temperature, at which point you want to catch it, and not when the temperature reaches T_boil. Again, this interval is how much time you have to react; this I mentioned already.
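The point about slopes can be sketched in a few lines: given a rate of temperature change, the available reaction time is just the distance to the threshold divided by the slope. All the numbers below (temperatures, slopes, the names T_low and T_boil) are hypothetical; the lecture only states the principle.

```python
# Sketch: deriving a reaction deadline from the environment's physics.
# Numbers and threshold names are illustrative, not from the lecture.

def reaction_deadline(t_current, t_threshold, slope_per_sec):
    """Time available before the temperature crosses the threshold,
    assuming a constant rate of change in degrees per second."""
    return (t_threshold - t_current) / slope_per_sec

# Cooling after power loss: 300 C falling at 2 C/s toward T_low = 250 C.
deadline_cool = reaction_deadline(300.0, 250.0, -2.0)   # 25 s to restore power

# Heating: 250 C rising at 5 C/s toward T_boil = 400 C.
deadline_heat = reaction_deadline(250.0, 400.0, 5.0)    # 30 s to shut off heat

print(deadline_cool, deadline_heat)
```

The steeper the slope, the shorter the deadline; this is exactly how the physical characteristics of the plant turn into timing constraints on the software.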
So, you can convert all of this into a set of concurrent tasks which do the actual control. They have sensor inputs, like point B being reached or point A being reached, and they output to the actuators: forward on, backward on, things like that. Same thing here: you measure the temperature into a variable, check whether the thresholds have been reached, and if so do something; otherwise do nothing. Very simple, but a good example of how control tasks are usually programmed. I have been talking about scheduling and operating systems, so that is where we will go next. The goal of the real-time system, which is the core of what we are talking about, is to make sure that events are reacted to, that predefined actions are performed within specified time intervals; the key is specified time intervals. And we use the term feasibility checking, also called schedulability analysis: is something schedulable, is something feasible, doable in a way that all the time constraints will be met? There are lots of different ways of doing this; I am sure some of you have seen some of these before, but probably not in this combination. And these relate to the various characteristics of real-time systems we started talking about: is it dynamic, is the laxity large, is it safety-critical versus firm versus soft? Depending on the answers to these questions, we will have to choose between these paradigms, paradigm meaning classes of scheduling algorithms. The key differences are: when do you do the scheduling feasibility checking, when do you check whether it is possible to do something, do you do it statically, meaning at design time, or when the system is running? The second is: what is the result of the analysis? Is the result itself a schedule, or is it simply a yes-or-no answer? It can be one or the other. So that gives us four combinations.
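A control task of the kind just described can be sketched as a simple read-compare-actuate loop. The sensor function, thresholds, and loop length below are stand-ins, not anything specified in the lecture; in a real system the loop body runs once per period under the scheduler.

```python
# Sketch of one periodic control task: read the temperature, compare
# against the thresholds, drive the heater. Values are invented.

T_LOW, T_BOIL = 250.0, 400.0

def read_temperature(tick):
    # Stand-in for a real sensor read: a synthetic sawtooth profile.
    return 240.0 + 20.0 * (tick % 10)

def control_step(temp, heater_on):
    """One iteration of the control task: threshold checks only."""
    if temp <= T_LOW:
        return True          # too cold: turn the heater on
    if temp >= T_BOIL:
        return False         # too hot: turn the heater off
    return heater_on         # otherwise do nothing

heater = False
for tick in range(20):       # in a real system, one iteration per period
    heater = control_step(read_temperature(tick), heater)
print(heater)
```

Note that the task itself is trivial; all the difficulty lives in guaranteeing that each iteration actually runs within its period, which is what the rest of the lecture is about.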
So, the first is what I would call a table-driven approach. It is static table-driven in the sense that the table is created statically and kept around for the life of the system; this is done at design time. A good example is a railway timetable or a flight timetable, which is set for a period of time, not when an aircraft is ready for departure but every three months or so; the airline says we are going to have this schedule, it is published, and everybody knows what is happening. Any perturbations are handled within the confines of this static table. So this is called the static table-driven approach: you perform the static schedulability analysis by checking whether a schedule is derivable (can you derive a timetable?), and the resulting table tells you when the system will allow a task to start, like the start time and end time of a flight; all of that is pre-specified. Tasks are periodic, or are transformed into periodic ones (sporadic tasks, for example). It is very predictable: if Jet Airways says a flight is going to leave at 6:40 in the morning, chances are it will leave at 6:40; you can depend on it. It is also highly inflexible: if there is a thunderstorm, rain, or a sudden strike by employees, things go for a toss and people start to pile up in the airports. Any change to the tasks and their characteristics may require a complete overhaul of the table. We see this happening especially with railway timetables: if one train is delayed there is a cascading effect on other trains, because the resources are used one after the other. So that is an example of something done statically where the result itself is a schedule. The second is an example of things done statically where the result is not a schedule per se; the result is a priority assignment, according to which, if the system runs at run time, the predictability achieved at design time can be preserved.
And this is called the rate monotonic approach. A very simple idea: the lower the period, the higher the priority. If you do something very frequently, give it a high priority; if you do something less frequently, give it a low priority. The nice thing also is that the analysis is very simple. You look at the task utilization, where utilization is computation time over period; the semantics of this is how much of the processor is required per unit time for this task. I am going to run this task for its computation time once every period, so computation time divided by period gives me the fraction of CPU time I need for this task. If the sum total of all the utilizations is less than the natural log of 2, which is about 0.69, then I know it is feasible. The natural log of 2 comes from the actual formula, the Liu and Layland bound: C_i over T_i is the utilization for task i, and the sum total over all tasks should be at most n(2^(1/n) - 1), where n is the number of tasks; for very large n this boils down to ln 2, about 0.69. So the test is very simple, and it gives you a priority assignment followed by a yes-or-no answer. Tasks at run time are executed highest priority first with a preemptive resume policy, which means that at any time you are always running the highest-priority task: if while I am running this task a higher-priority one arrives, I stop this one by preempting it, run the new one, and when it is over I come back, resume the old one, and go on. That is the mantra. To do this we have to know the execution time, the computation time, of every task. As I mentioned before, predictability does not come for free: you should be able to analyze your task to get this number.
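The sufficient test above is a one-liner. Here is a minimal sketch: sum of C_i/T_i compared against n(2^(1/n) - 1). The task set (the (C, T) pairs) is invented for illustration.

```python
# Liu & Layland sufficient test for rate monotonic scheduling.
# Tasks are (computation_time, period) pairs; numbers are illustrative.

def ll_bound(n):
    """Utilization bound for n tasks; tends to ln 2 ~ 0.693 as n grows."""
    return n * (2 ** (1.0 / n) - 1)

def rm_sufficient_test(tasks):
    n = len(tasks)
    u = sum(c / t for c, t in tasks)
    return u, ll_bound(n), u <= ll_bound(n)

u, bound, ok = rm_sufficient_test([(1, 4), (1, 5), (2, 10)])
print(round(u, 3), round(bound, 3), ok)   # 0.65, 0.78, True
```

Remember the asymmetry: a True answer guarantees feasibility, but a False answer proves nothing, which is exactly what motivates the exact analysis discussed below.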
If you have resources, things get slightly more complicated. You have to know what the worst-case blocking time is. What is blocking? You are using something, I need it, I have higher priority, but I cannot get to it because you locked it; now I am blocked for that duration. This can be analyzed, and in that case I have to put the blocking time into the formula and apply it. The story behind this is fascinating. You all know about the Apollo missions, which went to the moon, 1969 or thereabouts. Professor Liu, who was then a young assistant professor at the University of Illinois at Urbana-Champaign, wanted to do something practical for the summer. So he went to JPL, the Jet Propulsion Laboratory in Pasadena, California, and talked to a lot of engineers, asking each of them what they did. One common theme he heard was: we assign priorities by smallest period implies highest priority; the more frequent something is, the higher its priority. It seemed natural to do, but it was not clear whether it was the right thing to do. He was a theoretician, so he said: I am going to analyze this from first principles. He assumed that tasks are periodic and that deadlines are equal to the period (the assumption we have been making), that tasks are released at the beginning of the period, meaning they come into the picture to be executed at the beginning, that they do not suspend themselves, meaning they will not voluntarily give up the CPU, that they have bounded execution times, that they are independent, meaning no shared resources are required (this was his analysis at the time), and that overheads are negligible, meaning the preemption and resume costs are zero. Then he proved the result for the rate monotonic assignment (why is it called rate monotonic? higher the rate, higher the priority): if you assign priorities this way and run highest priority first at run time, then if this formula holds, the task set is feasible.
Now, what kind of formula is this, necessary or sufficient? It is a sufficient condition, which means that if it holds you can be sure these tasks will meet their deadlines according to the periodicity requirement. It is not necessary, which means that even if it does not hold, it is quite possible to have task sets which are doable, which will meet the periodicity requirement. So naturally two questions arise: under what conditions is a task set feasible, and is there a feasibility check, a schedulability check, which is both necessary and sufficient? The answer is that there is no such simple check unless you put in some constraints. The constraints are as follows: if all the periods are harmonics of each other (what does that mean? there is some base period and all the periods are multiples of it), then the right-hand-side bound becomes 1, which means the CPU can be kept 100 percent busy and yet all periods are met. The earlier formula applies to arbitrary periods and arbitrary computation times; the sufficient condition says the bound is 0.69, which means the processor cannot be loaded more than 69 percent. So the harmonic condition, the condition corresponding to a bound of one, says that if the periods are harmonic, we can use all of the processing power; that is one way in which this can be improved. I started talking earlier about how you design these deadlines and periods. If you are given a task set to be run on a given processor, you size the tasks, come up with the computation times, apply the formula, and you may find that it is not feasible. One option is to go back to the designers and ask the question: can we make these periods harmonic?
So, if there are three tasks with periods of 11, 16, and 6, I can convert them to 10, 15, and 5 and get harmonic periods. That is a big plus: I can now check whether the computation times divided by these new periods add up to anything less than 1. What is the downside? Earlier I was dividing the computation time by 11 to get the utilization; now I am dividing by 10, so the utilization has gone up, and the same for the others. So the required utilization goes up, but the bound becomes 1; there is a give and take. The bottom line is that you can go back to the control engineers and ask: can I change the periods so that they are harmonic? Most often the answer is yes, unless the device is constrained to be sampled only once every 16 milliseconds or whatever. So that is one option for people who might be called upon to design systems, in case the Liu and Layland bound does not hold. The other, more complicated way is something called exact analysis. I will not have time for all of it; I am going to just introduce you to the idea behind it through this example. We have 4 tasks with periods 3, 6, 5, and 10. If you do the test and add up the utilizations, it comes to 0.9; the bound for n equals 4 is about 0.757. So the condition does not hold. As it turns out, this does not imply that the task set is not schedulable. Why? Because of the reason I gave you before: this is a sufficient test, not a necessary test, and you would like something which is both. One possibility is to change the periods; the other is to take the task set as it is and do the analysis on an instance-by-instance basis. Now think about it the following way: what is the worst-case response-time-inducing condition, and under what conditions will a particular periodic task experience its worst-case response time?
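The period-shortening trade-off can be made concrete with the lecture's numbers (11, 16, 6 shortened to 10, 15, 5). The computation times below are assumed; the lecture does not give them. One caveat worth flagging: strictly, the utilization bound of 1 under rate monotonic requires each period to divide every longer one, so 5/10/20 would be the safest harmonic choice; 5/10/15 follows the lecturer's base-period phrasing.

```python
# Sketch of the trade-off: shortening periods inflates utilization,
# but raises the feasibility bound being compared against.
# Computation times are assumed for illustration.

comp = [2, 3, 1]                  # assumed computation times
orig_periods = [11, 16, 6]        # original periods
harm_periods = [10, 15, 5]        # shortened to multiples of a base of 5

u_orig = sum(c / t for c, t in zip(comp, orig_periods))
u_harm = sum(c / t for c, t in zip(comp, harm_periods))
print(round(u_orig, 3), round(u_harm, 3))
# Utilization rises (0.536 -> 0.6), but the bound it is checked
# against goes from n(2^(1/n) - 1) ~ 0.78 up to 1.
```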
A task arrives; how long will it take to finish? For sure it needs its own execution time, its own computation. What else can happen during this execution? A higher-priority task might arrive, so the execution time expands by that much. Now this is a new assumed response time: earlier we assumed the response time to be equal to the computation time alone; now we have expanded it because higher-priority arrivals might occur. With this new expanded response time we check again whether more higher-priority arrivals fall within this interval; if they do, it expands some more. See why the expansion is happening: you assume a response time, and then within that response time you ask the question, will any higher-priority task arrive? As you keep asking this question again and again, finally a point comes where the response time does not expand any more; that is the final response time. This is the basic idea behind exact analysis. You start with the task having the highest priority and check its response time; what will it be? Its computation time, nothing else, because no higher-priority task exists. The next task's response time is its computation time plus any arrivals of higher-priority tasks; if the response time is less than the period, you are done, and you move to the next task. This is called a recurrence relation, and it goes as follows: for any task i, in the (n+1)-th iteration of the expansion, take its own computation time C_i, then add the demand from each higher-priority task j: the number of its arrivals is the previous response time divided by its period, rounded up, and the total required computation time is that many instances times its computation time. In symbols, R_i^(n+1) = C_i + sum over higher-priority j of ceil(R_i^(n) / T_j) * C_j.
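The recurrence just described is easy to program. Below it is applied to the lecture's four periods 3, 6, 5, 10; the computation times are assumed (chosen so that total utilization comes to 0.9, matching the lecture's example).

```python
# Exact (response-time) analysis: iterate
#   R_i^(n+1) = C_i + sum_j ceil(R_i^(n) / T_j) * C_j
# over all higher-priority tasks j until R stops growing.

from math import ceil

def response_time(c, higher, deadline):
    """Fixed point of the recurrence for a task with computation time c,
    given a list of higher-priority (C_j, T_j) pairs; bails out if the
    response time already exceeds the deadline (infeasible)."""
    r = c
    while True:
        r_next = c + sum(ceil(r / t) * cj for cj, t in higher)
        if r_next == r or r_next > deadline:
            return r_next
        r = r_next

# (C, T) pairs; sorting by period gives the rate monotonic priority order.
tasks = sorted([(1, 3), (1, 6), (1, 5), (2, 10)], key=lambda ct: ct[1])

results = []
for i, (c, t) in enumerate(tasks):
    r = response_time(c, tasks[:i], t)   # deadline equals period here
    results.append((t, r, r <= t))
print(results)
```

Every response time comes out at or below its period, so the set is feasible even though utilization 0.9 exceeds the 0.757 sufficient bound; this is precisely the point the lecture makes about the test being sufficient but not necessary.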
So, whatever I told you by hand is here in formulaic terms. As it turns out, if every task's response time is less than its period, then the task set is said to be feasible, and this check is both sufficient and necessary. The only complication is that it takes some number of iterations to go through, but you can write a very simple program to do this, and it is called exact analysis. Is this clear? What I would like you to do, when you go back to your rooms or on your way home, is go through these numbers. I suggest not looking at the slides before you do the numbers; you have the basic idea and the intuition behind it. Apply it to this four-task set and make sure you understand; only if you have any difficulty, go back to the slides and see where you went wrong. Is that clear? Now, we have looked at two of the four paradigms: static table-driven and static priority-driven. By the way, the term static is used in static priority-driven in a special way. In static table-driven, static refers to the time at which the table is created: design time. In static priority-driven it has two connotations: either you derive the priorities at design time, statically, or, more precisely, the term static qualifies priority: all the priorities given to the tasks are static in nature; they do not change. So take two instances each of two tasks, say a task with period three and a task with period five, and assume these intervals are of equal length; both instances of the first task will have the same relationship to any instance of the second. For example, the first task is running and finishes its work because it has the higher priority; then the second task's instance starts running; while it is running, the next instance of the first task arrives, and the second task is preempted. Why? Because all instances of the period-three task have higher priority than all instances of the period-five task.
So, static priority means that no matter when you look at an instance, its priority is always higher than that of any instance of a longer-period task; that is a property of rate monotonic, which is why it is called static priority. Now, there is another way of assigning priorities, called earliest deadline first, something we all use on a daily basis: we have two things to do, and we do the one with the earlier deadline first. Earliest deadline first is a dynamic priority algorithm. Why? Let me take specific numbers. At this time both tasks are ready to execute; this one has a deadline of three, this one a deadline of five. Which has the earlier deadline? The top one, so it is executed, done, and gone. Now the other task is the only one waiting, so it is taken up for execution, and it is executing when, at this time, the next instance of the first task comes in. Under rate monotonic, what happened is that the new instance took over the processor. In the earliest deadline first case, the new instance has a deadline of six and the running task has a deadline of five, so the running task wins. What has happened is that an instance of a shorter-period task has a lower priority than an instance of a longer-period task. Let me repeat that: this instance of the shorter-period task has a lower priority compared to this instance of the longer-period task. Not thinkable in the context of rate monotonic, but possible, and it happens in the case of earliest deadline first. Why? Because the running instance has a deadline of five and the new one a deadline of six; five wins over six, so the running task continues to run even after the arrival of the other. Is that clear? So this is dynamic priority: two instances are related by their deadlines and not by their periods. Is that clear?
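The situation just described can be replayed with a tiny discrete-time EDF simulation. Task A has period and deadline 3, task B period and deadline 5; the computation times (1 and 3) are assumed for illustration. At time 3 a new instance of A arrives with absolute deadline 6 while B's running instance has deadline 5, so under EDF, B keeps the processor, whereas rate monotonic would preempt it.

```python
# Minimal tick-by-tick EDF simulator; tasks are (name, C, T) with
# deadline equal to period. Parameters are illustrative.

def edf_trace(tasks, horizon):
    """Return which task runs at each time tick under EDF."""
    ready = []                       # jobs: [abs_deadline, name, remaining]
    trace = []
    for now in range(horizon):
        for name, c, period in tasks:
            if now % period == 0:    # a new instance is released
                ready.append([now + period, name, c])
        ready.sort()                 # earliest absolute deadline first
        if ready:
            ready[0][2] -= 1         # run the earliest-deadline job one tick
            trace.append(ready[0][1])
            if ready[0][2] == 0:
                ready.pop(0)         # job finished
        else:
            trace.append(None)
    return trace

trace = edf_trace([("A", 1, 3), ("B", 3, 5)], 6)
print(trace)   # at time 3, B keeps running despite A's new arrival
```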
So, earliest deadline first can also be analyzed in the same way, and in that case the bound becomes one, whereas for rate monotonic the bound was one only in the case of harmonic periods; for EDF it is one for all periods. The only reason EDF is not used in practice as much as it should be, given this increased bound, is the dynamic priority changes: every time there are two instances of different periodic activities to be run, you have to dynamically look at the deadlines and decide which to execute next. Another reason is that typically a processor has priority levels assigned to the devices interrupting it; typically there are four, eight, or sixteen interrupt lines, as the case may be. What you can do is associate the highest-priority interrupt line with the highest-priority device, which means the lowest-period device. So there is a direct mapping between the rate monotonic priority assignment and the priority lines on the processor, and the hardware ensures that at any time, if an interrupt line is raised, the interrupt routine corresponding to the highest-priority enabled interrupt is run. So there is hardware support for highest-priority-first execution. For all of those reasons, rate monotonic is the preferred priority assignment policy, even though in the literature there is a lot of work on EDF-based approaches, and some systems have been built around them. Is this clear? So both of these, at least the way I described them, were done statically: the assignment of priorities in the case of the second approach, and the creation of the table in the first. But there is really no reason why it cannot be done dynamically.
In fact, we do this all the time. We start the morning with a certain plan for the day: be here at this time, there at that time. And around this static schedule we create a dynamic schedule: somebody calls, and you pick up the phone if you have the time; if somebody calls while we are doing something statically scheduled, we do not pick up the phone. So we are doing dynamic allocation of work on top of statically allocated work, and this dynamic allocation might result in some movement of the statically allocated tasks. If I had decided to finish this class by 1 o'clock, which I had, I am dynamically changing it; it is 1:15 now. The work will get done and nothing else will happen, because there is no consequence to this delay other than delaying lunch by 15 minutes. So we have static allocation on top of which we make dynamic allocation, or dynamic scheduling changes, which is the norm. The same thing applies in real-time systems. Here what we do is feasibility checking at run time: a dynamically arriving task is accepted only if it is feasible for it to meet its deadline, and we then say that the task is guaranteed to meet its time constraints. As in the case of the first two approaches, the result could be a priority assignment or a schedule itself. So in a sense we have the best of both worlds: the flexibility of dynamic approaches with some of the predictability of static approaches; but that happens only if you check feasibility before taking a task on. This feasibility check being done dynamically has a very important consequence. What is it? We may have to say sorry: a new task may arrive at a time, with a certain deadline requirement, at which it cannot be accommodated.
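Run-time admission control can be sketched as follows. Here the feasibility check is the EDF utilization test (sum of C/T at most 1); a table-driven or rate monotonic check could be substituted, and all task parameters are invented.

```python
# Sketch of dynamic admission control: accept a new periodic task only
# if the resulting set remains feasible under the EDF utilization test.

class Admitter:
    def __init__(self):
        self.tasks = []            # accepted (C, T) pairs

    def utilization(self):
        return sum(c / t for c, t in self.tasks)

    def request(self, c, t):
        """Admit the task if total utilization stays within the EDF bound."""
        if self.utilization() + c / t <= 1.0:
            self.tasks.append((c, t))
            return True            # guaranteed: its constraints will be met
        return False               # "sorry" -- the caller may retry with a
                                   # longer deadline or a smaller C

adm = Admitter()
print(adm.request(2, 5))    # accepted
print(adm.request(4, 10))   # accepted; utilization now 0.8
print(adm.request(3, 10))   # rejected; would push utilization to 1.1
```

The rejected caller retrying with relaxed parameters is exactly the "say sorry and renegotiate" interaction the lecture describes next.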
So, you should have the flexibility to say sorry, and the application should also be designed to accept the answer sorry. This requires some complexity to be added to the system, because now the interaction becomes: I want to do this work with this deadline; send it to the scheduler; the scheduler comes back and says no; retry it with a different set of characteristics. I had initially said I want to finish by 5 o'clock; now I can make it 6. Or I can run something of shorter duration with the same old deadline instead of what I had planned before. There are many activities like this. Scheduling itself is one such activity: the general scheduling problem is NP-hard, meaning computationally very intensive. Now, what we have here is a very interesting situation. I have a timeline; a task arrives here, call that A; its deadline is here; and I have to do two things within this interval. What are they? Execute the task, given its computation time, and also plan the task's execution, which itself takes some time. The conundrum is the following: the more time I spend on planning, the better the plan I get. We have all seen this: we look at all the options and find which one works best, and it takes a long time. But the longer you take for planning, the less time you have for the actual execution; the shorter the time you take for planning, the more you have for execution, but the worse the quality of the plan. That is one issue, and in a practical system we have to deal with it. The other issue is when to do the planning. If I plan as soon as the task arrives, I get the benefit of having other options in case the system says sorry; but the sooner I do this, the more the state in which the task will actually execute is likely to change between the planning point and the point where the task executes according to the schedule.
For example, suppose I am measuring the altitude of an aircraft. Between when the planning task executes and when the task itself executes there will be a difference in time, and if the computation time of the task depends on the altitude, I have to assume the worst-case altitude to come up with the planning parameters. Follow me: I have to know how long the task will take to execute when it is scheduled, and that requires some information about the execution time; the assumptions underlying the execution have to be pessimistic, or conservative, and the sooner I do the planning, the more conservative I have to be, because the distance between the planning time and the execution time is that much larger. Because of these reasons there are a number of decisions to be made: how do you plan, when do you plan, what assumptions do you make in planning, how much time do you allocate to planning, and so on. So dynamic planning is that much harder, that much more complex, but you cannot avoid it; unless you have the simplest of systems, dynamic planning is something that should be accommodated. If you do not want to pay the price for it, you do best effort, which is what most systems do: do the best to meet deadlines, with no guarantees. What do you use for scheduling? Earliest deadline first, or some notion of importance. This is what Linux does, for example: it puts all the tasks into one queue, adjusts priorities according to interactions with the user, CPU-bound or I/O-bound and so on, and tries its best to provide the best response for the average-case user; not good enough for things with deadlines. And then there is a combination of these techniques called cyclic scheduling, which is often used in practice: things are harmonic in nature, you fit the periods into one of these harmonic frames, and you apply them according to some priority rule. You can read about this on your own time.
So, basically what I have said is that there are four different types of scheduling approaches. Two, on the surface, are static in nature; but in the third one, which is dynamic planning, the planning could use one of the table-driven approaches or one of the priority-driven approaches. Follow me: the only complication is all of those things I mentioned, when to plan, what to plan with, what to do if the plan does not succeed, and what the alternatives are; generally, run something with a later deadline if the first deadline does not work, or run something with a shorter computation time, and so on; or, if you have a distributed system, send the task to some other node, some other processor, which is possible today because most real-time systems are also distributed. Clear? So, what do we have in an embedded kernel for real-time purposes? Basically, things which are not necessary are removed, so it is stripped down and made to run faster: fast context switching, recognizing interrupts quickly, locking code and data in memory. Now, some people argue that disks should be avoided in embedded systems, and the reason is simple; you all know about virtual memory. How does it work? When I am running code and refer to some variable, that variable could be in a page inside virtual memory, so I have to fetch the page by taking a page fault. So what is the worst-case execution time for an instruction? The actual execution time for the instruction plus the fetch time for the operands; and the worst case for the fetch time is when the operand has to be brought in from virtual memory, from secondary storage. If you add up all of these worst-case times, the result is enormously pessimistic. So one way to have your cake and eat it too is to have virtual memory as a general principle, but bring the code and data into memory before you run the task itself, and make sure that nobody takes these pages away while you are running; that is called locking, or pinning, of code and data in memory.
And then there are special sequential files that can accumulate data at a fast rate; typically these files are circular buffers which get overwritten over a period of time. The best example of this is the black box recording system in an aircraft, which records the last 18 minutes of all interactions, all sensor values, in the aircraft; remember this. The first thing you look for when a flight crashes is the black box, because it has the most recent state of the aircraft recorded in all its glory: every interaction, every announcement, every sensor value, every actuator value. This is not uncommon in other real-time systems either. Typically what happens is that this buffer gets recorded, and before it gets full, a daemon task copies every tenth or hundredth or some number of entries to another circular buffer, which becomes a coarser, lower-granularity buffer, and so on down the line; the last one is written to disk. So over a period of time you have the most recent information at a fine granularity, less recent information at a coarser granularity, and so on; finally, for recovery and monitoring purposes, we have something on disk at a much coarser granularity. To deal with time constraints we have a real-time clock, bounded execution times for primitives, real-time queuing disciplines (earliest deadline first, for example), delay, suspend, and resume primitives, and of course a priority-driven scheduling mechanism. Now, if you have a real-time operating system you typically have priority-driven scheduling; if you have something called a real-time executive, you typically use a table-driven mechanism. The reason is that this is much simpler: if you look at a table, what will it have? It will say at time t1 do task x1, at time t2 do task x2. So all the operating system has to do is set an alarm for the next entry in the table; when the alarm goes off, go to the table, check which task is to be executed, and run it; nothing more than that; no priorities, nothing.
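The table-driven executive just described ("at time t1 run x1, at time t2 run x2") can be sketched in a few lines. The times and task bodies below are invented; a real executive would program a hardware timer for each entry rather than iterate, but the dispatch logic is exactly this simple.

```python
# Sketch of a real-time executive: the whole "scheduler" is a walk over
# a pre-computed table; no priorities, no queues.

def run_executive(table):
    """table: list of (time, task_fn) entries computed at design time."""
    log = []
    for t, task in sorted(table):   # alarms fire in time order
        log.append((t, task()))     # at time t, run the task; nothing more
    return log

table = [(10, lambda: "read_sensors"),
         (12, lambda: "update_dynamics"),
         (16, lambda: "write_actuators")]
log = run_executive(table)
print(log)
```

Compare this with what a priority-driven kernel must carry: ready queues, preemption, priority bookkeeping; the contrast is exactly why executives are so much smaller.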
So, the operating system requirement is much, much simpler. These are called real-time executives, as opposed to a priority-driven scheduler, which requires a much more complicated queuing discipline, interrupt arrivals triggering tasks, and all of that. That is why the cyclic scheduling slide I showed uses a table-driven approach at its base: the table contains the primitive task executions within a frame, and on top of that a simple priority-driven scheduling algorithm is applied. Another approach is to take Linux and convert it to RT-Linux or some variant thereof, or take something like NT or Windows and convert it to RT-NT. The reason people prefer this is that you know the system, you know the primitives, you know how to use the APIs, and so forth. The disadvantages, as I mentioned before, are too many inappropriate assumptions, like the queues being FIFO and so on. So what you will see this afternoon is RTAI, which is essentially a clone of RT-Linux. Basically, let me just focus on these parts here; look at these boxes around this cloud. We have the interrupt control hardware of the normal system, which sends an interrupt to the operating system; the operating system manages interrupts; and the processes execute on top. What usually happens is that processes want to do some I/O, so they give control to the Linux kernel; Linux handles the I/O through the interrupt control mechanism, and control comes back. Now I will tell you how the priorities are handled. This slide says NT thread priorities, but actually any priority-driven system has the same idea. If you ignore the top box for a moment, this is what usually happens: the priorities are all managed by the system; as tasks execute, they move from low to high priority or back, depending on whether they are I/O-bound or CPU-bound.
A task starts as though it is a time-critical, I/O-bound task; as it does more and more computation, its priority comes down; as it does more I/O, more interactions, its priority goes up. You all know this; it is called degradable priorities: the priority is degraded with time spent on the CPU, or upgraded when there are more interactions. The basic idea is to give more responsiveness to I/O-bound tasks. But in our policies we assume that only we have control over the priorities, not the system, so this is not usable for us. So, to provide some sort of service to real-time applications, something called the real-time class has been introduced in most operating systems today, including NT, Solaris, and so on. The idea is that these are non-degradable priorities; the system does not come into the picture. You assign a priority at one of these levels, the system will not touch it, and as long as there is a task at these levels, none of the tasks at the bottom will execute; that ensures that real-time tasks have higher priority than non-real-time tasks. Are you with me on this? The only problem is that these priorities are lower than interrupt priorities. What does that mean? Suppose a real-time task is running now and an interrupt arrives for a non-real-time task; the non-real-time task's interrupt will preempt the running real-time task, thereby completely ignoring the priority-driven scheduling approach we have seen so far, and that leads to something called priority inversion. Real-time Linux came along and said: I will handle the interrupt; if the interrupt is for a real-time task, I will run the task myself; if it is for a non-real-time task, I will hold it, turn it into a software interrupt, and Linux runs only when nothing in the real-time class needs to run.
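On Linux, the non-degradable real-time class can be requested through the POSIX scheduling call, which Python exposes as `os.sched_setscheduler`. This is only a sketch: the priority value 50 is arbitrary, the call normally requires root (or CAP_SYS_NICE), so the refusal is caught, and on non-Linux platforms the constants may not exist at all.

```python
# Sketch: moving the current process into the SCHED_FIFO real-time
# class, whose priorities the kernel never ages up or down.

import os

def try_realtime(priority=50):
    if not hasattr(os, "SCHED_FIFO"):          # non-Linux platform
        return "unsupported"
    try:
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))
        return "realtime"                      # now above all normal tasks
    except PermissionError:
        return "denied"                        # not privileged

print(try_realtime())
```

Even when this succeeds, the caveat from the lecture still applies: interrupt handling sits above the real-time class, which is the gap RT-Linux-style kernels were built to close.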
So, it is like the following: I have real-time tasks T1, T2, T3, ..., Tn, and then another task called T*, which is the Linux kernel itself. T* runs only if there is nothing else to be done in this range, including interrupt handling; that ensures that the only interrupts given time immediately are real-time interrupts, not non-real-time interrupts. This is a sufficient basis for the afternoon lab class. We will pick up on this tomorrow, maybe spend half an hour to an hour; you will have done your lab exercise by then, and I will close with the remaining slides on operating systems tomorrow. Sorry to keep you waiting; let us disperse for lunch.