We have been looking at model predictive control, so let me go over some of the ideas again. We have modified the formulation from linear optimal control: linear optimal control is technically a formulation over an infinite horizon, whereas this is a formulation over a finite horizon, and it keeps changing as a function of time. At each sampling instant we solve a constrained optimization problem over a window, and this window keeps moving, keeps sliding in time; that is important. So you first solve the problem over the window k to k+P, where k is the current instant, implement the optimal input move, move forward in time, and reformulate the problem over k+1 to k+P+1. The window size P remains constant, and so on; the window keeps sliding in time. As I was saying with analogies, this is what we actually keep doing when we control a system: when you drive a car, ride a cycle or a motorcycle, or even walk to work, you plan only over a horizon, and this window keeps sliding. Of course, our brain is a much more complex computer than anything we can implement. The window in time and space can be time varying: sometimes you might plan over 100 meters and 10 minutes into the future, sometimes only 3 minutes ahead. The brain is amazing; we cannot teach a computer to do what a brain can do, but at least we can approximate it. Then there is proactive constraint management, which we also do when driving: you make moves that ensure the car will not go outside the constraint boundary some time later. The same thing we want to do now: we want a model running on board the computer, parallel to the plant, online in real time.
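The moving-window idea just described can be sketched as a simple loop. This is a minimal sketch, not a specific controller: `solve_ocp` (the window optimizer) and `plant_step` (the plant or its simulator) are hypothetical placeholders you would supply.

```python
import numpy as np

def receding_horizon_loop(x0, n_steps, P, solve_ocp, plant_step):
    """Skeleton of a moving-horizon (MPC) loop.

    solve_ocp(x, P) -> array of P planned input moves (hypothetical solver)
    plant_step(x, u) -> next state (hypothetical plant/simulator)
    """
    x = x0
    applied = []
    for k in range(n_steps):
        u_plan = solve_ocp(x, P)   # optimal plan over the window [k, k+P]
        u_now = u_plan[0]          # implement only the first move
        applied.append(u_now)      # the remaining P-1 moves are discarded
        x = plant_step(x, u_now)   # move one step; window slides to [k+1, k+P+1]
    return np.array(applied)
```

At every pass only `u_plan[0]` reaches the plant; the rest of the plan exists only to make that first move sensible.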
You are going to forecast over the future and check whether the constraints are obeyed or violated by the outputs predicted by the model. Now, I have modified a little the presentation of what are called the control horizon and the prediction horizon. Earlier I had talked about input blocking, which is a slightly advanced concept; I have moved that to an appendix in the revised version, and I am introducing a simpler idea here called the control horizon. This is how the idea originally appeared and it is what is very commonly used; the blocking concept I talked about is more advanced, so let us take a step backward and understand the simpler concept of a control horizon. What I am going to do is a special kind of blocking, you could say. I want to plan over the horizon from k to k+p. How far ahead do you plan? This p is chosen based on the settling time of the system. What is the settling time? If you give a step change in the input, it is the time the output takes to settle. Of course, the settling time will be different for different inputs, so you take the maximum of the settling times over the inputs. Why the settling time? Because the steady-state effect of the current input will be felt after the settling time, so that is how far ahead into the future you want to predict.
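The settling-time rule for choosing the prediction horizon can be sketched as follows. This is an illustrative sketch using a simple 2%-band definition of settling; the function names and the 2% band are my choices, not from the lecture.

```python
import numpy as np

def settling_samples(y, band=0.02):
    """Index after which the step response y stays within +/-band of its
    final value: a simple 2%-band settling-time estimate, in samples."""
    y = np.asarray(y, dtype=float)
    y_final = y[-1]
    tol = band * abs(y_final)
    outside = np.abs(y - y_final) > tol
    return int(np.max(np.nonzero(outside)) + 1) if outside.any() else 0

def choose_prediction_horizon(step_responses, band=0.02):
    """Prediction horizon P = worst-case (maximum) settling time over the
    step responses for the different inputs."""
    return max(settling_samples(y, band) for y in step_responses)
```

For a furnace sampled every few minutes this tends to land in the 100-200 sample range the lecture quotes.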
So typically this could mean, for a furnace, prediction over 4 or 5 hours; for a vehicle it could mean prediction over just 5 seconds; it depends on the system. Quantified in terms of the sampling interval, this would typically be 100 to 200 samples into the future; that is the window you want to predict over. Now, as I said, this one picture tries to capture everything. These are the future inputs that I am planning. Since I am going to implement only one move, u(k), and discard the rest of the optimization results when I move to the next window, we restrict the degrees of freedom into the future. In principle we can treat all the future inputs up to the end of the horizon as manipulated inputs available to the optimizer; we can do that, but it gives us a large optimization problem. We want to restrict the decision variables, so what we typically do is allow the input to change over the first q samples, where q could be a small number like 4, 5, or 6, and then assume that after that point the input is held constant at its last value. This is the blocking idea, and I have tried to explain it using a different figure now.
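Holding the input constant after the first q free moves can be sketched as a small expansion routine; this is a minimal sketch (the function name is mine) showing how q decision variables generate a full P-long input trajectory.

```python
import numpy as np

def expand_control_moves(u_free, P):
    """Expand q free moves into a P-long input trajectory: the input may
    change over the first q samples (the control horizon) and is then held
    constant at the last free value for the rest of the prediction horizon."""
    u_free = np.asarray(u_free, dtype=float)
    q = len(u_free)
    return np.concatenate([u_free, np.full(P - q, u_free[-1])])
```

So the optimizer only sees q variables, while the prediction model still receives P input values.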
So these are the two scenarios: one where you allow all the future inputs as decision variables in the optimization problem, and the other where you say that only the first five are allowed to change, after which you keep the input constant. This first window, from here to here, is called the control horizon. The prediction horizon is the window over which you are predicting the future behavior; the control horizon is the horizon over which you allow the manipulated inputs to change, after which you hold them constant. This control horizon business has come mainly from the computational viewpoint: you are trying to reduce the degrees of freedom of the optimization problem. Suppose I optimize the next 100 input moves, throw away 99, and use just one: I am formulating a huge problem of which I use only one move, so the idea is to reduce the dimension. Of course, if you reduce the dimension, your manipulability reduces; with fewer degrees of freedom, the ways you can shape the future get restricted. But there is a trade-off with fast online computation: if you are implementing a predictive controller for a vehicle, you may have to do these optimization calculations in a fraction of a second, and a smaller-dimension optimization problem converges faster, so this trick is used. Now let us move back and quickly go over the basic elements of MPC. One is that you have an internal model. I have developed the formulation using a closed-loop observer as the predictor, but you can use all kinds of things: an open-loop observer, a Kalman filter, a Kalman predictor, whatever observer you like; I have just developed it one particular way. You can use a model which is coming from an ARMAX or BJ model. Actually, I uploaded my code yesterday showing how this can be done for LQG using both kinds of models, one using a linearized model and another using an identified model, so you can see how things differ. Those are image files, so you can view them but not copy from them; if you are able to copy, tell me and I will change it. Then you need a prediction scheme over the future. How do you predict over the future? There are two components: first, you have to use the model recursively into the future to predict the future profile; second, you have to realize that the model is never perfect, so you must have some scheme for compensating the future predictions for plant-model mismatch. Plant-model mismatch and unmeasured disturbances are always present: the model you develop at the beginning of your project is never going to stay valid; it is roughly okay but not perfect, the plant conditions keep changing, and anyway the real plant is nonlinear and you have linearized, so there are all kinds of approximations. So you need a scheme for compensating for plant-model mismatch, and then you need a scheme for solving the problem optimally online. For that, of course, there are very efficient tools for linear programming and quadratic programming; commercial codes are available and you can use them, and some are even in the public domain. So we said we are going to develop this using an observer, and this observer could be developed by any means: it need not be a Kalman filter or a Kalman predictor; it can be a Luenberger predictor, it can be a state realization of an ARMAX or BJ model, it can be anything. Then I am going to use this to do current state estimation: x̂(k|k−1) is the current state estimate, and then I am going to estimate the current innovation, which will contain the information about plant-model mismatch and unmeasured disturbances.
If everything is perfect, the innovation is white noise; but if the model is different from the plant, the innovation is not white noise, and we use that signal to compensate the future predictions. How do we do that? Using this innovation bias approach: you filter the innovation and try to find the nonzero mean of the signal. This is done using a simple exponential filter. What I have included now is a slide on the unity-gain filter; I am not sure why it was missing, some problem with the version. So what is this unity-gain filter? I have explained it in this one slide. In the continuous domain, a signal y is filtered through a first-order filter, simply 1/(τs + 1), or, if you are more comfortable with it, a/(s + a). The gain of this filter is 1, so its only task is to knock off certain frequencies, and which frequencies are knocked off is decided by how you choose τ. What does it map to? This first-order transfer function maps to this difference equation, this z-domain transfer function, which is nothing but this first-order filter. As for the map between α and the time constant: when it comes to filtering, it may be easier to think in terms of a time constant or a frequency rather than the discrete-time α, so it is easier to work with τ here, and remember the mapping α = exp(−T/τ), where T is the sampling time. So this is how you filter a signal: the new filtered value is α times the old filtered value plus (1 − α) times the new input; that is how y gets filtered. Depending on how you choose α, the signal gets filtered through this first-order filter, and I have shown here how it looks for different values of α: the blue signal is the original unfiltered signal and the other three are the filtered signals for different values of α. So this α is a tuning parameter, typically chosen between 0.8 and 0.99, and when you do your control, LQG or MPC or whatever, this parameter you will have to tune to get good behavior. A starting guess is 0.8; anywhere between 0.8 and 0.99 you have to try different values, and it may happen that for higher values the system stabilizes while for lower values it does not. Okay. So after we sort out this business, suppose we are given a future set of manipulated inputs u(k), u(k+1), u(k+2), and so on; at the moment I have not put in the control horizon constraints, I am just assuming that all the future inputs are given for manipulation. What will the prediction look like? The prediction at k+1 is the first prediction; I am going to correct this using the filtered innovation, and I am going to correct the predicted output also using the filtered innovation. This correction brings in the effect of unmeasured disturbances and plant-model mismatch on the future prediction. All I have done is use this model recursively. The main thing is that the first point of the prediction is connected with the observer running over the past; this particular statement is the connection between the prediction into the future and the observer working over the past. So this is where you connect: the initial point for the prediction is the same as the last point of the observer.
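The unity-gain exponential filter described above can be sketched in a few lines. This is a minimal sketch: the function name is mine, and the recursion is exactly ef[k] = α·ef[k−1] + (1 − α)·e[k] from the slide.

```python
import numpy as np

def unity_gain_filter(e, alpha):
    """First-order exponential (unity-gain) filter:
        ef[k] = alpha * ef[k-1] + (1 - alpha) * e[k]
    A constant input passes through unchanged in steady state (gain = 1);
    alpha = exp(-T/tau) maps sampling time T and filter time constant tau
    to the discrete parameter alpha."""
    e = np.asarray(e, dtype=float)
    ef = np.zeros_like(e)
    prev = 0.0
    for k, ek in enumerate(e):
        prev = alpha * prev + (1.0 - alpha) * ek
        ef[k] = prev
    return ef
```

Applied to the innovation sequence, the last filtered value is the nonzero-mean estimate used to bias-correct the predictions.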
That is what I am saying here. I have this observer, and at time k the observer gives me the estimate of x. I have to start my predictions from the current point: where am I right now in terms of the state? I am at x̂(k|k−1). Once I do this matching between the past and the present, I am going to just use this equation recursively over the future and carry out the predictions. A raw implementation of predictive control will just involve a for loop: at j = 1 you compute ẑ(k+1); using ẑ(k+1) you get ẑ(k+2); using ẑ(k+2) you get ẑ(k+3); and so on. You do not have to do all this expansion by hand; just put it in the for loop and you will get the prediction for every guess of the inputs. How are you going to solve this problem? We have already seen it in the last lecture as an optimization problem: for a guess of the inputs you compute the future prediction in a for loop, then you compute the objective function for the optimization problem and give it to the optimizer, and the optimizer does the rest. Here you can use the constrained optimization program in MATLAB called fmincon, in which you can specify the bounds; you have to write a function in which the objective function is constructed using the model prediction, and once you do that you can implement your model predictive control scheme. Now, as far as the program is concerned, I need only this first statement and this statement. I have also expressed the prediction in terms of all the future inputs and ê(k) because I wanted to give some interpretation. The p-step-ahead prediction into the future is a function of three things, actually: what has happened in the past, which is brought in through the initial state of the model; all the future input moves that you are going to make; and the term that brings in the information about past disturbances and past plant-model mismatch, which is the compensation for plant-model mismatch. So this is how you do the prediction, and this is the interpretation of those predictions; p, of course, is called the prediction horizon. Now the next question: yes, that is a guess. The way optimization works is that you give an initial guess; from that it computes the objective function and the gradient, and then there are methods to generate a new guess from the old guess. Each time a guess is made, you do the prediction. So what you actually do is write a function in which, given a guess, you generate the objective function; that is your job. It is the job of the optimizer to generate a new guess from the old guess and evaluate the objective function; the optimizer keeps generating new guesses until certain criteria, the necessary conditions for optimality, are satisfied, and then it terminates. It is an iterative process: for every guess the optimizer generates, I construct the prediction, evaluate the objective function, evaluate the constraints, and tell the optimizer whether, for the given guess, you are inside the constraint boundary.
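The prediction loop and the objective function you would hand to a constrained optimizer can be sketched as follows. This is a minimal sketch under the lecture's notation (Φ, Γ, C, L from the observer, ef the filtered innovation); the function names are mine, and a real implementation would pass `mpc_objective` to something like MATLAB's fmincon or SciPy's minimize.

```python
import numpy as np

def predict_outputs(x0, u_seq, Phi, Gamma, C, L, ef):
    """Recursive P-step prediction with innovation-bias correction: start
    from the observer state x0 = x(k|k-1), propagate the model for each
    guessed input, and correct every predicted output with the filtered
    innovation ef (held constant over the horizon)."""
    x = np.asarray(x0, dtype=float)
    y_pred = []
    for u in u_seq:
        x = Phi @ x + Gamma @ np.atleast_1d(u) + L @ ef
        y_pred.append(C @ x + ef)
    return np.array(y_pred)

def mpc_objective(u_seq, x0, r_seq, Phi, Gamma, C, L, ef, W):
    """Weighted sum of squared deviations of the predictions from the
    set-point trajectory: the function the optimizer calls per guess."""
    y_pred = predict_outputs(x0, u_seq, Phi, Gamma, C, L, ef)
    err = np.asarray(r_seq) - y_pred
    return float(sum(e @ W @ e for e in err))
```

For every guess u_seq, the optimizer calls this function, gets the cost back, and generates its next guess; that is the whole division of labor described above.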
That is the business of the optimizer. So think of the optimizer as a utility available to you, to which you just supply an objective function. What is inside the objective function? Given a guess of the inputs, you carry out predictions over the horizon; using those predictions you find the difference between the future set points and the future predictions, do the objective function calculations, and give the value back to the optimizer. Now, technically, what we have done is predictions using this kind of model: we have assumed that there is an unknown disturbance, which we have parameterized as e_s(k), and we have assumed that this unknown disturbance remains constant over the horizon. This is actually called the integrated white noise model. The practical implementation of this integrated white noise model is what I showed you in the previous slide, but philosophically it means that I am solving this equation plus this equation together, where the initial condition for the disturbance state is nothing but the filtered innovation. So whatever we did earlier can be written conceptually like this. What I want to highlight here is that to remove offset, or to account for plant-model mismatch, we have to introduce an integrating element into the controller. This integrating element is introduced artificially, and this artificial business comes through this augmented equation: the artificial state ε has been introduced into the prediction and is then used to compensate for plant-model mismatch. Practically, if you eliminate e_s by substitution, it reduces to what we had; but conceptually it means that you have added an integrator. How many integrating elements have you added? You have added integrating elements equal to the number of outputs, because e_s is the innovation: you have augmented the system with extra states equal in number to the innovations, that is, the number of measurements. That is how you get rid of the offset. Now, constraints on the inputs: I am going to talk about the simplified constraints, not input blocking, which I have moved to the end of the slides. The simplified constraint is that you allow only the first q moves to change freely, and after that you impose the constraint that u(k+q) = u(k+q+1) = … = u(k+p−1). These first q moves might mean only five, six, or seven moves, because we repeatedly solve the problem; that is why we make this simplification. If you have a lot of computing power and computing time is not a constraint, you do not have to do this; it is more a practical matter. In MPC terminology this q is called the control horizon; in a typical industrial implementation it would be five, six, or ten, while the prediction horizon will be 100 or 150. So you allow the next ten moves to be chosen freely by the optimizer, of which you implement only one. And yes, this one: the x here is the observer. I have deliberately kept two notations, one for the prediction and one for the observer. The observer deals with the past; the predictions are for a given guess, and that prediction may not actually happen. There is a governing dynamic equation for the plant, and we are going to use the observer as the model for prediction; so there is the observer working over the past and the observer used for the future, and I want to keep them separate.
Keeping them separate makes your understanding cleaner; that is why I am going to use two different notations. When I write x here, this is my observer, and I use this observer to do the current state estimation. I am standing at the k-th instant; I want to estimate the state at the current point, and then, using this state as the initial point, I want to predict into the future. Is this clear up to this point? And the innovation bias, that is fine. Now look at this equation: I am going to use the same model for future prediction. Here u is a future input; I have not actually made the move, I am just contemplating different inputs. Suppose I were to implement u(k|k): what would the prediction be? The difference equation will give it to me; I know Φ, I know Γ, I know L, I know C. A mistake I have made on the slide: it should be ẑ(k+1) here; there is a typo, and I will correct it. And the initial state of this: you need the state at k to go to k+1, and where do you get that? From the observer in the past; that is the connection between the prediction and the past. Once I get this one-step prediction I can use it in the next prediction, and then I can keep hopping in time into the future, from k+1 to k+2, k+2 to k+3, k+3 to k+4, and so on, up to k+P. Now, if you look carefully, when I write this I am saying that the future error equals e_f. When I move to the next point, what is the future error? I do not know. So what is the best guess for the future error? The current error. That is what I have done. All the other signals I am using here are future values, but what about the future disturbance? Can I ever predict the future disturbance? Never. So I am saying that the best estimate of the future disturbance is the current value of the disturbance, and I just keep using it: even at the three-step prediction I am still using this e_f, and at the P-step prediction I am still using e_f. All the other things are future values, but the disturbance estimate is the current one. Philosophically, what does it mean? It means you are making a model of this form, in which you say that the future disturbance remains constant and equal to the current disturbance. Conceptually, doing that is equivalent to this augmented model; the two are one and the same. From the theory viewpoint it is important to write it this way; in the actual implementation you can substitute this here and get rid of this equation, but philosophically you have augmented the system with extra integrating states. Then I have the future manipulated variables, which give me some degrees of freedom, and of course I have constraints. See why model predictive control schemes have become so popular, unlike all the other control schemes: all the other schemes just come up with one gain matrix, or one controller transfer function, or whatever it might be, and they cannot systematically handle constraints. Here you are solving an optimization problem, so you simply tell the optimizer not to choose a move which is outside the bounds; it is that simple. You are actually doing time-domain predictions.
So you can actually say: do not choose a move that will let the predicted output go beyond a bound. It is explicit: your control problem, as you think about it, transfers very easily into an optimization problem. You can give bounds on the inputs, and real systems always have bounds on the inputs, which analytical control theory cannot deal with systematically. What do you do there? Suppose you were to implement your LQG controller with bounds: all you do is put in an if statement; if LQG asks you to implement a move that is higher than what is feasible, you do not implement it, you clip it to the limit. That is the so-called anti-reset-windup mechanism, but those are ad hoc measures: you are putting in if-then statements, which is not rigorous mathematics. Whereas here, when you pose it as an optimization problem, you are solving it using formal mathematical techniques. You can also constrain Δu: many times you cannot open a valve from, say, 50% opening to 100% opening in one second, and you cannot change a motor speed beyond a certain rate; there are always physical limits, and the controller should know about them. When you just use a gain times the error, the Δu grows as the error grows; that does not happen here, because you can actually constrain the inputs. So that summarizes the control horizon, input constraints, and all that. Then I talked last time about the future set-point trajectory: how do you want to go from the current point to the final point? Like when you are cruising in an aircraft and have to move to a new height: whether you go very slowly to the new set point, or shoot up very quickly, depends on the application, and you can actually decide a future trajectory which starts from the current point and takes you to the final set point. This can be done using a first-order filter, and different filter constants give you different trajectories: if you do not put any filter, so that the set point at the next instant equals the final set point, the trajectory is a step; if you put a filter, you take it there gradually. So this is a tuning parameter you can choose. Then, of course, I talked about the steady-state target business in the linear quadratic optimal control implementation; you have to do the same thing here, except that I have posed it as an optimization problem, because it is quite likely that your set point may not be reachable within the bounds. Then you have to come up with a target which is not equal to the set point but as close to it as possible: some operator might give a set point which is not reachable within the input bounds, so you have to modify the target problem a little when you implement MPC. Of course, if the unconstrained solution exists, it will be the same as what you get from the LQ part; that is one and the same. So what is this constrained MPC formulation? It consists of an objective function with three terms. The first term is the distance of the predicted outputs from the set-point trajectory: you minimize the sum of squares of this distance, error-transpose W error, where the error is the set-point trajectory minus the predicted output. I want to minimize the difference between the future set-point trajectory which I have specified and the future predictions, and that is done over the prediction horizon P.
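The first-order set-point filter mentioned above can be sketched as follows. This is a minimal sketch (the function name and the recursion parameter β are my choices): β = 0 reproduces a pure step to the set point, while β closer to 1 gives a gradual approach starting from the current output.

```python
import numpy as np

def reference_trajectory(y_now, r_sp, beta, P):
    """First-order filtered set-point trajectory over P future samples:
        r[j+1] = beta * r[j] + (1 - beta) * r_sp,  starting at y_now.
    beta = 0 gives a step; beta near 1 approaches the set point slowly."""
    r = np.empty(P)
    r_prev = float(y_now)
    for j in range(P):
        r_prev = beta * r_prev + (1.0 - beta) * float(r_sp)
        r[j] = r_prev
    return r
```

This trajectory is what the first objective term compares the predicted outputs against.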
You put one more term on the final, terminal point. This terminal-point term involves the target state; now that you have seen the LQ optimal control implementation, the target-state business will be clear to you. You take the system as close as possible to the target state. These two terms make sure that you are steering the future behavior as close as possible to the desired trajectory. You also want to do it such that no excessive moves are made in the future, so that you do not achieve it by making large input moves; that is handled through the input weighting: you put a weight on the rate of change of the input, where Δu is the difference between two subsequent inputs. (I think this term needs to be modified now, because there is no input blocking anymore, so I will change this one term.) What are the constraints? Of course the model: every time you give a guess, the optimizer has to compute predictions using this equation. And bounds on the outputs: you can bound the future predicted outputs, which is something completely different from what other controllers can do. Then of course input bounds, input rate bounds; everything can go in, since this is an optimization problem. I have given you one way of formulating the optimization problem, as a quadratic one. Somebody might ask: why the two-norm squared here, why not the one-norm, why not the infinity-norm? You can use the infinity-norm, you can use the one-norm, you can use all kinds of objectives; in fact, people also use an MPC objective function that is profit maximization: you can pose a very open problem where the optimizer decides the future inputs such that the profit is maximized. Conventionally, of course, you do this quadratic optimization. There are some more important things which come up in MPC which are not there in the other approaches, and I want to highlight this. Sometimes you do not want to control an output precisely at a set point; you do not mind if it fluctuates within a bound. For example, some concentration or some temperature in a reactor may be very important for productivity, but the level inside the reactor need not be exactly at, say, 7.2 meters; it can float within a band. So I can tell the controller: do not control the level at a set point; as long as it is within the bounds, it is fine with me. This is called a zone control variable, and this is something different about MPC: you need not give a set point for a particular output, you can just say maintain it within a bound, which is more practical in many situations. You are giving the controller the freedom not to take certain outputs exactly to certain values but to allow them to float within bounds. All these things are possible. So the controller specification is transparent and straightforward, close to how you actually think. If you ask an operator, or even a control engineer, to translate the controller requirements into a frequency-domain design criterion, as most of the popular classical control techniques require, it is very difficult; that is not the case here. What is the bound on the input? You know it. What is the rate at which you can change it? You know it. What is the physical bound on a particular output? You know it. All these things can be very transparently converted into a controller specification. It can also be flexible: if a frequency-domain controller has to be redesigned, you need an expert who understands the frequency domain to go back and redo the calculations; here it is an optimization problem which is solved at every sampling instant.
Because the problem is solved afresh at every instant, the formulation is flexible in another way too: one set of constraints today can be changed tomorrow. It is hard to think of anything more flexible than this predictive control formulation. Now, how do you compute the terminal weight W∞? You can show that if you obtain it by solving a Lyapunov equation, then you can guarantee certain stability properties. I am not getting into the theory; just believe me for now that W∞ comes from solving this Lyapunov equation. MPC has been implemented over the years and found to be stable, robust and working. Of course, academics are always bothered about proving properties, and there has been a lot of work on using Lyapunov stability theory to establish theoretical foundations for why it works — it works in thousands of cases, and we are now satisfying ourselves that our good old Lyapunov theory explains why. I have not given all those details; it gets a little tricky to show even nominal stability, so maybe I will upload some notes later for your reference. Then what do you do? You implement this in the moving-horizon fashion: implement only the first optimal move, throw away the rest, move forward in time, reformulate the problem and solve it again. That is the moving horizon implementation. Are there efficient ways of solving this? The raw optimization problem I described can take a lot of time. There is something called quadratic programming — if you have done a course on optimization you will probably know it — and it solves exactly this kind of problem once you recast it in quadratic-program form. The recasting is just algebra; I will show you the equations without going through every detail.
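As a sketch of the terminal-weight computation mentioned above: assuming W∞ satisfies the discrete Lyapunov equation AᵀW∞A − W∞ + Q = 0 for a Schur-stable A and a state weighting Q (the exact equation used in the notes may differ), it can be found by fixed-point iteration in plain NumPy.

```python
import numpy as np

# Illustrative stable system matrix and state weighting (assumptions).
A = np.array([[0.8, 0.1],
              [0.0, 0.5]])
Q = np.eye(2)

# Terminal weight from the discrete Lyapunov equation
#   A' W A - W + Q = 0,
# solved by the fixed-point iteration W <- A' W A + Q,
# which converges because A is Schur stable.
W = np.zeros_like(Q)
for _ in range(500):
    W = A.T @ W @ A + Q
```

In practice one would call a library routine (e.g. a discrete Lyapunov solver) rather than iterate by hand, but the fixed point is the same W∞.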
You patiently sit, rearrange all the equations into a certain form, and then the problem can be solved very efficiently. Another nice thing about predictive control is that it works for non-square systems, where the numbers of inputs and outputs are not equal. There can be more outputs than inputs, more inputs than outputs, or they can be equal — it does not matter; the same optimization formulation works for any input-output mapping. As I said, when there are more outputs than inputs you have to use zone variables for some of them: you cannot hold all the outputs at desired points. Now, what is this quadratic programming business? I will mention it briefly; the details are in the notes. A constrained optimization problem is called a quadratic program when it has this form. The predictive control problem we just discussed can be transformed into this form: U here is the vector of future input moves, H is a matrix built from Φ, Γ and C, and F is another vector. If you have this quadratic objective, subject to the inequality constraints A U ≤ b, it is a quadratic programming problem, and quadratic programs can be solved very quickly — in polynomial time, to be mathematically precise; in practice, in fractions of a second. For example, solve the same problem once with Matlab's general constrained optimizer, fmincon, and once with Matlab's quadratic programming routine, and you will see the difference.
The Matlab program is called quadprog, and quadprog is something like ten times faster than the general-purpose optimizer. For online implementation, where you must solve one optimization problem every sampling instant, you had better compute fast, so transforming the original problem into a QP is desirable. Of course, when you do your assignment, do not do this — do it the simple, raw way: write a for loop, write the prediction equations. But in a real-time implementation you would use quadratic programming. Very efficient QP codes are available, commercially and in the public domain, and they solve very large problems: the controllers I will show you might have ten thousand decision variables, and with these codes you can solve them in a few seconds. So how do you do the transformation? I will quickly show you. I am going to drop the terminal target-set term for now — it can be included; I removed it only to keep the algebra simple. Define a vector of all the future inputs stacked one below the other — this is U_f — and similarly Y_f, all the future predictions stacked one below the other. Then stack all the prediction equations and write one giant equation: Y_f equals some matrix times the initial state, plus some matrix times all the future inputs, plus some matrix times e_f. Those matrices are S_x, S_u and S_e, and you find them by writing out all the prediction equations.
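quadprog itself is a polished commercial-grade solver; purely to illustrate the problem shape min ½UᵀHU + FᵀU subject to box bounds, here is a toy projected-gradient QP solver in NumPy. The data are made up, and a real-time implementation would of course call a proper QP code.

```python
import numpy as np

def box_qp(H, f, lb, ub, iters=5000):
    """Minimize 0.5 u'Hu + f'u subject to lb <= u <= ub by
    projected gradient descent (H symmetric positive definite)."""
    step = 1.0 / np.linalg.norm(H, 2)        # safe step size: 1 / lambda_max(H)
    u = np.clip(np.zeros_like(f), lb, ub)
    for _ in range(iters):
        u = np.clip(u - step * (H @ u + f), lb, ub)
    return u

# Illustrative 2-variable problem (assumed data, not from the lecture).
H = np.array([[2.0, 0.0], [0.0, 2.0]])
f = np.array([-2.0, -6.0])
u = box_qp(H, f, lb=np.array([-1.0, -1.0]), ub=np.array([2.0, 2.0]))
# The unconstrained minimum solves H u = -f, i.e. (1, 3);
# the upper bound clips the second variable to 2.
```

The point is only the problem structure: a quadratic objective in the stacked future moves plus simple bounds, which dedicated QP solvers exploit to run in fractions of a second.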
Stack them one below the other, take U_f common, and those matrices drop out of the algebra. They turn out to be huge. The matrix S_u has row dimension equal to the number of outputs times the prediction horizon: with 5 outputs and a prediction horizon of 100 it has 500 rows, and its column dimension is the number of inputs times the control horizon, so with 5 inputs and a control horizon of 10 it is a 500-by-50 matrix. Building such matrices in MATLAB is not difficult — a 500-row matrix is child's play in MATLAB these days. This matrix is often called the dynamic matrix; if you look carefully, and if you remember impulse response coefficients from system theory, you will see that it consists of the impulse response coefficients of the system, and in the older dynamic matrix control (DMC) literature it is indeed called the dynamic matrix of the system. So basically you have written the system in terms of three things: all the future predictions packed into one vector equal one matrix times all the future inputs, plus two matrices which bring in the effects of the past state and past disturbances — all the p-step predictions clubbed into one giant matrix equation. Then you write the MPC problem in terms of these giant vectors, and with some algebra — there is a mapping between all those matrices — you convert it into the compact QP form. It just takes patience; nothing great about it. So you have transformed the problem into a quadratic program. If there were no constraints — no bounds on the inputs — this quadratic problem could even be solved analytically.
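The stacking just described can be sketched as follows; for simplicity the control horizon is taken equal to the prediction horizon p, and the disturbance term S_e is omitted. The function name and the block structure are assumptions for illustration — note the blocks of S_u are exactly the impulse response coefficients C A^(j−i) B.

```python
import numpy as np

def prediction_matrices(A, B, C, p):
    """Stack the p-step predictions y(k+1..k+p) as
         Y_f = S_x x(k) + S_u U_f,
    where y(k+j) = C A^j x(k) + sum_{i=1..j} C A^{j-i} B u(k+i-1)."""
    nx, nu = B.shape
    ny = C.shape[0]
    Sx = np.zeros((ny * p, nx))
    Su = np.zeros((ny * p, nu * p))
    Apow = np.eye(nx)
    for j in range(1, p + 1):
        Apow = Apow @ A                       # A^j
        Sx[(j-1)*ny:j*ny, :] = C @ Apow
        for i in range(1, j + 1):
            # Impulse response coefficient C A^{j-i} B: the "dynamic matrix" blocks.
            blk = C @ np.linalg.matrix_power(A, j - i) @ B
            Su[(j-1)*ny:j*ny, (i-1)*nu:i*nu] = blk
    return Sx, Su
```

With 5 outputs, 5 inputs, p = 100 and a control horizon of 10 (i.e. fewer input columns than shown here), S_u would be the 500-by-50 matrix mentioned above.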
How do you solve the unconstrained problem? The necessary condition for optimality is that the gradient equals zero, and if you set the gradient to zero for this particular problem, the solution turns out to be this. Now, if you look at the F vector in terms of x̂(k|k−1), e_f(k) and so on, you can rearrange this unconstrained solution, with suitable algebra, into this form. Why am I showing you this? Because I want to point out that MPC is actually a state feedback controller. See here: u(k) is −G_x times the state — it is a form of state feedback control law. Unconstrained MPC turns out to be a state feedback controller in closed form; constrained MPC is also a state feedback controller, but not in closed form. With the unconstrained solution you do not have to solve an optimization problem every time: you compute the matrices G_u and G_x once and just multiply to get the solution. You will not really use unconstrained MPC in practice; this is only to give you the insight that MPC is a particular state feedback controller. In reality you will use MPC with constraints — at least the input bounds are always there. What is the difference from LQG? The horizon: this is a finite-horizon problem whereas LQG is an infinite-horizon problem, and here you can impose constraints on Δu, which gives you a very nice handle.
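The state-feedback claim can be checked numerically: with no bounds, setting the gradient to zero for the regulation problem gives U_f = −(SᵤᵀWy Sᵤ + Wu)⁻¹ SᵤᵀWy Sₓ x(k), so the first move is a fixed gain times the current state. A self-contained sketch with an illustrative two-state system (all numbers assumed):

```python
import numpy as np

# Tiny illustrative system (assumed numbers).
A = np.array([[0.9, 0.2], [0.0, 0.7]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
p = 5

# Stack predictions Y_f = Sx x(k) + Su U_f over the horizon.
Sx = np.vstack([C @ np.linalg.matrix_power(A, j) for j in range(1, p + 1)])
Su = np.zeros((p, p))
for j in range(1, p + 1):
    for i in range(1, j + 1):
        Su[j-1, i-1] = (C @ np.linalg.matrix_power(A, j-i) @ B)[0, 0]

Wy, Wu = np.eye(p), 0.1 * np.eye(p)            # output / input weights
# Unconstrained minimizer of ||Y_f||^2_Wy + ||U_f||^2_Wu (regulation to zero):
Gu = np.linalg.solve(Su.T @ Wy @ Su + Wu, Su.T @ Wy @ Sx)
Gx = Gu[0:1, :]                                # gain for the first move only
x = np.array([1.0, -0.5])
u0 = -Gx @ x                                   # u(k) = -Gx x(k): state feedback
```

The gain Gx depends only on the model and the weights, so it can be computed once offline — exactly the point that unconstrained MPC is a fixed state feedback law.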
Frankly, for a scheme which is unconstrained I do not see much advantage over LQG — I would just implement LQG; why go for a finite horizon? Unconstrained MPC is like a finite-horizon LQG combined with the moving-horizon idea; LQG by itself has no moving-horizon idea. So the basic message is that the original problem can be recast as a QP and then solved very efficiently. Can one do it differently? Of course. I showed you one way, using the innovation bias approach; you can also do state augmentation and formulate MPC that way. MPC has been a very rich area — almost 30 years of research — and there are many ways of doing it; the one I showed is a matter of personal preference. I did it using a closed-loop observer, but originally it was all done using open-loop observers, which is why originally these methods could be used only for open-loop stable systems. Now you do not have that restriction. I did it using a Kalman predictor, but note that this is not a restriction either — it can be done with any observer form. So there are several tuning parameters: the set point filter trajectory, the robustness filter (the innovation bias filter), the control horizon, the prediction horizon, and the Δu weights. All of these are tuning knobs, and their effect on performance is very transparent. Typically the prediction horizon is chosen between 60 and 100 and the control horizon between 5 and 10 — numbers that come from industrial implementations. And you can give zones instead of set points. Just to show you one example, here is a problem posed by the Shell refinery.
I want to control three outputs. There is some heavy oil being separated into lighter and heavier products — one of the typical operations in refining. The feed comes in at the bottom and steam is injected at the bottom. I can manipulate the top draw flow rate, the side draw flow rate, and the heat input; u3 is the bottoms reflux duty, which is nothing but the heat input to this heat exchanger. There are two disturbances: part of the liquid is taken out, sub-cooled and returned, and the same is done at a second point; since the cooling fluid for these comes from elsewhere, these two streams act as disturbances. If you do not follow the physics, do not worry. As a control engineer I have three controlled variables: the purity of the top product, the purity of the side product, and the temperature at one point in the column. Looking at the black box: three outputs — two product compositions and one temperature, all measured — and three manipulated inputs — the top draw, the side draw and the heat input — plus two disturbances. This system has large time delays and very heavy interaction; it is a very difficult system to control. It was posed by Shell as a challenge problem for people working in control: they give the model, which you can convert into a state-space model for simulation, and they have also specified the input constraints and input rate constraints as part of the problem description.
They say the inputs should not cross limits of ±0.5, the disturbances will not cross ±0.5, and any single control move should not exceed 0.05. Designing an LQG controller — or a PID controller — that guarantees this is not possible; at best you enforce it as an ad hoc measure: if the computed move is too large, clip it and discard the rest. There are also constraints on the outputs: y3 need not be controlled at a set point, it only has to stay above −0.5 (everything is given in scaled variables, so we do not know the physical values), and y1 should stay between ±0.5. So you have to operate the control system under these constraints. I developed an MPC controller for this system with a prediction horizon of 40 sampling instants and a control horizon of 5. For the model I did exactly what you have done in your course: I treated this as the plant, injected input perturbations, and used the System Identification Toolbox to develop a state-space model. So the plant is a black box; I generate data from it, build a prediction model, and use that model for control. I am going to compare the performance with three PID controllers, and what I want to show you is that three well-tuned PID controllers cannot manage this system as well as a model predictive controller can. MPC is one multivariable controller; three PID loops are like three drivers in one car who do not know about each other. You can see here: the black line is the PID controller, and I have given a set point change here.
The MPC controller — the blue line — settles very quickly, within about 5 hours, while the PID controller takes ages, almost 12 to 15 hours. The same happens when I change these two set points: again the blue MPC response settles within 5 hours while PID takes ages; and the same is true when I reverse the change. MPC typically takes you from one set point to another very quickly because it is a multivariable controller: you do not have to do anything special to handle multivariable interactions — the model itself is multivariable, it knows the interactions, and the planning of all the inputs is done simultaneously. The input profiles of MPC and PID are completely different. What about the regulatory case? I have given step changes in the disturbances — the problem description also specifies which disturbances to include — and you get clearly improved disturbance rejection with MPC. I am also showing that it gives an almost completely decoupled response when I change one set point. Earlier I had switched off the white-noise term w(k) to show clean behavior; these simulations now include unmeasured disturbances and measurement noise. When I change this set point, the other two outputs stay almost perfectly controlled except for a small blip — a decoupled response, as if the other loops did not exist. In fact there are no loops: there is one controller looking at all three outputs simultaneously. And you can do even better.
On disturbance rejection, and everything else, what I have told you is just the basics; there is a lot more literature on improving MPC performance and robustness. As control engineers we should also bother about nominal stability. One approach is to formulate a so-called infinite-horizon problem and impose a terminal state constraint x(k+p) = 0; then you can establish stability via Lyapunov arguments — I will give references towards the end if you want to pursue that. Another approach introduces contraction constraints into the MPC formulation. What has been shown is that you can construct a Lyapunov function directly from the MPC objective function, so stability can be proved using the objective itself; I am not going to get into that. See the two 1993 papers by Muske and Rawlings — one in the IEEE Transactions on Automatic Control and one in the AIChE Journal; both are seminal, and I have given the references at the end. These papers sorted out most of the theoretical issues associated with MPC and put it into the state-space, LQG framework. The basic idea: if you impose the constraint x(k+p) = 0 — bring the system to the zero state after some time — and a feasible solution exists, then Lyapunov stability can be proved. Then there are questions of how to bring robustness into the formulation, and so on. There are already many commercial products.
I am hoping Dr. Jagish will talk about one such product; I am trying to organize that lecture after the exams are over, and those of you who are here should attend. There is a survey paper — I have uploaded it on Moodle if you are interested. Many companies implement these controllers, including in India, and they want people trained in predictive control, system identification and state estimation; there is a dearth of people who know this. So these controllers are very much being implemented here, not just globally. The survey lists where MPC had been applied by 2003: refining, petrochemicals, pulp and paper, air and gas, utilities, mining, metallurgy, food processing, and aerospace and defense — and I am sure the numbers have grown enormously since. This is changing with the speed of computing hardware: what needed a big desktop ten or fifteen years back you can probably do now on your mobile, so things have moved very fast. MPC has been used in automotive applications and robotics, and the latest thing I heard of was Google hiring people with a predictive control background: you can build prediction models for resource allocation. You have to allocate computing resources to service customer demand; the demands are forecasted disturbances, so you develop a time-series model, make predictions, and then do the resource allocation. Ultimately, predictive control is not only for process plants; it is for anything where you can develop a model, forecast over a future horizon, and then allocate — and, in the moving-horizon spirit, you commit your move only over a short interval.
For example, you take a call for the next 10 minutes and reconsider the allocation after 10 minutes, while your prediction horizon might be 2 or 3 hours into the future. You can develop the model on the fly: data keeps coming, and with time-series approaches you can adapt the model online and keep predicting. What is the largest application of MPC — how many outputs controlled and inputs manipulated? It is in one of the refineries in Canada: 603 outputs and 283 inputs, a huge controller. My cousin happened to work on this controller — he did his B.Tech at IIT Bombay and works in a control company in Canada — and he told me the model for this plant is like a book: to look at the transfer functions you have to scan a book, because it is a 603-by-283 model of the DMC type. Even inspecting the step responses is a job in itself; his work was to find which part of the model had gone bad, which is pretty tough. The largest Honeywell application implemented by 2003 had 25 outputs and 25 inputs, and controllers with 30 to 40 outputs and around 20 inputs are common; vendors now even offer MPC as off-the-shelf modules. The key thing is the model: if you can develop a prediction model, you can get a multivariable controller going. And this is 2003 data; in 9 years I am sure it has grown further, but this is a closely guarded technology — not much appears in the open literature, because these controllers cost crores. They are not cheap to implement or maintain, and you need specialists who have done an advanced process control course.
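The resource-allocation use of the moving-horizon idea can be sketched very simply: fit a time-series model to demand data, forecast over the horizon, plan an allocation, but commit only the first interval. The AR(1) model, the safety margin, and the data below are all assumptions for illustration.

```python
import numpy as np

def ar1_forecast(series, horizon):
    """Fit y[t] = a*y[t-1] + b by least squares, then forecast
    `horizon` steps ahead (a minimal stand-in for a real
    time-series model)."""
    y_prev, y_next = series[:-1], series[1:]
    X = np.column_stack([y_prev, np.ones_like(y_prev)])
    a, b = np.linalg.lstsq(X, y_next, rcond=None)[0]
    preds, y = [], series[-1]
    for _ in range(horizon):
        y = a * y + b
        preds.append(y)
    return np.array(preds)

# Forecast demand over a horizon, allocate with a safety margin,
# but commit resources only for the first interval (moving horizon).
demand = np.array([10.0, 11.0, 12.1, 13.3, 14.6])
plan = ar1_forecast(demand, horizon=6) * 1.1   # 10% margin (assumption)
commit_now = plan[0]
```

At the next interval, new demand data arrive, the model is refit, and the whole plan is recomputed — exactly the receding-horizon loop of MPC, applied to allocation instead of valves.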
The high cost and the need for a specialist are among the criticisms of this technology: a PID controller does not need a specialist to implement, but that is not the case with MPC, so it remains a limitation — though it also keeps us in business, because you do need a specialist to implement this. And this figure speaks for itself: before advanced process control — meaning MPC — was implemented, these two controlled outputs were all over the place; after the MPC controller went in, the controlled output just hugs the set point. This is industrial data; do not worry about the details — visually, the figure communicates what MPC can do in a real industrial plant. Now, there are many extensions. We developed everything using linear models, and linear models have limitations; you can do all of this with nonlinear models too. A linear model works when the plant operates around a point; for something continuously in transition, like an aircraft, one linear model cannot describe the dynamics, so you need modifications — and there are several. I used state-space models to explain the ideas, but you can use a nonlinear first-principles model for MPC, and this is not new: there are already at least seven or eight products in the market which sell MPC based on first-principles models. Developing nonlinear models is a research problem; recently our group had funding from Honeywell to develop nonlinear identified, black-box models for some plants, so that you can use data-driven models — neural networks, support vector machines, all kinds.
There is a lot of research on how to develop control-relevant, MPC-relevant models; I am just mentioning this without going too deep. In general you can have a model in which the current state is some nonlinear function of past states and past inputs; you can have nonlinear ARX models, nonlinear BJ models, all kinds. The MPC formulation stays the same — an optimization problem subject to constraints on inputs, outputs and the model equations — but it becomes a nonlinear optimization problem, much more complex, and solving it in real time is a big challenge. So if you know your mathematics, you are in business. Here is an example of such a plant; the model equations are available — if you write to them they will give you a simulator — and I have included the case study of the controller which one of my MPC students implemented. There are six outputs and ten inputs to be controlled simultaneously, and the problem is, for example, to move from one product grade to another. There is a product called G — the company has not told us what it is — and they have given all the constraints: how the outputs should be bounded, how the inputs should be bounded, everything. So it is defined like a benchmark problem: if you have a new way of doing MPC, implement it on this and show that it works. We take the plant from a 50% product split of G to a 10% product split — a major transition from one operating condition to another — and we are able to manage it with the algorithm my student developed. You are going from one product purity and product rate to another.
This is done by simultaneously moving all the set points to the new operating point, using all ten inputs simultaneously, subject to all the rate constraints, with some time-varying nonlinear models we had developed. So you can use MPC with all kinds of models — data-driven models, mixtures of mechanistic and data-driven models, purely mechanistic models — depending on the level of confidence you have in your model. What I have described is just the tip of the iceberg. Let me show you visually what difference it makes. This is the experimental setup some of you are simulating right now in the assignment: two heat inputs and one cold-water flow rate are the manipulated inputs; I mix hot and cold water, and I want to control the temperatures of the two tanks and the level — three controlled outputs — with two disturbances acting. Just see what happens if I put in no controller: in open loop the temperature drifts by almost six degrees in one tank and two degrees in the other, and the level drifts by six centimeters — in a 15-centimeter tank, a six-centimeter drift is huge. With MPC I can hold everything within about ±0.5. This is a nonlinear MPC implementation, actually running on the laboratory setup — again part of a PhD project developing nonlinear time-series models and showing they can work on the real rig. These are the inputs and outputs. Nonlinear model predictive control products have been on the market for the last 10 to 15 years, and most of the major companies in linear MPC are also into nonlinear MPC.
These figures are again from the 2003 survey, and no newer survey has appeared; I am sure there is much more now — the actual nonlinear MPC implementations listed then have grown exponentially. So this is a very flexible control scheme, one of the most potent, and a major commercial success; no other multivariable controller, linear or nonlinear, has spread so much. In all the control journals there are always many papers on nonlinear model predictive control, there are special workshops, and a number of books have come out. The main point: if you know this technology, you are in for a very good job, and there are many current directions to work on — modeling unmeasured disturbances, robustness, fault diagnosis. People now use MPC for all kinds of things, for example production planning to deal with market conditions: you plan production over, say, the next two-month horizon. Production planning was conventionally never a control engineer's job, but with the moving-horizon idea and a prediction model you are in business; the decisions are what to produce, where, and in which time slots — the notion of manipulated inputs changes to how much you produce, for which period, and where. There are also huge opportunities for those who like embedded control — how to put an MPC on a chip, how to put a state estimator on a chip; these areas are moving very fast. And for those who like mathematics: how do you make a nonlinear optimization program that solves quickly enough online? Each of these has helped me define PhD problems — somebody has worked on fast MPC, somebody on disturbance modeling.
I am just listing my PhD students' problems here, and I have given a lot of references: there are excellent books which give you exposure to MPC and to linear quadratic optimal control, and some nice articles which can help you get going if you want to work in this direction. With that, let me close these lectures. Very frankly, I have taught this course four or five times, but with this batch I really enjoyed teaching — you guys and girls kept asking questions, and it helped me keep changing my notes; particular thanks to Venkatesh and Saurav, who kept bugging me all the time. It is good to have a class that is so responsive; I had fun, and I hope you also learned something. The main thing: do the assignment — it matters more than the exam problems, which will not go beyond two-by-two matrices (at most I will give you some three-by-three matrices, because that is all one can do in an exam). You will learn only when you actually dirty your hands programming; some of those programs you have never developed before, which is why I put up my programs for your reference — they are a little complex. The other thing: when you are developing a control scheme, in simulation or in reality, never attempt a grand integration in one shot. Divide and rule: take one component and test it separately, then take components one and two together, and so on. First develop the observer and test it in open loop, with no controller; then develop the controller and test it with perfect state feedback, no observer; then test it on the linear plant simulation; if that works, go to the observer-based controller on the linear plant simulation. Relax the assumptions one by one. Even now, after working in this area for so many years, whenever I start something new I start this way — there is no way you can test your program by one grand integration.
Take each component, test it separately, then integrate — exactly what you would do in a hardware project, you do here. Whenever you develop software, test each component separately and then slowly integrate it into the bigger problem; then you know where you are going wrong. Otherwise it becomes very difficult — you cannot just look at the notes, write one big program and expect it to work; it never works. Now, just to wind up, this is my perspective on advanced process control. Most control books and courses start by assuming that a model already exists; I do not think that is correct. I think you should start from data, and then go from data to model to observer to controller. Even though a lot of the MPC development here looks like it is based on a linearized first-principles model, you can do it with an identified model: I uploaded notes yesterday on implementing LQG using an identified model, and the same would hold for MPC using an identified model — maybe I will add that bit and upload it. With that you have a complete viewpoint, from data to control, and if you go somewhere and happen to implement this, you should be in business. So thanks for the nice interaction, and I hope we will meet again after the exams for the lecture by Dr. Jagish on real-time implementations of MPC; it depends on his convenience, and it could be on a Saturday, so I will ask him and organize it. It will be in the department, and we are not going to record it.