All right, hello folks. Welcome to this NPTEL course on nonlinear adaptive control. I am Srikanth Sukumar from Systems and Control, IIT Bombay. The image in the background is representative of what robotics and control can achieve: it shows a rover on Mars, which is in some sense the culmination of robotics, controls, networks, sensors, actuators and so on. You will continue to see this background image in all our lectures. Let us move on to the actual content of the course. This is the first week of lectures, and today is lecture number one. All right. We assume some basic background for this course. The first is that all of you have some basic knowledge of state space representations in control. In our department this is typically covered in a course called Systems Theory, but it is also covered in most second-level controls courses. A first-level controls course is usually frequency domain: Laplace transforms, root locus and so on. In a second-level controls course you typically study state space representations and notions for linear systems such as controllability and observability. The second thing we require is knowledge of ODEs, that is, ordinary differential equations, and basic solution techniques. This includes things like variation-of-parameters solutions and existence and uniqueness of solutions. Those are things you are definitely required to know. Then of course we expect you to know some basic multivariable calculus, things like limits and continuity, and some matrix algebra. A basic engineering mathematics course should cover all of this material. All right.
With this background in mind, let us start with the actual material. To begin, I want to discuss what a block diagram for an adaptive control system would look like. A lot of you are used to seeing block diagrams; they are integral to most control courses, so I want to present adaptive control in the same block-diagram framework. Adaptive control, as mentioned here, is a rather specialized subpart of nonlinear control. The need for adaptation usually arises because many parameters of our systems, typical nonlinear systems, are unknown. At least in this course, our examples come from mechanical and aeromechanical systems like quadrotors, robots, satellites and so on, but of course the ideas are applicable everywhere. The initial advances in adaptive control were made in electrical engineering; control itself was more popular in electrical engineering in the 70s, and in mathematics, through optimal control, significantly before that. But in more recent applications it is the aeromechanical systems community that has actually used adaptive control on real systems. Like I said, the reason for this is that adaptive control helps you deal with uncertainties in the system, and uncertainties are probably the most common feature of any system. So you might have a model, a state space model for the plant just like you see here, which in general has a nonlinear structure: the dynamics are a summation over lambda i times f i, and the output is a summation over mu j times g j, where each f i is a function of possibly the state, time and the control, and similarly each g j is a function of the state, time and the control.
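The plant structure just described can be written out as follows (a sketch in my own notation, where x is the state, u the control, y the output, and the index limits p and q are simply the number of terms in each summation):

```latex
\dot{x} \;=\; \sum_{i=1}^{p} \lambda_i\, f_i(x, t, u),
\qquad
y \;=\; \sum_{j=1}^{q} \mu_j\, g_j(x, t, u)
```

The functions f_i and g_j are known nonlinearities; only the constant coefficients lambda i and mu j are unknown.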
The additional complexity in the systems we are interested in is that these lambda i's and mu j's are unknown. These are unknown parameters, and there are lots and lots of examples of this. For example, if you consider a robot, then the damping coefficients, masses and inertias can all be unknown quantities. If you consider, say, a quadrotor system, you use aerodynamic lift to fly it, so there are parameters like the lift coefficient and the drag coefficient, and on top of this the inertias and masses may be unknown. So there are many, many unknowns that are very common in such systems. What we do is assume that the system is parameterized in such a way that it is in fact linear in the parameters. It may of course be nonlinear in the dynamics, but the parameters appear in a linear manner. To be honest, this is not such a difficult assumption to justify, and you will see some examples in the future of how to deal with parameters that appear nonlinearly by doing something like an over-parameterization. For the purpose of this course we assume that the parameters appear linearly and that the parameters are constant. When parameters appear nonlinearly, a lot can be done in terms of over-parameterization to alleviate the issue, and for time-varying parameters a lot of work has also been done more recently.
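As a small hypothetical illustration of the over-parameterization idea just mentioned (this specific example is my own, not from the lecture): if a product of two unknown parameters multiplies a known nonlinearity, the product can simply be defined as a new parameter, restoring linearity in the (enlarged) parameter set:

```latex
\dot{x} = \lambda_1 \lambda_2 \sin x + u
\quad\xrightarrow{\;\lambda_3 \,:=\, \lambda_1 \lambda_2\;}\quad
\dot{x} = \lambda_3 \sin x + u
```

The price is that we now estimate more parameters than the system physically has.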
We are not going to cover time-varying parameters in this course, because at the curriculum level they are still something of a research topic; there is a lot of ongoing research in adaptive control, and time-varying parameters is one such area. There has already been a lot of work, but we will not have time to cover that material here. All right. So we have this plant block that you can see here, which now has the additional complexity that the lambda i's and mu j's are unknown. Then we have a controller, and the question is: what should the structure of the controller be? The controller cannot depend on anything other than the output, since the output is the only thing that is available; obviously the output is the one object the controller can depend on. But the controller also depends on what we call estimates, these lambda i hats and mu j hats, which are estimates of the true parameters lambda i and mu j. What the adaptive controller does is take the measurements and feed them into, for now, some sort of black box, which in turn creates estimates of these unknown quantities. That is the additional piece: this adaptation block is the extra block in a typical adaptive control setting that you will not find in standard nonlinear control. We denote all estimated parameters using hats; the quantities with hats on them denote the estimates of the true parameters. These estimates are in turn fed into our original controller, and the controller uses not just the output measurements but also the lambda i hats and mu j hats, that is, the estimates of the true parameter values.
The controller also takes in a reference signal; this is always the case. The outputs of the plant are required to follow some kind of reference, so the reference signal is also an input. With the reference, the lambda i hats and mu j hats, and the measurements, the controller generates a control signal, which is sent to the plant through an actuator; then of course you measure the outputs again and the cycle goes on and on. This is a typical feedback structure; the only difference, like we said before, is the adaptation block. This adaptation block is what is critical for all adaptive control: without the adaptation block, there is no adaptive control. You hear a lot of folks who do gain tuning: they take a typical PID controller, take the P gain, I gain and D gain, and tune these gains based on some kind of performance metric, so that the gains become time-varying, and they tend to call this adaptive control. But this is not adaptive control. Adaptive control is primarily for the case when you have unknown parameters, or uncertainties, in the plant. So, summarizing: in a typical nonlinear controller there are states x, outputs y, an input u and a reference signal r, and the objective is usually for y, your output, to track the reference r. There are examples such as trajectory tracking in robotics, so this is usually called the tracking problem. Always remember this word, because we will use it in this course again and again.
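To make the block diagram concrete, here is a minimal sketch (my own illustrative example, not from the lecture) of the full loop for the scalar plant x_dot = a*x + u, where the constant a is unknown. The controller uses the certainty-equivalence idea from the diagram: it substitutes the estimate a_hat for the true a, and a gradient adaptation law updates a_hat from the tracking error e = y - r. All gains and the plant itself are assumptions chosen for illustration; the adaptation law here comes from a standard Lyapunov argument for this plant.

```python
def simulate(a=1.0, k=2.0, gamma=2.0, r=1.0, dt=1e-3, T=20.0):
    """Adaptive tracking for the scalar plant x_dot = a*x + u (a unknown)."""
    x, a_hat = 0.0, 0.0               # plant state and parameter estimate
    for _ in range(int(T / dt)):
        e = x - r                     # tracking error (output y = x here)
        u = -a_hat * x - k * e        # certainty-equivalence controller
        x += dt * (a * x + u)         # plant, integrated with forward Euler
        a_hat += dt * gamma * x * e   # adaptation law: a_hat_dot = gamma*x*e
    return x - r, a_hat

e_final, a_hat_final = simulate()
print(e_final, a_hat_final)   # error driven near zero; estimate approaches a
```

With a constant reference the regressor stays nonzero, so in this particular example the estimate a_hat also converges to the true a; in general adaptive control only guarantees tracking, not parameter convergence.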
This is called the tracking problem because the outputs, or the states, or whatever we pick, are tracking the reference trajectory. If there is no reference trajectory and you want all your states or all your outputs to go to zero, that is, you want y or x to go to zero as t goes to infinity, this is called the stabilization problem. But what we are typically interested in is the tracking problem. Following a set point temperature in an HVAC, that is, air conditioning, system is a tracking problem, and so is achieving a particular vehicle speed in automatic cruise control, because you want to maintain a particular speed irrespective of the road conditions: whether the road inclines uphill or downhill does not matter, you still want to remain at a particular speed. That is the idea of a tracking problem. Now, as is evident, we do not actually use all the states in order to do tracking; not all the states are in fact required to track a signal. This is very obvious in almost any problem. For example, think about your air conditioning system, just the AC in your home. What is it that you want to track? You want the room to track a set point; suppose you want your room to be at 25 degrees centigrade. That is what you want to track. Now, remember that the air conditioner in your home has many states. If I actually try to model an AC, it has many states: it has electrical elements.
So there are states like, for example, voltage and current; then there is a compressor, so there are pressures, maybe input and output pressures; then of course there are temperatures, not just the output temperature into the room, but temperatures at different points in the air conditioning system. So there are many, many states. I have given you four or five states already, and if you do more and more careful modeling, you will of course have more and more states. But the fact is we are not concerned with all the states tracking a particular set point. We as users are interested only in the output temperature tracking a set point. That is it. This is achieved by sensing the room output temperature. When we say room temperature, you can imagine that the air conditioner in your home is not measuring the temperature at every point in the room or anything fancy like that. All it does is use a thermostat at the exit of the air conditioner, and it simply uses that to measure the output temperature from the AC. Basic air conditioning systems are not really measuring the room temperature. Of course, if you wanted to make a very fancy air conditioning system that actually does what it promises, the AC would have to come with many temperature sensors mounted in all corners of the room, and then perhaps take their average to compute the room temperature and use that to control the air conditioner better. In that case you would probably get a more equitable distribution of temperature.
But usually this is very difficult to do: it would require not just multiple temperature sensors but also multiple outlets for the AC. These things are done in large HVAC systems, but you can imagine this is too complicated, and as engineers we of course make approximations when such complicated setups are not possible. Therefore the AC is a single unit: it contains a temperature sensor just at the outlet, measures that temperature, and assumes that it is the temperature of the room. And this temperature y is only one of the states of the air conditioning system. Therefore, like we said, we almost never need to track all the states; this should be rather obvious to you by now. So, as I stated already, we have the standard assumptions that the lambda i's and mu j's appear linearly: even though the system is nonlinear, the parameters appear linearly. Further, we have that the parameters are constant. The linearity is evident in equations 2.1 and 2.2, which simply rewrite what we had in the block diagram; nothing new there. The question is: can we still make the output y track the reference r(t) by designing a control? Here, of course, the control is allowed to also depend on the estimates of the unknown parameters; this is the important part in adaptive control. Now, a lot of you may also have seen, in some form, robust control, which is another way of dealing with uncertainties. Uncertainties are dealt with in two different ways: one is robust control, the other is adaptive control, and there are a bunch of differences. In robust control, there are no estimates.
The first important thing to remember is that in robust control, there are no estimates of any parameters. If there is an uncertainty, it is classified as a structured or unstructured uncertainty and dealt with as is. The second is that, because there are no estimates, you should not expect true tracking performance: only bounded performance is guaranteed. What does that mean? It means you will never exactly track a signal; you will only remain in a bounded neighborhood of it. This is a rather important point. If I draw an xy plane and say I have some kind of robot for which I have a reference signal like this, a typical robust controller will only guarantee that you remain somewhere near it. As you can see, I have drawn it using a thick line to indicate that you will remain in a neighborhood of the reference, given whatever structured and unstructured uncertainties there are in the system. On the other hand, with an adaptive controller, suppose you start with some offset; you are guaranteed to converge to the true trajectory. Why is this? It is not free; nothing is free. It is because we construct estimates of the lambda i's and mu j's, and these estimates are used in the controller. Robust control does not do this: it does not design the control based on estimates. It says: we are given a controller, and there are these uncertainties; what happens to the system performance in the presence of uncertainties with the nominal controller? In adaptive control, you do not just use the nominal controller: you design an estimator for the unknown parameters.
So you are adding additional dynamics to the controller, additional states to the system: these lambda i hat and mu j hat estimates are in fact additional states, so obviously there is more computational burden. The question is: if you can take on this computational burden, then you can construct these estimates and build a controller that depends on them, so that you can drive the measurements, the outputs, to the desired values. This is not done in robust control. Robust control is a simpler setup where you say: I do not have access to that much computational power, so I will simply take a nominal controller and analyze it. I would call robust control an analysis method, because it takes a nominal controller, whose design has nothing special in it and is based on the absence of uncertainty, and then simply analyzes what kind of boundedness, what kind of bounds, you will get. A typical robust control result will actually give you some estimate of this width of the neighborhood. The other thing is that a robust controller typically assumes knowledge of parameter bounds: without parameter bounds, there is no way to give such a bound here. Therefore robust control assumes knowledge of parameter bounds, while adaptive control does not. These are the key differences between robust control and adaptive control, and none of it comes for free. Robust control is doing something I would call analysis, whereas adaptive control is actually doing control design, because you are adding some elements: you are adding an estimator, and you are adding a controller, probably more complex, which relies on this estimator. This is what we need to remember. That is the end of what we want to talk about in this lecture.
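A small numerical sketch of this bounded-versus-exact distinction (my own illustrative example, not from the lecture): for the scalar plant x_dot = a*x + u with a unknown, a fixed nominal controller u = -k*e designed as if a were zero leaves a nonzero steady-state offset, roughly a*r/(k - a) for a constant reference r (assuming k > a so the nominal loop is stable), while adding an estimate a_hat and the term -a_hat*x drives the error to zero. Plant values and gains are assumptions chosen for illustration.

```python
def run(adaptive, a=1.0, k=2.0, gamma=2.0, r=1.0, dt=1e-3, T=20.0):
    """Scalar plant x_dot = a*x + u; the true 'a' is hidden from the controller."""
    x, a_hat = 0.0, 0.0
    for _ in range(int(T / dt)):
        e = x - r
        # nominal (robust-style) controller ignores a; adaptive one cancels a_hat*x
        u = -k * e - (a_hat * x if adaptive else 0.0)
        x += dt * (a * x + u)                 # plant, forward Euler
        if adaptive:
            a_hat += dt * gamma * x * e       # gradient adaptation law
    return x - r

e_nominal = run(adaptive=False)   # bounded offset, approx a*r/(k-a) = 1.0 here
e_adaptive = run(adaptive=True)   # tracking error driven to (near) zero
print(e_nominal, e_adaptive)
```

The nominal loop is perfectly stable, so a robust analysis could certify the offset is bounded; only the adaptive loop, which spends extra computation on the estimator state, removes the offset entirely.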
In the next lecture, we will go into typical myths and temptations, and we will jump right into the mathematics of things. This is of course a very mathematically oriented course, but soon enough you will start seeing connections with real systems, and the hope is that you can eventually take a real system and use what you learned in this course to do an adaptive design. So, what did we talk about today? We saw that adaptive control adds an additional block, an estimator, to the system, where we estimate unknown parameters. There are standard assumptions: the parameters are expected to appear linearly in the plant dynamics, and for the purpose of this course the parameters are expected to be constant. Then a control is designed which of course depends on these parameter estimates. As opposed to robust control, you get exact tracking, while in robust control you get only approximate tracking, that is, you converge to a neighborhood of your reference signal. And you are not required to know bounds on the parameters: you do not need any uncertainty bounds in adaptive control. All right, thank you very much for listening.