Now we will start with the linear systems control design part. This is an important part of any mechatronic system: developing a good controller, which is the brain of the mechatronic system. We have already seen the feedback concept in the previous classes; now we will go into a little more detail about how to develop control specifically for linear systems, and give you some initial background in the domain of control design for mechatronic systems in general. So let us begin. The typical systems we encounter in the domain of mechatronics are either rigid body systems or systems with flexibility. Rigid body systems may have single or multiple rigid bodies and single or multiple degrees of freedom. Typical examples are mechanisms, robotic systems, belt-pulley drives, and so on. They usually have a limited number of degrees of freedom, often only one or two, even though the number of bodies may be many. If you take the example of a four-bar mechanism, it has four links but only one degree of freedom, because the links are constrained to move in a certain fashion; we have seen the kinematics of such motion in previous courses. These systems may or may not be underactuated; typically, industrial and commercial mechatronic systems are not underactuated, they are fully actuated. In contrast, systems with flexibility are by their nature infinite dimensional. For control, we make some finite dimensional approximation, for example using the assumed modes method, or a lumped approximation where the flexible elements are treated as collocated or lumped masses and springs. There are many different ways to do that.
These compliant or flexible elements typically make the system underactuated. One can handle one or two degrees of underactuation by developing some kind of nonlinear controller. Control of underactuated systems is a different issue to discuss and handle, especially in the nonlinear domain; still, there are some tools to deal with them, and we will see something of that in this class. There are also tools based on the continuous domain model, the infinite dimensional model described by partial differential equations; this area is still developing. Of course, in both cases the system will have some connection with an electromechanical actuator, a hydraulic actuator, and things like that. So these are the main kinds of systems we have. Now, an important concept that we need to understand up front, before we delve into other topics, is that of feedback and feedforward. For practical mechatronic system implementation, we need to consider the advantages of each and figure out whether one of them, or both together, would be better for a given application. We will see this in a little more detail. Let us consider the standard example we are familiar with: a simple mass moving on a horizontal platform, with the applied force f as our control input. We know the equation governing the system is f = m x_ddot, and the question, as we have seen already, is that we are at some position and want to move to some other position in a given amount of time.
Since we now know how to plan trajectories, we can plan a trajectory for that amount of time for the mass to go from the initial to the desired position, say a trapezoidal or a sinusoidal trajectory; you have a choice there. That gives the desired position x_desired at every instant as a function of time. Once x_desired is available as a trajectory, we can differentiate it twice to get x_ddot_desired. Putting x_ddot_desired into the equation, that is, multiplying m by x_ddot_desired, gives the desired force. We can apply that force to the system over time, and by the end of the trajectory, when x_ddot becomes zero again, the block will have moved to the final position, in the ideal scenario where there is no friction and no other disturbance in the system. That is one way of driving the system to the final position. Note that we are not considering feedback in this case; we discussed feedback in much more detail earlier, but in this strategy we deliberately do not use it. Now think about the problems with this approach. This is called the feedforward case: we simply give the input required to follow the already planned trajectory. If friction is present, the mass will not reach the final position; it will stop at some intermediate position depending on the level of friction we are dealing with.
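The feedforward computation just described can be sketched numerically. This is my own minimal illustration, not code from the lecture: it assumes a trapezoidal velocity profile (accelerate for T/4, cruise for T/2, decelerate for T/4) and computes F = m * x_ddot_desired at each instant.

```python
def trapezoidal_accel(t, T, x_final):
    """Desired acceleration for a trapezoidal velocity profile:
    accelerate for T/4, cruise for T/2, decelerate for T/4.
    Peak velocity v_peak = x_final / (0.75 * T), so the area under
    the velocity profile equals x_final."""
    v_peak = x_final / (0.75 * T)
    a = v_peak / (T / 4)            # constant acceleration magnitude
    if t < T / 4:
        return a
    elif t < 3 * T / 4:
        return 0.0                  # cruise phase
    elif t <= T:
        return -a                   # deceleration phase
    return 0.0

def feedforward_force(m, t, T, x_final):
    """Feedforward input F_ff = m * x_ddot_desired (no friction assumed)."""
    return m * trapezoidal_accel(t, T, x_final)

# Example: 2 kg mass, move 1 m in 2 s
F_accel = feedforward_force(2.0, 0.1, 2.0, 1.0)   # during acceleration
F_cruise = feedforward_force(2.0, 1.0, 2.0, 1.0)  # zero during cruise
```

In the ideal frictionless case, applying this force history moves the mass exactly by x_final; any unmodelled friction breaks that, which is the point of the discussion above.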
In contrast, a feedback strategy, say a PD control strategy with gains tuned so that we achieve the result in the same amount of time, would overcome the effect of friction, because we continuously monitor the error between the current position and the final desired position and design the control input F based on that error. So we have two cases: in one we use only feedback, in the other only feedforward. There are advantages and disadvantages to each; can you figure them out? Think about it, and then we will come to a strategy that combines the two. Maybe you can pause here and list them down before continuing. You will see that in the feedforward approach we apply only the input that is required; we do not give any extra input, and the planned trajectory gets executed. So the input is neither very high nor very low, it is just the required amount. The flip side is that we have no guarantee of reaching the final position, because we are not taking feedback and monitoring where we are. In the feedback case, we do take care of the error. However, at the initial position the error between the desired position and the current position is maximum, and if you want the motion to happen in a very small amount of time, you tune the gains to very high values. When high gains multiply this maximum error, the initial control input that we need to give the system is very high.
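A quick back-of-the-envelope check of this saturation problem (a sketch with made-up numbers, not from the lecture): with pure position feedback, the commanded force at t = 0 is Kp times the full initial error, which can easily exceed an assumed actuator limit.

```python
def pd_force(kp, kd, x_des, x, v):
    """PD control law F = Kp*(x_des - x) - Kd*v
    (desired velocity taken as zero for a setpoint move)."""
    return kp * (x_des - x) - kd * v

# Hypothetical numbers: 1 m initial error, gains tuned high for a fast move
kp, kd = 400.0, 40.0
F_initial = pd_force(kp, kd, x_des=1.0, x=0.0, v=0.0)  # full error at t = 0
saturates = F_initial > 50.0   # assumed 50 N actuator limit
```

Here the commanded 400 N is eight times the assumed 50 N limit, so the actuator would clip for some initial duration and the tuned time response is no longer guaranteed.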
Many times that input is beyond the actuator saturation limit, and although you have planned the motion in a certain fashion by tuning your PD control, the actuators saturate for some duration and the job is not done well: we cannot guarantee the time for which we tuned our PD gains. That is the flip side of a pure feedback strategy. So, in effect, what we need is a combination of the two strategies. We keep the force required to follow the planned trajectory as a feedforward term, and beyond that feedforward control, whatever we still want to correct is where we add our feedback control. The feedback control looks only at the error between the desired trajectory found by planning and the actual position x, and applies additional control whenever, at any time point, they do not match. The error is redefined: x_desired in the PD control is no longer the final position; instead, the planned desired trajectory at the current instant is taken as the reference for the error calculation. You will then find that the error at every point is small, only whatever is caused by disturbances, and that keeps the control input from going beyond saturation. There are many other advantages of the feedback plus feedforward control approach.
Since the feedback gains now multiply only the error between what the feedforward achieves and the planned trajectory, we do not need very high gains in practice; typically lower gain values can achieve the same level of performance. Low gains are also good with respect to other disturbances and unwanted errors: measurement resolution errors, quantization errors, number representation errors, derivative computation errors, electrical noise, and so on. These noises, these undesired effects, show up in the sensor measurements and hence in the feedback terms. If those terms are multiplied by high gains, the errors get amplified too. Since the gain requirement in feedforward plus feedback control is low, this unwanted noise is not amplified, and that helps a lot in achieving better control action. Because of the lower gains, the response to these errors will not be very high; if it were, it would disturb the control objective itself. You want to settle into the final position, but the system becomes jittery there because the errors are unnecessarily amplified by the gains and that much input goes into the actuator anyway. This is an important aspect of the feedforward plus feedback control approach.
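To see the combination in action, here is a small simulation sketch. The model, numbers, and gains are my own assumptions, not from the lecture: the same mass on a platform, with unmodelled viscous friction c*x_dot that the feedforward does not know about, tracked with a sinusoidal planned trajectory and modest PD gains.

```python
import math

def simulate(m=2.0, c=1.5, T=2.0, x_final=1.0, kp=40.0, kd=10.0, dt=1e-3):
    """Mass with unmodelled viscous friction c*xd, driven by
    F = m*xdd_des (feedforward) + Kp*(x_des - x) + Kd*(xd_des - xd)."""

    def ref(t):
        # Smooth reference x_des = x_final*(t/T - sin(w t)/(w T)), w = 2*pi/T:
        # starts and ends at rest, reaches x_final at t = T.
        t = min(t, T)
        w = 2.0 * math.pi / T
        x_d = x_final * (t / T - math.sin(w * t) / (w * T))
        xd_d = (x_final / T) * (1.0 - math.cos(w * t))
        xdd_d = (x_final * w / T) * math.sin(w * t)
        return x_d, xd_d, xdd_d

    x, xd, t = 0.0, 0.0, 0.0
    while t < T:
        x_d, xd_d, xdd_d = ref(t)
        F = m * xdd_d + kp * (x_d - x) + kd * (xd_d - xd)  # FF + FB
        xdd = (F - c * xd) / m      # friction acts, unknown to the FF term
        x += xd * dt                # explicit Euler integration
        xd += xdd * dt
        t += dt
    return x                        # final position, ideally x_final

final_combined = simulate()              # FF + FB, small final error
final_ff_only = simulate(kp=0.0, kd=0.0) # pure FF falls well short
```

With these numbers, pure feedforward stops roughly halfway because friction eats into the motion, while the combined scheme lands close to the target with gains far smaller than a pure-feedback design would need.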
So, as far as possible, you should try to use this feedforward plus feedback approach; with the lower gains, the control is also robust to disturbances that the system sees. Now, what are the cons of the feedforward approach? You will need more control computations to be performed in the microcontroller, so you will need a microcontroller with more computing power or higher speed, and you need to know the system parameters well so that you can compensate for most effects. For example, if the mass is not known exactly, say the actual mass is m + delta_m where m is the known value and delta_m is the error, then the feedforward takes care of the known m, and the error delta_m is handled by the feedback terms in the control. So the better the estimation of the system parameters, the lower the load on the feedback term, and you also need to identify the friction so that you can compensate for it in a feedforward way. This is an important consideration: in many of the controls we carry out, we will try to see whether we can apply both of these approaches together in the control design. Now, there are many approaches for rigid body systems; we will mainly consider linear control design for now, and later come to the Lagrangian formulation based approaches.
We will treat that form of the equation a little later by adding friction to it; for now we focus mainly on the linear systems approach, where we have approaches based on pole placement and classical control tools. Just remember that friction will be present in the real system, and many of the models you use may not represent the system completely in its full sense, since they omit friction: the actual system has friction, the model does not. In addition, there will be saturation in the actuators, which we must always monitor. Tuning the gains and saying "my system is working fine" is not enough; with those gains, we need to look at the control input that is required and check whether the input going to the system exceeds the actuator saturation limits. These are the main approaches we will take. We also need to see that in some cases the friction is too high and in some cases it is not; all such nonlinearities need to be handled over and above what we have learnt as classical control techniques and tools. The typical approach is: first consider the system without friction and develop the control; then implement it in a simulation of the system that includes all the nonlinearities and other effects; see whether, in the presence of those nonlinearities, it still gives the same performance, or at least acceptable performance; and retune the gains further to get the final controller. That is one approach; if it is not satisfactory, then we have to consider the nonlinearities in the control development itself.
In that case the model itself needs to have these nonlinearities incorporated, and we then use some way to compensate for them. That is the linear system design approach; for nonlinear systems we will see a Lyapunov based approach. There are many nonlinear control approaches available, but we will go through only the Lyapunov based ones, the most popular, and many other approaches make use of Lyapunov arguments for their proofs. There are two different parts to think about when we develop controllers. First are the design issues. The first design issue is stability: just as the system must be stable, the controller also should be stable. As we compute the control from one sampling time to the next, it should not run away in amplitude; the control amplitude should always remain bounded. That means the controller poles should be in the left half of the s-plane; that is our stability criterion. When we do the controller design, we need to make sure that criterion holds: the controller by itself should be a stable controller. We develop the control to make the closed loop system stable, but in addition we want the controller itself to have a stable nature. Then we talk of the performance of a controller: whatever control strategy you have chosen should yield the desired performance, with settling time or overshoot within the limits that the application requires.
Another criterion is the energy required to achieve the performance. This is a very important criterion that people often miss: the energy, or control input, required to achieve the performance needs to be monitored, to make sure we are not exceeding the saturation limit of the actuator. Then there are implementation issues. There are two ways of implementation: the analog domain and the digital domain. Here, of course, we are talking about the digital domain; analog controllers are getting phased out nowadays because we cannot change their gains once the circuit is made, and tuning the gains or changing the control algorithm is difficult. Also, in terms of cost, because microcontrollers are available so cheaply, a digital domain implementation is now comparable in cost to an analog one, and it provides additional flexibility: just change the program and your controller or control algorithm changes. That kind of flexibility is offered by digital domain controllers, so we will go for them. Another implementation issue to consider is the sampling time. The main requirement on the sampling time is that we must be able to finish the tasks we are supposed to do in the closed loop, continuously, during every sampling interval. If the tasks are not finished within the allotted sampling time, an overrun (what was called overflow here) happens, some operations may not get done, and you will find that your control strategy is not working well. So we need to choose the sampling time such that all these tasks get completed.
What are the tasks? We read the sensors, process the sensor signals in some way if needed, compute the control, and then apply that control through the actuator interfaces we have chosen. These tasks should get completed within a reasonable amount of time. That is one major requirement on the sampling time; there are other requirements, which we will discuss in more detail when we talk about signals and signal processing in microcontrollers. From here we move to the types of controllers, and then we will see some more linear system control in a little more detail. Traditionally, people mainly use PID, PD, or PI controllers, and many industries still continue to use them; they work really well in many cases. Then there are many other types. Robust controllers are robust to internal or external disturbances. Adaptive controllers adapt to the parameters as the system changes, for example when lubrication properties or other environmental properties change slowly; the system adapts to those changes by changing the gains automatically, something of that sort happens in adaptive control. There is neural network control, artificial intelligence based control, deep learning based control; these are the buzzwords these days. Fuzzy logic is used in washing machine type systems. So there are many different control algorithms, and nonlinear control by itself is a vast research area; people have proposed several controllers and control algorithms, and it is an ever-growing area mathematically. There have been a lot of developments in this control field, many mathematicians are involved in them, and not all the controllers you find in the literature are amenable to practical implementation.
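The per-sample task sequence just listed can be written as a skeleton loop. This is a generic sketch, not code from the course; the read/filter/compute/write functions are hypothetical placeholders, and the check flags an overrun whenever the four tasks do not fit within the sampling period.

```python
import time

TS = 0.01   # sampling time in seconds (hypothetical 100 Hz loop)

def control_loop(read_sensor, filter_signal, compute_control,
                 write_actuator, n_steps):
    """Runs the four per-sample tasks n_steps times and counts
    overruns, i.e. samples where the tasks exceed the period TS."""
    overruns = 0
    for _ in range(n_steps):
        t0 = time.perf_counter()
        y = read_sensor()            # 1. read the sensors
        y_f = filter_signal(y)       # 2. signal processing
        u = compute_control(y_f)     # 3. control computation
        write_actuator(u)            # 4. actuator interface
        elapsed = time.perf_counter() - t0
        if elapsed > TS:
            overruns += 1            # deadline missed this sample
        else:
            time.sleep(TS - elapsed) # wait out the rest of the period
    return overruns

# Trivial placeholder tasks complete far inside 10 ms: no overruns
missed = control_loop(lambda: 0.0, lambda y: y,
                      lambda y: -2.0 * y, lambda u: None, 5)
```

On a real microcontroller this would be driven by a timer interrupt rather than `time.sleep`, but the structure, and the deadline check, is the same.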
So one has to have the wisdom to see which controllers are relevant for practical implementation and which are not; that kind of wisdom we will slowly develop as we go through these fundamentals of mechatronic systems. Now, the main design steps you would follow for development of a controller are these. First, model the system, which we have already done, and represent that model in one of the standard forms: the differential equation form, the transfer function form, or the state space form. Then select some control strategy and do the mathematical analysis to establish that the strategy gives a stable controller. Then put effort into designing the control parameters to make sure the desired goals of the application are achieved. Then simulate, experiment, and reiterate some of these steps to make sure the final control is good. The representation of the system we have talked about already: the three forms are the differential equation form, the state space form, and the transfer function form. You should be able to convert the system from any one form to another, from differential equation to state space, state space to transfer function, transfer function back to differential equation, by the appropriate mathematical steps. In each of these forms, different tools for control are available, but we will consider the differential equation to be the most fundamental form of representation; it also gives us some physical insight into the system, and we may use that insight to incorporate interesting ideas into the control.
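As a small concrete instance of converting between these forms (my own illustration, assuming a mass-spring-damper m x'' + c x' + k x = f): choosing states x1 = x and x2 = x' gives the state space matrices directly, and the transfer function denominator is the same characteristic polynomial m s^2 + c s + k.

```python
def mass_spring_damper_ss(m, c, k):
    """State space form of m*x'' + c*x' + k*x = f,
    with states [x, x'] and input f; output y = x.
    x1' = x2
    x2' = -(k/m)*x1 - (c/m)*x2 + (1/m)*f"""
    A = [[0.0, 1.0],
         [-k / m, -c / m]]
    B = [[0.0], [1.0 / m]]
    C = [[1.0, 0.0]]
    D = [[0.0]]
    return A, B, C, D

def char_poly(m, c, k):
    """Transfer function denominator coefficients [m, c, k],
    i.e. m*s^2 + c*s + k = m * det(s*I - A)."""
    return [m, c, k]

# Example: m = 2, c = 1, k = 8
A, B, C, D = mass_spring_damper_ss(2.0, 1.0, 8.0)
```

Going the other way, reading the differential equation back off A and B, is just as direct, which is what makes the differential equation a convenient common ground between the forms.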
So although you may develop some controllers in the state space form, you can transfer them back to the differential equation form to see what their physical aspect, physical implementation, or physical insight is. These forms are mainly for linear systems. Differential equations can also be nonlinear, but we will be able to convert them into state space or transfer function form only if they are linear equations. The notions or issues in design and analysis, in addition to what we have talked about already, are more mathematical, theoretical aspects. One is stability; there is the bounded input bounded output stability notion that we have already seen in class and maybe in some previous discussions. Another important notion is controllability: by definition, a system is controllable if it can be taken from one state to any other state in a finite amount of time. Controllability and observability are two notions defined in the domain of the state space representation of the system. There are conditions for controllability based on the matrices A, B, C, D of that form; we will not get into them right now, we just need to keep at the back of our mind that the notions of controllability and observability exist. If we want to use control techniques from the state space domain, then we will need to worry about these concepts. Some of you might already have gone into controllability and observability in good depth in other classes; feel free to use them for your project, or to analyze some of the systems from that perspective. In my opinion, if you develop the design in the transfer function domain or the differential equation domain and can understand it, that is fair enough to go on.
If you want, you can design in the other domain also; it is really a personal choice, I would say. I prefer not to get into the state space domain too much; I think that is just a personal choice. But you should feel free to get into the different domains and look at things. There are standard tools available in MATLAB for analysis in the state space domain and in the transfer function domain; one can use those tools to do designs. Now we come to the important consideration of pole placement. I hope you understand this term: pole placement refers to designing the control so that the closed loop poles will be at certain locations, in the left half of the s-plane of course, since we want all the closed loop poles in the left half plane for the system to be stable. So how do we decide where the poles should be placed, given the design requirements of the application? What are the considerations? How do we make this choice of poles? Once the choice of poles in the left half plane is known, we can see where the open loop system poles currently are, and pose the control problem: if we want the closed loop poles at these chosen locations, how do we develop the control that achieves it? That can be done using root locus, or modern control, that is, state space based control techniques. For example, if the system is controllable, you can place the poles at any locations in the left half plane; that is a very strong result from modern control theory. Of course, it requires state feedback, but with state feedback you can actually design the controller such that you get the closed loop poles at the desired locations in the left half plane.
What are the criteria to think about? One is the stability criterion, which is already satisfied; the other criteria are performance and input requirement. From the performance perspective, we know that if a pole is far to the left in the left half plane, its contribution to the output dies down fast. So if all the poles are placed far to the left, the response of the system will be very fast. Ideally, we would say: I want all the poles as far away as possible; that will give me the best response, a very nice evolution of the system. However, it is not free; things come at a cost. The cost is that as you push the poles further away from the imaginary axis into the left half plane, the control input required to drive the system increases, and that is not desirable because our actuators have limits. That is why you do not want to push the poles too far away, but you still need them far enough that the performance is good. So these two things, the performance requirement and the input requirement, fight against each other. That is also why strategies that give better performance with a somewhat lower control input are accepted and used more readily in applications. From that perspective, we may choose to retain two poles relatively close to the imaginary axis and push the others a little further away; those two poles will then be the dominant poles of the system, since the contributions to the output from the poles far away die down quickly anyway.
We can then approximate the closed loop system as having only these two poles, like a second order system, and carry out the control parameter design based on the formulas we already know for the standard second order system response, which are also listed here for later use. So that is one strategy: instead of pushing all the poles far away, we retain two poles close to the imaginary axis and place all the other poles somewhat further away so that the two retained poles dominate; that is one way to think about it. You can do this pole placement, for example, using PID type controllers: introduce the controller and see where the closed loop system poles end up, and so on. One can construct the block diagram with the signal flow, put controllers at the appropriate locations, the controller may be at one place or another in the loop, then analyze the different proposals and see which one is best suited to give the desired response. Typically, you will avoid carrying out pole-zero cancellations with the controller. As we partly know, if we cancel poles, especially in the left half of the plane, the cancellation is never exact, because the actual system and its model always have some small differences here and there; the small part that is not compensated completely will have some undesired effect on the system, and it may not work well. That is why, from that perspective, we do not do such cancellations. There are a lot of techniques that you might have studied earlier, like the root locus plot; these are the classical Laplace domain control techniques.
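The standard second order formulas mentioned above let us go directly from specifications to dominant pole locations. Here is a sketch using the usual textbook relations (my own example numbers, not from the lecture): the overshoot fixes the damping ratio zeta via Mp = exp(-pi*zeta/sqrt(1 - zeta^2)), the 2% settling-time rule ts ≈ 4/(zeta*wn) fixes wn, and the dominant pole pair is -zeta*wn ± j*wn*sqrt(1 - zeta^2).

```python
import math

def dominant_poles_from_specs(overshoot_frac, settling_time):
    """Dominant second order pole pair from the peak overshoot
    (as a fraction, e.g. 0.05 for 5%) and the 2% settling time,
    using Mp = exp(-pi*zeta/sqrt(1-zeta^2)) and ts = 4/(zeta*wn)."""
    ln_mp = math.log(overshoot_frac)
    zeta = -ln_mp / math.sqrt(math.pi ** 2 + ln_mp ** 2)
    wn = 4.0 / (zeta * settling_time)
    sigma = zeta * wn                       # magnitude of the real part
    wd = wn * math.sqrt(1.0 - zeta ** 2)    # damped natural frequency
    return complex(-sigma, wd), complex(-sigma, -wd)

# Example: 5% overshoot, 2 s settling time
p1, p2 = dominant_poles_from_specs(0.05, 2.0)
# Real part is -4/ts = -2 rad/s regardless of the overshoot spec
```

Note how the two requirements show up separately: the settling time pins the real part at -4/ts, while the overshoot sets the angle of the pole pair; the remaining closed loop poles are then pushed several times further left so this pair dominates.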
Then you have the state space based designs, which rest on linear systems theory, and in the nonlinear domain you have Lyapunov based control, singular perturbation, feedback linearization, and many other approaches. I am listing only a few of them here, and out of those we are not going to consider everything for discussion; I will give you some flavor of how we go about designing and analyzing, and maybe you can use one or more of these techniques for whatever system is under your consideration, for example in your project. We will go into a little more detail on the Lyapunov method, and on what the best possible controller is for a rigid body nonlinear system; that controller is also good from the practical implementation aspect, so it will be nice for us to look at. Then we will consider examples of linear and nonlinear systems. A system that is nonlinear to begin with is the pendulum; we all know it is nonlinear if we do not consider the pendulum angle theta to be small, and we will see what way we can think of its control. Another example is the flexible belt drive system that you modeled in the assignment problem; the model is already with you, so I would like you to do this analysis for it. Suppose I want to position my load at some specific position, with the motor angle or motor torque as the input, whichever you want to consider, and I want the vibrations in the belt to be minimum; that is, I want to position the output theta_L such that there are not many vibrations when it reaches the final position. How do you develop such a control for a flexible belt type system using some of the concepts of linear systems theory?
I would like you to give that some thought before going further into the discussions. I will pause here for the first video of this part of the lecture, and we will continue with more discussion of these examples in the next video. So let me stop here.