Hi, everyone. Welcome today. I'm Shreyas Sundaram, the Marie Gordon Professor of Electrical and Computer Engineering here at Purdue. I'm also the co-director of our Institute for Control, Optimization and Networks, or ICON. We're here today to hear from one of the greats in control theory, Professor Masayoshi Tomizuka from the University of California at Berkeley.

Purdue has a long history of contributions to automatic control, going back at least to Rufus Oldenburger in the 1950s, when he started the automatic control center here at Purdue, with significant contributions to the space race. In fact, one of the main awards in control theory is named after Rufus Oldenburger, and it's no surprise that Professor Tomizuka is one of the recipients of that award, back in 2002. That control center went away after a while, but of course the amount of activity in control theory and autonomous systems, spanning both classical approaches and emerging techniques in AI and data-driven methods, continues here. It was with that legacy and that foundation that we decided to start this institute about three years ago, to bring together faculty from across the College of Engineering and Purdue to join forces in tackling some of these grand challenges in autonomous systems. ICON has three pillars: research, education, and engagement with government and industry. One of our most stimulating activities is our seminar series, where we get to hear from top experts in the field, and today's speaker is truly one of the most influential people in controls. So it was a prime opportunity to join forces with the College of Engineering and leverage the Engineering Frontiers lecture series to showcase somebody of this stature. With that backdrop, I'd like to invite Dean Arvind Raman to come and say a few words about Professor Tomizuka. Arvind?

Thank you. Thank you, Shreyas. What a great afternoon, what a great evening to launch our Engineering Frontiers series here this fall. I'm really proud today, and it is a great honor, to introduce our speaker for the day, Professor Masayoshi Tomizuka from UC Berkeley. He is the Cheryl and John Neerhout Distinguished Professor of Mechanical Engineering at UC Berkeley. He is, of course, a world-renowned expert in control theory, and not just classical control theory: his work combines model-based control approaches with new machine learning approaches and mechatronics, and its impact has been felt in so many domains, whether robotics, autonomous vehicles, or through the dozens of his PhD students and postdocs who have gone on to become world-renowned experts in their own right, two of whom are sitting in the audience here today. It's a tremendous impact, and he's been recognized for that impact in so many ways. It begins with the Charles Russ Richards Memorial Award from ASME, and the Rufus Oldenburger Medal, also from ASME, that Shreyas referred to. In fact, as part of that history connecting Rufus Oldenburger and Purdue, he came the year he received the medal to give a lecture here at Purdue; that's the tradition, that the medal recipient at ASME comes here and gives that talk as well. He went on to receive the Richard Bellman Control Heritage Award and then later the Nichols Medal from IFAC. IFAC, incidentally, was also an organization that Rufus Oldenburger helped establish.
He is also a Life Fellow of the IEEE, and last year he was inducted into the National Academy of Engineering. Without further ado, please give a warm welcome to Professor Tomizuka.

Good afternoon. Thank you very much for the very nice introduction, and thank you very much for inviting me to this wonderful seminar series. I'm glad to see so many young people here; I'm at the other end of the spectrum. People always ask me, when do you retire? When do you retire? So I keep saying, give me a little more time before retirement. I'm still going. A friend gave me a good title for this talk: I suggested something like "50 years of...", and he said no, that's not exciting. So it became "From State Space Control to Intelligent Machines: A Five-Decade Journey in Control of Mechanical Systems." That's what I'm talking about today.

This seminar covers my half century at the University of California, Berkeley, and I think it is not just my personal history: my professional development followed quite naturally the development of the field. I came to the US because the US was leading control theory development for the space exploration of the 1960s. The maximum principle, dynamic programming, Kalman filtering: all kinds of things appeared around that time. So I came, and I was introduced to the modern technology of the day. I could use a computer, an IBM 1130; you probably don't know what that is. But when I could tune a PID controller to produce a nice quarter-amplitude-damping response and actually see that response, I really felt that control theory had come closer to me.

The 1970s was the time when computational capability really started going up. At the beginning, control theory was for space exploration and big chemical plants; it was not for vehicles, tabletop manipulators, and so forth. But DSPs and large-scale integration technology changed that: control could now be used for small systems. I essentially grew up with that technology. Also around that time there were lots of interesting developments in mechanical systems: robots were becoming very popular, along with computer hard disk drives, and in the automated vehicle area control was more and more needed. So it was very natural for me to think about the control of mechanical systems, implement it with the modern technology, and really see what the advantage was. That's how my laboratory started. At the beginning, I was doing some microprocessor-based chemical process control and so forth. Then adaptive welding process control and robot control were added, and automated driving came in, and my group grew. I always put control methodology at the center, because my home ground is control, but that core is surrounded by all kinds of mechanical systems. Often the problems we attacked were motivated by target application areas, like machining: from machining we developed adaptive control methodology, and we found that methodology could be used for robots and so forth. In that way there was some very interesting interaction between applications and theory development.

So this is my set of activities; I use the same photo that I sent here. Presently I'm still doing mechanical systems control, with two major target topics: intelligent robots and autonomous vehicles. Those are the two currently very interesting, challenging areas in which to apply control methodology.
But it's not only control; all kinds of things, AI, machine learning, image processing, are coming in. In fact, I realized that what I'm doing is so-called mechatronics, and I have given some talks in recent years about mechatronics. Mechatronics is the integration of decision-making (control theory) with electronics and information technology (computers), applied to mechanical systems. This area is really evolving, growing and growing. 1970 happens to be the time when I was a student starting my career, but new methodologies and new device technologies have been introduced as time goes on. At the beginning I used the PDP-8 and PDP-7 computers, which won't mean much to you young people, but I started with those so-called minicomputers. Then came large-scale integration and DSPs. If I extend this trend to more recent times, in 2020 we have AI and machine learning methodology, IoT, GPUs, and so forth, and there are lots of interesting target problems.

Now, from a methodology point of view, when a new method comes in, I don't think the new methodology replaces the old one. To me, I have a big toolbox full of methodologies, and instead of a big-hammer approach, using one methodology to solve everything, I look for the best way to combine and take advantage of whatever is in my box. Sometimes the answer is a combination of several things. Currently, for example, I encourage my students to look into how model-based control and machine learning can be blended and integrated to solve some exciting new problems.

From now on, I would like to introduce some recent research at the Mechanical Systems Control Lab. Two pillars: one is intelligent industrial robots, where I will introduce several projects, and the other is autonomous driving, likewise. There may be too many topics, so I won't go deep into any one of them, but I would like to give you some flavor of what is going on. If somebody asks me for too much detail, I probably cannot even answer.

Let me start with some robot control. This one is the Safe and Efficient Robot Collaboration System for next-generation intelligent industrial robots. This was an NSF project that lasted a very long time and ended in 2022. What we looked into here is how we can really bring the robot out of the cage and let it collaborate with humans in a very productive way. Human-robot collaboration was a big topic, and when that happens we have to ensure safety, so safety was a big issue. I'll talk about that aspect, and other topics will come after this. Oops, I think I'm pushing the wrong button. Sorry.

Human-robot collaboration: the motivation was flexible automation and the co-robot type of idea. Because of that, we decided to look into how human and robot can help each other. That happens in factory automation and even with automated vehicles: currently, autonomous cars and manually driven cars share the same infrastructure, and the question is how those two may collaborate well, so in that sense the problem is very similar. In this NSF project, what we came up with is the Safe and Efficient Robot Collaboration System, SERoCS, and it has several modules. First, the robot system has an environment monitoring module. Essentially, this module looks at the environment, especially the human collaborator: what he wants to do and what he is doing, to ensure that robot and human do not encounter each other and collide. Then, when both of them are there, what will be the most reasonable way to let human and robot collaborate? Making the task plan is another big module. And given a good task plan and what's going on in the environment, in real time we have to decide safe and efficient motion planning and control: that was the third module. The first and third modules have strong control content, so I will emphasize those.

First is human monitoring, or environment monitoring, especially of the human. Looking at the scene, what the model does is: we observe the past K motion data points, and based on that we predict how the human will be moving, so it is future motion prediction based on past observation. We built a neural network approach for this motion prediction, and by making one of the layers adaptive, we introduced standard adaptive control methodology there. We noted that the network becomes quite flexible and versatile, working for many types of humans. The neural network itself can be developed offline, but we left some online aspect, an adaptive feature, so the same software, the same neural network, works for different environments and different people.
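To make that idea concrete, here is a minimal sketch (my own construction, not the lab's code) of a predictor whose feature layer is fixed after offline training while the last layer keeps adapting online, LMS-style, to the current person. The window length, layer sizes, learning rate, and toy trajectory are all assumptions for illustration.

```python
# Hypothetical sketch: motion predictor with an online-adaptive last layer.
# Offline-trained features stay fixed; the last linear layer adapts to the
# current person with a gradient (LMS-style) update, as in adaptive control.
import numpy as np

rng = np.random.default_rng(0)
K, H = 5, 16                      # past window length, hidden width
W1 = rng.normal(size=(H, 2 * K))  # "offline-trained" feature layer (fixed)
W2 = np.zeros((2, H))             # adaptive output layer (x, y prediction)

def features(past_xy):            # past_xy: (K, 2) recent positions
    return np.tanh(W1 @ past_xy.ravel())

def predict(past_xy):
    return W2 @ features(past_xy)

def adapt(past_xy, observed_next, lr=0.05):
    """LMS update of the adaptive layer from the latest prediction error."""
    global W2
    phi = features(past_xy)
    err = observed_next - W2 @ phi
    W2 += lr * np.outer(err, phi)

# Toy usage: human walking in a straight line; predictor adapts online.
traj = np.stack([np.linspace(0, 5, 60), np.linspace(0, 1, 60)], axis=1)
for t in range(K, 59):
    adapt(traj[t - K:t], observed_next=traj[t + 1])
print("prediction:", predict(traj[54:59]), "actual:", traj[59])
```

The same adaptation loop would keep running when a different person enters the scene, which is the versatility described above.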
Then task three is real-time safe motion planning and control. For this part it is essentially an optimization game, so we formulated the problem as an optimization. In the middle there are the robot dynamics, and the human dynamics at the bottom: the robot should be controlled based on what the human is doing, observing the human. It is a difficult block diagram. We minimize the robot cost function subject to a number of constraints: the dynamic constraint, maybe input constraints, and so forth. But the most important constraint is to ensure safety: x belonging to the set X_S means that robot and human are in the safe region. As long as this holds, safety is assured.

Now, this safety constraint is a very tricky thing to analyze in advance. If we remove the safety constraint, the remaining part becomes a standard optimal control problem, and by standard optimization approaches we can find a good solution. That is essentially what Changliu Liu's dissertation was. She devised the so-called convex feasible set (CFS) algorithm. It is an optimization algorithm: minimize J(x), starting from some initial trajectory, where x in this case is the whole trajectory, and then search to make the J function smaller within a convex feasible set. If you try to do the optimization over the whole region at once, it is non-convex, so it is not tractable; instead, construct a convex region and do the optimization over that feasible set. That gives you the next iterate, x_{k+1}. Then check whether we made a substantial move or only a very small move from the previous path; if the change is minimal, we say convergence has taken place and we have found an optimal path. Conceptually, the idea is: starting from one set, minimize; then a new convex set is defined; and finally you reach the minimum. The blue lines indicate contours of constant J.
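Here is a minimal sketch of a CFS-style iteration in Python, under my own assumptions rather than the dissertation's exact formulation: a 2D waypoint trajectory, a smoothness cost J, and a circular obstacle whose non-convex avoidance constraint is replaced at each iteration by a supporting-hyperplane (convex) constraint built around the current trajectory. All names and parameters are illustrative.

```python
# Hypothetical sketch of a convex-feasible-set (CFS) style iteration.
# Assumptions (not from the talk): 2D waypoints, smoothness cost,
# one circular obstacle convexified by a supporting hyperplane.
import numpy as np
import cvxpy as cp

N = 30                                  # number of waypoints
start, goal = np.array([0., 0.]), np.array([10., 0.])
c, r = np.array([5., 0.2]), 1.5         # obstacle center and radius

# Initial trajectory: straight line (passes through the obstacle).
x_k = np.linspace(start, goal, N)

for it in range(20):
    x = cp.Variable((N, 2))
    # J(x): smoothness (sum of squared second differences).
    J = cp.sum_squares(x[2:] - 2 * x[1:-1] + x[:-2])
    cons = [x[0] == start, x[-1] == goal]
    # Convex feasible set: linearize ||p - c|| >= r around x_k.
    for i in range(N):
        d = x_k[i] - c
        a = d / max(np.linalg.norm(d), 1e-6)   # supporting direction
        cons.append(a @ (x[i] - c) >= r)
    cp.Problem(cp.Minimize(J), cons).solve()
    x_next = x.value
    if np.linalg.norm(x_next - x_k) < 1e-3:    # minimal change: converged
        break
    x_k = x_next

print("converged after", it + 1, "iterations")
```

Each convex subproblem is cheap, which is why the iteration is efficient enough for planning.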
If the iteration starts somewhere that does not belong to the convex feasible set, we can shift the initial point inside the convex set, then move, and end up at the same place. So this is an iterative algorithm, and it turns out to be very efficient and implementable. We could find a good path that makes no collision with, at least, stationary obstacles; the gray shapes indicate stationary obstacles.

But there are also dynamic obstacles: for the robot, the human is a dynamic obstacle, and the human may be walking around. To stay safe in the human's presence, we have to do some more real-time computation. For that, Changliu came up with the safe set algorithm. We have one axis for the robot and the other axis for the human, and as long as we are in the blue region, it is safe. Say we are operating in some region, but it is predicted that the human will move down; if the robot stays in the same place, on the same trajectory, a collision will take place. In that case it is better for the robot to also step back, so the two stay in the blue region. For this, she defined a so-called safety index φ, with φ = 0 as the level set: as long as φ is negative, it is safe. If φ tends to grow large, to become positive, we make sure that φ̇ < 0, always bringing the system back to the safe region. This turns out to be a very efficient algorithm for ensuring safety as well.

So the structure became: the convex feasible set algorithm decides a global motion avoiding static obstacles; then, to keep human and robot in the safe region, we use the safety controller, which normally runs at a very fast rate to ensure safety. The graphical explanation is: if u_R, the nominal robot control, falls outside the set of safe controls, this u_R is pushed back into the safe region, to the point closest to the optimal u_R. That was really the idea.
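Here is a minimal sketch of that safe-set projection as I understand it from the description above, with my own illustrative dynamics: a safety index φ with assumed control-affine dynamics, and a minimal-norm correction of the nominal control so that φ̇ stays negative whenever φ approaches zero. The specific numbers are assumptions.

```python
# Hypothetical sketch of a safe set algorithm step.
# Assumed control-affine safety-index dynamics: phi_dot = Lf + Lg @ u.
import numpy as np

def safe_control(u_nom, phi, Lf, Lg, eta=0.1):
    """Project the nominal control into the safe set when phi is near 0.

    If phi < 0 (safely inside), keep u_nom. Otherwise enforce
    phi_dot = Lf + Lg @ u <= -eta via a minimal-norm correction,
    i.e. the closest safe control to u_nom (closed-form QP solution).
    """
    if phi < 0:
        return u_nom
    violation = Lf + Lg @ u_nom + eta
    if violation <= 0:                  # nominal control is already safe
        return u_nom
    # Closest point to u_nom on the half-space {u : Lf + Lg@u <= -eta}.
    return u_nom - (violation / (Lg @ Lg)) * Lg

# Toy usage: robot and human approaching each other along one axis.
phi = 0.05                              # slightly unsafe: too close
u_nom = np.array([1.0, 0.0])            # nominal command keeps approaching
Lf, Lg = 0.2, np.array([2.0, 0.0])      # assumed local sensitivity of phi
print(safe_control(u_nom, phi, Lf, Lg)) # pushed back toward safety
```

Because the correction has a closed form, this check can run at the fast rate mentioned above.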
Putting everything together, to show the validity of the approach we did a demo. In this demo there are two robots involved, one of them a mobile robot, and two human workers, collaborating: one worker with one robot, and the other robot assisting the other worker. A camera is used to record what's going on, but not for control; it's just for recording purposes. On the left, human and robot are doing some computer assembly, and the mobile robot helps by carrying items over to the other worker. So that was the ending of this NSF project.

Another project I would like to introduce is safe online gain optimization for Cartesian space control. This is essentially impedance control. Impedance control is really used when the robot interacts with the environment. The question is how we adjust the impedance, and if we have a time-varying environment, how we may change the programmed impedance. There is already a lot of work: adaptive control, reinforcement learning, optimization, and so forth. Each has got some limitation, so we tried to go beyond those limitations. What we proposed is essentially to take the impedance gain as a manipulated signal, a control input that we can adjust; for that, we have to have a dynamic equation with the impedance gain as the control input. Based on that equation we do online tuning, and on top of that we also wanted to address safety: optimal impedance gain in real time, but with safety established. If the robot goes outside the safety region, we would like to embed an impedance that strongly resists going beyond that boundary. So we wanted to accomplish both things: impedance gain optimization and impedance gain adjustment for safety.

For the optimization part, the equation was written in terms of u, as you see at the bottom: starting from the robot's Cartesian space impedance control equation with all its impedance gains, we massaged the equation into the bottom form. So u, the impedance gain, is the input, and we make adjustments; once we write the equation in this form, we essentially optimize u so that a certain objective is achieved. The objective is a minimized time integral, but instead of the error itself we use ė, the velocity, multiplied by time; this turns out to be a good quantity to optimize, and we keep re-doing the optimization every so often. In this baby example problem, a mass is contacting a compliant surface. With the right optimization we get the response shown by the blue line in the plot; with the best manual tuning, the green line; and if we are very naive, the red line just bounces around. So we can ensure good performance by this optimization.

To ensure safety, essentially the collision avoidance constraint, keeping the robot inside a region, is the constraint h(x) ≥ 0. If we can find u such that this condition is always satisfied, the problem is done; but normally the h equation does not include u, so we don't know how to select u to make sure h(x) stays positive. There is a very convenient methodology called the control barrier function, so we utilized that approach: look at the derivative of h plus a multiple α·h. If ḣ + α·h ≥ 0, that is sufficient for h to remain positive when h starts positive. So all we need is to take the derivative of h; if the derivative does not produce u, we keep going to higher-order derivatives, and in this example, after the second derivative, u appears. So we can find a condition that makes sure h is always positive.

Here is one example, contact with a plastic board. This first clip is adaptive variable impedance control, which is already doing a fairly good job. Then there is another movie which should be running: the constant-gain baseline. If the gain is fixed at a constant value, bouncing always takes place, so it is not good. The last one is the proposed method, which works very nicely; I'll ask you to believe that there is a nice movie there. Okay, some movies are hard.

Then the collision avoidance constraint: without it, the constant-gain baseline robot is easily pushed outside of the square area. But with the collision avoidance constraint and safe online gain optimization, as the robot comes to the edge the impedance becomes so high that the robot resists any motion beyond it.
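Here is a minimal sketch of that control-barrier-function condition, using an illustrative double-integrator example of my own rather than the robot system from the talk: h does not contain u, so we differentiate twice until u appears, then minimally modify the nominal input.

```python
# Hypothetical higher-order CBF sketch on a 1D double integrator:
# position x, velocity v, input u (acceleration). Keep h = x_max - x >= 0.
# h_dot = -v contains no u; h_ddot = -u does, so we enforce the
# second-order condition h_ddot + a1*h_dot + a2*h >= 0, i.e.
# u <= -a1*v + a2*(x_max - x).
import numpy as np

def cbf_filter(u_nom, x, v, x_max=1.0, a1=4.0, a2=4.0):
    u_bound = -a1 * v + a2 * (x_max - x)   # largest u preserving safety
    return min(u_nom, u_bound)             # minimal modification of u_nom

# Simulate: nominal controller keeps pushing toward the boundary.
x, v, dt = 0.0, 0.0, 0.01
for _ in range(1000):
    u = cbf_filter(u_nom=2.0, x=x, v=v)
    v += u * dt
    x += v * dt
print(f"final x = {x:.3f} (stays below x_max = 1.0)")
```

The effective "impedance" against crossing the boundary grows as h shrinks, which matches the behavior described above at the edge of the square area.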
Now, this one is efficient sim-to-real transfer for contact-rich manipulation. Contact-rich manipulation is very popular: the robot needs to physically engage with the environment and manipulate the object by applying suitable contact force. Traditionally the robot programmer essentially adjusts the impedance gains and so forth to do this kind of thing, but we can do it by a learning-based approach, and that is what we are showing here. The framework has got two parts: we do offline learning first; then, in the online phase, we introduce sensor feedback. The force is measured by a sensor, and based on the sensor output we further adjust the admittance control gains, essentially impedance control gains. With this type of adjustment, it turns out to be a very nice and robust approach. Comparing: direct deployment without the online part does not do well; manual tuning works for one particular setting but is not transferable; the proposed method works nicely, and also works in other types of environments. So we found an interesting way of learning how the gains should be adjusted. To show some versatility, we tried to apply this method to a screwing problem: essentially, finding the right place (the twisting action is something else), and we can successfully do this screwing-type operation.

Another one is contact-aware model-based learning from visual demonstration for robot manipulation, via differentiable physics-based simulation and rendering. This is also fairly recent work. There are lots of YouTube videos of humans performing all kinds of nice skills, and we ask: how can the robot imitate this? More specifically, three questions. First, what is the movement of the object? We have to analyze the object's motion. Second, how should the robot make contact with the object? And third, once we figure that out, can we really implement the idea in real time?

The first part, what is the movement of the object, is nothing but an image processing and rendering game, and the student utilized the right methodology for that. For how the robot should contact the object, he came up with a hierarchical structure. In the lower part, the robot has to figure out two things: how to make contact with the object, and how to move; those are the two basic skills. At the higher level, the contact-or-skill controller figures out how those two should be coordinated, and that really makes a nice imitation of the motion. But the second block, figuring out how the robot should contact the object, is too heavy for real-time implementation. So for the final implementation he used neural network imitation learning, based on what was learned from the second block, and implemented the perception module, the task-figuring module, and the final controller. It did pretty well.

Now, the last robotics work I would like to introduce is very, very recent: distributed multi-agent interaction generation with imagined potential games. We just submitted this one to the next American Control Conference. I think the major contributions come from the first two authors, Lingfeng Sun, a PhD student, and Pin-Yun Hung, who has just finished her master's work but is a very skillful implementer of ideas; they worked together on this. Essentially, distributed multi-agent interaction takes place in problems involving mobile robots and autonomous vehicles, so it is common to robots and vehicles, and unless we do good coordination we end up with either a so-called deadlock or a collision. But if the red and blue agents are both human, they normally figure out how to handle the situation without communication; without even speaking, they naturally find some good way to avoid it.
So the goal was set: generate human-like interaction behavior in cooperation-required scenarios for non-communicating multi-agent systems. The students did lots of simulation, and the proposed method is essentially based on game theory: each agent assumes that cooperation exists and predicts and plans based on that imagined cooperation. Essentially, each agent solves a game equation to minimize a certain common objective, but with no communication. Normally, when two, or even many, agents are each doing their best and reach a point where further change does not make sense, that is the so-called Nash equilibrium. So the idea here is that an agent imagines a cooperative game between the other agents and itself, and formulates a potential game as an optimization problem. The goal of the game is, of course, no collision, no crash, while each agent still reaches its goal point: avoid conflict. Each agent uses the current state and assumes the others have essentially the same safety policy and safety distance; based on that, one agent makes some optimum decision, and the other agent likewise makes its optimum decision, though its parameters may be different from the other guy's. That is why they can end up in nice cooperation. Each agent assumes that its current state (position, heading angle, velocity) and target position are known, but has to estimate those of the others.

Essentially this looks like a game equation: the performance index has to be minimized over the problem duration, subject to a discrete-time dynamic constraint, constraints on x and u, and safety-distance considerations; those kinds of constraints exist. Solving this looks like a model predictive control approach: we gather measurements, collect environment information, and each agent computes what the best trajectory and the best control input would be. It turns out that iterative LQR (iLQR) is the right methodology to solve this problem; having found the solution, apply the first step of the optimal control input, then go back to the first step, correct, and so forth. The challenge here is that when you solve this iLQR in the second step, you have to make sure it can be solved in real time. iLQR is now well documented: essentially, it iteratively solves a linear-quadratic problem, the very classic problem of a linear system with a quadratic cost function. But in this game, what we have is not necessarily quadratic, and the dynamics are not linear, so you need to do some approximation and iteratively approach the optimal solution. That is the kind of thing we have to do. This is the dynamic equation: even the unicycle dynamic model for each agent is nonlinear.

And this dissolves deadlock, distributed and with no communication: each agent solves its own game, they nicely avoid collision, and each gets to its goal. The method was compared with some other methods, and the success rate of the proposed method is 100 percent. There are cases I can imagine that this will still not solve, but as far as the simulations are concerned, the two students told me it is 100 percent: no deadlock, no collision, and some interesting cases. It produces realistic and varied behaviors, and we can handle more than two agents: each moves to the place it wants without collision, and there is another many-agent case as well. I like this work because it was fairly clear what they were trying to achieve, the effect of what they did is quite visible, the success rate is high, and IPG can generate diverse and realistic behavior. I think this work continues; it is an interesting thing.
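As a rough illustration of the imagined-game idea (not the authors' implementation, which uses iLQR), here is a sketch where one agent optimizes the joint controls of all agents over a short horizon with a generic solver, then executes only its own first input. The unicycle dynamics, cost terms, and parameters are my own assumptions.

```python
# Hypothetical sketch of one agent's "imagined potential game" step.
# Each agent optimizes a joint plan for ALL agents (imagined cooperation),
# then executes only its own first control. iLQR is replaced here by a
# generic optimizer for brevity.
import numpy as np
from scipy.optimize import minimize

T, dt = 10, 0.2                          # horizon steps, time step

def rollout(x0, u_seq):
    """Unicycle: state (x, y, theta), controls (v, omega)."""
    xs, x = [x0], np.array(x0, float)
    for v, w in u_seq:
        x = x + dt * np.array([v * np.cos(x[2]), v * np.sin(x[2]), w])
        xs.append(x.copy())
    return np.array(xs)

def potential_cost(u_flat, states, goals, d_safe=1.0):
    u = u_flat.reshape(len(states), T, 2)
    trajs = [rollout(s, u[i]) for i, s in enumerate(states)]
    cost = sum(np.sum((tr[-1, :2] - g) ** 2) for tr, g in zip(trajs, goals))
    cost += 0.01 * np.sum(u ** 2)        # control effort
    for t in range(T + 1):               # shared collision penalty
        d = np.linalg.norm(trajs[0][t, :2] - trajs[1][t, :2])
        cost += 100.0 * max(0.0, d_safe - d) ** 2
    return cost

# Two agents heading toward each other (deadlock-prone without coordination).
states = [np.array([0., 0., 0.]), np.array([5., 0.1, np.pi])]
goals = [np.array([5., 0.]), np.array([0., 0.])]
res = minimize(potential_cost, np.zeros(2 * T * 2), args=(states, goals),
               method="L-BFGS-B")
u_all = res.x.reshape(2, T, 2)
print("agent 0 executes first control:", u_all[0, 0])  # receding horizon
```

Each agent runs this loop with its own estimate of the others' states, which is why slightly different parameters per agent still resolve into cooperation.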
Now, in the remaining ten minutes or so, I would like to give a summary of our autonomous driving research. We had done vehicle control for many years: there was a program called California PATH, Partners for Advanced Transit and Highways, and in the context of automating highway operation I was involved in vehicle lateral control. In that case the diagram is very standard: sensor, state estimation, controller, plant. We embedded magnets in the middle of the lane, and steering was done to follow the magnets. More recent autonomous driving is much more than that: motion planning, decision making, perception, prediction; sensing and state estimation in a more general direction, tracking and localization; and map data may be utilized. So it becomes a really interesting integration of many aspects, from perception, prediction, planning, and control to the HD map data pipeline, simulation, and testing. My group has been looking into all of these aspects. Some have a strong control component; some, like image processing, don't really show much control. We also built some hardware to do experiments.

I would like to emphasize how we may utilize both model-based control and machine learning. The advantage of model-based control is that we have a good mathematical foundation and lots of things are known. Machine learning is essentially based on experience, on human experts: we can do imitation learning and so forth, and instead of finding everything out from a model, we can use all kinds of learning approaches; with reinforcement learning we can learn the strategy directly. So the question is how we may combine those two for the best, and we have shown some successful cases of combining the two.

One is zero-shot deep reinforcement learning driving policy transfer for autonomous vehicles based on robust control. This is the typical agent-environment reinforcement learning loop, and we trained a policy in some training scenario. But when you actually apply the policy to the test scenario, there may be some extra factors that were not properly taken into consideration: disturbances, a different vehicle, changed dynamics, and so forth. So how do we make it more robust to disturbances and dynamics changes? As humans, if you learn to drive a Mercedes, you can drive almost any car, right? You can even drive a big truck with a little learning; humans do this kind of adjustment quite easily. So essentially we said the controller should be able to do this adjustment too: zero-shot transfer, robustness, and interpretability are all possible. One way we looked at it is: when we do machine learning, what we learn should be something more generic, invariant to the details of the vehicle dynamics. So from the environment, the policy learns how to define waypoints, the future trajectory we should be following. Once we learn that kind of thing, the trajectory can probably be utilized by a wide set of vehicles: it can be applied even when the dynamics change or the wind is blowing, because tracking a trajectory is the kind of thing existing control theory can do a good job on; robust control theory, even disturbance observers, all kinds of things can be introduced there. So we used reinforcement learning for the baseline, and we added a robust controller: on top, the baseline reinforcement learning policy, with the robust controller added in the loop.
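A minimal sketch of this separation, under my own assumptions: a (stub) learned policy outputs a waypoint in road coordinates, and a low-level tracking loop with a simple disturbance-observer-style correction steers the actual vehicle toward it. All gains, names, and the simplified lateral dynamics are illustrative, not the paper's design.

```python
# Hypothetical sketch: learned policy proposes waypoints (vehicle-invariant),
# a robust low-level controller tracks them on the actual vehicle.
import numpy as np

def learned_policy(observation):
    """Stub for the trained RL policy: returns a target lateral offset."""
    return 0.0                            # e.g., "stay at lane center"

def track_waypoint(y, y_dot, y_ref, d_hat, kp=2.0, kd=1.5):
    """PD tracking of the lateral offset plus disturbance compensation."""
    return kp * (y_ref - y) - kd * y_dot - d_hat

# Simulate lateral dynamics y_ddot = u + d on an "unseen" vehicle with
# a constant side disturbance d (wind / road superelevation).
y, y_dot, d, d_hat, dt = 1.0, 0.0, 0.5, 0.0, 0.01
for _ in range(2000):
    y_ref = learned_policy(observation=(y, y_dot))
    u = track_waypoint(y, y_dot, y_ref, d_hat)
    y_ddot = u + d
    # Disturbance observer (simplified): low-pass estimate of d = y_ddot - u.
    d_hat += 5.0 * ((y_ddot - u) - d_hat) * dt
    y_dot += y_ddot * dt
    y += y_dot * dt
print(f"lateral error {y:.4f}, disturbance estimate {d_hat:.3f}")
```

The learned waypoints never see the vehicle-specific dynamics; the robust loop absorbs them, which is the zero-shot transfer argument.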
If we don't add the robust controller, essentially the vehicle crashes and it won't work. We checked the robustness against two things: one is dynamic variation, parameter variation; the other is side force coming from either wind or superelevation. The curve with the robust controller added to the baseline policy holds up; without the robust controller, the performance reward really quickly drops.

This is another case where we successfully integrated planning and control, or reinforcement learning and model-based control. Essentially we asked: what is important during planning, and what is important in action? This is essentially model predictive control: with MPC we compute a trajectory over the horizon, but when we implement it we just apply the first step and then go back to the optimization again. So we structured what we should plan and learn versus what we should do in real-time control, integrating the planning and control modules. The policy layer is learned: in the policy layer we have to solve this MPC-like problem over a long horizon, and if you try to do that in real time it takes too much time to compute; the computation is very heavy. On the other hand, in the execution layer, when we have surrounding vehicles to avoid and so forth, short-horizon model predictive control is very fast and easy to implement. So for the policy layer we just train a neural network, and for the execution layer we apply model predictive control in real time.

This is essentially an overtaking scenario, comparing the cases with and without the model predictive control module. The upper plot is when model predictive control is added to the policy learned by the neural network; the bottom is without it, where the neural-network-based policy can bring the car to a crash. The car-following case is the same thing: if we don't add the model predictive control module, the car can come close and start following, but essentially drives into the preceding car and causes a crash. So the model predictive control part is a very important part.
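Here is a minimal sketch of that split, with all models and parameters assumed by me for illustration: a stub policy layer proposes a reference trajectory (the expensive part, trained offline), and a short-horizon MPC execution layer tracks it in real time while keeping a safe headway to the lead car.

```python
# Hypothetical sketch of the split: a learned policy layer proposes a
# reference trajectory (offline-trained, cheap to query), and a
# short-horizon MPC execution layer tracks it in real time while
# keeping a safe headway to the lead car.
import numpy as np
from scipy.optimize import minimize

dt, H = 0.1, 10                       # MPC step and short horizon

def policy_layer(state):
    """Stub for the trained network: reference positions over the horizon."""
    x, v = state
    return x + 25.0 * dt * np.arange(1, H + 1)   # "drive at 25 m/s"

def execution_layer(state, x_ref, x_lead, d_safe=10.0):
    """Short-horizon MPC: track the reference, never violate headway."""
    def cost(a):                      # decision variable: accelerations
        x, v, c = state[0], state[1], 0.0
        for k in range(H):
            v += a[k] * dt
            x += v * dt
            c += (x - x_ref[k]) ** 2 + 0.1 * a[k] ** 2
            # Penalize closing within d_safe of the lead car (30 m/s).
            c += 1e4 * max(0.0, x - (x_lead + 30.0 * dt * (k + 1) - d_safe)) ** 2
        return c
    a = minimize(cost, np.zeros(H), method="L-BFGS-B",
                 bounds=[(-5.0, 3.0)] * H).x
    return a[0]                       # receding horizon: apply first input

state, x_lead = np.array([0.0, 20.0]), 40.0   # ego (pos, vel); lead car pos
accel = execution_layer(state, policy_layer(state), x_lead)
print("commanded acceleration:", round(accel, 3))
```

The headway term in the execution layer is what prevents the learned policy from driving into the preceding car, as in the car-following comparison above.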
Courteous autonomous cars is an interesting topic that I probably have no time for: essentially, how we can bring courtesy to autonomous driving, that is the question. So that is what I have prepared for today's talk. The final slide is the acknowledgement. Funding comes from various places, federal and California agencies as well as private institutes; HKCLR is from the Hong Kong government; and private industry like FANUC and so forth. I have benefited from being a Berkeley faculty member, with fantastic colleagues, and most of the work I reported here was done by my students. I am surrounded by marvelous students; I call them not autonomous vehicles but autonomous students. So what I have to do is be the central computer, watching and trying to coordinate all the intelligent vehicles, the intelligent students. Thank you very much.

Thank you, Dr. Tomizuka, for a great talk; it was very impressive. Now we can open it up for questions from the audience.

Thank you, Professor, for such a fantastic talk. I have a question in a more philosophical area. The rise of machine learning techniques is overcoming everything and becoming very ubiquitous, and many researchers are trying to replace all the control approaches with a single machine learning block. I think that is time-consuming and not very realistic; I believe a combination is, and will be, in place for a long time. What do you envision for the future of control with machine learning techniques: are they going to end up as a single block, or a combination?

Thank you. I think the trick is still how we can make the best use of machine learning combined with model-based control. I don't think it comes to the point of just telling some language program, okay, I have to design a controller for my robot, which has got two joints, give me a controller. It doesn't come to that point; I still have to put in my thinking, my intelligence, so model-based control probably stays in that sense. I think it is an exciting area. If we totally ignore machine learning, it is a mistake, because it is a very nice, powerful technique; but I don't think I will be replacing my nice control methodologies, even very classical LQ-type control, with AI.

Any more questions from the audience?

Thanks for your presentation, which is very impressive and significant. I have a general question about controllers. In industry, most applications are still using PID controllers. I'm curious why research on more advanced or fancy controllers is still important, and what its significance is.

Thank you. The PID controller is actually amazingly robust and very simple, so for uncomplicated requirements people like the PID controller; it has been used for many years and is reliable. But once the control problem becomes a little more complicated, when we want to react immediately to a situation, that kind of thing starts coming in and PID is not enough. At the least, people have to add some other module to PID, so PID plus a more advanced control module may be one way to go. But if you do that, I don't think there is a strong need to stick to PID; the PID itself can be replaced by state feedback control or model-based control and so forth. Still, PID is a good controller; I would never throw PID out of my toolbox.

Thank you for the presentation. Circling back to machine learning and control: in your experience, or at least in your opinion, what do you think are some of the low-hanging fruits where we can apply machine learning to control techniques without losing those nice first-order guarantees that we get from control?

I don't know yet how the machine learning methodology will shake out, but I think it still depends on what you decide as the input to the machine learning module, what the latent variables should be, and what the output variables should be; that choice is still ours, right? If we don't make a good choice there, the good thing still won't come out. Talking with students, they must have gone through that kind of struggle, and they are probably reporting to me the best results they have got. In that sense it is just like using standard control theory: what you provide as input parameters to that module is extremely important, and whether you get a good result depends on it.

That makes sense; it's kind of like hyperparameter tuning, those finicky parts of control. Thank you.

More questions? Perhaps I will take the advantage myself.
Dr. Tomizuka, you talked about all those fancy ideas for autonomous driving using all these machine learning algorithms, but transportation is such a safety-critical application. What do you think is the role of machine learning in controlling vehicles and guaranteeing their safety, especially since a lot of these machine learning algorithms are not predictable; they are black boxes?

I think it is a very interesting question. Some people argue for autonomous driving because its advantage is that it removes the human from the driving loop: if you look at accident cause statistics, the human is a big reason accidents take place, so with automation, if the human is gone, that in itself is a big advantage. That was one argument I had. But there are counterarguments, and they are not without merit: if we just look at what has happened even with the Waymo vehicles, or the Tesla vehicles with Autopilot, accidents happen, and those accidents still trace back to human error. Software is never error-free; it is actually quite error-prone. In that sense you have never removed the most serious element in the accident chain. And if AI is utilized not for autonomous driving but just for home heating and so forth, if one day the home heater doesn't work, it is not such a big deal; but for autonomous driving, I completely agree with you, safety is very important. So unless we can reach a stage where, even if something happens, there is some AI loop to resolve the situation, unless that kind of guarantee comes, I probably cannot trust the AI 100 percent. On the other hand, if for that reason you say, okay, I will never ride in an automated vehicle, I think you are losing a very important part. I may meet with an accident anyway, right, and the accident rate is very low; so do I trade away all the convenience I can gain from the AI just for that fear? I think it comes down to, at the moment, the best answer being more philosophical: what do you believe, and what is important for you? Thank you.

Thank you for the wonderful talk. I wonder, looking back on a very long and productive career, are there problems that you wish you had worked on, or had worked on earlier?

Looking back... yeah, there may be lots of problems, but I lived with a stream: when an opportunity came, I always took it, and when I didn't work on something, it was simply that the opportunity didn't come. One thing that would have been nice: I did something like preview control for my PhD. Some people used preview control, but predictive control became much more popular, and as an idea it looks very similar. In that sense, if I had formulated it a bit more generally, like predictive control, probably more people would know about me. But I simply didn't have that kind of opportunity, so in some sense I missed that one; beyond that, I don't have regrets. I think I quite happily reached my stage, and I have been very lucky. Thanks.

Hi, Professor Tomizuka. Thank you for your presentation. My question is: listening to your presentation, it seems that robotics is closely related to vehicles, and there are lots of similarities, but apparently there may not
be a seamless transfer of the control algorithms from robotics directly to vehicles. So my question is: aside from the requirement of real-time execution, in terms of the solution space, how do you think the solution space in robotics differs from that in vehicle control?

Thank you. The robotics world I discussed is in a way very focused and narrow: industrial robots staying on top of a table and so forth. But robotics also includes all kinds of other applications, including mobile robots, and mobile robots move not only on the factory floor but around whole hospitals, in medical applications. If you broaden to that general robotics, then robotics and autonomous driving actually have lots of the same problems to share. So I have recently become more interested in doing more on mobile robots. Mobile robot research is now becoming fairly simple to do: there is lots of reasonable hardware that I can buy to do experiments. It didn't use to be that way; when the hardware did not exist, clever students had to assemble something, but now good, easy hardware is available. Grippers too: fairly sophisticated grippers are available at reasonable cost. So I would like to move into that area, but if I do mobile robot research, I think I have to buy at least several; just one is not enough, because I would like to see how they interact. Human and mobile robot interaction may also have a lot of problems. Robotics research is a very, very exciting field, and what I have done is a very small part; I don't claim more. It was a lot of fun, and if I continue robotics research, I certainly would like to look at other topics as well.

Hi, Professor. Thank you for a great presentation. I personally find the sim-to-real work, and your work on integrating simulation while optimizing your controllers, very fascinating. In your view, what are the most useful aspects of integrating simulation, or the graphical representation of your controller's behavior, into your process?

Actually, that is a very important step. The most exciting thing is that I can often see things visually, so the graphical interface is very important. Even in textbooks there are a lot of plots: when I learned control, there were lots of step response plots, showing how the step response pattern changes as the damping changes. To me, just looking at a printed plot was still dry, but if the computer can make that plot right in front of me, that is very visual, and visual things always help. And, probably beyond your question, something I have mentioned to several other colleagues: of course we need a very good background in mathematics to understand control theory, and also very good simulation skills. But before simulation you need good programming skills, including how you can make use of open software in your research. I am always impressed by young students: they grew up with this software, they don't need a computer manual to use a computer, so they naturally use all kinds of fancy open software. One example is iLQR: I know the theory of iLQR, and what kind of loop iterations you
have to go through; if I had to write the code to do that, it would be a big, big job. But for students it is an easy job: they look for something, find that this open software can be used, see that this part is missing, supply that part, and then, very impressively, it simulates. So for the young generation, maybe they don't have to worry so much about programming everything from scratch, but I would say programming skill definitely has to be in our toolbox to be a very efficient, effective control engineer. Okay, thank you.

Thank you, Professor Tomizuka. I'm hoping to dive into this area of research, and it is all new for me, so it is eye-opening to see all the things you and your team have achieved. My question is about limitations, maybe I'm wrong, but the limitations in the fabrication of chips, and the rise of quantum computing and all of that. I was wondering what you think the challenges are to keep on working, to keep making all of these models faster.

Computer chips? Yeah.

So, what are some of the limitations? Well, you talked about the last five decades, and from what I could see, a lot of things are getting to a plateau, and I was wondering what you think some of the challenges are.

I think real-time computation is always important, but it depends on how you want to apply it. Presently, people are more and more interested in rather complicated algorithms for real-time use, and then the question is how you can really implement them with time-efficient execution. Here is an old example, which may not be perfect: we were doing adaptive control of robot manipulators, and we used a direct-drive robot arm, which shows the need for adaptive control that people talked about. We implemented it using some processor, I think a DSP processor, and we could show the difference between PID control and adaptive control, and we said, oh, finally, this is proof that adaptive control is really a good approach for direct-drive robot control. Then, soon after, a somewhat more powerful processor appeared, and when we implemented PID control at a much faster cycle, the performance was so good that we couldn't beat PID. So it is always a kind of race: how fast you can do the execution. That part is really fascinating. And from now on, part of the thing, I don't know whether I explained it in enough detail, but if we try to implement something we learned by reinforcement learning, say the strategy we learned via iLQR, implementing it as-is is very difficult; there are time constraints, so we had to implement it in a different way. So that is the kind of challenge: how you can efficiently execute something more than PID, a fairly complicated, big computation. That may still be an interesting topic; eventually it may not be a big issue. Okay, thank you.

We'll probably take one last question from the back.

Hey, thank you so much for the inspiring talk. I wanted to ask a question on the future directions of combining model-based and model-free, that is, data-driven, methods. I was wondering whether the goal of the merger is in the direction of learning the physics, or of learning within the existing control techniques that guarantee stability and use the physical parameters. Our techniques
like Lyapunov or control Lyapunov functions give you guarantees given a simplified model; but do you think the goal should move more toward learning the physics, let's say learning how to respect the physics, so that we have a more complicated model that we can solve with model-based methods, or toward using the techniques of the stability-guaranteed methods? Sorry, the question is odd.

No, no; you have an answer to that question?

Yeah, but I was wondering in which direction we should march: how we keep this nice model-based theory, which for me is basically about respecting the physics. Should we move toward, let's say, always using model-based control at the lower end, as the low-level controller, when we do the combinations, or are there other, broader directions?

I think it all depends on what kind of problem you would like to solve. If you completely abandon model-based control for everything, I don't think that is the right approach; but saying that this is a field where I always have to use model-based control is not right either. I think the right answer depends on the problem you have got. So instead of a general statement, it is more condition-based: what is your target system, what is the problem you are trying to solve, and based on that, the best method is somewhere in a combination of both, or just going in one direction. If you have a physics model, there is no reason to throw the physics model away, right? But that does not necessarily mean you always use the methods built on your mathematical model; the model could be very important simply to simplify your machine learning. Thank you.

Okay. For all the audience, if you have further questions, feel free to reach out to Dr. Tomizuka. We will host a social event here, with light refreshments provided, where you can chat with others and reach out. I also want to thank Marcia and Dara for organizing the event, as well as the staff from the Hall of Music. Most importantly, thank you, Dr. Tomizuka, for accepting our invitation and flying all the way from San Francisco to Purdue to give this talk to our faculty and students. Thank you very much.

Thank you very much.