control, and I am somehow picking up my research where I left it many years ago. My interests, obviously, are in decentralized control, robustness, performance, and so on. What I'd like to do today is give you an introduction to a new approach that I have developed. I will present it gradually: first for SISO problems, single-input single-output, the very simplest case; we build up from plants that are stable and invertible, then unstable ones. From there we show an example of an application, and then we go to the multivariable extension, multi-input multi-output, and we show that this method applies very well and allows you to have decentralized control. What do I mean by decentralized control? Imagine you have a building with 50 rooms and 50 thermostats. Each occupant sets the thermostat at a given temperature, but obviously there is a coupling between the rooms: one is set at 16 degrees, another at 20. The room at 16 can get hotter and the one at 20 can get cooler because of the interaction. What you would like is a situation in which the person who set 16 feels 16, and likewise for the other one, in spite of the interaction. That, simply said, is decentralized control. It so happens that this stabilization method gives you decentralized control as a bonus, and here again we just finished an experiment on a drone, a cheap drone, but it was done; you can see simply how our method works. With that I'll hand over, and I'll speak about some possible directions of research, some of which maybe we will lay out. Okay, what do we want in control? We would like eventually to stabilize a system, but stability is not enough. We also want fast, robust tracking. If we tell a radar antenna to go to 30 degrees, it should go from 0 to 30 degrees and stop there as fast as possible. But if I try to go too fast, it might simply start to oscillate and maybe even become unstable. So you want stability, but stability alone is not enough.
A plane is stable, but the plane burns fuel, therefore its mass decreases, therefore the dynamics change. If the system changes, the model is changing, and you want it to keep being stable and to keep performing. So you would like eventually to have some form of adaptation, though the word "adaptation" here has a somewhat different meaning than in classical adaptive control. With large stability margins, I would like to feel safe enough that if I have changes in gain and changes in phase, I still remain stable and perform. Robustness can be seen in two ways. Robustness to external disturbances: say you have a plane and there is wind; the wind can deflect it from its direction, and you would like simply to resist that deflection. And robustness to parametric uncertainties, because the model is changing. The model, you know, is a fiction: we linearize the world because it makes things easier for us. We would like a method which is easy to design and implement, which if possible applies to a family of nonlinear systems, and which, as I said before, can work in the context of decentralized control. Many years ago I did some work on optimal sensitivity, where H-infinity started at the time. H-infinity is basically an optimal method of control, trying to minimize the gains of the operators involved. Much later I met the late Professor Kellerman, who showed me a method for second-order systems to get both robustness and good time-domain performance. I had something parallel for second order, so we worked together to extend it to any order, and we found a magic solution. There is a paper in the International Journal of Control where basically we show that not only do we get stability, but we get time-domain convergence in no time: zero rise time and zero settling time.
We thought we had found, you know, Ali Baba's key; we said it's fantastic, we almost patented it, and then we realized that in fact the theorem worked only when the gain goes to infinity, some kind of asymptotic result. It is a beautiful theoretical result, but it does not translate into practice. So I had to think about it again in a different way. Unfortunately Professor Kellerman passed away, and I decided to start a completely different approach, trying to keep some of these concepts of near-ideal control but to achieve them with practical gains. That is what I am going to try to show. I'll consider a very simple system. You have a plant P, you have the output, you have the compensator C, and you have the feedback; it's the simplest loop, okay? This is the input R, this is the output Y. Obviously what you want is tracking: you want Y equal to R, the output follows the input, so the error should be zero. This translates into very simple relations between the transmission T and the sensitivity S. E is the error signal, R is the input, Y is the output: T = PC/(1 + PC) and S = 1/(1 + PC). And as you can check from these definitions, S + T = 1, which means the sensitivity operator and the closed-loop operator add up to unity. And what do we want to achieve? We want T = 1; that means the output follows the input, okay? In other words, we want S = 0. But it is impossible, because we know that practical, causal systems have more poles than zeros, otherwise they would not be realizable. For those systems, T(s) goes to zero as s goes to infinity. And if T goes to zero, that contradicts the first requirement: I want S = 0, but when T = 0, S becomes 1. So there is some kind of contradiction. How do we resolve it? We go about it this way.
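As a sanity check on these relations, here is a minimal numerical sketch; the plant P = 1/(s+1) and the constant compensator C = 10 are my own illustrative choices, not from the talk. It evaluates S and T on a frequency grid and confirms that S + T = 1 at every frequency, and that T rolls off to zero (so S goes to one) at high frequency — the contradiction just described.

```python
import numpy as np

# Illustrative strictly proper plant P(s) = 1/(s+1) with a constant gain C = 10.
w = np.logspace(-2, 5, 1000)          # frequency grid (rad/s)
s = 1j * w
P = 1.0 / (s + 1.0)
C = 10.0

L = P * C                             # open loop L = PC
T = L / (1.0 + L)                     # transmission (closed loop)
S = 1.0 / (1.0 + L)                   # sensitivity

assert np.allclose(S + T, 1.0)        # S + T = 1 at every frequency
assert abs(T[-1]) < 1e-3              # T -> 0 at high frequency...
assert abs(S[-1] - 1.0) < 1e-3        # ...so S -> 1 there: the unavoidable conflict
```

The assertions simply restate the two facts from the talk: the algebraic identity S + T = 1, and the high-frequency roll-off forced by the plant having more poles than zeros.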
This is the sensitivity curve here. And remember, S is equal to 1/(1 + PC). If I use high gain in C, S becomes small. So I can lower the sensitivity in the band of interest by using high gain, but that advantage comes with a danger. The danger is that when you decrease the sensitivity here, it increases elsewhere; this part will increase. I believe the Chernobyl experiment was related to this kind of conflict: they increased the gain to run tests, reducing sensitivity here, and then created resonances elsewhere and the system simply went unstable. Because this area is equal to this area: the area of sensitivity reduction equals the area of sensitivity amplification. So how do I design a system knowing this area equals that area? I said to myself, just follow me: I am going to design the system so that I have minimum sensitivity over the bandwidth of interest, but I am also going to limit the rest of the curve to a maximum value of sensitivity. Cap it, and then the unavoidable area will be spread under that bound. What allows me to do that? I can do it because today sensors perform so well; they are very precise. For five dollars I can buy a meter for my home that accurately measures distances between walls. Things have changed; the sensors we used to work with many years ago were nothing like that. So we have very accurate, high-bandwidth sensors, and we have digital processing that is also very fast. Based on that, I say to myself: I am going to lower this value, and it will perhaps increase my bandwidth, but at the same time I have the tools that make this possible. So this is the ideal sensitivity: zero in the band and one outside. In practice, I am going to limit it to a maximum value M everywhere and to epsilon in the band of interest. I will explain this curve by taking the simplest case: a plant which is stable and whose inverse is also stable, minimum phase. This is its Nyquist plot. And what I would like is a sensitivity smaller than epsilon.
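The "this area equals that area" constraint is Bode's sensitivity integral: for a stable open loop L(s) with relative degree at least two and a stable closed loop, the integral of ln|S(jw)| over all frequencies is zero, so any dip below 0 dB must be paid for above it. A small numerical check, with an illustrative loop L(s) = 10/((s+1)(s+2)) of my own choosing:

```python
import numpy as np
from scipy.integrate import quad

# Illustrative stable open loop with relative degree 2: L(s) = 10/((s+1)(s+2)).
# Closed loop is stable (s^2 + 3s + 12), so Bode's integral of ln|S| is zero.
def log_abs_S(w):
    s = 1j * w
    L = 10.0 / ((s + 1.0) * (s + 2.0))
    return np.log(abs(1.0 / (1.0 + L)))   # ln|S(jw)|

# Area of sensitivity reduction must equal area of amplification (waterbed effect).
val, _ = quad(log_abs_S, 0.0, np.inf, limit=500)
assert abs(val) < 1e-2                    # integral ~ 0
```

This is exactly why the design caps |S| at M: the positive area that must appear somewhere is then spread over a wide band instead of piling up into a resonant peak.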
I will show in a moment how, because I have some graphic tools. If the Nyquist plot stays outside the circle centered at minus one of radius 1/epsilon, that guarantees the sensitivity is smaller than epsilon. And if the whole Nyquist plot stays outside the circle of radius 1/M, that guarantees the sensitivity never goes beyond M. So the design for stability is done on the sensitivity side. This is my aim: at low frequency, stay outside the 1/epsilon circle, and at all frequencies, stay outside the 1/M circle. How do I do that? With a compensator that basically does three things. Number one, I have a low-pass filter with high gain, G1. I also have something at high frequency, G3, which makes the compensator realizable: a roll-off raised to some power k, to make sure this compensator is proper. I am taking here the case of P being invertible. Imagine you take C = P⁻¹G. That means the loop operator is L = PC = G, and G is my choice; I am choosing G, and it does not matter what P is. If I have a system, I take the model of P and build its inverse. In that case, at low frequency the filter G1 makes sure I am outside the 1/epsilon circle, at high frequency G3 makes sure I am outside the circle of radius 1/M, and in between I have the third filter, G2. That is the tuning part: G1 and G3 ensure stability and performance, and G2 I tune in the time domain. That is the strength of this scheme. It appeared in a paper in Automatica in 2015. Let me explain it; there is a nice geometric way to show it. Okay. Can you see? Can you dim the lights for me? This is the Nyquist plot. This is the open loop L. What I have here is the vector from minus one to the plot: this vector is 1 + L.
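Here is a minimal sketch of the two circle conditions; the filters, gains and band below are my own illustrative choices, not the actual design. With C = P⁻¹G the loop is L = G, and the distance from the Nyquist plot to the point minus one is |1 + L|: staying outside the radius-1/epsilon circle at low frequency is the same as |S| < epsilon there, and staying outside the radius-1/M circle everywhere is the same as |S| < M at all frequencies.

```python
import numpy as np

eps, M = 0.02, 2.0                     # targets: |S| < eps in band, |S| < M everywhere
w = np.logspace(-2, 5, 4000)
s = 1j * w

# With C = P^-1 * G1 * G3 on a minimum-phase plant, the loop is L = G1 * G3.
G1 = 200.0 / (s + 1.0)                 # G1: high-gain low-pass (illustrative numbers)
G3 = 1.0 / (s / 1000.0 + 1.0) ** 2     # G3: high-frequency roll-off, makes C realizable
L = G1 * G3

dist = np.abs(1.0 + L)                 # distance from the Nyquist plot to -1
S = 1.0 / dist                         # |S(jw)|

band = w <= 1.0                        # bandwidth of interest (illustrative)
assert S[band].max() < eps             # outside the 1/eps circle at low frequency
assert S.max() < M                     # outside the 1/M circle at all frequencies
```

The same arrays also give you |T| and |U| if you want to watch the other curves while tuning, as described next.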
That is 1 + L, and its inverse is the sensitivity; sensitivity and T combine: this vector corresponds to T and this one to S. As I sweep the frequencies, I can watch the vectors T and S very simply. And I can also play with the value of M, drawing the 1/M circle and the 1/epsilon circle. That is the basic picture. So what do we do with it? This is my Nyquist plot, and I am going to tune my filters. What is the idea I want to achieve? On the Nyquist plane, take the vertical line through minus one half. The M-circle construction tells you that if your Nyquist plot lies on this line, the magnitude of the transmission T of the closed loop is exactly one. So we would like the locus to hug this line, and we can play with all the parameters of the filters to get as close to it as possible. At the same time, we can watch the transmission, we can watch the sensitivity, and we can also watch U, the input to the process, because again, you might have a beautiful design, but if the input to the process is too large you saturate and it will not work. Therefore what you do is play with those plots: you set a bound on U and you tune the compensator so that you get as close to minus one half as possible, without going beyond the maximum value of U, so that you do not saturate. This can be done by tuning the filters. It is done very easily, and it does not matter which model it is; it works beautifully. So what I have here, as you can see, is a locus like this; it closes off later on. But G2, remember this filter acting at intermediate frequencies? G2 allows me to act in this region, to make phase corrections, so that even here I get the locus more vertical, as much as possible. Which means G2 is the tuning that tries to make T equal to one at all frequencies.
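The vertical line being aimed for can be derived in one step: |T| = 1 exactly when the Nyquist point has real part minus one half. Writing L = x + jy:

```latex
|T| = \left|\frac{L}{1+L}\right| = 1
\;\Longleftrightarrow\; |L|^2 = |1+L|^2
\;\Longleftrightarrow\; x^2 + y^2 = (1+x)^2 + y^2
\;\Longleftrightarrow\; x = -\tfrac{1}{2}.
```

So the closer the locus hugs the line Re L = −1/2, the flatter |T| stays at one over frequency, which is exactly the "everyone arrives together" picture in the orchestra analogy that follows.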
Think of an orchestra: you would like the bass and the flute and the violin, everybody, to arrive at the same time. That is what I am doing when I make this locus as vertical as possible, and this can be done with G2; G2 lets you focus on this. So, to continue: that is the design, as I said. I count on two things: high-precision sensors and fast digital processing, plus the filters G1, G2, G3. G1 is a low-pass filter with a high gain k1. G3 is also a low-pass filter, raised to a given power, again to make sure C is realizable. And G2 is the tuning filter. It so happens that when you maintain the epsilon value at low frequency and the M value at high frequency, there are still many ways for the sensitivity curve to go between them: remember the curve we had, smaller than M everywhere and epsilon at low frequencies. I can go like this, or like that. That is where G2 comes in; tuning G2 lets you change this profile without touching epsilon and M. And when you tune G2, you are essentially tuning the time response. Now there is a trick. The trick is that I can use high gain without being afraid. Usually, when you use too much gain, you oscillate and maybe go unstable. If I ask you to raise your hand by 30 degrees, you will do it; do it fast and you start to swing; faster, you swing more, and eventually you lose control. So normally, when you increase the gain, you oscillate. Now I am going to show you exactly the opposite effect. Normally on the Nichols curve, when you increase the gain, you oscillate because you are getting closer to the critical point, minus one (0 dB, minus 180 degrees), and you get a very large peak in T here.
But this design has a secret, which is that the gain is tied to the frequencies: the poles and the gains are linked, so that when I increase the gain, my poles move at the same time. Which means that instead of getting closer and closer to the critical point, I keep avoiding it, because the poles are moving. And what happens when I increase the gain? The faster you go, the stronger your compensator, and the closer to the ideal behavior you get. This is what I called, at the beginning, a different meaning of adaptation. Okay. Now, how do we do it for unstable systems? At this stage we consider plants that are unstable but invertible, with some slope bound at infinity. The compensator might look complicated, but it goes this way. We decompose the plant P into two parts: one part which is minimum phase, meaning stable with a stable inverse, and a second part which is unstable. The first one I can invert, no problem. I cannot invert the second one: when you have a right-half-plane pole and you try to cancel it with a zero, it does not work. Without perfect cancellation, and you never have perfect cancellation, you get a root-locus branch in the right half plane, which means you are unstable. We cannot cancel the right-half-plane pole. So here, I invert what is invertible, and I replace the non-invertible part by something equivalent to its behavior at infinity only. That is, I build a stable function H that behaves like that part of P at infinity: at high frequencies, P becomes like H. That is all I need. When I do that, my new compensator is C = Pm⁻¹H⁻¹G, and this still solves the problem. Let me apply it. This is an experiment we did at McGill, and you have here a ball.
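A minimal numerical sketch of this decomposition idea, with a plant and H of my own choosing: take P(s) = (s+3)/((s+1)(s−2)), split off the minimum-phase factor Pm = (s+3)/(s+1), keep the unstable factor Pu = 1/(s−2), and replace Pu by H(s) = 1/(s+2), which is stable and has the same behavior at infinity. With C = Pm⁻¹H⁻¹G, the loop becomes PC = (Pu/H)·G, which behaves like G at high frequency.

```python
import numpy as np

w = np.logspace(1, 5, 500)
s = 1j * w

# Illustrative unstable plant split as P = Pm * Pu.
Pm = (s + 3.0) / (s + 1.0)     # minimum-phase factor: safe to invert
Pu = 1.0 / (s - 2.0)           # unstable factor: must NOT be inverted
H = 1.0 / (s + 2.0)            # stable surrogate with the same behavior at infinity

# With C = Pm^-1 * H^-1 * G, the loop is P*C = (Pu/H) * G, and Pu/H -> 1.
ratio = Pu / H
assert abs(ratio[-1] - 1.0) < 1e-3     # Pu and H agree at high frequency
assert abs(ratio[0] - 1.0) > 0.1       # but differ at low frequency, where the
                                       # high gain of G and the Nyquist criterion take over
```

This only checks the frequency-response matching; closing the loop stably still relies on the slope conditions at infinity stated in the talk.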
This is a standard experiment on which many students have worked, tuning PIDs over many years, so we know they got the best PID possible, and we wanted to compare ours to what they have. We have a PC, a power supply, a full setup with LabVIEW and circuitry. And this is what we got. The system is unstable in open loop. We increase the gain and, as you see, when we ask the ball to jump, the higher the gain, the faster it jumps and the faster it converges. I will show you a very short video on that. But first, let us dwell on this. What you see here is the S curve for different gains, the sensitivity curve, and this is a Nichols plane. On it you have the M-circles: if this is the 0 dB contour, you would like to place your loop so that it stays simply tangent to the 0 dB curve most of the time. That is what you do on a Nichols chart to make sure T is equal to one. Now pay attention: the Nichols chart shows PC/(1 + PC), the transmission T, or if you want, L/(1 + L). If I replace L by 1/L, what I get is simply 1/(1 + L), which is what? The sensitivity. That means that if, instead of plotting the gain, I plot the inverse of the gain, the same contours no longer give the T = 1 curve but the S = 1 curve, the 0 dB curve for S. So we can play on both planes. You can also work with the Nyquist plot, and with Nyquist you can see, for instance, the regions where you have stability for different gains. We work with the Nyquist criterion, and the theorem always holds. And with Nyquist, we can even allow encirclements.
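The inverse-gain trick can be checked numerically: if you plot 1/L on the Nichols chart, the usual T-contours of that plot read out exactly |S| of the original loop, because (1/L)/(1 + 1/L) = 1/(1 + L) = S. A small sketch with an arbitrary loop of my own choosing:

```python
import numpy as np

w = np.logspace(-2, 3, 800)
s = 1j * w
L = 50.0 / ((s + 1.0) * (s / 100.0 + 1.0))   # arbitrary illustrative loop

T_of_inverse = (1.0 / L) / (1.0 + 1.0 / L)   # what the T-contours show for the 1/L plot
S_of_L = 1.0 / (1.0 + L)                     # sensitivity of the original loop

# The M-contours of the inverted plot are the S-contours of the original loop.
assert np.allclose(T_of_inverse, S_of_L)
```

So one chart, read twice, gives both families of curves.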
We can add encirclements in different directions; as long as the Nyquist count comes out right, I do not really mind, but there is no point adding more. Let me set this up. This is a new computer and the control panel has moved. Contacting the server... I got it. Here is the setup on the screen. Let's look at this; it's a three-minute video. The method increases the speed, stability and precision of the control system by an order of magnitude compared to today's leading technologies. As you will see in the video, the impact of this method is meaningful enough to be observable to the naked eye, but it is even more impressive when measured with a high degree of precision. We will demonstrate the method using a very simple magnetic levitation control system, but the applications are significantly broader. In general, systems react faster when the gain is increased; however, a gain increased far enough in servo systems can lead to oscillatory behavior and instability. With this algorithm, we get improved time response when we increase the gain, while maintaining stability and robustness. The algorithm takes advantage of the new sensors, which are much more precise and have higher resolution and higher bandwidth, as well as processing units which work much faster. Our compensator structure is shown here in the background. We invert the invertible part of the plant and add some corrections, as well as three filters. G1 acts on the low frequencies and G3 acts on the high frequencies, and together they guarantee the stability and robustness of the system, while G2 acts mainly on the time response, reducing the settling time, most of the time without affecting the robustness margins. That is the strength of the algorithm. This is our magnetic levitation device, and we have a diagram here that describes the components of the system.
Mostly what we want to do is levitate this spherical metallic ball using an electromagnet that lifts the ball as we put current through it. So what we have here is the magnetic levitation device, and the ball is magnetically levitated right now under the control of a PID controller, proportional-integral-derivative. What we see is that the ball holds its position. Then we change the setpoint so that the ball moves up a notch, by about one millimeter, as we see here. The response time is not great and there is a little bit of overshoot, which is something we will correct with the new controller. We now apply the new control algorithm to the levitation system; we are going to give some step inputs and check how the ball reacts on the vertical axis. As you can see, there is almost no oscillation when we go from one step to the other. If we look at the time response, we can see that there is almost no overshoot, and in less than 60 milliseconds we reach the maximum response. Note also that the rise time and the settling time are equal. If we compare our compensator to a PID compensator which has been optimized over many, many trials, we observe the following, and this one is quite important: the overshoot here is small, but the settling time is quite near 60 seconds. Therefore our compensator performs much better. To validate robustness, we perturb the system by hanging different objects on it, different shapes, different weights, and check that it holds without modifying the compensator. The method applies to any system, and we apply it to a hard disk. What we would like to do is control the arm of the hard disk and move it from one position to another as fast as possible. So we take a hard disk model, apply our algorithm, and compare the solution to the existing solutions. As you can see here, with PID we get 63 milliseconds.
With optimized PID we measure 1.5 milliseconds, and with our algorithm we get 0.27 milliseconds. In fact, from our preliminary simulations, we see that our rate is much more interesting. Before we go to the multivariable case, any questions so far? Convincing? "I have some questions, but not now." Okay, let's go now to the multivariable case. We have here a system which is multivariable and unstable, but whose inverse is stable, and I am still using that invertibility property. What I do here is assume that the system is ultimately diagonally dominant: at high frequencies, the system becomes diagonal. Remember, when we started we spoke about rooms and heating. Slow variations in one room will affect the room next door, but very fast variations will not pass through as the slow ones do, because of the thermal mass of the walls. Therefore this assumption of diagonal dominance at high frequencies makes sense in many, many cases. And again, I have some slope conditions at infinity, like the single-variable conditions. Before I go on: I was discussing with a professor here two days ago, and I emailed him this morning a theorem of Davison from 1973, because he showed me a design with a diagonal compensator for an unstable system, and in theory you cannot do that; there is a theorem, which I sent this morning, on which we may spend some time later. Here I said to myself: we would like to use a diagonal compensator. It solves many problems. People who try to build a full matrix C get into complications, because each entry affects the others; but when you have n compensators rather than n-squared compensators, it is fine. Now, Davison showed that in general it is impossible to stabilize an unstable multivariable plant with a diagonal compensator. And here we say: okay, but we only need diagonal dominance at infinity, and then we can use a diagonal compensator.
So here is the multivariable system, and I want to use this scheme. As you can see, the compensator is relatively simple to build and to use. I take the interaction into account, and obviously I get y equal to r; but in fact the transmission T this time is not one, it is I, the identity. Which means input one affects output one only, input two affects output two only, and so on. So I have this condition of diagonal dominance at infinity, but only at infinity. At heart this is a small-gain-theorem application of diagonal dominance at infinity, though the actual demonstration is more complicated than that. Let us build the system. As before, I build a function H(s) which has the same behavior at high frequency as the diagonal part of the unstable part of the plant. Again, the plant is divided into a stable part and an unstable part, and I build H from the behavior at infinity of the diagonal of the unstable part, just like in the single-input case. Doing that, I obtain an intermediary function which happens to be diagonal, stable and invertible, and I apply my scheme. Doing that, I get stability and more. Now we are in the multivariable case, which means that instead of talking about magnitudes, absolute values and so on, we talk about singular values, the induced norms of the system. So S + T = I, and therefore the norm of S equals the norm of T − I. If I make the norm of T − I smaller than some epsilon, with epsilon as small as possible, then T gets close to the identity. That means I have decentralized control with local controllers. This is a two-by-two simulation example. I have interaction and I have diagonal compensators, and I would simply like R1 to control Y1 and R2 to control Y2, regardless of the interaction. I am using the scheme here; that is the solution.
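A minimal frequency-domain sketch of why C = P⁻¹G with a scalar G gives decoupling. The 2×2 plant and the filter below are my own illustrative choices, and this only demonstrates the algebra pointwise in frequency: the loop becomes gI, so T is diagonal and close to I in the band. Closed-loop stability still needs the diagonal-dominance and slope conditions discussed in the talk.

```python
import numpy as np

w = np.logspace(-2, 3, 300)
I = np.eye(2)
off_diag_max, diag_err_band = 0.0, 0.0

for wi in w:
    s = 1j * wi
    # Illustrative 2x2 plant with interaction, invertible at each frequency.
    P = np.array([[1.0 / (s + 1.0), 0.5 / (s + 1.0) ** 2],
                  [0.5 / (s + 1.0) ** 2, 2.0 / (s + 2.0)]])
    g = 100.0 / (s / 10.0 + 1.0)          # scalar high-gain low-pass G
    C = np.linalg.inv(P) * g              # C = P^-1 G, evaluated pointwise
    Lw = P @ C                            # loop transfer: exactly g*I
    T = np.linalg.inv(I + Lw) @ Lw        # transmission matrix
    off_diag_max = max(off_diag_max, abs(T[0, 1]), abs(T[1, 0]))
    if wi <= 1.0:                         # bandwidth of interest
        diag_err_band = max(diag_err_band, abs(T[0, 0] - 1), abs(T[1, 1] - 1))

assert off_diag_max < 1e-9                # T is diagonal: no cross-coupling
assert diag_err_band < 0.05               # T ~ I in band: y1 tracks r1, y2 tracks r2
```

This is the "T = I" picture: each reference drives its own output only, which is the decentralized-control property.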
This is a system which is unstable, but whose inverse is stable. I build the intermediary function and again check the conditions at infinity, and therefore I can apply the compensator. I apply it using all those steps: separating out the diagonal part and the stable part, building the function that depends on the behavior at infinity, using it to build H, and using H to get the compensator C. What are the results? If you look at T, the transmission matrix, we have t11 and t22, and you can see that they are both at 0 dB: t11 is one, t22 is one. What about t12 and t21? They are 40, 20, even 60 dB below, which means I really got a diagonal transmission. What is interesting is the time domain. Here I tuned only the first element, C1, and obtained an excellent time response. And this is the interaction: it is negligible even in the time domain. I did not optimize the time response of the second loop; I kept the same tuning just to show you that even like that, with uniform tuning, I get something pretty convincing. But I could also adjust the G2 of the second compensator and get something better. So it is quite interesting. Now, I am new to drones; I had never worked with them. We decided to go to a multivariable example, so we found a cheap drone and did some studies. We derived the standard dynamic model that those of you working in the field know, with a two-level controller: the translation dynamics, which are slower, and the orientation dynamics. And we applied the method there.
We decided to compare our controller to H2 and H-infinity, giving them the same objective: the same sensitivity targets, meaning the weightings of the H2 and H-infinity designs were chosen so that we are in fact talking about the same epsilon and the same M for all the designs. And the question now is: what do we get? Okay, this is what we get. This is what you get with H2 when you want the drone to hover; this is what you get with H-infinity; and this is what you get with our controller, which manages to keep the drone more stable and more stationary. And this is a picture where you see x and y: the red in the middle is the effect of our compensator on the displacement of the drone, the green is H2, and the blue is H-infinity. So it has some value. Let me go back here. We also decided to compare the scheme to mu-synthesis, and in doing so we realized one thing: our compensator reacts much faster, but somewhere in the time response there are some very short peaks, and we did not yet have the time to find the best way to handle them. This is what we get with mu-synthesis. Again, the objectives are the same; we want to keep the same goals so that we compare apples to apples. We introduce some noise and see how each design handles it. With our compensator, you can see that it does better here, but the swing is higher; the swing is also bigger here, and there. This one is at the same scale; this one, in this case, is at the same scale, and ours is not bad. If I go to the next one, on the angles: this is the response when I make a change in x and y, and the effect it has with mu-synthesis and with our controller; but here the scale changes a bit. So we get something very stable, and so on.
But there is still something to be adjusted. Still, it is encouraging to see that compared to H2 and H-infinity we are doing not bad; with mu-synthesis there is something I still have to correct, and our compensator reacts faster for sure. Now let us look at the variations of the angles. You have H2, H-infinity and ours, and here I have to pay attention to when it is the same scale and when it is not, because sometimes I had to change it. This one is the same scale over time, so you can see the variations: with our controller they are much smaller. Here is the thrust with H2, H-infinity and ours; again, ours behaves reasonably compared to the others. And in terms of RPM, with our controller the RPMs are much smaller than what is needed elsewhere. Now to this part: this is the drone that we had, running simply the PID of the manufacturer, as supplied. Look at the swings. We did the experiments on the same day so as to have the same conditions. Now the same maneuver with our controller applied to the system: much less swing than with what the manufacturer proposed. This is preliminary work on drones, it is new, but we believe we have something there. So, as a conclusion: this method has been shown to give stability and robustness, and we get a time response with two important properties. There is no overshoot, which is important, and the rise time is equal to the settling time, which is an important feature as well. Most of the time in the literature we compare rise times, but to me the settling time is more important, because the settling time is where you really get performance. We can apply the method to multivariable systems, we achieve decentralized control, and we can tune the time responses independently. Where do we go from here? There are many, many ways to extend this. Number one, which I am going to try to do if I have the time: I am on sabbatical, but I am already committed to writing two books.
But I would like eventually to teach this geometric method in first year, I mean undergraduate. This is the Nyquist plot, and the tuning can be done directly on it. Let me start again. We can tune the compensator; it does not matter which compensator we put in. To get there, look at this green line. The green line is T, the closed loop. You see here that T is equal to one up to some frequency, and I can read that frequency here; I would like T to be equal to one for as long as possible. Number two, if I want to limit my gain C, I write C over Cmax, because I cannot afford an arbitrarily large gain, okay? So C over Cmax should be smaller than one. By changing the parameters, I am changing where the locus sits relative to the T-circles. And if I do not want overshoot, I simply make sure that this curve does not go above the T = 1 contour; then I will not have overshoot. It is a frequency-domain condition, but it has a relation to the time domain. And I can introduce unstable poles, count the encirclements, and then play with the gains to make sure I get good performance, and it works on any system. So most of the PID experiments that students do in school could be treated much, much faster with this scheme than they would be otherwise. That is one thing I would like to do: write a procedure, take all the lab experiments, and say, try to tune them this way. And maybe you get a better feel for tuning when you see the T curve, when you see the S curve; you also have the U curve, the input you are applying, and so on. Another direction to apply it, but again, I still have to work on it:
I was thinking of L1 adaptive control. There is a filter there that has been added which does not affect the relation to the input, but affects the relations between the output and the disturbances, the noise and so on. We could do it with the geometric method, and we can see how the transmission between the output and the disturbances can be made small. Sometimes you have to make, I would say, a compromise between the two situations, but in fact you may be able to improve disturbance rejection by tuning that filter. Again, this is very preliminary. Number three, the open direction to go in is the nonlinear case. In nonlinear you can do many things. You have the classical methods, you know: the Lur'e problem, extended to the circle criterion and more, on which the Popov school continued, and combining this scheme with them gives you some good results. But obviously those methods are limited, in the sense that you need to find a sector bound, and the nonlinearity sector must pass through the origin, and so on. I would like to go further and extend this to different families of nonlinearities, to try to get the best out of it. That is what I am aiming at, and I have a year or so of sabbatical to sharpen my thoughts about that. That's it. Any questions? "I have very many questions. Not now, not today. We will meet. Pleasure." "Pleasure." I was saying that the concepts behind this contradict some theorems, like Davison's, as I said before, but we really need to go through it. And I was very interested, because his work and some of my interests are more or less the same in many respects. So if there are no more questions, let us thank him.
That is what we already think in our department. Okay. Thank you very much.